An Introduction to Parallel Programming

Second edition

Peter S. Pacheco

University of San Francisco

Matthew Malensek

University of San Francisco

Table of Contents
List of tables
  1. Tables in Chapter 2
  2. Tables in Chapter 3
  3. Tables in Chapter 4
  4. Tables in Chapter 5
  5. Tables in Chapter 6
  6. Tables in Chapter 7
List of figures
  1. Figures in Chapter 1
  2. Figures in Chapter 2
  3. Figures in Chapter 3
  4. Figures in Chapter 4
  5. Figures in Chapter 5
  6. Figures in Chapter 6
  7. Figures in Chapter 7
Copyright

Morgan Kaufmann is an imprint of Elsevier

50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2022 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Cover art: seven notations, nickel/silver etched plates, acrylic on wood structure, copyright Holly Cohn

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data

A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

ISBN: 978-0-12-804605-0

For information on all Morgan Kaufmann publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Katey Birtcher

Acquisitions Editor: Stephen Merken

Content Development Manager: Meghan Andress

Publishing Services Manager: Shereen Jameel

Production Project Manager: Rukmani Krishnan

Designer: Victoria Pearson

Typeset by VTeX

Printed in the United States of America

Last digit is the print number: 9 8 7 6 5 4 3 2 1

Dedication

To the memory of Robert S. Miller

Preface

Parallel hardware has been ubiquitous for some time now: it's difficult to find a laptop, desktop, or server that doesn't use a multicore processor. Cluster computing is nearly as common today as high-powered workstations were in the 1990s, and cloud computing is making distributed-memory systems as accessible as desktops. In spite of this, most computer science majors graduate with little or no experience in parallel programming. Many colleges and universities offer upper-division elective courses in parallel computing, but since most computer science majors have to take a large number of required courses, many graduate without ever writing a multithreaded or multiprocess program.

It seems clear that this state of affairs needs to change. Whereas many programs can obtain satisfactory performance on a single core, computer scientists should be made aware of the potentially vast performance improvements that can be obtained with parallelism, and they should be able to exploit this potential when the need arises.

An Introduction to Parallel Programming was written to partially address this problem. It provides an introduction to writing parallel programs using MPI, Pthreads, OpenMP, and CUDA, four of the most widely used APIs for parallel programming. The intended audience is students and professionals who need to write parallel programs. The prerequisites are minimal: a college-level course in mathematics and the ability to write serial programs in C.

The prerequisites are minimal, because we believe that students should be able to start programming parallel systems as early as possible. At the University of San Francisco, computer science students can fulfill a requirement for the major by taking a course on which this text is based immediately after taking the Introduction to Computer Science I course that most majors take in the first semester of their freshman year. It has been our experience that there really is no reason for students to defer writing parallel programs until their junior or senior year. To the contrary, the course is popular, and students have found that using concurrency in other courses is much easier after having taken this course.

If second-semester freshmen can learn to write parallel programs by taking a class, then motivated computing professionals should be able to learn to write parallel programs through self-study. We hope this book will prove to be a useful resource for them.

The Second Edition

It has been nearly ten years since the first edition of An Introduction to Parallel Programming was published. During that time much has changed in the world of parallel programming, but, perhaps surprisingly, much also remains the same. Our intent in writing this second edition has been to preserve the material from the first edition that continues to be generally useful, but also to add new material where we felt it was needed.

The most obvious addition is the inclusion of a new chapter on CUDA programming. When the first edition was published, CUDA was still very new. It was already clear that the use of GPUs in high-performance computing would become very widespread, but at that time we felt that GPGPU wasn't readily accessible to programmers with relatively little experience. In the last ten years, that has clearly changed. Of course, CUDA is not a standard, and features are added, modified, and deleted with great rapidity. As a consequence, authors who use CUDA must present a subject that changes much faster than a standard, such as MPI, Pthreads, or OpenMP. In spite of this, we hope that our presentation of CUDA will continue to be useful for some time.

Another big change is that Matthew Malensek has come onboard as a coauthor. Matthew is a relatively new colleague at the University of San Francisco, but he has extensive experience with both the teaching and application of parallel computing. His contributions have greatly improved the second edition.
