Yosi Ben-Asher - Issues in multicore programming using the ParC language

Yosi Ben-Asher, Multicore Programming Using the ParC Language, Undergraduate Topics in Computer Science, Springer-Verlag London, 2012. DOI 10.1007/978-1-4471-2164-0_1
1. Basic Concepts in Parallel Algorithms and Parallel Programming
Yosi Ben-Asher
Department of Computer Science, University of Haifa, Haifa, Israel
Abstract
The world of parallel processing is complex and combines many different ideas. We first consider the question: what is a parallel machine? We answer it by presenting a model for building parallel machines. Separately, we consider the need to define what parallel programs are. We use partial orders to define the notion of a parallel program and show how such programs can be executed on parallel machines.
1.1 Parallel Machines
Parallel machines can be viewed as a collection of sequential machines (processing elements, or processors) that can communicate with one another. Each processor corresponds to a regular computer/CPU capable of executing a sequential program. Thus, from a hardware perspective, a parallel machine is a collection of independent sequential processors that are capable of communicating with one another. From a software perspective, a parallel machine executes parallel programs which, for the time being, can be regarded as a dynamic set of sequential programs (called threads) that can communicate with one another.
It follows that the most important aspect of both parallel machines and parallel programs is the ability of the processors/threads to communicate. It is this ability that glues a set of sequential machines or a set of sequential programs into a single coherent parallel machine/parallel program. Thus, from both the hardware and the software perspectives, in order to understand what parallel machines are we need to define what we mean by communication. We first consider what communication between two entities is, and then how it extends to a larger set of entities. In general, there are two forms of communication between two processors/threads:
  • Message passing, where one processor sends data that is received by the other processor.
  • Reading and writing to a shared memory. For example, a parallel computation between two processors using a shared memory can occur if one processor executes t1=read(x); t2=read(y); write(x=t1+t2); and the other processor executes t1=read(x); t2=read(y); write(y=t1-t2); where x, y are stored in the shared memory.
Message passing is a more basic operation than reading/writing to a shared memory: technically, we only need the two processors to be connected by a wire in order to exchange messages. In fact, we can simulate shared memory by message passing. The above example of updating x and y can be simulated if the first processor holds x and executes:
[code figure omitted: the program executed by the processor holding x]
while the second processor holds y and executes:
[code figure omitted: the program executed by the processor holding y]
This is clearly only one possible way to simulate shared memory via message passing, and thus our goal is first to show how to build a parallel machine with message passing and then to define how shared memory can be used on top of it.
In order for processors to communicate with each other, they must be connected to each other via a communication network that can direct messages from one processor to the other. This configuration is similar to a telephone network that allows users to call each other, or to mail services that allow us to send and receive letters. The performance of the underlying communication network can definitely affect the execution time of a given program on a parallel machine. Obviously, the more the processors need to communicate with each other, the more the performance of the communication network will affect the overall execution time.
It is thus appropriate to consider several types of schematic communication networks to see how they affect performance. In general a communication network is a collection of processors that can be connected via two types of lines:
Links
connecting two processors to each other, allowing one processor to send a message to another. Only one message can pass through a link during a given time unit.
Bus
is a communication line that connects several processors to each other. A bus allows one processor to broadcast a message to all the other processors that are connected to a given bus. Only one message can be broadcast on a bus at a given time. In the case that several messages are broadcast we can assume that a special signal is generated indicating that more than one message has been broadcast.
Each processor can be connected via several buses and links but can use only one of them during a given time unit (either to send or to receive a message). Figure 1.1 depicts a possible configuration of a parallel machine using both buses and point-to-point links.
Fig. 1.1 Example of a communication network
We consider three possible factors that can be used to compare and classify communication networks:
Degree
is the maximal number of links/buses that connect a processor to its neighbors. The degree of each node in a communication network determines how many messages must be processed (sent/received) in every step by each node. Communication networks with smaller degrees are therefore preferable (e.g., a network with N processors and degree 4 is preferable to one with degree log N).
Latency (or diameter)
is the number of links/buses through which a message sent from one processor to another may need to pass in the worst case (e.g., for N processors, a latency of log N is preferable to a latency of N - 1).
Bandwidth
is the maximal number of messages that have to pass through some link/bus when half of the processors exchange messages with the remaining half. To compute the bandwidth of a given network, we find the worst partition of the processors into two halves: the one that maximizes the number of messages that must pass through a single link or bus crossing the cut separating the two halves. The figure illustrates three cuts:
  • W1 with #edges_between(H1, H2) = 6, yielding B = 16.
  • W2 with #edges_between(H1, H2) = 16, yielding B = 6.
  • W3 with #edges_between(H1, H2) = 20, yielding B = 4.8.
The first choice, W1, is the worst possible, increasing the bandwidth to 16. Note that the minimal bandwidth would always be obtained by choosing H1 to be all the odd elements in every row/column of the grid and H2 to contain all the even elements in every row/column of the grid.