Supplemental files and examples for this book can be found at http://examples.oreilly.com/9781565921153/. Please use a standard desktop web browser to access these files, as they may not be accessible from all ereader devices.
All code files or examples referenced in the book will be available online. For physical books that ship with an accompanying disc, whenever possible, we've posted all CD/DVD content. Note that while we provide as much of the media content as we are able via free download, we are sometimes limited by licensing restrictions. Please direct any questions or concerns to .
Preface
It's been quite a while since the people from whom we get our project assignments accepted the excuse "Gimme a break! I can only do one thing at a time!" It used to be such a good excuse, too, when things moved just a bit slower and a good day was measured in written lines of code. In fact, today we often do many things at a time. We finish off breakfast on the way into work; we scan the Internet for sports scores and stock prices while our application is building; we'd even read the morning paper in the shower if the right technology were in place!
Being busy with multiple things is nothing new, though. (We'll just give it a new computer-age name, like multitasking, because computers are happiest when we avoid describing them in anthropomorphic terms.) It's the way of the natural world: we wouldn't be able to write this book if all the body parts needed to keep our fingers moving and our brains engaged didn't work together at the same time. It's the way of the mechanical world: we wouldn't have been able to get to this lovely prefabricated office building to do our work if the various, clanking parts of our automobiles didn't work together (most of the time). It's the way of the social and business world: three authoring tasks went into the making of this book, and the number of tasks, all happening at once, grew exponentially as it went into its review cycles and entered production.
Computer hardware and operating systems have been capable of multitasking for years. CPUs using a RISC (reduced instruction set computing) microprocessor break down the processing of individual machine instructions into a number of separate tasks. By pipelining each instruction through each task, a RISC machine can have many instructions in progress at the same time. The end result is the heralded speed and throughput of RISC processors. Time-sharing operating systems have been allowing users nearly simultaneous access to the processor for longer than we can remember. Their ability to schedule different tasks (typically called processes) really pays off when separate tasks can actually execute simultaneously on separate CPUs in a multiprocessor system.
Although real user applications can be adapted to take advantage of a computer's ability to do more than one thing at once, a lot of operating system code must execute to make it possible. With the advent of threads we've reached an ideal state: the ability to perform multiple tasks simultaneously with as little operating system overhead as possible.
Although threaded programming styles have been around for some time now, it's only recently that they've been adopted by the mainstream of UNIX programmers (not to mention those erstwhile laborers in the vineyards of Windows NT and other operating systems). Software sages swear at the lunchroom table that transaction processing monitors and real-time embedded systems have been using thread-like abstractions for more than twenty years. In the mid-to-late eighties, the general operating system community embarked on several research efforts focused on threaded programming designs, as typified by the work of Tom Doeppner at Brown University and the Mach OS developers at Carnegie-Mellon. With the dawn of the nineties, threads became established in the various UNIX operating systems, such as USL's System V Release 4, Sun Solaris, and the Open Software Foundation's OSF/1. The clash of platform-specific threads programming libraries underscored the need for a portable, platform-independent threads interface. The IEEE has just this year met this need with the acceptance of the IEEE Standard for Information Technology Portable Operating System Interface (POSIX) Part 1: System Application Programming Interface (API) Amendment 2: Threads Extension [C Language] (the Pthreads standard, for short).
This book is about Pthreads: a lightweight, easy-to-use, and portable mechanism for speeding up applications.
Organization
We'll start off in Chapter 1 by introducing you to multithreading as a way of performing the many tasks of a program with greater efficiency and speed than would be possible in a serial or multiprocess design. We'll then examine the pitfalls of serial and multiprocess programming, and discuss the concept of potential parallelism, the cornerstone of any decision to write a multitasking program. We'll introduce you to your first Pthreads call, pthread_create, and look at the structures by which a thread is uniquely identified. We'll briefly examine the ways in which multiple threads in the same process exchange data, and we'll highlight some synchronization issues.
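Just to give you a taste of what's coming (this is only a minimal sketch, not the full example developed in Chapter 1, and the do_one_thing routine is simply a name we've made up for illustration), creating a thread and waiting for it to finish looks roughly like this:

    #include <pthread.h>
    #include <stdio.h>

    /* The routine the new thread will run. */
    void *do_one_thing(void *arg)
    {
        (void)arg;                          /* unused in this sketch */
        printf("hello from a new thread\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;                   /* uniquely identifies the new thread */

        /* Start a thread running do_one_thing, then wait for it to finish. */
        pthread_create(&thread, NULL, do_one_thing, NULL);
        pthread_join(thread, NULL);
        return 0;
    }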
In Chapter 2, we'll look at the types of applications that can benefit most from multithreading. We'll present the three classic methods for distributing work among threads: the boss/worker model, the peer model, and the pipeline model. We'll also compare two strategies for creating threads: creation on demand versus thread pools. After a brief discussion of thread data-buffering techniques, we'll introduce the ATM server application example that we'll use as the proving ground for the thread concepts we'll examine throughout the rest of the book.
In Chapter 3, we'll look at the tools that the Pthreads library provides to help you ensure that threads access shared data in an orderly manner. This chapter includes lengthy discussions of mutex variables and condition variables, the two primary Pthreads synchronization tools. It also describes reader/writer locks, a more complex synchronization tool built from mutexes and condition variables. By the end of the chapter, we will have added synchronization to our ATM server example and presented most of what you'll need to know to write a working multithreaded program.
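As another small preview (again, a minimal sketch only; the deposit routine and account_balance variable here are placeholders of our own, not the ATM server code itself), guarding shared data with a mutex looks roughly like this:

    #include <pthread.h>

    static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;
    static long account_balance = 0;           /* data shared by all threads */

    void deposit(long amount)
    {
        pthread_mutex_lock(&balance_lock);     /* only one thread at a time past this point */
        account_balance += amount;
        pthread_mutex_unlock(&balance_lock);   /* let the next waiting thread in */
    }

Condition variables, which let a thread wait until shared data reaches a particular state, build on this same mutex.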
We'll look at the special characteristics of threads and the more advanced features of the Pthreads library in Chapter 4. We'll cover some large topics, such as keys (a very handy way for threads to maintain private copies of shared data) and cancellation (a practical method for allowing your threads to be terminated asynchronously without disturbing the state of your program's data and locks). We'll cover some smaller topics, such as thread attributes, including the one that governs the persistence of a thread's internal state. (When you get to this chapter, we promise that you'll know what this means, and you may even value it!) A running theme of this chapter is the collection of tools that, when combined, allow you to control thread scheduling policies and priorities. You'll find these discussions especially important if your program includes one or more real-time threads.