Preface
Chapter 1, Concurrent and Parallel Programming - An Advanced Introduction
This chapter provides an advanced overview of concurrency in computer science. Readers will gain a deep theoretical understanding of concurrent programming and the concepts relevant to it.
Chapter 2, Amdahl's Law
This chapter covers the concept of Amdahl's Law, analyzes its formula, and shows how it can be applied to sample concurrent Python programs. It also briefly discusses the relationship between Amdahl's Law and the law of diminishing returns.
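As a quick preview (not code from the chapter itself), a minimal sketch of the formula in Python might look as follows, where the parallel fraction and the processor counts are purely illustrative:

# Amdahl's Law: the theoretical speedup of a program in which a fraction p
# of the work can be parallelized across n processors.
def amdahl_speedup(p, n):
    # The serial portion (1 - p) is unaffected by adding processors;
    # only the parallel portion p is divided among the n processors.
    return 1 / ((1 - p) + p / n)

if __name__ == '__main__':
    # Diminishing returns: doubling the processor count does not double the speedup.
    for n in (1, 2, 4, 8, 16):
        print(n, round(amdahl_speedup(0.9, n), 2))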
Chapter 3, Working with Threads in Python
This chapter introduces the readers to the formal definition of a thread and Python's threading module, comparing it with the older thread module. It also covers various ways to work with threads in a Python program, such as creating new threads, synchronizing threads, and working with multithreaded priority queues. The chapter additionally touches on the concept of a lock while discussing thread synchronization, and on the concept of a queue while discussing multithreaded priority queues.
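As a preview, a minimal sketch of creating and synchronizing threads with the threading module might look as follows; the worker function, shared counter, and iteration counts are illustrative, not taken from the chapter:

import threading

counter = 0
counter_lock = threading.Lock()

def worker(n_increments):
    global counter
    for _ in range(n_increments):
        # Acquire the lock so that only one thread updates the counter at a time.
        counter_lock.acquire()
        counter += 1
        counter_lock.release()

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: the lock keeps the updates from interfering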
Chapter 4, Using the 'with' Statement in Threads
This chapter explains the idea behind the 'with' statement as a context manager and its usage in concurrent programming, specifically for handling locks from the threading module. It also provides specific examples of how the statement is most commonly used.
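A minimal sketch of the pattern, with an illustrative shared list and worker function, might look as follows:

import threading

lock = threading.Lock()
shared_data = []

def append_item(item):
    # The 'with' statement acquires the lock on entry and releases it on exit,
    # even if an exception is raised inside the block.
    with lock:
        shared_data.append(item)

threads = [threading.Thread(target=append_item, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_data)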
Chapter 5, Concurrent Web Scraping
This chapter analyzes the process of web scraping through concurrent programming. As with image processing, the data from different websites is independent of one another, so concurrent programming gives web scraping a significant speedup. This chapter not only illustrates the improvement concurrent programming provides but also goes through common techniques used in web-scraping applications. It additionally discusses best practices for web scraping in general.
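As an illustration of the general idea (not the chapter's own code), a minimal sketch using only the standard library might look as follows; the URLs and the worker count are placeholders:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = [
    'https://www.example.com',
    'https://www.python.org',
]

def fetch(url):
    # Each request is independent, so the downloads can overlap in time.
    with urlopen(url) as response:
        return url, len(response.read())

with ThreadPoolExecutor(max_workers=4) as executor:
    for url, size in executor.map(fetch, urls):
        print(url, size, 'bytes')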
Chapter 6, Working with Processes in Python
This chapter introduces the readers to the formal definition of a process and Python's multiprocessing module. It covers the most common ways of working with processes through the multiprocessing module's API, such as the Process class, locks, logging, and the Pool class. The chapter also focuses on the differences between the options that the threading and multiprocessing modules provide for concurrent programs.
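As a preview, a minimal sketch of the Process and Pool classes might look as follows; the worker functions are illustrative rather than taken from the chapter:

import multiprocessing

def greet(name):
    print('Hello from process:', name)

def square(x):
    return x * x

if __name__ == '__main__':
    # Spawning an individual process with the Process class.
    p = multiprocessing.Process(target=greet, args=('worker-1',))
    p.start()
    p.join()

    # Distributing a task across a pool of worker processes.
    with multiprocessing.Pool(processes=4) as pool:
        print(pool.map(square, range(10)))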
Chapter 7, The Reduction Operation in Processes
A reduction operation, which combines the elements of an array into a single result, is a concept closely associated with parallel programming. Because the operation is associative and commutative, concurrency can be applied to greatly improve its execution time. This chapter discusses the theoretical concurrent approach to writing a reduction operator from the perspective of programmers and developers interested in concurrency, and from there draws connections to similar problems that can be solved using concurrency.
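As an illustration, a minimal sketch of a concurrent reduction (here, a sum computed over chunks of an array by a process pool) might look as follows; the data and chunk sizes are illustrative:

import multiprocessing

def partial_sum(chunk):
    return sum(chunk)

if __name__ == '__main__':
    data = list(range(1, 101))
    chunk_size = len(data) // 4
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Addition is associative and commutative, so the partial results can be
    # computed in any order and then combined.
    with multiprocessing.Pool(processes=4) as pool:
        print(sum(pool.map(partial_sum, chunks)))  # 5050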
Chapter 8, Concurrent Image Processing
This chapter analyzes the process of downloading and processing images through concurrent programming. Since the images being processed are independent of one another, concurrent programming gives image processing a significant speedup. This chapter not only illustrates the improvement concurrent programming provides but also goes through common techniques used in image-processing applications.
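As an illustration only, a minimal sketch might look as follows, assuming the third-party Pillow library is installed and using placeholder file names that are not part of the chapter:

import multiprocessing
from PIL import Image

def to_grayscale(path):
    # Convert one image to grayscale and save it alongside the original.
    Image.open(path).convert('L').save('gray_' + path)
    return path

if __name__ == '__main__':
    paths = ['image1.png', 'image2.png', 'image3.png']  # placeholder file names
    # Each image is independent, so a process pool can handle them in parallel.
    with multiprocessing.Pool() as pool:
        pool.map(to_grayscale, paths)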
Chapter 9, Introduction to Asynchronous I/O
This chapter introduces the readers to the formal definition of asynchronous programming. It covers the idea behind this form of input/output processing and why it can provide a significant improvement in execution time.
Chapter 10, Asyncio Pros and Cons
This chapter introduces the readers to the asyncio module from Python. It covers the idea behind this concurrency module, which uses coroutines and futures to simplify asynchronous code. The module provides an API that is as readable as synchronous code, since there are no callbacks. This chapter also discusses the similarities and differences between asyncio and another Python module, gevent. Finally, the chapter covers some of the most common uses of asyncio, including writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources, and running network clients and servers.
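As a preview, a minimal sketch of single-threaded concurrent code with coroutines might look as follows; the task names and delays are illustrative, with asyncio.sleep standing in for real I/O:

import asyncio

async def task(name, delay):
    # While one coroutine awaits, the event loop runs the others.
    await asyncio.sleep(delay)
    return f'{name} finished after {delay}s'

async def main():
    results = await asyncio.gather(task('A', 2), task('B', 1), task('C', 3))
    for line in results:
        print(line)

asyncio.run(main())  # completes in roughly 3 seconds, not 6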
Chapter 11, TCP with Asyncio
This chapter covers the fundamental theory of transports, which are classes provided by asyncio to abstract various kinds of communication channels. It also covers a working Python implementation of a simple TCP echo client and server, to further illustrate the use of asyncio specifically, and of concurrency in communication systems in general. The code used in this chapter also serves as the foundation for an advanced example later in the book.
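As a preview (not the book's own implementation), a minimal sketch of a TCP echo server built on asyncio transports and protocols might look as follows; the host and port are placeholders:

import asyncio

class EchoServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        # The transport abstracts the underlying TCP connection.
        self.transport = transport

    def data_received(self, data):
        # Echo whatever the client sent back over the same transport.
        self.transport.write(data)

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoServerProtocol, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())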
Chapter 12, Deadlock
This chapter discusses the theoretical causes of deadlock in concurrent programming. It covers the classic Dining Philosophers problem as a real-life example and goes through Python code that illustrates the problem. Several methods of handling deadlock are discussed, namely ignoring deadlock, detecting deadlock, and preventing deadlock. Relevant concepts such as livelock and distributed deadlock are also covered.
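As an illustration, a minimal sketch of how a deadlock can arise from acquiring two locks in opposite orders might look as follows; the "fork" locks and the sleep are illustrative, and the program intentionally never finishes:

import threading
import time

fork_a = threading.Lock()
fork_b = threading.Lock()

def philosopher_1():
    with fork_a:
        time.sleep(0.1)   # give the other thread time to grab fork_b
        with fork_b:      # blocks forever: fork_b is held by philosopher_2
            pass

def philosopher_2():
    with fork_b:
        time.sleep(0.1)
        with fork_a:      # blocks forever: fork_a is held by philosopher_1
            pass

t1 = threading.Thread(target=philosopher_1)
t2 = threading.Thread(target=philosopher_2)
t1.start()
t2.start()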
Chapter 13, Starvation
This chapter discusses the concept of starvation and its potential causes in concurrent programming. It also covers the relationship between deadlock and starvation. The chapter examines a number of readers-writers problems, which are prime examples of starvation, and goes through sample Python code that illustrates them. Potential solutions to starvation are also discussed.
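As an illustration, a minimal sketch of the first readers-writers pattern, in which overlapping readers can keep a writer waiting, might look as follows; the counters and sleeps are illustrative:

import threading
import time

resource_lock = threading.Lock()    # held by the writer, or by the readers as a group
read_count_lock = threading.Lock()
read_count = 0

def reader(i):
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource_lock.acquire()   # the first reader locks out writers
    time.sleep(0.1)                   # reading the shared resource
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource_lock.release()   # the last reader lets writers in again

def writer():
    with resource_lock:               # waits as long as any readers overlap
        time.sleep(0.1)               # writing to the shared resource

threads = [threading.Thread(target=reader, args=(i,)) for i in range(5)]
threads.append(threading.Thread(target=writer))
for t in threads:
    t.start()
for t in threads:
    t.join()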
Chapter 14, Race Conditions
This chapter discusses the concept of race conditions and their potential causes in concurrent programming. It also covers various theoretical forms of race conditions: static, dynamic, and essential. The chapter additionally discusses the definition of critical sections and their relationship with race conditions. Sample Python code is discussed, together with real-life examples in computer security, file systems, and networking.
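As an illustration, a minimal sketch of a race condition on a shared counter might look as follows; the iteration counts are illustrative:

import threading

counter = 0

def unsafe_increment(n):
    global counter
    for _ in range(n):
        current = counter       # read the shared value
        counter = current + 1   # write it back: may overwrite another thread's update

threads = [threading.Thread(target=unsafe_increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # often less than 200000, because increments are lost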
Chapter 15, The Global Interpreter Lock
One of the major players in Python concurrent programming is the Global Interpreter Lock (GIL). This chapter covers the definition and purposes of the GIL, and how it affects concurrent Python programs. It also explores the problems that the GIL poses for some concurrent systems, and the controversy around its implementation. Lastly, the chapter discusses some thoughts regarding how programmers and developers should think about and interact with the GIL.
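As an illustration of the GIL's effect on CPU-bound code (a commonly cited demonstration, not code from the chapter), a minimal sketch might look as follows:

import threading
import time

def countdown(n):
    # Pure Python, CPU-bound work: only one thread can execute bytecode at a time.
    while n > 0:
        n -= 1

N = 10_000_000

start = time.perf_counter()
countdown(N)
countdown(N)
print('sequential:', round(time.perf_counter() - start, 2), 's')

start = time.perf_counter()
t1 = threading.Thread(target=countdown, args=(N,))
t2 = threading.Thread(target=countdown, args=(N,))
t1.start()
t2.start()
t1.join()
t2.join()
# Roughly as long as (or longer than) the sequential run, because of the GIL.
print('two threads:', round(time.perf_counter() - start, 2), 's')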
Chapter 16, Designing Lock-Free and Lock-Based Concurrent Data Structures
This chapter analyzes the process of designing the two common kinds of concurrent data structures: lock-based and lock-free. It discusses the principal differences between the two approaches, as well as how each can be used to improve the execution time of concurrent programs.
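As an illustration of the lock-based side only (pure Python does not directly expose the atomic compare-and-swap operations that lock-free designs typically rely on), a minimal sketch of a lock-based stack might look as follows:

import threading

class LockBasedStack:
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, item):
        # Every operation takes the same lock, which keeps the structure
        # consistent at the cost of serializing access.
        with self._lock:
            self._items.append(item)

    def pop(self):
        with self._lock:
            # Return None instead of raising when the stack is empty.
            return self._items.pop() if self._items else None

stack = LockBasedStack()
threads = [threading.Thread(target=stack.push, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([stack.pop() for _ in range(5)])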
Chapter 17, Memory Models and Operations on Atomic Types
This chapter analyzes a more complicated application of concurrent programming: building a performant non-blocking server from scratch. It covers various complex techniques, such as isolating the user's business logic in callbacks, writing the callback logic inline with generators, and scheduling timed events. The chapter also discusses the use of the await and yield statements in the example.