Basic Thread Mechanisms

Now that we know what threads are, what do we need in order to support them? First, we need a data structure that allows us to distinguish a thread from a process, one that identifies a specific thread and keeps track of its resource usage. Then we need mechanisms to create and manage threads. In addition, we need mechanisms that allow threads to coordinate among each other, especially when there are dependencies between their execution as they run concurrently. For instance, we need to make sure that threads executing concurrently don't overwrite each other's inputs or results, and we need mechanisms that allow one thread to wait on results that should be produced by some other thread.

When thinking about the type of coordination needed between threads, we must first consider the issues associated with concurrent execution. Let's start by looking at processes. When processes run concurrently, they each operate within their own address space. The operating system, together with the underlying hardware, makes sure that no access from one address space can be performed on memory or state that belongs to another address space. For instance, consider a physical address x that belongs to the address space of process P1. The mapping between a virtual address in P1 and the physical address x will be valid. Since the operating system is the one that establishes these mappings, it will not have a valid mapping from any address in the address space of P2 to x. So from P2's address space, we simply cannot perform a valid access to that physical location.

Threads, on the other hand, share the same virtual-to-physical address mappings. So two threads T1 and T2, running concurrently as part of the same address space, can both legally access the same physical memory, and they can do so using the same virtual address. This introduces some problems. If T1 and T2 are allowed to access and modify the same data at the same time, we can end up with inconsistencies: one thread may read the data while the other is modifying it, so it just reads garbage, or both threads may try to update the data at the same time and their updates overlap. This type of data race problem, where multiple threads access the same data at the same time, is common in multithreaded environments where threads execute concurrently.

To deal with these concurrency issues, we need mechanisms for threads to execute in an exclusive manner. We call this mutual exclusion: only one thread at a time is allowed to perform a given operation, and the remaining threads that want to perform the same operation must wait their turn. The operation that must be performed under mutual exclusion may include an update to shared state or, in general, access to some data structure that is shared among these threads. For this, Birrell and other threading systems use what are called mutexes.
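To make this concrete, here is a minimal sketch of mutual exclusion using the POSIX threads (pthreads) API rather than Birrell's notation; the names shared_counter and safe_increment are illustrative, not from the lecture. Two threads update the same shared variable, and the mutex ensures that only one of them executes the update at a time.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared state and the mutex that protects it. */
static int shared_counter = 0;
static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the counter many times; the lock/unlock pair
 * ensures only one thread at a time executes the update. */
static void *safe_increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_mutex);
        shared_counter++;                  /* critical section */
        pthread_mutex_unlock(&counter_mutex);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, safe_increment, NULL);
    pthread_create(&t2, NULL, safe_increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Without the mutex, the two updates could overlap and the final
     * value would often be less than 200000. */
    printf("counter = %d\n", shared_counter);
    return 0;
}
```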
In addition, it is also useful for threads to have a mechanism to wait on one another and to specify exactly what they are waiting for. For instance, a thread that handles shipment processing must wait for all the items in an order to be processed before that order can be shipped. It doesn't make sense for that thread to repeatedly check whether the remaining threads are done filling the order. The thread might as well wait until it is explicitly notified that the order is finalized, at which point it can pick up the order and ship the package. Birrell talks about using so-called condition variables to handle this type of inter-thread coordination. We refer to both of these mechanisms, mutexes and condition variables, as synchronization mechanisms. For completeness, Birrell also talks about mechanisms for waking up other threads from a wait state, but in this lesson we will focus mostly on thread creation and these two synchronization mechanisms. We will discuss waking up waiting threads a bit more in the following lessons.
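As a rough illustration of the shipping example, here is a pthreads-based sketch of a condition variable; the order state (items_remaining), the worker function process_item, and the shipper function ship_order are hypothetical names, not from the lecture. The shipping thread waits on the condition variable instead of repeatedly polling, and the worker that finishes the last item signals it.

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical order state: how many items still need processing. */
static int items_remaining = 3;
static pthread_mutex_t order_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  order_ready = PTHREAD_COND_INITIALIZER;

/* Worker: processes one item, then signals if the order is complete. */
static void *process_item(void *arg) {
    (void)arg;
    pthread_mutex_lock(&order_mutex);
    items_remaining--;
    if (items_remaining == 0)
        pthread_cond_signal(&order_ready);   /* wake the shipping thread */
    pthread_mutex_unlock(&order_mutex);
    return NULL;
}

/* Shipper: waits, without busy-checking, until all items are processed. */
static void *ship_order(void *arg) {
    (void)arg;
    pthread_mutex_lock(&order_mutex);
    while (items_remaining > 0)              /* re-check after every wakeup */
        pthread_cond_wait(&order_ready, &order_mutex);
    pthread_mutex_unlock(&order_mutex);
    printf("order complete, shipping package\n");
    return NULL;
}

int main(void) {
    pthread_t shipper, workers[3];
    pthread_create(&shipper, NULL, ship_order, NULL);
    for (int i = 0; i < 3; i++)
        pthread_create(&workers[i], NULL, process_item, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(workers[i], NULL);
    pthread_join(shipper, NULL);
    return 0;
}
```

Note that pthread_cond_wait releases the mutex while the thread sleeps and re-acquires it before returning, which is why the wait is wrapped in a while loop that re-checks the condition.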
