To understand threads more deeply, it’s important to have a clear picture of a traditional process and its address space. Typically, at the lowest addresses, a process has its instructions, followed by literals, that is, constant values in the code, then statically allocated memory, essentially the global variables, and then the heap, which grows upward. At the top of the address space, working its way down, we have the current stack of the process. Remember that the stack holds all the information local to a procedure: parameters, local variables, and the address of the instruction to return to when the current procedure finishes.

In a multithreaded environment, each thread has its own stack, but it shares everything else with the original main thread of the process. So, from a certain perspective, a thread acts like an entire process unto itself, and hence threaded programming isn’t much different from traditional programming. The difference comes from the fact that part of the memory is shared. On the one hand, this makes it very easy for threads to communicate. On the other, it means that extra care is needed to make sure the threads coordinate as they use this memory.

I should point out as well that one can do parallel programming without having multiple threads. Each parallelizable task could have its own process with its own address space, and it could communicate with the other processes through message passing. In fact, if we wanted to take advantage of a distributed system, where each processor has its own separate physical memory, we would be forced to use this approach. For the multithreaded programming explored in this lesson, however, it is more appropriate to think of a shared-memory system, where we have multiple cores sharing a common piece of memory.