Announcements. Threads. Cooperating Processes. Case for Parallelism

Cooperating Processes

Processes can be independent or work cooperatively. Cooperating processes can be used:
- to gain speedup by overlapping activities or working in parallel
- to better structure an application as a set of cooperating processes
- to share information between jobs

Sometimes processes are structured as a pipeline: each produces work for the next stage, which consumes it.

    read_data();
    for (all data)
        compute();
        write_data();
    endfor

    read_data();
    for (all data)
        compute();
        CreateProcess(write_data());
    endfor

Consider the following code fragment:

    for (k = 0; k < n; k++)
        a[k] = b[k] * c[k] + d[k] * e[k];

The loop can be split between two processes:

    CreateProcess(fn, 0, n/2);
    CreateProcess(fn, n/2, n);

    fn(l, m) {
        for (k = l; k < m; k++)
            a[k] = b[k] * c[k] + d[k] * e[k];
    }

Consider a Web server: create a number of processes, and for each process do:
- get network message from client
- get URL data from disk
- compose response
- send response

Processes and Threads

A full process includes numerous things:
- an address space (defining all the code and data pages)
- OS resources and accounting information
- a thread of control, which defines where the process is currently executing (that is, the PC and registers)

Creating a new process is costly because of all the structures (e.g., page tables) that must be allocated. Communicating between processes is costly because most communication goes through the OS.

Lightweight Processes

Idea: why don't we separate the idea of a process (address space, accounting, etc.) from that of the minimal thread of control (PC, SP, registers)?
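The loop-splitting fragment above can be sketched with real threads in Python. This is a sketch, not the slide's CreateProcess API: threading.Thread stands in for it, and the array sizes are made up for illustration. Note that CPython's global interpreter lock means this shows the structure of the decomposition rather than a true parallel speedup.

```python
import threading

n = 8
b = [1] * n; c = [2] * n; d = [3] * n; e = [4] * n
a = [0] * n

def fn(l, m):
    # Compute the slice [l, m) of a[k] = b[k]*c[k] + d[k]*e[k]
    for k in range(l, m):
        a[k] = b[k] * c[k] + d[k] * e[k]

# Split the loop between two threads, as the slide's two CreateProcess calls do
t1 = threading.Thread(target=fn, args=(0, n // 2))
t2 = threading.Thread(target=fn, args=(n // 2, n))
t1.start(); t2.start()
t1.join(); t2.join()

print(a)  # every element is 1*2 + 3*4 = 14
```

The join() calls matter: without them, the main thread could read `a` before the workers finish, which is exactly the kind of hazard discussed at the end of this lecture.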
Like our heavyweight processes:
- Each has its own PC, registers, and stack pointer

Unlike our heavyweight processes:
- They all share the same code and data (address space)
- They all share the same privileges
- They share almost everything in the process

Threads and Processes; Multithreaded Processes

Most operating systems therefore support two entities:
- the process, which defines the address space and general process attributes
- the thread, which defines a sequential execution stream within a process

A thread is bound to a single process; for each process, however, there may be many threads. Threads are the unit of scheduling; processes are containers in which threads execute.

Threads vs. Processes

A thread:
- has no data segment or heap
- cannot live on its own; it must live within a process
- there can be more than one thread in a process; the first thread calls main and has the process's stack
- inexpensive creation, inexpensive context switching
- if a thread dies, its stack is reclaimed

A process:
- has code, data, heap, and other segments
- must contain at least one thread
- threads within a process share code/data/heap and share I/O, but each has its own stack and registers
- expensive creation, expensive context switching
- if a process dies, its resources are reclaimed and all its threads die

[Diagram: address-space/thread combinations. One address space with one thread: MS/DOS. Many address spaces, one thread each: Unix. One address space, many threads: small-device OSs. Many address spaces, many threads each: Windows, Linux, Mach.]

Cooperative Threads

Each thread runs until it decides to give up the CPU:

    {
        tid t1 = CreateThread(fn, arg);
        ...
        Yield(t1);
    }

    fn(int arg) {
        ...
        Yield(any);
    }

Cooperative threads use non-pre-emptive scheduling.

Advantages:
- Simple
- Suited to small, real-time OSs

Disadvantages:
- For badly written code, the scheduler gets invoked only when Yield is called
- A thread could yield the processor when it blocks for I/O

Non-Cooperative Threads
- No explicit control passing among threads
- Rely on a scheduler to decide which thread to run
- A thread can be pre-empted at any point
- Often called pre-emptive threads
- Most modern thread packages use this approach.
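Cooperative scheduling can be sketched in Python with generators, where `yield` stands in for the slide's Yield(): each thread runs until it explicitly gives up the CPU, and the scheduler is invoked only at those points. The `scheduler` and `worker` names are illustrative, not a real thread package.

```python
from collections import deque

def scheduler(threads):
    """Round-robin cooperative scheduler: run each thread until it yields."""
    ready = deque(threads)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run the thread until it yields
            ready.append(t)         # it gave up the CPU: back on the ready queue
        except StopIteration:
            pass                    # thread finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"        # explicit Yield(): give up the CPU

print(scheduler([worker("A", 2), worker("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

Note the disadvantage from the slide is visible here: if a worker never reaches a `yield` (say, it loops or blocks on I/O), the scheduler is never invoked and every other thread starves.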
Kernel Threads

Kernel threads are also called lightweight processes (LWPs). They still suffer from performance problems: operations on kernel threads are slow because
- a thread operation still requires a system call
- kernel threads may be overly general, to support the needs of different users, languages, etc.
- the kernel doesn't trust the user, so there must be lots of checking on kernel calls

User-Level Threads

For speed, implement threads at the user level:
- A thread is managed by the run-time system: code that is linked with your program
- Each thread is represented simply by a PC, registers, a stack, and a small control block
- All thread operations happen at user level: creating a new thread, switching between threads, and synchronizing between threads

With user-level threads:
- the thread scheduler is part of a library, outside the kernel
- thread context switching and scheduling are done by the library
- scheduling can be either cooperative or pre-emptive:
  - cooperative threads are implemented with CreateThread(), DestroyThread(), Yield(), Suspend(), etc.
  - pre-emptive threads are implemented with a timer (signal), where the timer handler decides which thread to run next

Example User Thread Interface
- t = thread_fork(initial context): create a new thread of control
- thread_stop(): stop the calling thread (sometimes called thread_block)
- thread_start(t): start the named thread
- thread_yield(): voluntarily give up the processor
- thread_exit(): terminate the calling thread (sometimes called thread_destroy)

Key Data Structures

Your program:

    for (i = 1; i <= 10; i++)
        thread_fork(i);

The thread code:

    proc thread_fork() ...
    proc thread_block() ...
    proc thread_exit() ...

Your data, in your process address space (shared by all your threads): a queue of thread control blocks and per-thread stacks.
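The interface and data structures above can be sketched as a tiny user-level run-time: thread_fork() enqueues a new thread of control on a ready queue, and a library scheduler (no system calls involved) switches between threads when they yield. Generators stand in for the saved PC/registers/stack of a thread control block; the names mirror the slide's interface, not any real library.

```python
from collections import deque

ready = deque()   # queue of thread control blocks (here: a generator = saved context)
log = []

def thread_fork(fn, arg):
    ready.append(fn(arg))       # create a new thread of control, make it runnable

def run():
    # The library-level scheduler: switching happens entirely in user space
    while ready:
        t = ready.popleft()
        try:
            next(t)             # resume the thread until it calls thread_yield()
            ready.append(t)     # it yielded: back on the ready queue
        except StopIteration:
            pass                # thread_exit(): thread is done, reclaim it

def thread_body(i):
    log.append(("start", i))
    yield                       # thread_yield(): voluntarily give up the processor
    log.append(("end", i))

for i in range(1, 4):           # like the slide's loop of thread_fork(i) calls
    thread_fork(thread_body, i)
run()
print(log)
# [('start', 1), ('start', 2), ('start', 3), ('end', 1), ('end', 2), ('end', 3)]
```

The interleaved output shows the scheduler rotating through the ready queue: every thread runs its first half, yields, and later resumes for its second half.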
Multiplexing User-Level Threads

User-Level vs. Kernel Threads

The thread package sees virtual processors:
- it schedules user-level threads on these virtual processors
- each virtual processor is implemented by a kernel thread

The big picture:
- Create as many kernel threads as there are processors
- Create as many user-level threads as the application needs
- Multiplex the user-level threads on top of the kernel-level threads

Why not just create as many kernel-level threads as the app needs? Context-switching cost and kernel resources.

User-level threads:
- Managed by the application
- Kernel not aware of the threads
- Context switching is cheap
- Create as many as needed
- Must be used with care

Kernel-level threads:
- Managed by the kernel
- Consume kernel resources
- Context switching is expensive
- Number limited by kernel resources
- Simpler to use

Key issue: the kernel threads provide virtual processors to the user-level threads, but if all k of them block, then all user-level threads will block, even if the program logic allows them to proceed.

Many-to-One Model
- Thread creation, scheduling, and synchronization done in user space
- Mainly used in language systems and portable libraries
- Fast: no system calls required
- Few system dependencies; portable
- No parallel execution of threads: can't exploit multiple CPUs
- All threads block when one uses synchronous I/O

One-to-One Model
- Thread creation, scheduling, and synchronization require system calls
- Used in LinuxThreads and Windows
- More concurrency
- Better multiprocessor performance
- Each user thread requires creation of a kernel thread
- Each thread requires kernel resources, which limits the total number of threads
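The multiplexing picture above can be sketched with a small, fixed pool of "kernel" threads serving an arbitrary number of user-level tasks from a shared queue. This is a minimal sketch: the pool size and the squaring work are made up for illustration, and Python's real threads play the role of kernel threads.

```python
import threading, queue

NUM_KERNEL_THREADS = 2          # "as many kernel threads as there are processors"
tasks = queue.Queue()
results = []
lock = threading.Lock()

def kernel_thread():
    # Each pool thread acts as a virtual processor: it repeatedly picks up
    # whichever runnable user-level task is next in the queue.
    while True:
        item = tasks.get()
        if item is None:        # shutdown sentinel
            return
        with lock:              # protect the shared results list
            results.append(item * item)

workers = [threading.Thread(target=kernel_thread) for _ in range(NUM_KERNEL_THREADS)]
for w in workers:
    w.start()

for i in range(10):             # "as many user-level tasks as the app needs"
    tasks.put(i)
for _ in workers:
    tasks.put(None)             # one sentinel per worker
for w in workers:
    w.join()

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Ten tasks are served by only two threads; creating a kernel thread per task would cost ten thread creations and ten sets of kernel resources for the same result.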
Many-to-Many Model

(Let U be the number of user-level threads and L the number of LWPs, i.e., kernel threads.)
- If U < L: no benefits of multithreading
- If U > L: some threads may have to wait for an LWP to run
- Active thread: executing on an LWP
- Runnable thread: waiting for an LWP
- A thread gives up control of an LWP in the following cases: synchronization, lower priority, yielding, time slicing

Two-Level Model
- Combination of the one-to-one and strict many-to-many models
- Supports both bound and unbound threads
  - Bound: permanently mapped to a single, dedicated LWP
  - Unbound: may move among LWPs in a set
- Thread creation, scheduling, and synchronization done in user space
- Flexible approach, best of both worlds
- Used in the Solaris implementation of Pthreads and several other Unix implementations (IRIX, HP-UX)

Multithreading Issues
- Semantics of the fork() and exec() system calls
- Thread cancellation: asynchronous vs. deferred cancellation
- Signal handling: which thread should the signal be delivered to?
- Thread pools: avoid creating a new, unlimited number of threads
- Thread-specific data
- Scheduler activations: maintaining the correct number of scheduler activations

Thread Hazards

    int a = 1, b = 2, w = 2;

    main() {
        CreateThread(fn, 4);
        CreateThread(fn, 4);
        while (w) ;
    }

    fn() {
        int v = a + b;
        w--;
    }

Concurrency Problems

A statement like w-- in C (or C++) is implemented by several machine instructions: load w into a register, decrement the register, and store the register back to w. Now imagine those instruction sequences from the two threads interleaving: what is the value of w?
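The hazard can be made concrete by simulating, by hand, one bad interleaving of the three machine steps of w-- performed by two threads. This is a deterministic simulation of the slide's scenario, not actual concurrent execution; reg1 and reg2 stand in for each thread's private register.

```python
# w-- compiles to roughly: load w -> reg; reg = reg - 1; store reg -> w.
# If two threads interleave those steps, one decrement can be lost.

w = 2

# Both threads load w before either one stores its result back:
reg1 = w          # Thread 1: load       (reg1 = 2)
reg2 = w          # Thread 2: load       (reg2 = 2)
reg1 = reg1 - 1   # Thread 1: decrement  (reg1 = 1)
reg2 = reg2 - 1   # Thread 2: decrement  (reg2 = 1)
w = reg1          # Thread 1: store      (w = 1)
w = reg2          # Thread 2: store      (w = 1, Thread 1's decrement is lost)

print(w)  # 1, not the expected 0
```

With w stuck at 1, the parent's `while (w) ;` loop in the Thread Hazards fragment would spin forever, even though both threads executed w-- exactly once.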