



CSC 246 Exam 2
Jeremiah




Terms in this set (76)

thread: Sequential flow of action executed by a process; a thread executes only one task at a time, while a multithreaded process can execute a task per thread.

Multithread: Multiple tasks can be implemented by separate threads; more efficient than creating multiple processes.

Benefits: Responsiveness (execution continues even if part of the process is blocked), Resource Sharing (threads share the resources of their process), Economy (cheaper than process creation), Scalability (a process can take advantage of multiprocessor architectures).

Parallelism: The system can perform more than one task simultaneously.

Concurrency: Executing multiple tasks in an overlapping manner; the tasks make progress together but need not run at the same instant.

What threads share and don't: Threads in a multithreaded process share code, data, and files, but each thread has its own stack, registers, and state.

Amdahl's Law: speedup <= 1 / (S + (1 - S)/N), where S is the serial portion of the program and N is the number of processing cores. Identifies the potential performance gain from adding cores to an application.
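A quick worked example with illustrative numbers (not from the card set): a program that is 25% serial (S = 0.25) on N = 4 cores gives speedup <= 1 / (0.25 + 0.75/4) = 1 / 0.4375 ≈ 2.29, and no number of cores can ever push the speedup past 1/S = 4.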

User threads: Threads implemented by user-level software (a thread library).

Kernel threads: Threads supported directly by the kernel.

Many-to-One: Many user-level threads mapped to a single kernel thread; if one thread blocks, the entire process blocks.

One-to-One: Each user-level thread maps to a kernel thread (more concurrency than many-to-one; creating a user-level thread creates a corresponding kernel thread).

Many-to-Many: Allows many user-level threads to be mapped to many kernel threads; allows the operating system to create a sufficient number of kernel threads.

Two-level: Similar to many-to-many, except it also allows a user thread to be bound to a kernel thread.
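Linux's pthreads implementation uses the one-to-one model from the cards above: each pthread_create() produces a kernel thread. A minimal sketch of creating and joining a thread (the worker function and its message are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    /* Runs concurrently with main, in the same address space. */
    static void *worker(void *arg) {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        /* Under the one-to-one model this also creates a kernel thread. */
        if (pthread_create(&tid, NULL, worker, (void *)"A") != 0)
            return 1;
        pthread_join(tid, NULL);   /* wait for the worker to finish */
        return 0;
    }

Compile with -pthread.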

Thread pools: Create a number of threads in a pool where they await work (handing a request to a pooled thread is slightly faster than creating a new thread to service it).

Threading issues: Signal handling, fork() and exec() calls, and cancellation of a target thread.




Signals: Notify a process that a particular event has occurred.

Signal handler: User-defined or default; used to process signals, which are generated by a particular event and delivered to a process; the default handler is run by the kernel when handling a signal (it can be overridden by a user-defined signal handler).

Target thread: The thread that is to be canceled.

Asynchronous cancellation: Terminates the target thread immediately.

Deferred cancellation: Allows the target thread to periodically check whether it should be cancelled.

Cancellation point: Cancellation occurs when the target thread reaches this point; the cleanup handler is then invoked.

Thread-local storage: Allows each thread to have its own copy of data.
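A minimal sketch of thread-local storage using C11's _Thread_local (the counter and worker names are illustrative): each thread increments its own copy, so both print 1 instead of racing on one shared variable.

    #include <pthread.h>
    #include <stdio.h>

    static _Thread_local int counter = 0;   /* one copy per thread */

    static void *worker(void *arg) {
        (void)arg;
        counter++;                          /* touches only this thread's copy */
        printf("my counter: %d\n", counter);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }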

Context of a thread: Register set, stacks, and private storage area.

A thread contains: A thread ID, register set, separate user and kernel stacks, and a private data storage area.

Cooperating process: A process that can affect or be affected by other processes executing in the system. Concurrent access to shared data may result in data inconsistency.

Producer-Consumer problem: The producer places items into a buffer and increments a shared count; the consumer removes items and decrements it. Concurrent execution of these statements can leave the data inconsistent (interleaving count++ and count-- when count is 5 can end at 4 or 6 instead of 5).
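A minimal sketch of that inconsistency in C with POSIX threads (the iteration count is arbitrary, just large enough that lost updates are likely): count++ and count-- each compile to separate load, add, and store steps, so the interleaving decides the final value.

    #include <pthread.h>
    #include <stdio.h>

    static int count = 0;                 /* shared and unprotected */

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            count++;                      /* load, add 1, store */
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            count--;                      /* load, subtract 1, store */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("count = %d (expected 0)\n", count);   /* usually not 0 */
        return 0;
    }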

Race condition: A situation where two threads concurrently access and modify shared data and the result depends on the order in which the accesses happen.

Process synchronization: Coordination of access to data by two or more threads or processes.

Critical Section Problem: Consider a system of n processes (P0, P1, ..., Pn-1). Each process has a segment of code called a critical section: a section of code that changes shared data and therefore must be executed by only one thread or process at a time to avoid a race condition. When one process is executing in its critical section, no other process is allowed to execute in its critical section at the same time.

Entry section: The section of code requesting permission to enter the critical section.

Exit section: The section of code within a process that cleanly exits the critical section.

Remainder section: Whatever code remains after the critical and exit sections.

Solution to the Critical Section Problem: Must satisfy three requirements: (1) mutual exclusion (if one process is executing in its critical section, no other process can execute in its critical section); (2) progress (if no process is executing in its critical section and some processes wish to enter theirs, then only processes not executing in their remainder sections may participate in deciding which will enter its critical section next); (3) bounded waiting (there exists a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request and before that request is granted).

Kernel is preemptive: Allows a process to be preempted while it is running in kernel mode.

Kernel is non-preemptive: A process runs until it exits kernel mode, blocks, or voluntarily yields the CPU.




Peterson's solution: A two-process solution to the critical section problem. Assumes that load and store instructions are atomic (uninterruptible). The two processes share two variables: int turn and boolean flag[2]. turn indicates whose turn it is to enter the critical section, and the flag array indicates whether a process is ready to enter; flag[i] = true implies process i is ready.
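A sketch of Peterson's algorithm for process i (with j = 1 - i), in its textbook form; real compilers and CPUs reorder plain loads and stores, so this is illustrative rather than production-safe, and the function names are mine:

    int turn;                 /* whose turn it is to enter */
    int flag[2] = {0, 0};     /* flag[i] = 1: process i is ready */

    void enter_region(int i) {
        int j = 1 - i;
        flag[i] = 1;          /* I am ready */
        turn = j;             /* politely give the other the turn */
        while (flag[j] && turn == j)
            ;                 /* busy-wait while the other is ready and has the turn */
    }

    void leave_region(int i) {
        flag[i] = 0;          /* no longer ready; the other may enter */
    }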

Locking: Protecting critical regions via locks.

Lock: Ensures that only one thread or process at a time can access the critical section (the part of the code where shared resources are accessed).

Mutex lock: Mutual exclusion lock; a thread first acquire()s the lock, then release()s it.

Spinlock: Busy-waits, checking the lock in a loop until it becomes available.
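A minimal sketch of the acquire()/release() pattern using a POSIX mutex (the shared counter is illustrative):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared = 0;

    void increment(void) {
        pthread_mutex_lock(&lock);     /* acquire(): blocks while another thread holds it */
        shared++;                      /* critical section */
        pthread_mutex_unlock(&lock);   /* release() */
    }

With this lock wrapped around the producer-consumer updates shown earlier, the final count is always 0.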
Semaphore: A tool that provides a more sophisticated way for processes to synchronize their activities; an integer variable that is accessed only through two operations, wait() and signal().

Counting semaphore: The integer value can range over an unrestricted domain.

Binary semaphore: The integer value can range only between 0 and 1 (behaves the same as a mutex lock).
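A minimal sketch with POSIX semaphores, using a counting semaphore whose value is the number of free buffer slots (the initial value 8 and the function names are illustrative):

    #include <semaphore.h>

    static sem_t slots;                /* counts free buffer slots */

    void init_slots(void) {
        sem_init(&slots, 0, 8);        /* 0: shared among threads, not processes */
    }

    void producer_fill(void) {
        sem_wait(&slots);              /* wait(): decrement; block if value is 0 */
        /* ... fill one slot ... */
    }

    void consumer_empty(void) {
        /* ... empty one slot ... */
        sem_post(&slots);              /* signal(): increment; wake one waiter */
    }

Initialized to 1 instead of 8, the same semaphore behaves like the binary semaphore/mutex described above.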
Deadlock: Two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.

Starvation: Indefinite blocking; a process may never be removed from the semaphore queue in which it is suspended.

Priority inversion: A scheduling problem in which a lower-priority process holds a lock needed by a higher-priority process (solved by the priority-inheritance protocol).

Busy waiting: While one process is in its critical section, any other process that tries to enter its critical section must loop continuously.

Semaphore with no busy waiting: Each semaphore has a waiting queue, and each entry in the queue has an integer value and a pointer to the next record. Two operations: block (place the invoking process on the waiting queue) and wakeup (remove one process from the waiting queue and place it in the ready queue).
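A runnable approximation of that no-busy-waiting semaphore, with a pthreads mutex and condition variable standing in for the kernel's block() and wakeup() (names are illustrative, and unlike the card's version the value here never goes negative):

    #include <pthread.h>

    typedef struct {
        int value;                     /* number of available permits */
        pthread_mutex_t m;
        pthread_cond_t  cv;            /* plays the role of the waiting queue */
    } ksem;

    void ksem_wait(ksem *s) {
        pthread_mutex_lock(&s->m);
        while (s->value <= 0)
            pthread_cond_wait(&s->cv, &s->m);   /* block(): sleep, no spinning */
        s->value--;
        pthread_mutex_unlock(&s->m);
    }

    void ksem_signal(ksem *s) {
        pthread_mutex_lock(&s->m);
        s->value++;
        pthread_cond_signal(&s->cv);   /* wakeup(): move one waiter to ready */
        pthread_mutex_unlock(&s->m);
    }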

Monitors: A high-level abstraction that provides a convenient and effective mechanism for process synchronization. A monitor contains internal variables accessible only by code within its procedures, and only one process may be active within the monitor at a time.

Condition variables: Declared as condition x, y. Two operations are allowed on a condition variable: x.wait(), which suspends the invoking process until another process calls x.signal(), and x.signal(), which resumes one of the processes (if any) that invoked x.wait().
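Monitors are a language feature (Java's synchronized methods, for example); in C the closest idiom is a mutex plus a condition variable. A minimal sketch of the x.wait()/x.signal() pairing (the ready flag and function names are illustrative):

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    void wait_for_work(void) {
        pthread_mutex_lock(&m);            /* enter the "monitor" */
        while (!ready)                     /* re-check: wakeups can be spurious */
            pthread_cond_wait(&cv, &m);    /* x.wait(): release m and suspend */
        ready = 0;
        pthread_mutex_unlock(&m);
    }

    void post_work(void) {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&cv);          /* x.signal(): resume one waiter */
        pthread_mutex_unlock(&m);
    }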
Basic concepts of CPU scheduling: Maximum CPU utilization is obtained with multiprogramming. Process execution consists of a cycle of CPU execution and I/O wait (the CPU-I/O burst cycle), with a CPU burst followed by an I/O burst.

Short-term scheduler: Selects one process from among those in the ready queue and allocates the CPU to it. Scheduling that occurs only when a process switches to the waiting state or terminates is non-preemptive.

Dispatch latency: The time it takes the dispatcher to stop one process and start another running.

Preemptive vs. non-preemptive scheduling: Scheduling is preemptive if a process can be involuntarily moved to the ready state; it is non-preemptive if a process keeps the CPU until it terminates or switches to the waiting state.

Dispatcher: Gives control of the CPU to the process selected by the short-term scheduler.

Scheduling criteria: Maximize CPU utilization (keep the CPU busy) and throughput (number of processes that complete their execution per time unit); minimize turnaround time (time to execute a process), waiting time (time spent in the ready queue), and response time (time from submission of a request until the first response is produced).
First come, first served: The thread that requests a core first gets it, and other threads get cores in the order of their requests (non-preemptive).

Convoy effect: A short process stuck behind a long process; threads are left waiting for one thread to get off the core.

Determining the length of the next CPU burst: Predicted by exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the actual length of the nth CPU burst, tau(n+1) is the predicted value of the next burst, and alpha is a weighting factor with 0 <= alpha <= 1, commonly set to 1/2.
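A quick worked example with illustrative numbers: with alpha = 1/2 and an initial guess tau(0) = 10 ms, an observed burst t(0) = 6 ms gives tau(1) = 0.5*6 + 0.5*10 = 8 ms; a next burst t(1) = 4 ms gives tau(2) = 0.5*4 + 0.5*8 = 6 ms. The prediction tracks recent behavior while damping one-off spikes.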

Multilevel queue: The ready queue is partitioned into several queues (for example, foreground and background), and each queue has its own scheduling algorithm. A process can be upgraded or demoted (moved up and down between queues).

Aging: As time progresses, increase the priority of the process.

Process contention scope: On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP (lightweight process).

System contention scope: A kernel thread is scheduled onto an available CPU.

Asymmetric multiprocessing: Only one processor accesses the system data structures, alleviating the need for data sharing.

Symmetric multiprocessing: Each processor is self-scheduling.

Processor affinity: A process has an affinity for the processor on which it is currently running. Soft affinity: the OS will try to keep the process running on the same processor but cannot guarantee it. Hard affinity: the OS allows the process's threads to run on the same processor at all times.

Load balancing: Attempts to keep the workload evenly distributed across processors.

Pull migration: An idle processor pulls a waiting task from a busy processor.

Push migration: A periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs.

Soft real-time systems: No guarantee as to when a critical real-time process will be scheduled.

Hard real-time systems: A task must be serviced by its deadline.

Rate-monotonic scheduling: Priority is assigned based on the inverse of a task's period (the period runs from start to deadline), so shorter periods get higher priority.

Little's formula: n = lambda * W, where n is the average queue length, lambda is the average arrival rate into the queue, and W is the average waiting time in the queue.
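A quick worked example with illustrative numbers: if processes arrive at lambda = 7 per second and each spends W = 2 seconds in the queue, the average queue length is n = 7 * 2 = 14; measuring any two of the three quantities gives the third.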

Deadlock characterization: Deadlock can arise only if four conditions hold simultaneously: mutual exclusion; hold and wait (a process holding at least one resource is waiting to acquire additional resources held by other processes); no preemption (a resource can be released only voluntarily by the process holding it, after that process completes its task); and circular wait (a set of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for P2, and so on, with the last process waiting for a resource held by P0).

Resource-allocation graph: A set of vertices V and edges E, where V consists of P = {P1, ..., Pn}, the set of all processes in the system, and R = {R1, ..., Rn}, the set of all resource types. A request edge is a directed edge P -> R; an assignment edge is a directed edge R -> P. A cycle in the graph indicates the possibility of deadlock.

Handling deadlocks: Deadlock prevention or avoidance.
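A minimal sketch of hold-and-wait plus circular wait, with two POSIX mutexes acquired in opposite orders (lock and thread names are illustrative; run repeatedly, each thread eventually holds one lock while waiting forever for the other):

    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);    /* holds A ... */
        pthread_mutex_lock(&B);    /* ... and waits for B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);    /* holds B ... */
        pthread_mutex_lock(&A);    /* ... and waits for A: circular wait */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);     /* may never return once deadlocked */
        pthread_join(y, NULL);
        return 0;
    }

Imposing a single global lock order (always A before B) breaks the circular-wait condition; that is the prevention approach named in the last card.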