Assignment: Comparing Threads and Processes & Handling Synchronization of Shared Variables Using Semaphores – Operating Systems Report | Thread vs Process & Synchronization in Shared Variables using Semaphores

Eng. Yasser Al-Bahri

Assignment Answer
Question 1: What resources are used when a thread is created? How do they differ from those used when a process is created?
- Answer:

When a thread is created, it utilizes only a minimal set of resources compared to a full process.
A thread, often referred to as a "lightweight process", shares many of its resources with the parent process and other threads within the same process.
Resources used when a thread is created:
  • A unique Thread ID (TID).
  • A separate program counter, to track the execution point.
  • A register set for holding intermediate data.
  • A stack for local function calls and variables.
  • Thread-local storage.
  • Shared access to the parent’s:
    • Code section,
    • Data section,
    • Heap memory,
    • Open files.
Resources used when a process is created:
  • A unique Process ID (PID).
  • A new memory address space.
  • A separate code and data section.
  • Its own heap and stack.
  • A Process Control Block (PCB).
  • Independent file descriptors.
  • Additional OS-level scheduling and resource allocation.
Main Differences:

Feature         | Thread                                  | Process
Address Space   | Shared with parent and sibling threads  | Independent memory space
Overhead        | Low                                     | High
Communication   | Easy (via shared memory)                | Complex (via IPC mechanisms)
Creation Speed  | Fast                                    | Slower
Dependency      | Threads depend on the process lifecycle | Processes are independent
Threads provide a more efficient mechanism for concurrency due to lower overhead and shared memory. However, this sharing also introduces complexity in synchronization and error isolation.
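
To make the difference concrete, the following is a small illustrative sketch (not part of the assignment; the variable counter and the printed values are only for demonstration). A thread created with pthread_create() shares the process's data section, so its increment is visible to main(), while a child created with fork() gets a separate copy-on-write address space, so its increment is not:
C:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <pthread.h>

int counter = 0;   /* lives in the data section of the process */

void* thread_body(void* arg) {
    counter++;     /* same address space: the change is shared */
    return NULL;
}

int main() {
    /* Thread creation: only a new TID, stack, registers and program counter. */
    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);
    pthread_join(tid, NULL);
    printf("After thread:  counter = %d\n", counter);   /* prints 1 */

    /* Process creation: a new PID and an independent address space. */
    pid_t pid = fork();
    if (pid == 0) {
        counter++;   /* modifies only the child's private copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("After process: counter = %d\n", counter);   /* still prints 1 */
    return 0;
}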


Question 2: Shared Variable and Synchronization
Problem Recap:

  • Shared variable account is initialized to 100.
  • Thread A performs: account = account + 200
  • Thread B performs: account = account - 50


2.1) What are the possible values of the variable without synchronization?
Without synchronization, a race condition may occur because both threads perform a non-atomic read-modify-write on the shared variable; the order in which their reads and writes interleave is non-deterministic, so the final value depends on scheduling.
Scenario Explanation:
Let’s simulate the interleaving of instructions from both threads:
Let:
  • A1 = Thread A reads account (100)
  • A2 = A computes 100 + 200 = 300
  • A3 = A writes 300
  • B1 = Thread B reads account (100)
  • B2 = B computes 100 - 50 = 50
  • B3 = B writes 50
Possible Interleavings and Outcomes:
1. Thread A executes fully, then B:
   • A: account = 100 + 200 = 300
   • B: account = 300 - 50 = 250
   • ✅ Final value: 250
2. Thread B executes fully, then A:
   • B: account = 100 - 50 = 50
   • A: account = 50 + 200 = 250
   • ✅ Final value: 250
3. Interleaved Execution (Race Condition):
   • A1: read 100
   • B1: read 100
   • A2: compute 300
   • B2: compute 50
   • A3: write 300
   • B3: write 50 (overwrites A's result)
   • ❌ Final value: 50
4. Alternate Interleaving:
   • B1: read 100
   • A1: read 100
   • B2: compute 50
   • A2: compute 300
   • B3: write 50
   • A3: write 300 (overwrites B's result)
   • ❌ Final value: 300
- Possible Final Values without Synchronization:

  • 250 (correct result when the two updates are serialized)
  • 50 (lost update: A's +200 is overwritten)
  • 300 (lost update: B's -50 is overwritten)
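
The lost-update interleavings above can be reproduced with a small unsynchronized sketch (illustrative only, not part of the assignment code; the temporary variable tmp and the usleep() call are assumptions used to split the read-modify-write into separate steps and widen the race window):
C:
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

int account = 100;   /* shared variable, no protection */

void* threadA(void* arg) {
    int tmp = account;   /* A1: read 100 */
    usleep(1000);        /* widen the race window */
    tmp = tmp + 200;     /* A2: compute 300 */
    account = tmp;       /* A3: write 300 */
    return NULL;
}

void* threadB(void* arg) {
    int tmp = account;   /* B1: read 100 */
    usleep(1000);
    tmp = tmp - 50;      /* B2: compute 50 */
    account = tmp;       /* B3: write 50 */
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, threadA, NULL);
    pthread_create(&t2, NULL, threadB, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Typically prints 50 or 300 (lost update); 250 only if the threads happen to serialize. */
    printf("Final account value = %d\n", account);
    return 0;
}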


2.2) Use semaphore to address the critical section problem
To solve this critical section problem, we can use a binary semaphore (mutex) to ensure mutual exclusion. A semaphore guarantees that only one thread can access the critical section at any moment.
- C Code using POSIX Semaphores:
C:
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

int account = 100;   /* shared variable */
sem_t mutex;         /* binary semaphore protecting the critical section */

void* threadA(void* arg) {
    sem_wait(&mutex);          /* enter critical section */
    account = account + 200;   /* deposit */
    sem_post(&mutex);          /* leave critical section */
    return NULL;
}

void* threadB(void* arg) {
    sem_wait(&mutex);          /* enter critical section */
    account = account - 50;    /* withdrawal */
    sem_post(&mutex);          /* leave critical section */
    return NULL;
}

int main() {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* 0 = shared between threads of this process, initial value 1 */

    pthread_create(&t1, NULL, threadA, NULL);
    pthread_create(&t2, NULL, threadB, NULL);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("Final account value = %d\n", account);
    sem_destroy(&mutex);
    return 0;
}
Explanation:
  • sem_init(&mutex, 0, 1): initializes a binary semaphore; the second argument (0) means it is shared between threads of the same process, and the initial value 1 means the critical section starts out free.
  • sem_wait(&mutex): acquires the lock (blocks if it is already held).
  • sem_post(&mutex): releases the lock, allowing a waiting thread to proceed.
  • This ensures that only one thread at a time can perform the read-modify-write on the account variable, preventing the lost-update problem shown in 2.1.
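
With the semaphore in place, the two read-modify-write sequences are serialized, so the program always prints 250 regardless of which thread enters the critical section first (100 + 200 - 50 or 100 - 50 + 200). On Linux the program is typically built with gcc and the -pthread flag.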


 