Reputation: 1421
Found a question in Pradeep K Sinha's book
From my understanding it is safe to assume that as many threads as needed are available. But how do we compute the time?
Upvotes: 1
Views: 1050
Reputation: 13267
Single-threaded:
We want to figure out how many requests per second the system can support. This is represented as n below.
1 second
= 1000 milliseconds
= 0.7n(20) + 0.3n(100)
Since 70% of the requests hit the cache, we represent the time spent handling requests that hit the cache with 0.7n(20). We represent the time spent handling requests that miss the cache with 0.3n(100). Since the thread goes to sleep on a cache miss while it contacts the file server, we don't need to worry about interleaving the handling of the next request with the current one.
Solving for n:
1000
= 0.7n(20) + 0.3n(100)
= 0.7n(20) + 1.5n(20)
= 2.2n(20)
= 44n
=> n = 1000/44 = 22.73
Therefore, a single thread can handle 22.73 requests per second.
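To double-check the arithmetic, here is a minimal Python sketch of the single-threaded calculation; the parameter names are mine, not from the book:

```python
# Single-threaded: the thread is busy (or sleeping) for the full duration of
# each request, so the average service time is a weighted sum of hit and miss times.
HIT_RATIO = 0.7   # 70% of requests are served from the cache
HIT_MS = 20       # time to handle a cache hit (milliseconds)
MISS_MS = 100     # time to handle a cache miss, including the file-server wait

avg_ms_per_request = HIT_RATIO * HIT_MS + (1 - HIT_RATIO) * MISS_MS  # 44 ms
requests_per_second = 1000 / avg_ms_per_request
print(requests_per_second)  # ~22.73
```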
Multi-threaded:
The question does not give much detail about the multi-threaded case, apart from the context-switch cost. The answer depends on several factors, so I am going to make the following assumptions:

- There are enough threads that whenever one blocks on a cache miss, another thread is ready to handle the next request (the question suggests we may assume as many threads as we need).
- The context-switch overhead is negligible compared to the 20 milliseconds of CPU work per request.
- The file server can handle the blocked threads' requests concurrently, so the 80-millisecond wait never becomes the bottleneck.

With these assumptions I can now solve for n:
1000 milliseconds
= 0.7n(20) + 0.3n(20)
On a cache miss, a thread spends 20 milliseconds doing work and 80 milliseconds sleeping. When the thread is sleeping, another thread can run and do useful work. Thus, on a cache miss, the thread only uses the CPU for 20 milliseconds, whereas when the process was single-threaded, the next request was blocked from being serviced for 100 milliseconds.
Solving for n:
1000 milliseconds
= 0.7n(20) + 0.3n(20)
= 1.0n(20)
= 20n
=> n = 1000/20 = 50
Therefore, a multi-threaded process can handle 50 requests per second given the assumptions above.
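Under the same assumptions, the multi-threaded figure counts only CPU time per request, since the 80-millisecond wait on a miss is overlapped by other threads. A hypothetical sketch of that calculation:

```python
# Multi-threaded: on a cache miss the thread sleeps for 80 ms while another
# thread runs, so each request costs only its 20 ms of CPU time.
HIT_RATIO = 0.7
CPU_MS = 20       # CPU time per request, hit or miss (the wait is overlapped)

avg_cpu_ms_per_request = HIT_RATIO * CPU_MS + (1 - HIT_RATIO) * CPU_MS  # 20 ms
requests_per_second = 1000 / avg_cpu_ms_per_request
print(requests_per_second)  # 50.0
```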
Upvotes: 2