Shivam Aggarwal

Reputation: 774

Doesn't the time spent on cache misses outweigh the time taken to access the data directly from RAM?

Suppose a computer has three caches, L1 and L2 (inside the processor) and L3 (external to the CPU), plus RAM.
Now suppose the CPU needs to access data "ABC" for some operation. The data is definitely available in RAM, but because of the architecture:

The CPU first checks the L1 cache for "ABC", where, say, it is not found (a cache miss, so some time is wasted checking).
The L2 cache is checked next and the data is again not found (another miss, more time wasted). Similarly, L3 is checked and the data is not found (another miss, still more time wasted).
Finally RAM is checked, where the data is found and used.

Isn't a lot of time wasted just checking the various memories? Wouldn't it be much more efficient to access RAM directly for such operations, without checking any cache?
Keep in mind that as the CPU progresses from L1 to L2, from L2 to L3, and from L3 to RAM, the physical distance of these memory units from the CPU increases, and hence so does the access time.
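The worst case described above can be put into rough numbers. This is a sketch with illustrative latencies (4/12/40/200 cycles for L1/L2/L3/RAM are assumptions for the example, not measured values; real figures vary by CPU):

```python
# Worst case from the question: every level misses and the full chain
# of checks is paid. Latencies below are illustrative (cycles):
# L1=4, L2=12, L3=40, RAM=200.
miss_chain = 4 + 12 + 40 + 200    # check L1, L2, L3, then fetch from RAM
ram_only = 200                    # hypothetical design with no caches at all
overhead = miss_chain / ram_only  # full-miss penalty relative to RAM-only
l1_hit_speedup = ram_only / 4     # payoff of a single L1 hit instead

print(miss_chain, overhead, l1_hit_speedup)  # 256, 1.28, 50.0
```

With these numbers, even the complete miss chain is only about 28% slower than going straight to RAM, while each L1 hit is 50 times faster, which is the trade-off the answer below addresses.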

Upvotes: 1

Views: 437

Answers (1)

Nachiket Kate

Reputation: 8571

Think of it the other way round: if the data is found in L1, it saves considerably more time than going to RAM, and the same holds for L2 and L3. As Harold pointed out, L3 is not external to the CPU nowadays.

In terms of size: L1 < L2 < L3 <<< RAM

Smaller memories are faster to access, so a lookup in RAM takes much more time than a lookup in the L* caches. Caches were introduced for exactly this purpose, i.e. to save access time.

As you said, if the data is not present in L1, L2, or L3, then yes, you pay the cache-lookup penalty on top of the RAM access; but that data is then stored in the caches for future accesses. The advantage of subsequent cache hits outweighs this one-time penalty.
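The point about hits outweighing the miss penalty can be made concrete with an average memory access time (AMAT) calculation. A back-of-the-envelope sketch, where the latencies (4/12/40/200 cycles) and hit rates (90%/80%/70%) are illustrative assumptions:

```python
def amat(latencies, hit_rates, ram_latency):
    """Expected access time: each level's check cost is paid only by
    the fraction of accesses that actually reaches that level."""
    expected = 0.0
    p_reach = 1.0  # probability an access reaches this level
    for lat, hit in zip(latencies, hit_rates):
        expected += p_reach * lat   # cost of checking this level
        p_reach *= (1.0 - hit)      # fraction that misses and continues
    expected += p_reach * ram_latency  # the rest fall through to RAM
    return expected

# L1=4, L2=12, L3=40 cycles with 90%/80%/70% hit rates; RAM=200 cycles.
with_caches = amat([4, 12, 40], [0.90, 0.80, 0.70], 200)
print(with_caches)  # 7.2 cycles on average, versus 200 for RAM-only
```

Despite paying the lookup cost at every level, the average access with these assumed numbers is roughly 28 times faster than always going to RAM, because the vast majority of accesses never get past L1.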

In practice the cache hit ratio is typically around 90% or higher; if it is much lower, the caching policy (or the access pattern) needs tuning.
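A cache does not actually need anything close to 90% hits just to break even against a RAM-only design. For a single cache level with check cost c and RAM latency r, the AMAT is c + (1 - h) * r, which beats r whenever h > c / r. A sketch with the same illustrative latencies as above (4 and 200 cycles are assumptions):

```python
# Break-even hit rate for a single cache level:
#   with cache:    4 + (1 - h) * 200
#   without cache: 200
# Caching wins whenever 4 + (1 - h) * 200 < 200, i.e. h > 4 / 200.
cache_latency = 4
ram_latency = 200
break_even = cache_latency / ram_latency  # hit rate where cache starts to pay off

print(break_even)  # 0.02 -> even a 2% hit rate already breaks even
```

So the 90% figure is a target for a well-tuned cache, not the threshold at which caching becomes worthwhile; because the check is so much cheaper than a RAM access, the cache pays for itself at tiny hit rates.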

I hope this was helpful.

Upvotes: 3

Related Questions