Reputation: 309
When handling virtual memory, you often use a TLB (I'm asking about software-managed TLBs) to make things faster. Instead of plugging your virtual address into a page table to get the physical address it maps to, you can use a TLB in between that stores recently used translations.
So what you do is take your virtual address and check whether the TLB contains it. If it does, the TLB returns the physical address that the virtual address maps to; if it doesn't get a hit, you go to the page table, load that entry into the TLB, and return the physical address.
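The lookup-then-fill flow described above can be sketched in software. This is a minimal illustrative model, not any real OS's code; `PAGE_SHIFT`, `translate`, and the dict-based structures are all hypothetical names chosen for clarity.

```python
PAGE_SHIFT = 12  # assume 4 KiB pages (an assumption for this sketch)

def translate(vaddr, tlb, page_table):
    """Translate a virtual address, filling the TLB on a miss."""
    vpn = vaddr >> PAGE_SHIFT               # virtual page number
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    if vpn in tlb:                          # TLB hit: skip the page-table walk
        pfn = tlb[vpn]
    else:                                   # TLB miss: walk the page table...
        pfn = page_table[vpn]
        tlb[vpn] = pfn                      # ...and cache the translation
    return (pfn << PAGE_SHIFT) | offset

page_table = {0x1: 0x80, 0x2: 0x81}
tlb = {}
pa = translate(0x1ABC, tlb, page_table)    # first access misses, fills the TLB
```

A real software-managed TLB miss raises a trap that the OS handles, but the hit/miss/fill logic is the same shape.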
My question is: how can the TLB be so fast if it has to search through its entries for your virtual address every time? The entries seemingly come in no certain order, so going through them would be very slow, I'd assume.
For context, this is the image of a TLB-lookup I have in my head (picture is taken from Wikipedia).
Upvotes: 0
Views: 592
Reputation: 64904
The entries seemingly come in no certain order, so going through them would be very slow, I'd assume.
That's not how things usually work in hardware; "going through" the entries one by one isn't really a thing. You could build it that way, but then it wouldn't be fast anymore, which misses the point.
All entries can be compared simultaneously, and the matching entry (if one exists) can be extracted with a log-depth circuit (constant depth if you allow arbitrary fan-in, but that has its own problems).
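A software loop can only model the logic of that fully associative lookup, not its parallelism; in hardware, every entry's tag comparator fires in the same cycle. A toy sketch (the `(valid, tag, pfn)` layout and `cam_lookup` name are illustrative assumptions, not a real interface):

```python
def cam_lookup(entries, vpn):
    """Model a content-addressable memory: entries are (valid, tag, pfn).

    In hardware all the tag comparisons below happen simultaneously,
    and the matching entry is selected by a log-depth mux/encoder tree;
    this loop only models the logical result.
    """
    matches = [pfn for valid, tag, pfn in entries if valid and tag == vpn]
    return matches[0] if matches else None  # at most one match by design

entries = [
    (True,  0x1, 0x80),  # valid entry mapping VPN 0x1 -> PFN 0x80
    (True,  0x5, 0x90),
    (False, 0x7, 0x00),  # invalid entry: never matches
]
```

The key point is that lookup time doesn't grow linearly with the number of entries the way this loop does, which is also why TLBs are kept small: wide parallel comparison is expensive in area and power.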
Upvotes: 3
Reputation: 1177
The TLB is just a cache. The idea behind a cache is that it's faster to search a small number of recent references than a larger, more "distant", set of references.
If I understand correctly, your question is really: "why is caching fast?"
Perhaps someone can explain from an architecture standpoint better than I?
Upvotes: 0