nineee

Reputation: 63

Does new/malloc or delete/free occupy or invalidate cache lines?

I am curious about cache behavior. Here are some questions about the cache:

  1. Does a write operation bring data into the cache? Consider an assignment like A[i] = B[i]: will A[i] be loaded into the cache, even though I only write to A[i] and never read its value?

  2. When allocating a large block of memory, the memory may come from the OS, and the OS initializes the data to zero for safety reasons (Reference). If writes bring data into the cache (question 1), will this zeroing mechanism occupy the cache?

  3. Assume there is an allocated array B, and the entire array is now in the cache. Will the cache lines occupied by B become invalid (available) right after I free array B?

Could somebody give me a hint?

Upvotes: 6

Views: 925

Answers (3)

user184968

Reputation:

From here https://people.freebsd.org/~lstewart/articles/cpumemory.pdf

--

  1. Does write operation bring the data into the cache?

From the article:

By default all data read or written by the CPU cores is stored in the cache. There are memory regions which cannot be cached but this is something only the OS implementers have to be concerned about; it is not visible to the application programmer. There are also instructions which allow the programmer to deliberately bypass certain caches. This will be discussed in section 6.

--

  2. When allocating large memory, the memory may come from the OS. Will this mechanism occupy the cache?

Possibly not. It will occupy the cache only after reading or writing data. From the article:

On operating systems like Linux with demand-paging support, an mmap call only modifies the page tables ... No actual memory is allocated at the time of the mmap call.

The allocation part happens when a memory page is first accessed, either by reading or writing data, or by executing code. In response to the ensuing page fault, the kernel takes control and determines, using the page table tree, the data which has to be present on the page. This resolution of the page fault is not cheap, but it happens for every single page which is used by a process.

--

  3. Assume that there is an allocated array B, and the entire B is now in the cache. Will the cache lines occupied by B become invalid (available) right after I free array B?

From the article, invalidation of a cache line happens only when another CPU writes to that line:

What developed over the years is the MESI cache coherency protocol (Modified, Exclusive, Shared, Invalid). The protocol is named after the four states a cache line can be in when using the MESI protocol. ... If the second processor wants to write to the cache line the first processor sends the cache line content and marks the cache line locally as Invalid.

And also cache line can be evicted:

Another detail of the caches which is rather uninteresting to programmers is the cache replacement strategy. Most caches evict the Least Recently Used (LRU) element first.

And from my experience with TCMalloc, free() is not a compelling reason to evict memory from the cache. On the contrary, that might hurt performance. On free(), TCMalloc just puts the freed block of memory into its own cache, and that block will be returned by malloc() the next time the application asks for a block of memory. That is the essence of a caching allocator like TCMalloc. And if the block is still in the CPU cache, so much the better for performance!

Upvotes: 3

MSalters

Reputation: 179981

To answer the title question: no. These operations do not invalidate caches, and quite intentionally so. What happens to freed memory? There are two important cases. One, the memory is immediately recycled for your program's next allocation. Having the address still in cache is efficient in this case, since it reduces the number of writes to main memory.

But even if the memory isn't directly reused, the allocator may merge free blocks behind the scenes or perform other bookkeeping, which often involves writing to the space formerly occupied by your data. After all, freeing memory doesn't physically destroy it; it merely changes the owner. After delete, ownership is transferred from your program to the runtime or the OS.

Upvotes: 1

marom

Reputation: 5230

This is an interesting article where you will find more information (probably more than you need) about what you are asking:

What every programmer should know about memory

Regarding your question: every ordinary memory operation goes through the cache, and as a programmer you have almost no control over it (and even the OS has very little). Keep in mind that if you need to implement a memory-hungry algorithm, you should try to increase memory locality.

So, assuming you have to deal with 1 GB of data, try to split the computation into clusters (sections of data) and do all operations on one section at a time. That way you actually reuse data that is already in the cache instead of going out to main memory every time, which can give you a real performance boost.

Upvotes: 2
