robertchen

Reputation: 461

Why does the ARM SMP Linux kernel force cachepolicy to writealloc?

Is there an architectural reason to use the writealloc cache policy in the ARM SMP Linux kernel? Can we change it to the writeback cache policy?

Kernel boot log:

[ 0.000000] Forcing write-allocate cache policy for SMP
[ 0.000000] Memory policy: Data cache writealloc

Upvotes: 3

Views: 981

Answers (1)

artless-noise-bye-due2AI

Reputation: 22430

Is there an architectural reason to use writealloc cache policy in ARM SMP Linux kernel?

First, it is much faster for most workloads. Second, the spin_locks and other Linux synchronization primitives use LDREX and STREX and probably need a write-allocate policy (see the Xilinx note on W/A and exclusive access); at the very least, a different policy would complicate the code that uses exclusive access, and exclusive access is a large benefit for SMP systems.

Write allocate implies a write-back cache; no-write allocate implies a write-through cache (or basically no caching of writes). It is probably much harder to get exclusive locking to work with a write-through cache, because you would have to duplicate the write-back cache's bookkeeping to implement the exclusive lock.
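To see how the exclusives are used, here is a minimal LDREX/STREX lock sketch for ARMv7 in ARM (A32) state with GCC-style inline assembly. It is only an illustration; my_spinlock_t, my_spin_lock and my_spin_unlock are made-up names, and the kernel's real arch_spin_lock is a ticket lock with more to it than this:

    /* Minimal exclusive-access lock sketch (ARMv7, A32, GCC inline asm). */
    typedef struct { volatile unsigned int lock; } my_spinlock_t;

    static inline void my_spin_lock(my_spinlock_t *l)
    {
        unsigned int tmp;

        __asm__ __volatile__(
        "1: ldrex   %0, [%1]\n"        /* exclusively load the lock word  */
        "   teq     %0, #0\n"          /* already held?                   */
        "   wfene\n"                   /* if so, wait for an event        */
        "   strexeq %0, %2, [%1]\n"    /* if free, try to claim it        */
        "   teqeq   %0, #0\n"          /* did the strex succeed?          */
        "   bne     1b\n"              /* no - start again                */
            : "=&r" (tmp)
            : "r" (&l->lock), "r" (1)
            : "cc", "memory");

        __asm__ __volatile__("dmb" ::: "memory");   /* acquire barrier */
    }

    static inline void my_spin_unlock(my_spinlock_t *l)
    {
        __asm__ __volatile__("dmb" ::: "memory");   /* release barrier */
        l->lock = 0;
        __asm__ __volatile__("dsb" ::: "memory");
        __asm__ __volatile__("sev");                /* wake any WFE waiters */
    }

On many MPCore implementations, exclusives to shareable memory are only guaranteed to work when that memory is mapped as normal, cacheable memory, which is one reason the lock word wants to live in a write-allocate (write-back) mapping.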

Can we change it to writeback cache policy?

It looks like NO, at least not without modifying the source, which I think is what you mean. The kernel parameter cachepolicy can be one of:

  • uncached
  • buffered
  • writethrough
  • writeback
  • writealloc
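
For example, on a uniprocessor kernel you could request a different policy from the kernel command line; the line below is purely illustrative (the console and root arguments are placeholders for whatever your board actually uses):

    console=ttyS0,115200 root=/dev/mmcblk0p2 cachepolicy=writethrough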

build_mem_type_table forces this to write-allocate for an SMP system. At the very least you would need to change this code. However, naively removing it has consequences; see, for instance, commit ca8f0b0a545f55b.
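
A paraphrased sketch of that override in arch/arm/mm/mmu.c (reconstructed here only to show the shape of the check, not a verbatim copy of any particular kernel version):

    static void __init build_mem_type_table(void)
    {
        /* ... policy and CPU architecture checks ... */
        if (is_smp() && cachepolicy != CPOLICY_WRITEALLOC) {
            pr_warn("Forcing write-allocate cache policy for SMP\n");
            cachepolicy = CPOLICY_WRITEALLOC;
        }
        /* ... the chosen policy is then baked into the mem_types[] table ... */
    }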


Source: Wikipedia

There are two basic cache writing approaches:

  • Write-through: write is done synchronously both to the cache and to the backing store.
  • Write-back (also called write-behind): initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.

...

Since no data is returned to the requester on write operations, a decision needs to be made on write misses, whether or not data would be loaded into the cache. This is defined by these two approaches:

  • Write allocate (also called fetch on write): data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
  • No-write allocate (also called write-no-allocate or write around): data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.

...

  • A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached.
  • A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.

Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.
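
To make the two write-miss policies concrete, here is a toy, single-line cache model in C; every name and size in it is invented purely for illustration (real caches have many lines, sets, and coherency state):

    #include <stdio.h>
    #include <string.h>

    #define LINE_WORDS 8

    struct line {
        int valid, dirty;
        unsigned tag;
        unsigned data[LINE_WORDS];
    };

    static unsigned memory[1024];       /* backing store           */
    static struct line cache;           /* one direct-mapped line  */

    static void fill_line(unsigned tag)
    {
        if (cache.valid && cache.dirty) /* write back the victim   */
            memcpy(&memory[cache.tag * LINE_WORDS], cache.data, sizeof cache.data);
        memcpy(cache.data, &memory[tag * LINE_WORDS], sizeof cache.data);
        cache.tag = tag;
        cache.valid = 1;
        cache.dirty = 0;
    }

    /* Write allocate + write back: a write miss fetches the line first. */
    static void write_allocate(unsigned addr, unsigned value)
    {
        unsigned tag = addr / LINE_WORDS;
        if (!cache.valid || cache.tag != tag)
            fill_line(tag);             /* allocate on the write miss */
        cache.data[addr % LINE_WORDS] = value;
        cache.dirty = 1;                /* memory is updated later    */
    }

    /* No-write allocate + write through: a write miss bypasses the cache. */
    static void write_no_allocate(unsigned addr, unsigned value)
    {
        unsigned tag = addr / LINE_WORDS;
        if (cache.valid && cache.tag == tag)
            cache.data[addr % LINE_WORDS] = value;  /* write hit      */
        memory[addr] = value;           /* always goes to memory      */
    }

    int main(void)
    {
        write_allocate(42, 1);          /* miss: the line is fetched     */
        write_allocate(43, 2);          /* hit: stays in the cache       */
        printf("write allocate:    memory[42] = %u (stale until write-back)\n",
               memory[42]);

        memset(&cache, 0, sizeof cache);
        write_no_allocate(42, 1);       /* miss: memory written directly */
        printf("no-write allocate: memory[42] = %u\n", memory[42]);
        return 0;
    }

The first case leaves the line dirty and the backing store stale until the line is evicted; the second keeps the backing store up to date but gains nothing from subsequent writes to the same line.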


ARM CPUs typically have a write buffer, so multiple writes (say, 32-bit) will be ganged together into 128-bit transactions (the AXI bus width) or even larger for SDRAM devices.

Upvotes: 2
