user1922

Reputation: 51

What is the difference between shared memory and L1 cache in GPU?

I have noticed that the access latency of the L1 cache and of shared memory is the same in CUDA. Given that fact, how are they different, and how are they used differently?

Upvotes: 1

Views: 1060

Answers (1)

Florent DUGUET

Reputation: 2916

CUDA shared memory usage is explicit, via the __shared__ keyword: you have full control over what is staged there, when, and by which threads. The L1 cache, on the other hand, is managed by hardware; its performance and caching strategy depend on the hardware architecture.
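
A minimal sketch of that explicit control (the kernel name, array size, and pointer names here are illustrative, not from the answer): a block stages data into shared memory, synchronizes, then reads it back in a different order, something the hardware-managed L1 cache cannot be told to do.

```cuda
// Sketch: reverse a 64-element array within one thread block using
// explicitly managed shared memory. Assumes a launch of one block of
// 64 threads, e.g. reverse<<<1, 64>>>(d, 64);
__global__ void reverse(int *d, int n)
{
    __shared__ int s[64];       // on-chip storage, declared by the programmer
    int t = threadIdx.x;
    s[t] = d[t];                // each thread stages one element
    __syncthreads();            // wait until the whole block has written
    d[t] = s[n - t - 1];        // read back in reversed order
}
```

With L1, you only influence behavior indirectly (access patterns, and on some architectures the shared-memory/L1 split); with __shared__, the placement and lifetime of the data are entirely under program control.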

Upvotes: 2
