오승택

Reputation: 45

CUDA global and shared memory access time

I have recently been studying CUDA, and I want to understand CUDA memory access times.

The CUDA Programming Guide gives memory access times in cycles.

I think a cycle is the same as a clock tick. Is this correct? Assuming it is, I measured the memory access times myself. The host code is fixed, but the kernel code has 3 versions. This is my code:


Host code:

float* H1 = (float*)malloc(sizeof(float)*100000);
float* D1;

for( int i = 0 ; i < 100000 ; i++ ){
    H1[i] = i;
}

cudaMalloc( (void**)&D1, sizeof(float)*100000 );
cudaMemcpy( D1, H1, sizeof(float)*100000, cudaMemcpyHostToDevice );

cudaPrintfInit();
test<<<1,1>>>( D1 );
cudaPrintfDisplay(stdout, true);
cudaPrintfEnd();

Kernel version 1:

__global__ void test( float* Dev_In1 )
{
    float Global1;
    float Global2;
    float Global3;
    int Clock;

    Clock = clock();
    Global1 = Dev_In1[1];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #1 : %d\n", Clock );

    Clock = clock();
    Global2 = Dev_In1[2];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #2 : %d\n", Clock );

    Clock = clock();
    Global3 = Dev_In1[3];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #3 : %d\n", Clock );
}

Its result:

Global Memory Access #1 : 882       
Global Memory Access #2 : 312       
Global Memory Access #3 : 312       

I think the first access took ~880 cycles because the data was not yet cached, while the 2nd and 3rd accesses took only 312 cycles because Dev_In1[2] and Dev_In1[3] were already cached.


Kernel version 2:

__global__ void test( float* Dev_In1 )
{
    int Global1, Global2, Global3;
    int Clock;

    Clock = clock();
    Global1 = Dev_In1[1];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #1 : %d\n", Clock );

    Clock = clock();
    Global2 = Dev_In1[50000];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #2 : %d\n", Clock );

    Clock = clock();
    Global3 = Dev_In1[99999];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #3 : %d\n", Clock );
}

Its result:

Global Memory Access #1 : 872       
Global Memory Access #2 : 776       
Global Memory Access #3 : 782       

I think Dev_In1[1], Dev_In1[50000], and Dev_In1[99999] fall in different cache lines, so none of them was cached at its first access.

That is why #1, #2, and #3 are all slow.


Kernel version 3:

__global__ void test( float* Dev_In1 )
{
    int Global1, Global2, Global3;
    int Clock;

    Clock = clock();
    Global1 = Dev_In1[1];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #1 : %d\n", Clock );

    Clock = clock();
    Global1 = Dev_In1[50000];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #2 : %d\n", Clock );

    Clock = clock();
    Global1 = Dev_In1[99999];
    Clock = clock() - Clock;
    cuPrintf("Global Memory Access #3 : %d\n", Clock );
}

Result:

Global Memory Access #1 : 168       
Global Memory Access #2 : 168       
Global Memory Access #3 : 168       

I don't understand this result.

Dev_In1[50000] and Dev_In1[99999] are not cached, yet the access time is very fast! The only difference is that I used a single variable...

So my questions are: is a GPU cycle the same as a GPU clock tick?

And why is the memory access time so fast in the third result?

Upvotes: 3

Views: 4015

Answers (1)

lashgar

Reputation: 5430

For the reason stated by @phoad, your evaluations are not valid. After the memory access, and before stopping the clock, you should reuse the loaded value to create an instruction dependency on the outstanding load. Otherwise the GPU issues independent instructions one after another, and the clock-stop executes immediately after the clock-start and the load, while the load is still in flight. I suggest you try the microbenchmarking suite prepared by Henry Wong. Using this suite you can retrieve various microarchitecture details, including memory access latency. If you only need memory latency, it is easier to try CUDA latency, developed by Sylvain Collange.
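As a rough sketch of the dependency idea (my own illustration, not code from the question or from the linked tools; the kernel and buffer names are hypothetical), consuming the loaded value before the second `clock()` read forces the in-order SM to stall until the load completes, and writing the results to global memory keeps the compiler from deleting the load as dead code:

```cuda
// Hypothetical dependency-carrying timing kernel. The store of `v`
// cannot issue until the load returns, and the SM issues instructions
// in order, so the second clock() read is delayed by the load latency.
__global__ void timed_load( const float* Dev_In1, int* Dev_Elapsed, float* Dev_Sink )
{
    int start = (int)clock();
    float v = Dev_In1[1];
    Dev_Sink[0] = v;               // dependent use: stalls on the outstanding load
    int stop = (int)clock();

    Dev_Elapsed[0] = stop - start; // side effect keeps the measurement live
}
```

Launched as `timed_load<<<1,1>>>( D1, D_Elapsed, D_Sink )` with the elapsed count copied back via `cudaMemcpy`, this should report something close to the real global-memory latency rather than just the instruction-issue overhead that version 3 measured.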

Upvotes: 1
