Ufuk Can Bicici

Reputation: 3649

How exactly does CUDA synchronize threads in a warp at barriers and conditional expressions?

I recently asked a question about synchronization issues among the threads of a block in CUDA, here: Does early exiting a thread disrupt synchronization among CUDA threads in a block? One of the comments on my question linked to a similar thread, which quoted the following about the CUDA barrier (__syncthreads()) instruction from the PTX guide:

Barriers are executed on a per-warp basis as if all the threads in a warp are active. Thus, if any thread in a warp executes a bar instruction, it is as if all the threads in the warp have executed the bar instruction. All threads in the warp are stalled until the barrier completes, and the arrival count for the barrier is incremented by the warp size (not the number of active threads in the warp). In conditionally executed code, a bar instruction should only be used if it is known that all threads evaluate the condition identically (the warp does not diverge). Since barriers are executed on a per-warp basis, the optional thread count must be a multiple of the warp size.

I am still a bit confused about the mechanism explained in this quote. It says that if we use barriers in conditional code, and some of the threads fail to reach the barrier instruction because they take a different path through the conditional, this can cause undefined behavior and even deadlocks. (Even thread counts that are not a multiple of the warp size are dangerous.) What I don't understand is how this mechanism can cause a deadlock. The document says that if even a single thread executes a bar instruction, it is treated as if all the threads in the warp had executed it, and the arrival counter is incremented by the number of threads in the warp. Presumably the CUDA architecture determines whether all the threads have been synchronized by comparing this arrival counter to the actual number of threads in the block. If the counter were updated on a per-thread basis, I could see how a deadlock would arise: the counter would never reach the block's thread count, since some threads took conditional paths that do not contain the bar instruction. But here the counter is updated by the warp size, so I don't understand the underlying mechanism.

My other question is about conditional statements in general. I know that all threads in a warp execute the same instruction at any given time, and in the case of an if clause the threads that take the if and else branches wait for each other by staying idle, synchronizing again at the end of the conditional. So there is an implicit synchronization mechanism for such conditional code. Now, how does this work in code like the following:

int foundCount = 0;
for (int i = 0; i < array1_length; i++)
{
    for (int j = 0; j < array0_length; j++)
    {
        if (i == array0[j])
        {
            array1[i] = array1[i] + 1;
            foundCount++;
            break;
        }
    }

    if (foundCount == foundLimit)
        break;
}

This is a piece of code from my current task; for each member of array1 I need to check whether the current array1 index is contained in array0. If it is, I increment the element at the current index of array1 and, since the index is already contained in array0, I exit the inner loop with a break statement. If the total number of contained indices for array1 reaches a limit, we don't need to continue the outer loop and can exit from it as well. This is straightforward as CPU code, but I want to know how CUDA's warp mechanism handles such a nested conditional case. Imagine a warp of 32 threads processing this code: some of them may be processing the inner loop, some may have already exited it, and some may even have exited the outer loop. How does the architecture organize the work of the threads in this case? Does it hold a list of the current "waiting points" of the threads? How does it ensure in such complex situations that the threads in the same warp process the same line of code?

Upvotes: 2

Views: 1933

Answers (1)

talonmies

Reputation: 72342

Conditional branching is implemented by having all threads in the warp execute all branches. Those threads which don't follow the branch execute the equivalent of a null op. This is usually referred to as masked execution, and it is also how partial warps can be accommodated: partial warps contain permanently masked threads. There are also direct conditional execution (predication) instructions available for implementing things like ternary operators without branching.

These mechanisms do not apply to the standard bar PTX instruction. As you note, that is implemented using a simple counter decrement scheme, and if all threads in the block don't decrement the counter to zero, a deadlock will result.

Upvotes: 3
