Pengcheng

Why is atomic_thread_fence(memory_order_seq_cst) needed in a lock-free queue that already uses seq_cst CAS?

This is a lock-free queue in which only one thread executes push() and pop(); all other threads execute steal().

However, I can't understand why steal() needs std::atomic_thread_fence(std::memory_order_seq_cst).

In my opinion, steal() has only one store operation, the _top.compare_exchange_strong, and it already uses memory_order_seq_cst. So why does it need a seq_cst fence as well?

template <typename T>
class WorkStealingQueue {
public:
    WorkStealingQueue() : _bottom(1), _top(1) { }
    ~WorkStealingQueue() { delete [] _buffer; }

    int init(size_t capacity) {
        if (capacity == 0 || (capacity & (capacity - 1))) {
            LOG(ERROR) << "Invalid capacity=" << capacity
                       << " which must be a power of 2";
            return -1;
        }

        _buffer = new(std::nothrow) T[capacity];
        if (_buffer == NULL) {
            return -1;
        }
        _capacity = capacity;
        return 0;
    }

    // Steal one item from the queue.
    // Returns true if an item was stolen.
    // May run in parallel with push(), pop(), or another steal().
    bool steal(T* val) {
        size_t t = _top.load(std::memory_order_acquire);
        size_t b = _bottom.load(std::memory_order_acquire);
        if (t >= b) {
            // Permit a false negative for performance reasons.
            return false;
        }

        do {
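            // Pairs with the seq_cst fence in pop(): if pop() has already
            // decremented _bottom and executed its fence, the reload below
            // is guaranteed to see the decremented value, so we cannot
            // steal a slot that pop() is about to take.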
            std::atomic_thread_fence(std::memory_order_seq_cst);
            b = _bottom.load(std::memory_order_acquire);
            if (t >= b) {
                return false;
            }
            *val = _buffer[t & (_capacity - 1)];
        } while (!_top.compare_exchange_strong(t, t + 1,
                                               std::memory_order_seq_cst,
                                               std::memory_order_relaxed));
        return true;
    }

    // Pop an item from the queue.
    // Returns true if an item was popped; the item is written to `val`.
    // May run in parallel with steal().
    // Never runs in parallel with push() or another pop().
    bool pop(T* val) {
        const size_t b = _bottom.load(std::memory_order_relaxed);
        size_t t = _top.load(std::memory_order_relaxed);
        if (t >= b) {
            // Fast check since pop() is called on every sched round.
            // _top only grows, so a stale (smaller) _top cannot make this
            // branch incorrectly report an empty queue.
            return false;
        }

        const size_t newb = b - 1;
        _bottom.store(newb, std::memory_order_relaxed);
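        // Pairs with the seq_cst fence in steal(): forbids the
        // store-buffering outcome where this thread reads a stale _top
        // below while a concurrent steal() re-reads a stale _bottom,
        // which could hand out the same element twice.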
        std::atomic_thread_fence(std::memory_order_seq_cst);

        t = _top.load(std::memory_order_relaxed);
        if (t > newb) {
            _bottom.store(b, std::memory_order_relaxed);
            return false;
        }

        *val = _buffer[newb & (_capacity - 1)];
        if (t != newb) {
            return true;
        }

        // Single last element, compete with steal()
        const bool popped = _top.compare_exchange_strong(
            t, t + 1, std::memory_order_seq_cst, std::memory_order_relaxed);
        _bottom.store(b, std::memory_order_relaxed);
        return popped;
    }

    // Push an item into the queue.
    // Returns true if the item was pushed.
    // May run in parallel with steal().
    // Never runs in parallel with pop() or another push().
    bool push(const T& x) {
        const size_t b = _bottom.load(std::memory_order_relaxed);
        const size_t t = _top.load(std::memory_order_acquire);
        if (b >= t + _capacity) { // Full queue.
            return false;
        }

        _buffer[b & (_capacity - 1)] = x;
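        // The release store pairs with the acquire loads of _bottom in
        // steal(), publishing the element written above.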
        _bottom.store(b + 1, std::memory_order_release);
        return true;
    }

private:
    DISALLOW_COPY_AND_ASSIGN(WorkStealingQueue);

    std::atomic<size_t> _bottom;
    size_t _capacity;
    T* _buffer;
    std::atomic<size_t> BAIDU_CACHELINE_ALIGNMENT _top;
};

Answers (1)

mpoeter

You do not have to use a seq-cst fence, but then you would have to make the operations on _bottom sequentially consistent. The reason is that it must be guaranteed that the load of _bottom in steal() sees the updated value written in pop(); otherwise you could have a race condition where the same item is returned twice (once from pop() and once from steal()). The seq_cst CAS on _top alone does not give you this guarantee: sequential consistency establishes a single total order over the seq_cst operations only, and the _bottom loads in steal() are merely acquire, so without the fence they could still observe a stale value. The fence in steal() pairs with the fence in pop() (the one between the _bottom store and the _top reload): whichever comes first in the total order, either steal() observes the decremented _bottom and backs off, or the conflict on the last element falls through to the competing compare_exchange on _top, which only one thread can win.
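To see the underlying pattern in isolation, here is the classic store-buffering litmus test (an illustration written for this answer, not code from the queue; the names owner/thief are mine, and the mapping is loose since in steal() the "store" side is really the CAS):

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> bottom{0};   // stands in for _bottom
std::atomic<int> top_{0};     // stands in for _top
int r1 = 0, r2 = 0;

void owner() {                // the pop() side
    bottom.store(1, std::memory_order_release);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    r1 = top_.load(std::memory_order_acquire);
}

void thief() {                // the steal() side
    top_.store(1, std::memory_order_release);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    r2 = bottom.load(std::memory_order_acquire);
}

int main() {
    std::thread t1(owner), t2(thief);
    t1.join();
    t2.join();
    // With the two seq_cst fences, r1 == 0 && r2 == 0 is impossible:
    // at least one thread must observe the other's store. Without the
    // fences this outcome is allowed, even though every store is
    // release and every load is acquire.
    assert(r1 == 1 || r2 == 1);
}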

For comparison, you can take a look at my implementation of the Chase-Lev deque: https://github.com/mpoeter/xenium/blob/master/xenium/chase_work_stealing_deque.hpp
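
And for illustration, a minimal sketch of how the fence-free variant could look, assuming everything else stays exactly as in the question (this is my sketch of the idea, not code from brpc or xenium):

// steal(): a seq_cst load of _bottom replaces the fence + acquire reload.
do {
    b = _bottom.load(std::memory_order_seq_cst);
    if (t >= b) {
        return false;
    }
    *val = _buffer[t & (_capacity - 1)];
} while (!_top.compare_exchange_strong(t, t + 1,
                                       std::memory_order_seq_cst,
                                       std::memory_order_relaxed));

// pop(): the decrement of _bottom and the reload of _top become seq_cst,
// so both stay in the single total order without a fence.
_bottom.store(newb, std::memory_order_seq_cst);
t = _top.load(std::memory_order_seq_cst);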
