Matt Mc

Reputation: 9297

Why would pthread_mutex_lock() hang if pthread_mutex_unlock() has been called?

I am using a Swift library for mutable observables that uses a mutex to control reads/writes of the observable value, like so:

import Foundation

class MutObservable<T> {

    private var _value: T

    lazy var m: pthread_mutex_t = {
        var m = pthread_mutex_t()
        pthread_mutex_init(&m, nil)
        return m
    }()

    var value: T {
        get {
            return _value
        }
        set {
            pthread_mutex_lock(&m)
            _value = newValue
            pthread_mutex_unlock(&m)
        }
    }

    init(value: T) {
        _value = value
    }

}

I'm experiencing a deadlock with this code. By inserting breakpoints and stepping through, I observe the following:

  1. pthread_mutex_lock(&m)
  2. pthread_mutex_unlock(&m)
  3. pthread_mutex_lock(&m)
  4. pthread_mutex_unlock(&m)
  5. pthread_mutex_lock(&m) deadlock

This happens every time this code sequence runs, at least the 30+ times I have tried it. Two lock/unlock sequences, then a deadlock.

Based on my experience with mutexes in other languages (Go), I would not expect an equal number of lock/unlock calls to produce a deadlock, but I gather that this is a straight C mutex, so there may be rules here I am not familiar with. There could also be Swift/C interop factors in the mix.

My best guess is that this is some kind of memory issue, i.e. the lock is being deallocated somehow, but the deadlock is occurring literally in a setter of the object which I believe owns the lock from a memory perspective, so I'm not sure how that would be the case.

Is there a reason why a pthread_mutex_lock() call would deadlock if the mutex in question has been previously locked and then unlocked?

Upvotes: 1

Views: 1451

Answers (2)

Rob

Reputation: 438102

FWIW, in the WWDC 2016 video Concurrent Programming with GCD, Apple points out that while you might historically have used pthread_mutex_t, they now discourage it. They show how you can still use traditional locks (recommending os_unfair_lock as a more performant solution, and one that doesn’t suffer the power problems of the old, deprecated spin lock), but if you want to do that, they advise deriving from an Objective-C base class that holds the struct-based lock as an ivar. They warn that you just can’t safely use the old C struct-based locks directly from Swift.

But there’s no need for pthread_mutex_t locks any more. I personally find the simple NSLock to be quite a performant solution, so I use an extension (based upon a pattern Apple used in their “advanced operations” example):

extension NSLocking {
    func synchronized<T>(_ closure: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try closure()
    }
}

Then I can define a lock and use this method:

class Synchronized<T> {
    private var _value: T

    private var lock = NSLock()

    var value: T {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    init(value: T) {
        _value = value
    }
}

That video (being about GCD) shows how you might do it with GCD queues. A serial queue is the easiest solution, but you can also use a reader-writer pattern on a concurrent queue, with readers using sync and the writer using async with a barrier:

class Synchronized<T> {
    private var _value: T

    private var queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".synchronizer", attributes: .concurrent)

    var value: T {
        get { queue.sync { _value } }
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }

    init(value: T) {
        _value = value
    }
}

I’d suggest benchmarking the various alternatives for your use case and see which is best for you.
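As a rough sketch of what such a benchmark might look like (the iteration count and the naive `Date`-based timing are my own choices, not a rigorous harness; the NSLock-based class is repeated here so the snippet stands alone):

```swift
import Foundation

// NSLock-based Synchronized<T>, repeated so this snippet is self-contained.
final class Synchronized<T> {
    private var _value: T
    private let lock = NSLock()

    var value: T {
        get { lock.lock(); defer { lock.unlock() }; return _value }
        set { lock.lock(); defer { lock.unlock() }; _value = newValue }
    }

    init(value: T) { _value = value }
}

let synchronized = Synchronized(value: 0)
let iterations = 100_000

// Time `iterations` write/read round trips through the lock.
let start = Date()
for i in 0 ..< iterations {
    synchronized.value = i
    _ = synchronized.value
}
let elapsed = Date().timeIntervalSince(start)
print("NSLock: \(elapsed) seconds for \(iterations) iterations")
```

You would repeat the same loop against the serial-queue and concurrent-barrier variants; relative numbers depend heavily on the read/write mix of your workload.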


Please note that I’m synchronizing both reads and writes. Only using synchronization on the writes will guard against simultaneous writes, but not against simultaneous reads and writes (where the read might therefore yield an invalid result).

Make sure to synchronize all interaction with the underlying object.


All of this having been said, doing this at the accessor-level (like you’ve done and like I’ve shown above) is almost always insufficient to achieve thread safety. Invariably synchronization must be at a higher level of abstraction. Consider this trivial example:

let counter = Synchronized(value: 0)

DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
    counter.value += 1
}

The final count will almost certainly not be 1,000,000. It’s because the synchronization is at the wrong level. See Swift Tip: Atomic Variables for a discussion of what’s wrong.
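To see why, the `+=` can be decomposed into its three constituent steps (a sketch; the NSLock-based class is repeated so the snippet stands alone, and here only a single thread runs, so the result is still correct):

```swift
import Foundation

// NSLock-based Synchronized<T>, repeated for a self-contained example.
final class Synchronized<T> {
    private var _value: T
    private let lock = NSLock()

    var value: T {
        get { lock.lock(); defer { lock.unlock() }; return _value }
        set { lock.lock(); defer { lock.unlock() }; _value = newValue }
    }

    init(value: T) { _value = value }
}

let counter = Synchronized(value: 0)

// `counter.value += 1` is really three steps, each individually
// synchronized, but not atomic as a whole:
let current = counter.value   // 1. lock, read, unlock
let next = current + 1        // 2. another thread may write here
counter.value = next          // 3. lock, write, unlock
```

Between steps 1 and 3 the lock is not held, so two threads can both read the same `current` and one increment is lost.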

You can fix this by adding a synchronized method to wrap whatever needs synchronization (in this case, the retrieval of the value, the incrementing of it, and the storing of the result):

class Synchronized<T> {
    private var _value: T

    private var lock = NSLock()

    var value: T {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    func synchronized(block: (inout T) throws -> Void) rethrows {
        try lock.synchronized { try block(&_value) }
    }

    init(value: T) {
        _value = value
    }
}

And then:

let counter = Synchronized(value: 0)

DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
    counter.synchronized { $0 += 1 }
}

Now, with the whole operation synchronized, we get the correct result. This is a trivial example, but it shows why burying the synchronization in the accessors is frequently insufficient.

Upvotes: 4

Andreas Oetjen

Reputation: 10209

The problem is that you are initializing the mutex in a local (i.e. stack) variable inside the lazy initializer and then copying it out when the closure returns. This is never a good idea: a pthread_mutex_t must not be copied after pthread_mutex_init, so the struct your subsequent lock/unlock calls operate on is not the one that was initialized.

Also, using lazy does not work well here, because the lazy accessor can yield different addresses for the mutex (from my testing). Therefore I would suggest using init to initialize the mutex in place:

class MutObservable<T> {
    private var m = pthread_mutex_t()

    private var _value: T

    var value: T {
        get {
            return _value
        }
        set {
            pthread_mutex_lock(&m)
            _value = newValue
            pthread_mutex_unlock(&m)
        }
    }

    init(v: T) {
        _value = v
        pthread_mutex_init(&m, nil)
    }
}
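As a quick sanity check (my own usage sketch, not from the original question), hammering this fixed design from many threads completes without deadlocking, because the mutex is initialized once at the instance's final address and never copied:

```swift
import Foundation

// Mutex initialized in init, at its final address, and never copied.
final class MutObservable<T> {
    private var m = pthread_mutex_t()
    private var _value: T

    var value: T {
        get {
            pthread_mutex_lock(&m)
            defer { pthread_mutex_unlock(&m) }
            return _value
        }
        set {
            pthread_mutex_lock(&m)
            _value = newValue
            pthread_mutex_unlock(&m)
        }
    }

    init(v: T) {
        _value = v
        pthread_mutex_init(&m, nil)
    }

    deinit {
        pthread_mutex_destroy(&m)
    }
}

let observable = MutObservable(v: 0)

// Many concurrent lock/unlock cycles; no deadlock.
DispatchQueue.concurrentPerform(iterations: 10_000) { i in
    observable.value = i
}
print("final value: \(observable.value)")
```

Note that this sketch also locks the getter (per the other answer's point about simultaneous reads and writes) and destroys the mutex in deinit.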

Upvotes: 3
