Reputation: 185
Suppose I have a class that represents some data structure, called foo:
class foo{
public:
    foo(){
        attr01 = 0;
    }
    void f(){
        attr01 += 5;
    }
private:
    int attr01;
};

class fooSingleThreadUserClass{
    void usefoo(){
        fooAttr.f();
    }
    foo fooAttr;
};
Now suppose that later in development I find out that I need multithreading. Should I add the mutex in foo?
#include <mutex>
#include <thread>

class foo{
public:
    foo(){
        attr01 = 0;
    }
    void f(){
        attr01Mutex.lock();
        attr01 += 5;
        attr01Mutex.unlock();
    }
private:
    int attr01;
    std::mutex attr01Mutex;
};

class fooMultiThreadUserClass{
    void usefoo(){
        std::thread t1(&fooMultiThreadUserClass::useFooWorker, this);
        std::thread t2(&fooMultiThreadUserClass::useFooWorker, this);
        std::thread t3(&fooMultiThreadUserClass::useFooWorker, this);
        std::thread t4(&fooMultiThreadUserClass::useFooWorker, this);
        t1.join();
        t2.join();
        t3.join();
        t4.join();
    }
    void useFooWorker(){
        fooAttr.f();
    }
    foo fooAttr;
};
I know that fooMultiThreadUserClass will now be able to run foo without races and at good performance, but will fooSingleThreadUserClass lose performance due to the mutex overhead? I would be very interested to know. Or should I derive fooCC from foo for concurrency purposes, so that fooSingleThreadUserClass can keep using foo without a mutex and fooMultiThreadUserClass uses fooCC with the mutex, as shown below:
class fooCC : public foo{
public:
    fooCC(){
        // attr01 is already initialized to 0 by the foo constructor.
    }
    void f(){ // I assume that foo::f() is now a virtual function.
        attr01Mutex.lock();
        foo::f();
        attr01Mutex.unlock();
    }
private:
    std::mutex attr01Mutex;
};
Also assume that compiler optimization has already taken care of the virtual dispatch. I would like an opinion on whether I should use inheritance or simply put the mutex lock in the original class.
I've searched through Stack Overflow already, but I guess my question is a little too specific.
Edit: Note that there doesn't have to be just one attribute; the question is meant to be abstract, with a class of n attributes.
Upvotes: 6
Views: 13495
Reputation: 9413
A mutex per object is sometimes a good idea, but this approach is not modular. Consider this example:
#include <mutex>

using namespace std;

struct LimitCounter {
    int balance = 1000;
    mutable mutex lock;   // mutable so the const done() can lock it
    bool done() const {
        lock_guard<mutex> g(lock);
        return balance == 0;
    }
    void dec() {
        lock_guard<mutex> g(lock);
        balance--;
    }
};
And a user of this limit counter:
LimitCounter counter; // global context

// JobRunner runs some job no more than 1000 times
struct JobRunner {
    mutex lock;
    void do_the_job() {
        lock_guard<mutex> g(lock);
        if (!counter.done()) {
            // ... actually do the job ...
            counter.dec();
        }
    }
};
This code is thread-safe but incorrect: in a multithreaded environment the balance can become negative and the job can be executed more than 1000 times. The combination of two correctly synchronized objects does not give us a correct result.
To make it correct, you must share one lock between all instances of the JobRunner class. It must be locked just before the counter.done() check and unlocked after counter.dec(). In other words, the lock hierarchy must be decoupled from the object hierarchy. Where to place this lock is a matter of personal preference: you can lock LimitCounter::lock inside JobRunner::do_the_job, you can make JobRunner::lock a static variable, or you can pass the mutex to JobRunner::do_the_job as an argument.
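For illustration, here is a minimal sketch of the static-member option (runner_lock is a name I made up for this sketch, and it reuses LimitCounter and the global counter from above). Because the check and the decrement now happen under one lock shared by every runner, no other thread can slip in between them:
#include <mutex>

using namespace std;

// Assumes LimitCounter and the global `counter` from the snippets above.
struct JobRunner {
    static mutex runner_lock;   // one lock shared by every JobRunner instance
    void do_the_job() {
        lock_guard<mutex> g(runner_lock);   // held across the check *and* the decrement
        if (!counter.done()) {
            // ... actually do the job ...
            counter.dec();
        }
    }
};
mutex JobRunner::runner_lock;
Note that with this arrangement LimitCounter's own lock becomes redundant on this code path, which is exactly the point: the lock follows the operation, not the object.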
Another case is when you have a very large number of objects. Then you can't just add a mutex to each object, because it is too expensive (every mutex is a kernel object, and you can run out of handles). In this case you can shard your objects and lock each shard with the same mutex. For example:
mutex mutexes[0x1000];
....
struct UbiquitousResource {
    int unique_id;
    void do_some_job() {
        auto& m = mutexes[hash<int>{}(unique_id) & 0xFFF];
        lock_guard<mutex> g(m);
        // ... do the job ...
    }
};
I believe that Java and C# do something like this when you use synchronized methods (in Java) or lock objects (in C#).
Upvotes: 0
Reputation:
Use an std::lock_guard. The lock_guard takes a mutex in its constructor. During construction, the lock_guard locks the mutex. When the lock_guard goes out of scope, its destructor automatically releases the lock.
#include <mutex>

class foo
{
private:
    std::mutex mutex;
    int attr01;
public:
    foo() {
        attr01 = 0;
    }
    void f(){
        std::lock_guard<std::mutex> lock (mutex);
        attr01 += 5;
    }
};
You can put mutable on the mutex if you need to be able to lock or unlock the mutex from const functions. I usually leave mutable off the mutex until I specifically need it.
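For example (the get() accessor here is just an illustration I'm adding, not part of your question), a const member function can only lock the mutex if the mutex is declared mutable:
#include <mutex>

class foo
{
private:
    mutable std::mutex mutex;   // mutable: const members may still lock it
    int attr01 = 0;
public:
    int get() const {
        std::lock_guard<std::mutex> lock (mutex);   // compiles only because the mutex is mutable
        return attr01;
    }
};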
Will it lose performance? It depends. If you are calling the function a million times, then maybe the overhead of creating the mutex will become a problem (they are not cheap). If the function takes a long time to execute and it is called frequently by many threads, then perhaps the rapid blocking will hinder performance. If you can't pinpoint a specific concern, just use std::lock_guard.
Hans Passant brings up a valid concern that is out of scope for your question. I think Herb Sutter (?) wrote about this in one of his website articles; unfortunately I can't find it right now. To understand why multithreading is so hard, and why locks on single data fields are "not enough", read a book on multithreaded programming like C++ Concurrency in Action: Practical Multithreading.
Upvotes: 5