Reputation: 265
I'm trying to implement a UDP-based server that maintains two sockets: one for control (ctrl_sock) and one for data transmission (data_sock). The ctrl_sock is always uplink and the data_sock is downlink. That is, clients request to start or stop data transmission via the ctrl_sock, and data is sent to them via the data_sock.
Now the problem is, since the model is connectionless, the server has to maintain a list of registered clients' information (I call it peers_context) so that it can "blindly" push data to them until they ask it to stop. During this blind transmission, clients may send control messages to the server via the ctrl_sock asynchronously. These messages, besides the initial Request and Stop, can also carry, for example, preferences about file parts. The peers_context therefore has to be updated asynchronously. However, transmission over the data_sock relies on this peers_context structure, which raises a synchronization problem between ctrl_sock and data_sock. My question is: what can I do to safely maintain these two sockets and the peers_context structure so that asynchronous updates of peers_context don't cause havoc? By the way, updates to peers_context won't be very frequent, which is why I want to avoid a request-reply model.
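For concreteness, the peers_context might look something like this; every field here is an illustrative assumption about what "registered clients' information" could contain, not a prescription:

```c
#include <netinet/in.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_PEERS 64

/* One entry per registered client; all fields are hypothetical. */
struct peer {
    struct sockaddr_in addr;  /* where to push data via data_sock */
    bool active;              /* cleared after the client sends Stop */
    unsigned next_part;       /* e.g. preferred file part to send next */
};

struct peers_context {
    struct peer peers[MAX_PEERS];
    size_t count;
};
```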
My initial idea was to handle ctrl_sock in the main thread (listener thread) and transmission over data_sock in another thread (worker thread). However, I found it difficult to synchronize the two. For example, if I protect peers_context with a mutex, then whenever the worker thread holds the lock, the listener thread cannot acquire it when it needs to modify peers_context, because the worker thread runs endlessly. On the other hand, if the listener thread holds the lock while writing, the worker thread fails to read peers_context and terminates. Can anybody give me some suggestions?
By the way, the implementation is in C on Linux. Only the listener thread needs to modify peers_context occasionally; the worker thread only reads it. Thanks sincerely!
Upvotes: 0
Views: 136
Reputation: 22261
If there is strong contention for your peers_context, then you need to shorten your critical sections. You talked about using a mutex. I assume you've already considered switching to a reader-writer lock and rejected it because you don't want the constant readers to starve a writer. How about this?
Make a very small structure that is an indirect reference to a peers_context, like this:
struct peers_context_handle {
    pthread_mutex_t ref_lock;
    struct peers_context *the_actual_context;
    pthread_mutex_t write_lock;
};
Packet senders (readers) and control request processors (writers) always access the peers_context through this indirection.
Assumption: the packet senders never modify the peers_context, nor do they ever free it.
Packet senders briefly lock the handle, obtain the current version of the peers_context, and unlock it:
pthread_mutex_lock(&(handle->ref_lock));
peers_context = handle->the_actual_context;
pthread_mutex_unlock(&(handle->ref_lock));
(In practice, you can even do away with the lock if you introduce memory barriers, because a pointer dereference is atomic on all platforms that Linux supports, but I wouldn't recommend it since you would have to start delving into memory barriers and other low-level stuff, and neither C nor POSIX guarantees that it will work anyway.)
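(One note on that caveat: since C11, <stdatomic.h> does let you express this portably. If you can rely on a C11 compiler, the ref_lock can be replaced by an atomic pointer; the sketch below assumes a trivial two-field peers_context just to have something to point at. This only removes the read-side lock; it does not solve the reclamation problem discussed further down.)

```c
#include <stdatomic.h>

struct peers_context { int foo, bar; }; /* placeholder for illustration */

/* The handle's pointer becomes atomic; ref_lock is no longer needed
   on the read path. */
static _Atomic(struct peers_context *) the_actual_context;

/* Reader: a single atomic load; acquire ordering pairs with the
   writer's release so the pointed-to data is fully visible. */
static struct peers_context *get_context(void)
{
    return atomic_load_explicit(&the_actual_context, memory_order_acquire);
}

/* Writer: publish a fully initialized replacement with release ordering. */
static void publish_context(struct peers_context *fresh)
{
    atomic_store_explicit(&the_actual_context, fresh, memory_order_release);
}
```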
Request processors don't update the peers_context in place; they make a copy and replace the whole thing. That's how they keep their critical sections short. They do use write_lock to serialize updates, but updates are infrequent, so that's not a problem.
pthread_mutex_lock(&(handle->write_lock));
/* Short CS to get the old version */
pthread_mutex_lock(&(handle->ref_lock));
old_peers_context = handle->the_actual_context;
pthread_mutex_unlock(&(handle->ref_lock));
new_peers_context = allocate_new_structure();
*new_peers_context = *old_peers_context;
/* Now make the changes that are requested */
new_peers_context->foo = 42;
new_peers_context->bar = 52;
/* Short CS to replace the context */
pthread_mutex_lock(&(handle->ref_lock));
handle->the_actual_context = new_peers_context;
pthread_mutex_unlock(&(handle->ref_lock));
pthread_mutex_unlock(&(handle->write_lock));
magic(old_peers_context);
What's the catch? It's the magic in the last line of code. You have to free the old copy of the peers_context to avoid a memory leak, but you can't do it immediately because there might be packet senders still using that copy.
The solution is similar to RCU, as used inside the Linux kernel. You have to wait for all of the packet sender threads to have entered a quiescent state. I'm leaving the implementation of this as an exercise for you :-) but here are the guidelines:
- The magic() function adds old_peers_context to a to-be-freed queue (which has to be protected by a mutex).
- Once all packet sender threads have passed through a quiescent state, it is safe to free the queued peer_contexts.
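As a starting point for that exercise, the to-be-freed queue itself can be as simple as the sketch below; the function names are mine, and the hard part, detecting quiescence (e.g. per-thread epoch counters) and deciding when to call drain_garbage(), is still left to you:

```c
#include <pthread.h>
#include <stdlib.h>

/* A node in the to-be-freed queue of retired contexts. */
struct garbage {
    struct garbage *next;
    void *ptr;                /* a retired peers_context */
};

static struct garbage *garbage_head;
static pthread_mutex_t garbage_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by magic(): retire an old context instead of freeing it now. */
static void defer_free(void *ptr)
{
    struct garbage *g = malloc(sizeof *g);
    g->ptr = ptr;
    pthread_mutex_lock(&garbage_lock);
    g->next = garbage_head;
    garbage_head = g;
    pthread_mutex_unlock(&garbage_lock);
}

/* Called only once every packet sender is known to be quiescent,
   i.e. none can still hold a pointer obtained before the swap. */
static void drain_garbage(void)
{
    pthread_mutex_lock(&garbage_lock);
    struct garbage *g = garbage_head;
    garbage_head = NULL;
    pthread_mutex_unlock(&garbage_lock);
    while (g) {
        struct garbage *next = g->next;
        free(g->ptr);
        free(g);
        g = next;
    }
}
```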
Upvotes: 1