Reputation: 4505
I use the same socket in my UDP server both to receive data from clients on some port and, later, after processing the requests, to respond to those clients using ip::udp::socket::async_send_to.
Receiving is also done asynchronously, with async_receive_from. The socket uses the same ioService (it's the same socket, after all). The documentation does not clearly state whether a single UDP socket can, at the same moment, be receiving a datagram from client A (asynchronously) and sending another datagram to client B (asynchronously). I suspect this could lead to problems. I ended up using the same socket for replies because I could not bind another socket to the same server port while replying to another client.
How can I bind another socket to the same server port?
EDIT: I tried to bind a second UDP socket to the same UDP port with:
socket(ioService, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port))
Doing this the first time (binding the server's "receive" socket) works, but constructing a second socket the same way fails at bind (asio throws an exception).
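A minimal reproduction of what I mean (a sketch; the port number is arbitrary):

boost::asio::io_service ioService;
unsigned short port = 40000;

// First socket binds fine.
boost::asio::ip::udp::socket first(ioService,
    boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port));

// Second socket on the same port throws boost::system::system_error
// ("bind: Address already in use").
boost::asio::ip::udp::socket second(ioService,
    boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port));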
Upvotes: 7
Views: 13468
Reputation: 51871
It is possible to have a single UDP socket concurrently receiving from one remote endpoint and sending to a different remote endpoint. However, per Boost.Asio's Threads and Boost.Asio documentation, it is generally unsafe to make concurrent calls on a single object.
Thus, this is safe:
thread_1                              | thread_2
--------------------------------------+---------------------------------------
socket.async_receive_from( ... );     |
socket.async_send_to( ... );          |
and this is safe:
thread_1                              | thread_2
--------------------------------------+---------------------------------------
socket.async_receive_from( ... );     |
                                      | socket.async_send_to( ... );
but this is specified as not being safe:
thread_1                              | thread_2
--------------------------------------+---------------------------------------
socket.async_receive_from( ... );     | socket.async_send_to( ... );
Be aware that some functions, such as boost::asio::async_read, are composed operations and have additional thread-safety restrictions.
If either of the following conditions is true, then no additional synchronization needs to occur, as the flow will be implicitly synchronous:

- io_service::run() is only invoked from a single thread.
- async_receive_from and async_send_to are only invoked within the same chain of asynchronous operations. For example, the ReadHandler passed to async_receive_from invokes async_send_to, and the WriteHandler passed to async_send_to invokes async_receive_from, as illustrated below:
void read()
{
socket.async_receive_from( ..., handle_read ); --.
} |
.-----------------------------------------------'
| .----------------------------------------.
V V |
void handle_read( ... ) |
{ |
socket.async_send_to( ..., handle_write ); --. |
} | |
.-------------------------------------------' |
| |
V |
void handle_write( ... ) |
{ |
socket.async_receive_from( ..., handle_read ); --'
}
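To make that chain concrete, here is a minimal sketch of a single-threaded UDP server where each handler initiates the next operation. The class name udp_echo and the port are mine, not from the question:

#include <boost/asio.hpp>
#include <array>

using boost::asio::ip::udp;

class udp_echo
{
public:
  udp_echo(boost::asio::io_service& io_service, unsigned short port)
    : socket_(io_service, udp::endpoint(udp::v4(), port))
  {
    read();
  }

private:
  void read()
  {
    // Start an asynchronous receive; the handler continues the chain.
    socket_.async_receive_from(
        boost::asio::buffer(buffer_), sender_,
        [this](boost::system::error_code ec, std::size_t bytes)
        {
          if (!ec) handle_read(bytes);
        });
  }

  void handle_read(std::size_t bytes)
  {
    // Reply to the sender; the write handler starts the next receive.
    socket_.async_send_to(
        boost::asio::buffer(buffer_, bytes), sender_,
        [this](boost::system::error_code ec, std::size_t /*bytes*/)
        {
          if (!ec) read();
        });
  }

  udp::socket socket_;
  udp::endpoint sender_;
  std::array<char, 1024> buffer_;
};

int main()
{
  boost::asio::io_service io_service;
  udp_echo server(io_service, 40000); // arbitrary port
  // run() is invoked from a single thread => implicit synchronization.
  io_service.run();
}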
On the other hand, if there are multiple threads potentially making concurrent calls to the socket, then synchronization needs to occur. Consider performing synchronization by either invoking the functions and handlers through a boost::asio::io_service::strand, or using other synchronization mechanisms, such as Boost.Thread's mutex.
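As a minimal sketch of the strand approach, assuming io_service::run() is invoked from multiple threads: post the initiating calls through the strand and wrap the handlers with it, so no two operations touch the socket concurrently. The helper name queue_send and its parameters are mine, not part of the original answer:

#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <string>

void queue_send(boost::asio::io_service::strand& strand,
                boost::asio::ip::udp::socket& socket,
                boost::shared_ptr<std::string> message,
                boost::asio::ip::udp::endpoint target)
{
  // The initiating call is itself run through the strand...
  strand.post([&strand, &socket, message, target]()
  {
    socket.async_send_to(
        boost::asio::buffer(*message), target,
        // ...and so is the completion handler.
        strand.wrap(
            [message](boost::system::error_code, std::size_t)
            {
              // The shared_ptr keeps the buffer alive until completion.
            }));
  });
}

The same treatment applies to async_receive_from and its handler.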
In addition to thread safety, the management of object lifetimes must be considered. If the server needs to process multiple requests concurrently, then be careful about the ownership of the buffer and endpoint for each request->process->response chain. Per async_receive_from's documentation, the caller retains ownership of both the buffer and the endpoint. As such, it may be easier to manage the lifetimes of these objects via boost::shared_ptr. Otherwise, if each chain completes quickly enough that concurrent chains are not required, then management is simpler: the same buffer and endpoint can be reused for every request.
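A minimal sketch of the shared_ptr approach, assuming a single-threaded io_service; the request struct and process_and_reply are hypothetical names of mine:

#include <boost/asio.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <array>

struct request
{
  std::array<char, 1024> buffer;          // caller-owned buffer
  boost::asio::ip::udp::endpoint sender;  // caller-owned endpoint
};

void process_and_reply(boost::asio::ip::udp::socket& socket,
                       boost::shared_ptr<request> req,
                       std::size_t bytes)
{
  // ... process req->buffer, then reply:
  socket.async_send_to(
      boost::asio::buffer(req->buffer, bytes), req->sender,
      [req](boost::system::error_code, std::size_t)
      {
        // req (and thus buffer + endpoint) stays alive until the send completes.
      });
}

void start_receive(boost::asio::ip::udp::socket& socket)
{
  auto req = boost::make_shared<request>();
  socket.async_receive_from(
      boost::asio::buffer(req->buffer), req->sender,
      [&socket, req](boost::system::error_code ec, std::size_t bytes)
      {
        start_receive(socket); // accept the next request concurrently
        if (!ec)
          process_and_reply(socket, req, bytes);
      });
}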
Finally, the socket_base::reuse_address class allows a socket to be bound to an address that is already in use. However, I do not think it is an applicable solution here, as it is generally used for cases such as restarting a server and rebinding to a port whose previous socket lingers in the TIME_WAIT state.
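For completeness, setting the option looks roughly like this (a sketch, not a recommendation for this problem; ioService and port are from the question):

boost::asio::ip::udp::socket socket(ioService);
socket.open(boost::asio::ip::udp::v4());
// Must be set after open() and before bind().
socket.set_option(boost::asio::socket_base::reuse_address(true));
socket.bind(boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port));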
Upvotes: 16