Reputation: 5511
I'm using a C library which rips PDF data and provides it to me via callbacks. Two callbacks are used: one provides the job header and the other provides the ripped data in chunks ranging from 1 to 50 MB.
I'm then taking that data and sending it across the wire via TCP to someone who cares. I'm using boost's async_write to send the data, and I want to synchronize access to async_write so that it isn't called again until it has finished sending the previous chunk of data.
The C callback functions:
void __stdcall HeaderCallback( void* data, int count )
{
    // The Send function is a member of my AsyncTcpClient class.
    // This is how I'm currently providing my API with the PDF data.
    client.Send( data, count );
}

void __stdcall DataCallback( void* data, int count )
{
    client.Send( data, count );
}
I receive the provided data in my AsyncTcpClient class's Send method.
void AsyncTcpClient::Send( void* buffer, size_t length )
{
    // Write to the remote server.
    boost::asio::async_write( _session->socket,
        boost::asio::buffer( ( const char* )buffer, length ),
        [ this ]( boost::system::error_code const& error, std::size_t bytesTransfered )
        {
            if ( error )
            {
                _session->errorCode = error;
                OnRequestComplete( _session );
                return;
            }

            std::unique_lock<std::mutex> cancelLock( _session->cancelGuard );

            if ( _session->cancelled )
            {
                OnRequestComplete( _session );
                return;
            }
        } );
}
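One thing worth noting about Send as written, separate from the synchronization question itself: boost::asio::async_write only initiates the operation and returns immediately, and it requires the memory behind the buffer to remain valid until the completion handler runs. Below is a minimal sketch of one common way to guarantee that, by copying the chunk into a shared_ptr-owned buffer that the handler captures. The names are illustrative, not from the code above, and cancellation handling is omitted.

// Needs <memory> and <vector>.
void AsyncTcpClient::Send( void* buffer, size_t length )
{
    // Copy the caller's data; its lifetime is now tied to the async
    // operation through the lambda capture.
    auto owned = std::make_shared<std::vector<char>>(
        ( const char* )buffer, ( const char* )buffer + length );

    boost::asio::async_write( _session->socket,
        boost::asio::buffer( *owned ),
        [ this, owned ]( boost::system::error_code const& error, std::size_t /*bytesTransferred*/ )
        {
            // 'owned' is released here, after the write has completed.
            if ( error )
            {
                _session->errorCode = error;
                OnRequestComplete( _session );
            }
        } );
}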
How can I synchronize access to the async_write function?

Using a mutex at the start of the Send function would be pointless, as async_write returns immediately. It's also pointless to store the mutex in a unique_lock member variable and try to unlock it in the async_write completion lambda, as that will blow up.

How can I synchronize access to async_write without using a strand? The first iteration of the program won't use a strand for synchronization; I will be implementing that later.
Upvotes: 2
Views: 442
Reputation: 2240
You should use an io_context::strand.
It's just one example among many others, but that answer will help you.
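For completeness, here is a hedged sketch of what that could look like with io_context::strand, reusing the same hypothetical _queue, _writing and WriteNext members as the queue sketch in the question, plus an io_context::strand member (here called _strand, constructed from the io_context the socket runs on). The strand replaces the mutex: everything that touches the queue runs as a handler posted through or wrapped by the same strand, so those handlers never run concurrently, while starting each new async_write from the previous write's handler still keeps the writes themselves ordered.

void AsyncTcpClient::Send( void* buffer, size_t length )
{
    // Copy the chunk, then hop onto the strand; all queue bookkeeping
    // happens inside strand handlers, so no mutex is required.
    std::vector<char> chunk( ( const char* )buffer, ( const char* )buffer + length );

    _strand.post( [ this, chunk = std::move( chunk ) ]() mutable
    {
        _queue.push_back( std::move( chunk ) );

        if ( !_writing )
        {
            _writing = true;
            WriteNext();
        }
    } );
}

void AsyncTcpClient::WriteNext()
{
    // Only ever called from a strand handler, so _queue is safe to touch.
    boost::asio::async_write( _session->socket,
        boost::asio::buffer( _queue.front() ),
        _strand.wrap( [ this ]( boost::system::error_code const& error, std::size_t /*bytesTransferred*/ )
        {
            _queue.pop_front();

            if ( error || _queue.empty() )
            {
                _writing = false;
                return;
            }

            WriteNext();
        } ) );
}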
Upvotes: 1