Reputation: 2458
I am designing a TCP server that takes information from a request and puts everything in a queue to be processed. I am using an asio web server to handle all web interaction, and I am looking for an effective way to queue everything up to be processed. Right now I am using Boost signals and a global vector, similar to this:
void request_handler::handle_request(request &req, reply &rep)
{
    std::string parsedInfo = parse_request(req);
    shared_queue.push_back(parsedInfo);
    new_entry();
}
new_entry is a Boost signal:
boost::signal<void ()> new_entry;
Right now I have a signal handler class to catch the signal:
void sig_handler::process_next()
{
    boost::try_mutex::scoped_try_lock lock(guard);
    if (!lock)
        return;
    while (!shared_queue.empty())
    {
        ... // Do stuff
        std::string cur_entry = shared_queue.at(0);
        shared_queue.erase(shared_queue.begin());
        ... // Do more stuff
    }
}
My goal is to clear out the vector queue whenever there is information in it, every time something is pushed onto the vector, and I would like to avoid polling as much as possible. I believe this part is working how I expect it to. However, I am getting an occasional crash from, I believe based on my backtrace, pushing information onto the shared queue. It only happens when I try to do thousands of transactions a second, which makes it hard to debug in a multi-threaded environment. My backtrace is here:
Error: signal 11:
./UpdateServer/build/UpdateServer(_Z7handleri+0x18)[0x469f68]
/lib/x86_64-linux-gnu/libc.so.6(+0x364a0)[0x7fbbdd67a4a0]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZNSsC1ERKSs+0xb)[0x7fbbddfb4f2b]
./UpdateServer/build/UpdateServer[0x4749d0]
./UpdateServer/build/UpdateServer(_ZNSt6vectorISsSaISsEE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPSsS1_EERKSs+0x111)[0x476521]
./UpdateServer/build/UpdateServer(_ZN15request_handler14handle_requestERK7requestR5reply+0x3d3)[0x475873]
./UpdateServer/build/UpdateServer(_ZN10connection11handle_readERKN5boost6system10error_codeEm+0x234)[0x46c774]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail14strand_service8dispatchINS1_7binder2INS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS5_5list3INS5_5valueINS_10shared_ptrIS9_EEEEPFNS_3argILi1EEEvEPFNSK_ILi2EEEvEEEEESB_mEEEEvRPNS2_11strand_implET_+0xcd)[0x47216d]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail15wrapped_handlerINS0_10io_service6strandENS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS5_5list3INS5_5valueINS_10shared_ptrIS9_EEEEPFNS_3argILi1EEEvEPFNSK_ILi2EEEvEEEEEEclISB_mEEvRKT_RKT0_+0xd9)[0x472439]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail18completion_handlerINS1_17rewrapped_handlerINS1_7binder2INS1_15wrapped_handlerINS0_10io_service6strandENS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS8_5list3INS8_5valueINS_10shared_ptrISC_EEEEPFNS_3argILi1EEEvEPFNSN_ILi2EEEvEEEEEEESE_mEESV_EEE11do_completeEPNS1_15task_io_serviceEPNS1_25task_io_service_operationESG_m+0x1e5)[0x4726f5]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail14strand_service8dispatchINS1_17rewrapped_handlerINS1_7binder2INS1_15wrapped_handlerINS0_10io_service6strandENS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS9_5list3INS9_5valueINS_10shared_ptrISD_EEEEPFNS_3argILi1EEEvEPFNSO_ILi2EEEvEEEEEEESF_mEESW_EEEEvRPNS2_11strand_implET_+0x2ad)[0x472aed]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail19asio_handler_invokeINS1_7binder2INS1_15wrapped_handlerINS0_10io_service6strandENS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS7_5list3INS7_5valueINS_10shared_ptrISB_EEEEPFNS_3argILi1EEEvEPFNSM_ILi2EEEvEEEEEEESD_mEES6_SU_EEvRT_PNS4_IT0_T1_EE+0x15f)[0x472d3f]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail23reactive_socket_recv_opINS0_17mutable_buffers_1ENS1_15wrapped_handlerINS0_10io_service6strandENS_3_bi6bind_tIvNS_4_mfi3mf2Iv10connectionRKNS_6system10error_codeEmEENS7_5list3INS7_5valueINS_10shared_ptrISB_EEEEPFNS_3argILi1EEEvEPFNSM_ILi2EEEvEEEEEEEE11do_completeEPNS1_15task_io_serviceEPNS1_25task_io_service_operationESF_m+0xce)[0x472ede]
./UpdateServer/build/UpdateServer(_ZN5boost4asio6detail15task_io_service3runERNS_6system10error_codeE+0x79a)[0x47ceea]
./UpdateServer/build/UpdateServer(_ZN5boost4asio10io_service3runEv+0x25)[0x47d1d5]
/usr/lib/libboost_thread.so.1.48.0(+0xdda9)[0x7fbbdecd4da9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7fbbdd42ee9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fbbdd737cbd]
The line
./UpdateServer/build/UpdateServer(_ZNSt6vectorISsSaISsEE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPSsS1_EERKSs+0x111)[0x476521]
demangles to std::vector<std::string>::_M_insert_aux, which is why I believe my program is crashing on the shared vector insert (I do not believe I have any other vectors of strings in my program). However, I am fairly positive I am using my vector in a safe way, on both the insert and the read.
So I guess my question is: when I am pushing information onto a shared vector, are there race conditions I have to worry about that would cause my crash? And is the approach I am taking feasible, or should I rethink my design in some way? Please let me know if you need any more information; I will be glad to provide anything I can.
Thank you
Upvotes: 0
Views: 1196
Reputation: 5300
std data structures are not thread-safe (for the most part) and therefore require additional synchronization if accessed by multiple threads simultaneously. In your case, one thread could be calling push_back while another thread is calling erase. This will produce undefined behavior. To fix it, both the push_back and the erase need to be protected by the same lock. I recommend you search for thread safety in the C++ standard library and read more about it.
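For example, the minimal fix is for the producer to take the same mutex the consumer already holds. A sketch against the code above, assuming guard is visible to request_handler as well:

void request_handler::handle_request(request &req, reply &rep)
{
    std::string parsedInfo = parse_request(req);
    {
        // Same guard mutex that process_next locks; this serializes
        // the push_back against the consumer's at()/erase().
        boost::try_mutex::scoped_lock lock(guard);
        shared_queue.push_back(parsedInfo);
    } // Lock released before firing the signal.
    new_entry();
}

The consumer's scoped_try_lock already spans its at() and erase() calls, so once the producer locks too, every access to shared_queue is serialized.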
Also, vector is probably not the best choice here; you should look into std::queue instead. When you erase the first element of a vector, it has to copy every string later in the vector down one position, which can be very expensive. A queue does not suffer from this problem.
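Putting both points together, one common shape for this kind of producer/consumer hand-off replaces the signal with a condition variable, so the consumer blocks instead of polling. This is only a sketch with illustrative names (enqueue, process_loop, queue_not_empty), not your exact code:

#include <queue>
#include <string>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

std::queue<std::string> shared_queue;
boost::mutex guard;
boost::condition_variable queue_not_empty;

// Producer side (what handle_request would call).
void enqueue(const std::string &parsedInfo)
{
    {
        boost::mutex::scoped_lock lock(guard);
        shared_queue.push(parsedInfo);
    } // Release before notifying so the woken thread can grab the lock.
    queue_not_empty.notify_one();
}

// Consumer side (replaces process_next); sleeps until work arrives.
void process_loop()
{
    for (;;)
    {
        std::string cur_entry;
        {
            boost::mutex::scoped_lock lock(guard);
            while (shared_queue.empty())
                queue_not_empty.wait(lock); // Atomically unlocks and sleeps.
            cur_entry = shared_queue.front();
            shared_queue.pop();
        }
        // ... do stuff with cur_entry, outside the lock ...
    }
}

Popping into a local copy and processing outside the lock keeps the critical section short, so producers are not blocked while an entry is being handled.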
Upvotes: 3