Reputation: 6221
So I came across Intel TBB and asked myself whether it is suitable for some kind of DBMS.
For example, I have a task called ProcessQuery which executes a statement in my system and returns something via an output parameter. It would look roughly like this:
#include <tbb/task.h>

class ProcessQuery : public tbb::task {
private:
    int a;
    char* result;
public:
    ProcessQuery(int n, char* res) : a(n), result(res) {}
    tbb::task* execute() {
        //do something and write the result
        return NULL; // no continuation task
    }
};
To execute this I would, for example, do something like the following (it is just an example!):
tbb::task_scheduler_init init(tbb::task_scheduler_init::automatic);
//init of the parameters for the tasks (r1, r2, r3 are the result buffers)
ProcessQuery &q1 = *new(tbb::task::allocate_root()) ProcessQuery(1, r1);
ProcessQuery &q2 = *new(tbb::task::allocate_root()) ProcessQuery(2, r2);
ProcessQuery &q3 = *new(tbb::task::allocate_root()) ProcessQuery(3, r3);
tbb::task::spawn(q1);
tbb::task::spawn(q2);
tbb::task::spawn(q3);
Moreover, I would need some task which loops, checks whether there is a result, and sends it back to the query client. So there would be a root task that has those ProcessQuery tasks as children, roughly as in the sketch below. Or the task could even get the client passed by reference and send the result itself when it is done.
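Roughly, what I have in mind is something like this (just a rough sketch; QueryCollector and the result buffers here are made up for illustration):

#include <tbb/task.h>

// A root task that owns the ProcessQuery children and inspects
// their results once all of them have finished.
class QueryCollector : public tbb::task {
public:
    tbb::task* execute() {
        char r1[64], r2[64], r3[64];
        set_ref_count(4); // 3 children + 1 for the wait
        spawn(*new(allocate_child()) ProcessQuery(1, r1));
        spawn(*new(allocate_child()) ProcessQuery(2, r2));
        spawn_and_wait_for_all(*new(allocate_child()) ProcessQuery(3, r3));
        // all children are done here -> send r1, r2, r3 back to the client
        return NULL;
    }
};

// somewhere in main:
// tbb::task::spawn_root_and_wait(*new(tbb::task::allocate_root()) QueryCollector());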
So is this suitable, or is there a better solution which comes more or less out of the box and scales well? (Maybe I am even wrong to use the task scheduler of TBB; I know there is a lot more inside the library.)
Upvotes: 1
Views: 394
Reputation: 6537
Let me first fix your ancient-styled example. Since tbb::task is a low-level API and task_scheduler_init is optional, I would not recommend starting with them. Use the high-level API instead, e.g. task_group:
#include <tbb/task_group.h>

tbb::task_group tg;
int a = 1;
tg.run([a]{ /*do something with a and write the result*/ }); a++;
tg.run([a]{ /*do something with a and write the result*/ }); a++;
tg.run([a]{ /*do something with a and write the result*/ }); a++;
// ...
tg.wait(); // wait for all tasks; recommended before program termination
As for your question, TBB is designed primarily for parallel computing and it does not have sufficient support for blocking operations like file I/O and networking: such operations block worker threads at the OS level and cause underutilization of CPU resources, because TBB limits the number of worker threads in order to prevent oversubscription.
But TBB works well with asynchronous I/O, where the blocking operation is confined to one single thread and the TBB workers process the events it produces. There is one minor problem with excessive utilization of the master thread when there are no incoming events yet, but it can be worked around or even fixed in the TBB scheduler.
A simple high-level approach for such a producer-consumer pattern is to use parallel_pipeline:
#include <tbb/pipeline.h>
#include <tbb/task_scheduler_init.h>
using namespace tbb;

void AsyncIO() {
    parallel_pipeline( /*max_number_of_live_tokens=*/
        4*task_scheduler_init::default_num_threads(),
        make_filter<void, event_t>(
            filter::serial,             // only one thread can get events
            [](flow_control& fc) -> event_t {
                event_t e;
                if( !get_event(e) ) {   // get_event is your event source
                    fc.stop();          // finish the pipeline
                    return event_t();   // empty event
                }
                return e;
            }
        ) &
        make_filter<event_t, void>(
            filter::parallel,           // events can be processed in parallel
            [&](event_t e) {
                process_event(e);
                enqueue_response(e);    // do not block when writing/sending back
            }
        )
    );
}
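Note that event_t, get_event, process_event, and enqueue_response above are placeholders for your own code, not TBB APIs. A minimal set of stubs (declared before AsyncIO) to make the sketch compile could look like this:

#include <string>

// Placeholders only -- replace with your own event source and handlers.
struct event_t {
    int id;
    std::string payload;
};

bool get_event(event_t& e) {
    // e.g. poll a socket or an async I/O completion queue;
    // return false when the server is shutting down
    return false;
}

void process_event(const event_t& e)    { /* execute the query */ }
void enqueue_response(const event_t& e) { /* hand the result to a sender thread */ }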
To summarize: if you can split your operations into blocking and non-blocking ones and move all the blocking operations onto dedicated thread(s), TBB can help you organize scalable computation and reduce the latency of a single request by processing it in parallel.
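For example (just a sketch; request_t, blocking_read, and handle_request are placeholders, not TBB APIs), a dedicated reader thread can feed a concurrent_bounded_queue while a task_group processes the requests in parallel:

#include <thread>
#include <tbb/concurrent_queue.h>
#include <tbb/task_group.h>

struct request_t { int id; };                        // placeholder request type
bool blocking_read(request_t& r) { return false; }   // stub: your blocking socket/file read
void handle_request(const request_t& r) {}           // stub: non-blocking CPU work

void run_server() {
    tbb::concurrent_bounded_queue<request_t> queue;
    queue.set_capacity(1024);             // apply back-pressure to the reader

    // The only thread that is allowed to block: it feeds the queue.
    std::thread reader([&]{
        request_t r;
        while (blocking_read(r))
            queue.push(r);                // blocks when the queue is full
        queue.push(request_t{-1});        // poison pill to stop the consumer loop
    });

    // TBB workers do the non-blocking part in parallel.
    tbb::task_group tg;
    request_t r;
    for (;;) {
        queue.pop(r);                     // blocks the master thread only
        if (r.id == -1) break;
        tg.run([r]{ handle_request(r); });
    }
    tg.wait();
    reader.join();
}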
Upvotes: 1