Reputation: 5733
I am a complete novice at multi-core programming, but I do know how to program in C++.
Now I am looking around for a multi-core programming library. I just want to give it a try, for fun, and so far I have found three APIs, but I am not sure which one I should stick with. Right now, I see Boost's MPI, OpenMP and TBB.
For anyone who has experience with any of these three APIs (or any other), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?
Upvotes: 5
Views: 7807
Reputation: 91
As a starting point I'd suggest OpenMP. With this you can very simply do three basic types of parallelism: loops, sections, and tasks.
Loops: these allow you to split loop iterations over multiple threads. For instance:
#pragma omp parallel for
for (int i=0; i<N; i++) {...}
If you were using two threads, the first thread would perform the first half of the iterations and the second thread the second half.
Sections: these allow you to statically partition the work over multiple threads. This is useful when there is obvious work that can be performed in parallel. However, it's not a very flexible approach.
#pragma omp parallel sections
{
#pragma omp section
{...}
#pragma omp section
{...}
}
Tasks are the most flexible approach. These are created dynamically and their execution is performed asynchronously, either by the thread that created them, or by another thread.
#pragma omp task
{...}
OpenMP has several things going for it.
Directive-based: the compiler does the work of creating and synchronizing the threads.
Incremental parallelism: you can focus on just the region of code that you need to parallelise.
One source base for serial and parallel code: the OpenMP directives are only recognized by the compiler when you run it with a flag (-fopenmp for gcc). So you can use the same source base to generate both serial and parallel code. This means you can turn off the flag to see if you get the same result from the serial version of the code or not. That way you can isolate parallelism errors from errors in the algorithm.
You can find the entire OpenMP spec at http://www.openmp.org/
Upvotes: 9
Reputation: 13182
Another interesting library is OpenCL. It basically lets you target all your hardware (CPU, GPU, DSP, ...) through a single API, so the same code can run on whichever device suits it best.
It has some interesting features, such as the ability to launch hundreds of work-items without the per-thread overhead you would pay with OS threads.
Upvotes: 1
Reputation: 78364
Under the hood OpenMP is multi-threaded programming but at a higher level of abstraction than TBB and its ilk. The choice between the two, for parallel programming on a multi-core computer, is approximately the same as the choice between any higher and lower level software within the same domain: there is a trade off between expressivity and controllability.
Intel vs AMD is irrelevant I think.
And your choice ought to depend on what you are trying to achieve; for example, if you want to learn TBB then TBB is definitely the way to go. But if you want to parallelise an existing C++ program in easy steps, then OpenMP is probably a better first choice; TBB will still be around later for you to tackle. I'd probably steer clear of MPI at first unless I was certain that I would be transferring from shared-memory programming (which is mostly what you do on a multi-core) to distributed-memory programming (on clusters or networks). As ever, the technology you choose ought to depend on your requirements.
Upvotes: 8
Reputation: 6208
Depends on your focus. If you are mainly interested in multi threaded programming go with TBB. If you are more interested in process level concurrency then MPI is the way to go.
Upvotes: 2
Reputation: 17119
I'd suggest you play with MapReduce for some time. You can install several virtual machine instances on the same machine, each running a Hadoop instance (Hadoop is an open-source implementation of MapReduce, developed largely at Yahoo!). There are a lot of tutorials online for setting up Hadoop.
By the way, MPI and OpenMP are not the same thing. OpenMP is for shared-memory programming, which generally means multi-core programming on a single machine, not parallel programming across several machines.
Upvotes: 2