Reputation: 27370
In general, can Mathematica automatically (i.e. without writing code specifically for this) exploit GPU hardware and/or parallelize built-in operations across multiple cores?
For example, for drawing a single very CPU-intensive plot or solving a very CPU-intensive equation, would upgrading the graphics hardware result in speed-up? Would upgrading to a CPU with more cores speed things up? (I realize that more cores mean I could solve more equations in parallel but I'm curious about the single-equation case)
Just trying to get a handle on how Mathematica exploits hardware.
Upvotes: 13
Views: 4502
Reputation: 1133
I wouldn't say Mathematica automatically does GPU or parallel-CPU computing, at least not in general. Since you need to do something with parallel kernels, you should launch additional kernels and/or load CUDALink or OpenCLLink and use specific Mathematica functionality to exploit the potential of the CPU and/or GPU.
For example, I don't have a very powerful graphics card (an NVIDIA GeForce 9400 GT), but we can test how CUDALink works. First I have to load CUDALink:
Needs["CUDALink`"]
I am going to test multiplication of large matrices. I choose a random 5000 x 5000 matrix of real numbers in the range (-1, 1):
M = RandomReal[{-1, 1}, {5000, 5000}];
Now we can check the computation time without GPU support:
In[4]:= AbsoluteTiming[ Dot[M,M]; ]
Out[4]= {26.3780000, Null}
and with GPU support:
In[5]:= AbsoluteTiming[ CUDADot[M, M]; ]
Out[5]= {6.6090000, Null}
In this case we obtained a speed-up of roughly a factor of 4 by using CUDADot instead of Dot.
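Whether CUDADot helps at all depends on the card, so before relying on CUDALink it is worth checking that a supported device is present. A minimal sketch (CUDAQ and CUDAInformation come with CUDALink; the "Name" property is an assumption about the available information fields):

```
Needs["CUDALink`"]
If[CUDAQ[],
 (* a CUDA-capable device was found; query its name *)
 CUDAInformation[1, "Name"],
 (* otherwise there is no GPU to offload to *)
 "no CUDA support"]
```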
Edit
To add an example of parallel CPU acceleration (on a dual-core machine), I choose all prime numbers in the range [2^300, 2^300 + 10^6]. First, without parallelizing:
In[139]:= AbsoluteTiming[ Select[ Range[ 2^300, 2^300 + 10^6], PrimeQ ]; ]
Out[139]= {121.0860000, Null}
while using Parallelize[expr], which evaluates expr using automatic parallelization:
In[141]:= AbsoluteTiming[ Parallelize[ Select[ Range[ 2^300, 2^300 + 10^6], PrimeQ ] ]; ]
Out[141]= {63.8650000, Null}
As one could expect, evaluation was almost twice as fast.
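Note that Parallelize distributes work across however many parallel kernels are currently running, so on a machine with more cores you would launch the kernels first. A sketch, assuming the default behaviour of one kernel per available core:

```
LaunchKernels[];  (* by default launches one kernel per available core *)
$KernelCount      (* number of parallel kernels now running *)
AbsoluteTiming[
 Parallelize[Select[Range[2^300, 2^300 + 10^6], PrimeQ]];]
```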
Upvotes: 15
Reputation: 24336
Generally no, a faster GPU will not accelerate normal Mathematica computations.
You must use CUDA- or OpenCL-supported functions to use the GPU. You can get an overview of the options and some samples of their use here: CUDA and OpenCL Support.
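As a minimal illustration of what using a CUDA-supported function looks like (a sketch assuming a CUDA-capable card and working drivers; CUDAMap is part of CUDALink):

```
Needs["CUDALink`"]
data = RandomReal[{-1, 1}, 10^6];
(* element-wise Cos evaluated on the GPU *)
gpu = CUDAMap[Cos, data];
(* the equivalent CPU computation, for comparison *)
cpu = Cos[data];
```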
Upvotes: 8
Reputation: 25703
I can't comment much on how Mathematica uses the GPU (as I never had the chance to try), but I don't believe it does so by default (i.e. without you writing code specifically to exploit the GPU).
Adding more cores will help if you explicitly parallelize your calculations (see Parallelize and related functions).
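For example, an explicitly parallel computation might look like this (a sketch; ParallelTable splits the iteration range across the launched kernels):

```
LaunchKernels[];
(* test a block of large numbers for primality, split across kernels *)
ParallelTable[PrimeQ[2^300 + k], {k, 1, 1000}]
```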
If you don't parallelize explicitly, I believe there are still certain numerical calculations that take advantage of multiple cores. I'm not sure which ones, but I do know that some linear-algebra functions (LinearSolve, Det, etc.) use multiple cores by default.
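One way to probe this is to time a large LinearSolve while restricting the number of threads the underlying numerical libraries may use. A sketch; the option names below are assumed from SystemOptions["ParallelOptions"] and may vary between versions:

```
m = RandomReal[1, {2000, 2000}];
b = RandomReal[1, 2000];
(* multithreaded by default *)
First@AbsoluteTiming[LinearSolve[m, b];]
(* restrict the MKL-backed linear algebra to a single thread *)
SetSystemOptions["ParallelOptions" -> {"MKLThreadNumber" -> 1}];
First@AbsoluteTiming[LinearSolve[m, b];]
```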
Upvotes: 4