Reputation: 1
I want to understand whether I can speed up the execution of a loop that works with the MPIR library by running it on the GPU with C++ AMP.
Here is the code I would like to speed up:
#include <mpirxx.h>
#include <iostream>
#include <cstdlib>
#include <ctime>

int main()
{
    using namespace std;

    mpz_class i("0");
    mpz_class l("9999999999999999999999999999");

    // Empty loop that just counts from 1 up to a 28-digit number.
    for (i = 1; i <= l; i++)
    {
    }

    // Elapsed CPU time in seconds.
    std::cout << "runtime = " << static_cast<double>(clock()) / CLOCKS_PER_SEC << std::endl;
    system("pause");
    return 0;
}
If I run the program on the CPU, it is extremely slow, so I want to speed it up using the video card. As far as I understand, I can use almost any modern graphics card with DirectX 11 support, even one integrated into the processor, for example Intel HD Graphics 510.
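To check which DirectX 11 devices C++ AMP can actually see on my machine, I use a small enumeration snippet like the one below (this is just my understanding of the concurrency::accelerator API, and the output formatting is my own choice):

#include <amp.h>
#include <iostream>

int main()
{
    using namespace concurrency;

    // Print every accelerator C++ AMP reports, marking software emulators.
    for (const accelerator& acc : accelerator::get_all())
    {
        std::wcout << acc.description
                   << (acc.is_emulated ? L" (emulated)" : L" (hardware)")
                   << std::endl;
    }
    return 0;
}

On my machine the Intel HD Graphics 510 shows up in this list, so I assume it can be used as the target device.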
I think C++ AMP is the most suitable technology for my task, if it can interact with the MPIR library.
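For reference, this is the kind of C++ AMP loop I have in mind, adapted from the usual array_view / parallel_for_each examples. It only works with plain int data here; I do not know whether an mpz_class value can be captured or used inside a restrict(amp) lambda at all, which is exactly my problem:

#include <amp.h>
#include <iostream>
#include <vector>

int main()
{
    using namespace concurrency;

    std::vector<int> data(1024);
    for (int i = 0; i < 1024; ++i)
        data[i] = i;

    // Wrap the host data so the GPU can access it.
    array_view<int, 1> av(1024, data);

    // Each GPU thread squares one element.
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
    {
        av[idx] = av[idx] * av[idx];
    });

    av.synchronize();                      // copy results back to the host
    std::cout << data[10] << std::endl;    // prints 100
    return 0;
}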
What should I change in the code above to make it run on the GPU?
Upvotes: 0
Views: 148