Reputation: 12185
I am writing a class to model big integers. I store the digits as unsigned ints in a vector held through a pointer called data. This function adds n to the current big integer. What seems to be slowing down my performance is having to use long long values: I currently have to make sum a long long or else the addition will overflow. Do any of you know a way around this?
void Integer::u_add(const Integer & n)
{
    std::vector<unsigned int> & tRef = *data;
    std::vector<unsigned int> & nRef = *n.data;
    const int thisSize = tRef.size();
    const int nSize = nRef.size();
    int carry = 0;
    for(int i = 0; i < nSize || carry; ++i)
    {
        bool readThis = i < thisSize;
        long long sum = (readThis ? (long long)tRef[i] + carry : (long long)carry)
                      + (i < nSize ? nRef[i] : 0);
        if(readThis)
            tRef[i] = sum % BASE; // BASE is 2^32
        else
            tRef.push_back(sum % BASE);
        carry = (sum >= BASE ? 1 : 0);
    }
}
Also, just wondering: is there any benefit to binding references to the pointed-to vectors rather than using the pointers directly? That is, should I access the data as tRef[i] or as (*data)[i]?
Upvotes: 0
Views: 92
Reputation: 657
Instead of using base 2^32, use base 2^30. Then when you add two digits plus a carry, the largest sum is 2^31 - 1, which fits into an ordinary long (signed or unsigned).
Or better still, use base 10^9 (roughly equal to 2^30); then you don't need a lot of effort to print the large numbers in decimal format.
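To illustrate, here is a minimal sketch of schoolbook addition in base 10^9; the function name addDigits and the little-endian digit layout are my assumptions, not part of the question's Integer class:

```cpp
#include <cassert>
#include <vector>

// Base 10^9: each "digit" is an unsigned int in [0, 10^9).
// digit + digit + carry <= 2*10^9 - 1, which fits in a 32-bit
// unsigned int (max ~4.29*10^9), so no long long is needed.
const unsigned int BASE9 = 1000000000u;

// Add b into a; both vectors store digits least-significant first.
void addDigits(std::vector<unsigned int> & a,
               const std::vector<unsigned int> & b)
{
    unsigned int carry = 0;
    for (std::size_t i = 0; i < b.size() || carry; ++i)
    {
        if (i == a.size())
            a.push_back(0u);                 // grow to hold the carry
        unsigned int sum = a[i] + carry + (i < b.size() ? b[i] : 0u);
        a[i] = sum % BASE9;                  // digit stays below 10^9
        carry = sum / BASE9;                 // 0 or 1
    }
}
```

Printing is then simple: print the most significant digit with %u and every lower digit with %09u so interior digits keep their leading zeros.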
If you really need to work in base 2^32, you could try a kludge like the following. This is safe because unsigned overflow does not throw in C++: unsigned arithmetic is defined to wrap modulo 2^32.
unsigned int sum = term1 + term2; // wraps modulo 2^32 on overflow
unsigned int carry = 0;
if (sum < term1 || sum < term2)   // a wrapped sum is smaller than the terms
    carry = 1;
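Applied to the question's u_add, the wraparound trick might look like the sketch below. This is a standalone function over limb vectors, not the author's exact class; the name u_add32 is hypothetical:

```cpp
#include <cassert>
#include <vector>

// Base-2^32 addition of b into a (limbs stored least-significant first),
// with no long long. Unsigned arithmetic wraps modulo 2^32, and a wrapped
// sum is always smaller than either operand, which signals the carry.
void u_add32(std::vector<unsigned int> & a,
             const std::vector<unsigned int> & b)
{
    unsigned int carry = 0;
    for (std::size_t i = 0; i < b.size() || carry; ++i)
    {
        if (i == a.size())
            a.push_back(0u);                       // room for the carry
        unsigned int term = (i < b.size() ? b[i] : 0u);
        unsigned int sum = a[i] + term;            // may wrap
        unsigned int newCarry = (sum < term) ? 1u : 0u;
        sum += carry;                              // add incoming carry
        if (sum < carry)                           // wrapped a second time
            newCarry = 1u;                         // (the two can't both wrap)
        a[i] = sum;
        carry = newCarry;
    }
}
```

The two overflow checks never both fire in one iteration: if the first add wraps, the result is at most 2^32 - 2, so adding a carry of 1 cannot wrap again.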
Upvotes: 1