Reputation: 88
I would like to allocate memory in order to read (whole) very large files (15-25 GB) in my C++ program. I use the off64_t
type for the beginning and ending positions of the region I want to read, which I obtain from my file with the function ftello64
. I would now like to store the difference between the two in a variable called size and then allocate a char array of that size. However, although the code compiles successfully, it doesn't work properly.
I tried size = (off64_t)(end_of_mem - start_of_mem);
where start_of_mem
and end_of_mem
are both of type off64_t
, but when I run my program the output is:
Starting position in memory (start_of_mem):152757
Ending position in memory (end_of_mem):15808475159
Size: -1371546782
What type should I be using to get this right? Thanks in advance for your help and suggestions.
Upvotes: 0
Views: 371
Reputation: 16070
You are experiencing a side effect of integer narrowing (down-casting) from 64-bit to 32-bit.
Take a look at the following experiment:
#include <cstdint>
#include <iostream>

int main() {
    int64_t int64 = 15808475159 - 152757;
    int32_t int32 = int64; // narrowing: only the low 32 bits survive
    std::cout << "int64_t: " << int64 << "\n";
    std::cout << "int32_t: " << int32 << "\n";
}
Output:
int64_t: 15808322402
int32_t: -1371546782
The problem is that the result of 15808475159 - 152757
is bigger than the range of a 32-bit integer. Due to the overflow it gets truncated (reduced mod 2^32), and because the destination is declared as a signed 32-bit type, the truncated value falls into the negative range of interpretation.
Upvotes: 2