bluebyte

Reputation: 560

How to write data to a >4 GB file in C++?

I'm trying to write a large file, but ran into a problem.

I use long long for seeking to the write position, but I can't write a file larger than 4.2 GB. What am I missing?

More details: I open the 4 GB file:

ifstream ifs(db_name.c_str(), ios_base::binary);
if (!ifs) 
    throw Bad_archive();
ifs.read(as_bytes(seg_size), sizeof(int));
ifs.read(as_bytes(last_idx), sizeof(int));
ifs.read(as_bytes(free_segs), sizeof(int));
if (free_segs > 0)
{
    long long seek_g = 3 * sizeof(int) + (long long)last_idx * seg_size;
    ifs.seekg(seek_g, ios_base::beg);
    for (int i = 0; i < free_segs; i++)
    {
        int tmp = 0;
        ifs.read(as_bytes(tmp), sizeof(int));
        free_spaces.push_back(tmp);
    }
}
ifs.close();

After that, I read a 400 MB file which I want to add to the db, and write it (shortened code):

// write object
ofstream ofs(db_name.c_str(), ios_base::binary | ios_base::in | ios_base::ate);
for (int i = 0; ; i++)
{
    // set stream position
    long long write_position = sizeof(int) * 3;

    ...

    write_position += (long long) seg_size * last_idx;
    ofs.seekp(write_position, ios::beg);

            ...

    // write sizeof object
    if (i == 0)
        ofs.write(as_bytes(object_size), sizeof(object_size)); 
    else
    {
        int null = 0;
        ofs.write(as_bytes(null), sizeof(null));
    }

    // write data
    for (int j = 0; j < (seg_size - sizeof(int) * 2); j++)
    {
        ofs.write(as_bytes(obj_content[j + i * (seg_size - sizeof(int) * 2)]), 
        sizeof(char));

    }
    if (write_new_seg)
        last_idx++;

    ...

    ofs.write(as_bytes(cont_seg), sizeof(cont_seg));
}
ofs.close();

After that, I save the db info:

if (last_idx == 0)
{
    ofstream ofs(db_name.c_str());
    ofs.close();
}
ofstream ofs(db_name.c_str(), ios_base::binary | ios_base::in | 
            ios_base::out | ios_base::ate);
long long seek_p = 0;
ofs.seekp(seek_p, ios_base::beg);
ofs.write(as_bytes(seg_size), sizeof(seg_size));
ofs.write(as_bytes(last_idx), sizeof(last_idx));
ofs.write(as_bytes(free_segs), sizeof(free_segs));
ofs.close();

But this code works:

ofstream ofs2("huge2");
ofs2.close();
ofstream ofs("huge2", ios_base::in | ios_base::ate);
long long sp = 10000000000;
ofs.seekp(10000000000, ios_base::beg);

ofs.write(as_bytes(sp), sizeof(sp));

ofs.close();

Upvotes: 7

Views: 2673

Answers (4)

Steve-o

Reputation: 12866

There is a nice summary for Linux here:

http://www.suse.de/~aj/linux_lfs.html

And more specific detail from Red Hat targeting RHEL; these problems generally affect 32-bit applications accessing files with 64-bit sizes.

http://people.redhat.com/berrange/notes/largefile.html

Wikipedia actually has an article on Large File Support, but it's not really that informative.

Upvotes: 0

Miketelis

Reputation: 138

    FILE *fp = fopen64(filename, "rb");
    fseeko64(fp, 0, SEEK_END);
    off64_t size = ftello64(fp);
    fclose(fp);

For accessing large files, fseek cannot be used; you have to go for fseeko64() instead.

Upvotes: 4

Seva Alekseyev

Reputation: 61351

There's a function called _lseeki64() on Win32, and lseek64() on *nix. The iostream library, however, does not wrap them directly; you'll have to retrieve the file descriptor somehow and do something about buffering.

Upvotes: 1

Lightness Races in Orbit

Reputation: 385144

Many filesystems do not support files larger than 4 GB. In addition, you should consider that a 32-bit int tops out at 2^31 - 1 (about 2.1e9) if signed, or 2^32 - 1 (about 4.29e9) if unsigned; any offset arithmetic done in int overflows well before that.

Upvotes: 0
