CheeHow

Reputation: 915

Truncating the first 100MB of a file in linux

I am referring to How can you concatenate two huge files with very little spare disk space?

I'm in the midst of implementing the following:

  1. Allocate a sparse file of the combined size.
  2. Copy 100MB from the end of the second file to the end of the new file.
  3. Truncate 100MB off the end of the second file.
  4. Loop 2&3 till you finish the second file (With 2. modified to the correct place in the destination file).
  5. Do 2&3&4 but with the first file.
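The steps above can be sketched at a small scale — a hedged demo with 8 KiB stand-ins for the "huge" files and 4 KiB chunks instead of 100MB, using made-up file names in a temp directory:

```shell
set -e
cd "$(mktemp -d)"

# two 8 KiB input files standing in for the huge ones
head -c 8192 /dev/zero | tr '\0' 'A' > first
head -c 8192 /dev/zero | tr '\0' 'B' > second

# step 1: sparse destination of the combined size
truncate -s 16384 combined

chunk=4096

# steps 2-4: copy the tail chunk of `second` into its final place in
# `combined`, then cut that chunk off the end of `second`
size=$(stat -c %s second)
while [ "$size" -gt 0 ]; do
    start=$(( size - chunk ))
    dd if=second of=combined bs=$chunk skip=$(( start / chunk )) \
       seek=$(( (8192 + start) / chunk )) count=1 conv=notrunc status=none
    truncate -s "$start" second
    size=$start
done

# step 5: the same loop for `first` (its chunks land at the same offsets)
size=$(stat -c %s first)
while [ "$size" -gt 0 ]; do
    start=$(( size - chunk ))
    dd if=first of=combined bs=$chunk skip=$(( start / chunk )) \
       seek=$(( start / chunk )) count=1 conv=notrunc status=none
    truncate -s "$start" first
    size=$start
done
```

Note that `conv=notrunc` is what keeps `dd` from truncating the destination on each write, and `truncate -s` only ever shortens at the end — which is why these steps never need to chop the *front* of a file.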

I would like to know whether anyone here is able to "truncate" a given file in Linux. The truncation is by file size: for example, if the file is 10GB, I would like to truncate the first 100MB of the file and leave the file with the remaining 9.9GB. Could anyone help with this?

Thanks

Upvotes: 36

Views: 35026

Answers (9)

Its not blank

Reputation: 3095

Option 1 -- cut -b <FIRST_BYTE_TO_KEEP>- <file_name> (note that cut -b takes a 1-based byte position, not a size, and writes the result to stdout)

Option 2 -- echo "$(tail -<NO_OF_LINES> <file_name>)" > <file_name> (keeps only the last NO_OF_LINES lines, rewriting the file in place)

Upvotes: 1

Joni

Reputation: 111269

Chopping off the beginning of a file is not possible with most file systems and there's no general API to do it; for example the truncate function only modifies the ending of a file.

You may be able to do it with some file systems though. For example the ext4 file system recently got an ioctl that you may find useful: http://lwn.net/Articles/556136/


Update: About a year after this answer was written, support for removing blocks from the beginning and middle of files on ext4 and xfs file systems was added to the fallocate function, by way of the FALLOC_FL_COLLAPSE_RANGE mode. It's more convenient than using the low-level ioctls yourself.

There's also a command line utility with the same name as the C function. Assuming your file is on a supported file system, this will delete the first 100MB:

fallocate -c -o 0 -l 100M yourfile

and this will delete the first 1GB:

fallocate -c -o 0 -l 1G yourfile
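A scaled-down, hedged run of the same command on a throwaway file: collapsing requires a supporting file system (ext4/xfs) and a block-aligned offset and length, so this sketch falls back to a plain copy when the file system refuses.

```shell
set -e
cd "$(mktemp -d)"
head -c 4096 /dev/zero | tr '\0' 'A' >  f
head -c 4096 /dev/zero | tr '\0' 'B' >> f

# collapse the first 4 KiB out of the file; fall back to copying
# if FALLOC_FL_COLLAPSE_RANGE isn't supported here
if ! fallocate -c -o 0 -l 4096 f 2>/dev/null; then
    tail -c +4097 f > f.tmp && mv f.tmp f
fi

stat -c %s f    # 4096
head -c 1 f     # B
```

Either way the file ends up holding only the second block, which is what "truncating the front" means here.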

Upvotes: 36

Sunding Wei

Reputation: 2214

As of Linux kernel v3.15, this is now reality on ext4/xfs.

Read here http://man7.org/linux/man-pages/man2/fallocate.2.html

Testing code

#define _GNU_SOURCE             /* for fallocate() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#ifndef FALLOC_FL_COLLAPSE_RANGE
#define FALLOC_FL_COLLAPSE_RANGE        0x08
#endif

int main(int argc, const char * argv[])
{
    int ret;
    char * page = malloc(4096);
    int fd = open("test.txt", O_CREAT | O_TRUNC | O_RDWR, 0644);

    if (fd == -1) {
        free(page);
        return (-1);
    }

    // Page A: bytes 0-4095
    printf("Write page A\n");
    memset(page, 'A', 4096);
    write(fd, page, 4096);

    // Page B: bytes 4096-8191
    printf("Write page B\n");
    memset(page, 'B', 4096);
    write(fd, page, 4096);

    // Collapse page A out of the file; offset and length must be
    // multiples of the filesystem block size
    ret = fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 0, 4096);
    printf("Page A should be removed, ret = %d\n", ret);

    close(fd);
    free(page);

    return (0);
}

Upvotes: 35

Peter Cordes

Reputation: 364338

Related: How do I remove the first 300 million lines from a 700 GB txt file on a system with 1 TB max disk space? on unix.SE points out that you can dd in place (conv=notrunc) to copy the data earlier in the file before truncating, getting the job done with no extra disk space needed.

That's horrible as part of a repeated process to shift data from the start of one file into the end of another. But worth mentioning for other use-cases where the purpose of truncating the front is to actually bring a specific point in the file to the front, not just to free disk space.


I would like to truncate the first 100MB of the file and leave the file with remaining 9.9GB

That's the opposite of what the list of steps says to do, from the answer on How can you concatenate two huge files with very little spare disk space? which you say you're following. @Douglas Leeder suggested copying into the middle of a sparse file so you only need to truncate at the end, which is easy and portable with a POSIX ftruncate(2) system call on the open fd you're using to read that file.


But if you want to avoid copying the first file, and just append the 2nd file to the end of the first, yes, you do need to free data at the start of the 2nd file after you've read it. But note that you don't need to fully truncate it. You just need to free that space, e.g. by making the existing file sparse, replacing that allocated space with a "hole".

The Linux-specific system call fallocate(2) can do that with FALLOC_FL_PUNCH_HOLE on FSes including XFS (since Linux 2.6.38), ext4 (since 3.0), BTRFS (since 3.7).

So it's available earlier than FALLOC_FL_COLLAPSE_RANGE (Linux 3.15) which shortens the file instead of leaving a hole. Linux 3.15 is pretty old by now so hopefully that's irrelevant.

Punching holes in data after you read it (and get it safely written to the other file) is perhaps simpler than shifting data within the file: with FALLOC_FL_COLLAPSE_RANGE you also have to be sure what happens to the file position of a file descriptor that's open for reading while you collapse the range.

The fallocate(1) command-line tool is built around that system call, allowing you to do either of those things on systems that support them.
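A small hole-punching sketch on a throwaway file, hedged for file systems that refuse it: `-p` (FALLOC_FL_PUNCH_HOLE) implies keep-size, so the blocks are freed but the logical size is unchanged and the punched range reads back as zeros.

```shell
set -e
cd "$(mktemp -d)"
head -c 8192 /dev/zero | tr '\0' 'X' > f

# punch a hole over the first 4 KiB; tolerate lack of FS support in this demo
fallocate -p -o 0 -l 4096 f 2>/dev/null || echo "hole punching not supported here"

stat -c %s f    # 8192 either way: -p never changes the logical size
tail -c 1 f     # X
```

This is exactly the property the answer relies on: the reader's file offsets into the second file stay valid, because nothing moves.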

Upvotes: 2

jjengel

Reputation: 21

I found I had to use a combination of fallocate and sed before the file would shrink in size. I had a 43MB file that I wanted to get down to around 5MB:

fallocate -p -o 0 -l 38m fallocate.log

I noticed this filled the first line with a bunch of "nonsense" characters, but my file was still 43MB in size (hole punching frees the blocks but leaves the logical file size unchanged).

I then used sed to delete the first line

sed -i 1d fallocate.log

and the file is now 4.2MB in size.

Upvotes: 2

William Yates

Reputation: 31

Remove all but the last 10,000 lines from a file.

sed -i "1,$(( $(wc -l < path/to/file) - 10000 ))d" path/to/file
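A scaled-down run of the same idea, keeping the last 10 of 30 lines (note the `$(( ))` arithmetic expansion around the line count):

```shell
set -e
cd "$(mktemp -d)"
seq 1 30 > file

# delete lines 1 through (total - 10), in place
sed -i "1,$(( $(wc -l < file) - 10 ))d" file

wc -l < file    # 10
head -n 1 file  # 21
```

Like the other sed answers, this rewrites the whole file, so it needs enough free space for a second copy.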

Upvotes: 1

Willem van Ketwich

Reputation: 5984

This is a pretty old question by now, but here is my take on it. Setting aside the requirement for it to be done with limited space available, I would use something like the following to truncate the first 100MB of a file:

$ tail --bytes=$(expr $(wc -c < logfile.log) - 104857600) logfile.log > logfile.log.tmp
$ mv logfile.log.tmp logfile.log

Explanation:

  • This outputs the last nn bytes of the file (tail --bytes).
  • The number of bytes in the file to output is calculated as the size of the file (wc -c < logfile.log) minus 100MB (expr $( ... ) - 104857600). This leaves us with 100MB less than the size of the file to take the tail of (e.g. 9.9GB).
  • This is then output to a temp file and then moved back to the original file name to leave the truncated file.
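The arithmetic can also be left to tail itself: `tail -c +N` starts output at byte N, so `+104857601` skips the first 100MB. A hedged, scaled-down sketch (100 bytes instead of 100MB, on a throwaway file):

```shell
set -e
cd "$(mktemp -d)"
head -c 10240 /dev/zero | tr '\0' 'A' > logfile.log

# drop the first 100 bytes: output starts at byte 101
tail -c +101 logfile.log > logfile.log.tmp
mv logfile.log.tmp logfile.log

stat -c %s logfile.log    # 10140
```

Like the wc-based version, this needs temporary space for the copy, which is exactly the constraint the question is trying to avoid.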

Upvotes: 1

lyderic

Reputation: 440

If you can work with text lines rather than bytes, then removing the first n lines of a file is easy. For example, to remove the first 100 lines:

sed -i 1,100d /path/to/file

Upvotes: 9

Please read a good Linux programming book, e.g. Advanced Linux Programming.

You need to use Linux kernel syscalls, see syscalls(2)

In particular truncate(2) (both for truncation and for extending a sparse file on file systems supporting it), and stat(2), notably to get the file size.

There is no (portable, or filesystem neutral) way to remove bytes from the start (or in the middle) of a file, you can truncate a file only at its end.
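The coreutils truncate(1) and stat(1) commands wrap the syscalls this answer names, and make the shrink/extend asymmetry easy to see on a throwaway file:

```shell
set -e
cd "$(mktemp -d)"
head -c 1000 /dev/zero > f

stat -c %s f        # 1000
truncate -s 500 f   # shrink: bytes past offset 500 are simply gone
stat -c %s f        # 500
truncate -s 1M f    # extend: the new range becomes a hole on file
                    # systems that support sparse files
stat -c %s f        # 1048576
```

Both directions only ever touch the end of the file, which is the point: there is no analogous portable call for the front.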

Upvotes: 5
