Reputation: 155
I'm a beginner and I'm trying to copy the contents of a very large text file, around 33 MB (33136 KB precisely), to a new file. I'm getting a segmentation fault while running the program, and only 16 KB gets copied to my new file. The file I'm trying to copy is "test_file3" and my new file's name is "newfile". I'm doing all this on CentOS-5 in VirtualBox. Here are the details:
[root@localhost decomp_trials]# cat read_file.c
#include <stdio.h>
#include <stdlib.h>
int main( int argc, char *argv [] )
{
    FILE *ifp, *ofp;
    char *ptr;
    ifp = fopen ( argv [ 1 ], "r" );
    ofp = fopen ( argv [ 2 ], "a" );
    for ( ptr = malloc ( 10 ); fread ( ptr, 1, 10, ifp ); )
        fprintf ( ofp, ptr );
    fclose ( ifp );
    fclose ( ofp );
    return 0;
}
[root@localhost decomp_trials]# cc read_file.c -o read_file
[root@localhost decomp_trials]# ./read_file /root/sys_cl_huk_ajh/imp/copy_hook7/test_file3 newfile
Segmentation fault
[root@localhost decomp_trials]# du -s newfile
16 newfile
[root@localhost decomp_trials]# pwd
/root/sys_cl_huk_ajh/pro_jnk/decomp_trials
[root@localhost decomp_trials]# du -s ../../imp/copy_hook7/test_file3
33136 ../../imp/copy_hook7/test_file3
[root@localhost decomp_trials]#
Please tell me what I'm possibly doing wrong. Is there a better method? Please help me out.
Upvotes: 1
Views: 310
Reputation: 272507
Don't use fprintf; it treats its second argument as a format string. Use fwrite.
As to why it segfaults, consider what happens if your input data happens to contain e.g. %s. fprintf will then start walking through the stack, reading random data until it finds a 0-valued byte (a null terminator). This could easily end up walking into memory that isn't owned by the application.
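For illustration, here is one way the copy loop could look with fwrite. This is a minimal sketch: the 4096-byte buffer size, the binary "rb"/"wb" modes (note "wb" overwrites rather than appends), and the error checks are my own choices, not anything from the original program.
#include <stdio.h>
int main( int argc, char *argv [] )
{
    FILE *ifp, *ofp;
    char buf [ 4096 ];   /* buffer size is an arbitrary choice */
    size_t n;
    if ( argc < 3 )
    {
        fprintf ( stderr, "usage: %s infile outfile\n", argv [ 0 ] );
        return 1;
    }
    ifp = fopen ( argv [ 1 ], "rb" );
    ofp = fopen ( argv [ 2 ], "wb" );
    if ( !ifp || !ofp )
    {
        perror ( "fopen" );
        return 1;
    }
    /* fwrite copies exactly the n bytes fread returned; it never
       interprets the data as a format string */
    while ( ( n = fread ( buf, 1, sizeof buf, ifp ) ) > 0 )
        fwrite ( buf, 1, n, ofp );
    fclose ( ifp );
    fclose ( ofp );
    return 0;
}
Because fwrite writes exactly the byte count fread reports, embedded % characters and even null bytes in the input are copied verbatim.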
Upvotes: 2