Reputation: 4325
I find that fwrite fails when I try to write somewhat big data, as in the following code.
#include <stdio.h>
#include <stdlib.h>   /* for atoi() */
int main(int argc, char* argv[])
{
int size = atoi(argv[1]);
printf("%d\n", size);
FILE* fp = fopen("test", "wb");
char* c = "";
int i = fwrite(c, size, 1, fp);
fclose(fp);
printf("%d\n", i);
return 0;
}
The code is compiled into a binary tw.
When I try ./tw 10000
it works well. But when I try something like ./tw 12000
it fails (fwrite() returns 0 instead of 1).
What is the reason for that, and how can I avoid it?
EDIT: When I do fwrite(c, 1, size, fp) it returns 8192 instead of the larger size I pass.
2nd EDIT: When I write a loop that runs size times and calls fwrite(c, 1, 1, fp)
each time, it works perfectly OK.
It seems that when size is too large (as in the first EDIT) only about 8192 bytes get written.
I guess something limits fwrite to writing a fixed number of bytes at a time.
3rd EDIT: The above is not clear.
The following fails with space - w_result != 0
when space is large, where space is a value I choose and w_result is the total number of objects written.
w_result = 0;
char* empty = malloc(BLOCKSIZE * sizeof(char));
w_result = fwrite(empty, BLOCKSIZE, space, fp);
printf("%d lost\n", space - w_result);
While this works OK.
w_result = 0;
char* empty = malloc(BLOCKSIZE * sizeof(char));
for (i = 0; i < space; i++)
w_result += fwrite(empty, BLOCKSIZE, 1, fp);
printf("%d lost\n", space - w_result);
(Every variable has been declared.)
I corrected some errors the answers mentioned, but according to you the first snippet should still work.
Upvotes: 3
Views: 7500
Reputation: 229058
With fwrite(c, size, 1, fp);
you state that fwrite should write 1 item that is size bytes big, taken out of the buffer c.
But c is just a pointer to an empty string, which occupies only 1 byte. When you tell fwrite to read more than that 1 byte from c, you get undefined behavior. You cannot fwrite more than 1 byte from c.
(Undefined behavior means anything could happen: it could appear to work fine with a size of 10000 and fail with a size of 12000. The implementation-dependent reason is likely that some memory happens to be readable, perhaps on the stack, from c and 10000 bytes onward, but at e.g. 11000 bytes there is no memory and you get a segfault.)
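For instance (a minimal sketch; fp is an open FILE* as in the question):

char* c = "";            /* string literal: exactly 1 readable byte (the terminating NUL) */
fwrite(c, 1, 1, fp);     /* OK: reads only the 1 byte that exists at c */
fwrite(c, 12000, 1, fp); /* undefined behavior: tries to read 12000 bytes starting at c */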
Upvotes: 2
Reputation: 121961
As has been stated by others, the code performs an invalid memory read via c.
A possible solution would be to dynamically allocate a buffer that is size bytes long, initialise it, fwrite() it to the file, and remember to deallocate the buffer afterwards.
Remember to check the return values of functions (fopen(), for example).
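A minimal sketch of that approach (keeping the file name test and the single-item fwrite() call from the question; zero-filling with memset() is just one way to initialise the buffer):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char* argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s size\n", argv[0]);
        return 1;
    }
    size_t size = (size_t)atoi(argv[1]);

    char* buf = malloc(size);      /* the buffer really is size bytes */
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 0, size);          /* initialise it before writing */

    FILE* fp = fopen("test", "wb");
    if (fp == NULL) {
        perror("fopen");
        free(buf);
        return 1;
    }

    size_t written = fwrite(buf, size, 1, fp);  /* 1 item of size bytes */
    printf("%zu\n", written);                   /* prints 1 on success */

    fclose(fp);
    free(buf);
    return 0;
}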
Upvotes: 1
Reputation: 13994
From that snippet of code, it looks like you're trying to write what's at c, which is just a single NUL byte, to the file pointer as if it were size bytes of data. The fact that it doesn't crash with 10000 is coincidental. What are you trying to do?
Upvotes: 1
Reputation: 36423
You are reading memory that doesn't belong to your program (and writing it to a file).
Run your program under valgrind to see the errors.
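For example, assuming the binary is named tw as in the question:

valgrind ./tw 12000

Memcheck flags invalid reads, such as reading past the end of a malloc()ed block (the first snippet in the third edit).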
Upvotes: 2