Reputation: 3067
Sorry for my bad English. I'm new to Linux system programming and new to C programming as well.
At the moment I'm trying to unlink files; to do this, I must store each file path somewhere. I'm using a 1024-element char array for each file. If I decrease the size of the array, the file descriptors become a mess. Sounds stupid, but that's what happens.
Here's some code:
char path[1024], path2[1024];
const char *file_name = "myfile_1", *file_name2 = "myfile_2", *working_directory = "/home/Alexander/lab01/";
strcpy(path, working_directory);
strcat(path, file_name);
strcpy(path2, working_directory);
strcat(path2, file_name2);
Then I open some files, read/write, and so on. If path and path2 are 1024 bytes long, everything goes well, but when I decrease path and path2 to 512 or 256 bytes, something strange happens to memory and the other file descriptors. I can't understand what's going on; please help.
Code, where I read file:
fdesc_input = open("/dev/urandom", O_RDONLY);
if (fdesc_input < 0) {
    perror("Error opening /dev/urandom"); // perror appends ": " itself
}
fdesc_output = open(path, O_RDWR | O_CREAT | O_TRUNC, 0777);
if (fdesc_output < 0) {
    perror("Error opening my file");
}
buffer = (unsigned char *) malloc(buffer_size); // make 1 KiB buffer
desired_filesize = 1024 * 10; // 10 kilobytes
int curr_size = 0;
while (curr_size < desired_filesize) {
    // AFTER THE NEXT LINE SOMETHING STRANGE HAPPENS
    ssize_t result = read(fdesc_input, &buffer, buffer_size);
    if (result < 0) {
        perror("Error reading /dev/urandom");
        exit(1);
    }
    curr_size += result;
    write(fdesc_output, &buffer, buffer_size);
}
Upvotes: 0
Views: 88
Reputation: 15413
You are passing `&buffer` to `read`; however, `buffer` itself is a pointer. You pass the address of `buffer`, which is a local variable. As a result, `read` reads data into the memory starting at this local variable and thus overwrites the other local variables. Pass `buffer` instead of `&buffer` and you should be fine.
Upvotes: 3