Reputation: 155
I'm trying to write a Linux kernel module that can dump the contents of other modules to a /proc file (for analysis). In principle it works, but it seems I run into some buffer limit or the like. I'm still rather new to Linux kernel development, so I would also appreciate any suggestions beyond this particular problem.
The memory that is used to store the module is allocated in this function:
char *get_module_dump(int module_num)
{
    struct module *mod = unhiddenModules[module_num];
    char *buffer;

    buffer = kmalloc(mod->core_size, GFP_KERNEL);
    if (!buffer)
        return NULL;
    memcpy(buffer, (void *)startOf(mod), mod->core_size);
    return buffer;
}
'unhiddenModules' is an array of pointers to module structs.
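For context, the relevant globals used below look roughly like this (simplified; MAX_UNHIDDEN is just a placeholder for the real array bound):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>

#define MAX_UNHIDDEN 64                               /* placeholder bound */

static struct module *unhiddenModules[MAX_UNHIDDEN];  /* filled elsewhere */
static struct proc_dir_entry *dump_proc_folder;       /* created in module init */
static char *module_buffer;                           /* current dump buffer */
static size_t dump_size;                              /* size of that dump */
static char current_dump_file_name[MODULE_NAME_LEN + 8]; /* room for "_dump" */

static ssize_t dump_proc_read(struct file *filp, char *buf,
                              size_t count, loff_t *offp);

static struct file_operations dump_fops = {
    .owner = THIS_MODULE,
    .read  = dump_proc_read,                          /* shown further down */
};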
The buffer is then handed over to the proc file creation here:
void create_module_dump_proc(int module_number)
{
    struct proc_dir_entry *dump_module_proc;

    dump_size = unhiddenModules[module_number]->core_size;
    module_buffer = get_module_dump(module_number);
    sprintf(current_dump_file_name, "%s_dump", unhiddenModules[module_number]->name);
    dump_module_proc = proc_create_data(current_dump_file_name, 0, dump_proc_folder,
                                        &dump_fops, module_buffer);
}
The proc read function is as follows:
ssize_t dump_proc_read(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    char *data;
    ssize_t ret;

    data = PDE_DATA(file_inode(filp));
    ret = copy_to_user(buf, data, dump_size);
    *offp += dump_size - ret;
    if (*offp > dump_size)
        return 0;
    else
        return dump_size;
}
Smaller modules are dumped correctly, but if the module is larger than 126,796 bytes, only the first 126,796 bytes are written, and this error is displayed when reading from the proc file:
*** Error in `cat': free(): invalid next size (fast): 0x0000000001f4a040 ***
I seem to have run into some limit, but I couldn't find anything about it. The error message points to memory corruption, but the buffer should be large enough, so I don't see where that actually happens.
Upvotes: 1
Views: 1952
Reputation: 1267
Procfs has a limit of PAGE_SIZE (one page) for read and write operations. Usually seq_file is used to iterate over the entries (the modules, in your case?) and read and/or write them in smaller chunks. Since you only run into problems with larger data, I suspect this is the case here.
Please have a look here and here if you are not familiar with seq_files.
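For illustration, here is a rough, untested sketch of how the dump could be served through seq_file in page-sized chunks. It reuses the module_buffer and dump_size globals from your question and assumes a kernel where proc files still take a struct file_operations; error handling is omitted:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* Treat *pos as the index of the next PAGE_SIZE-sized chunk of the dump. */
static void *dump_seq_start(struct seq_file *s, loff_t *pos)
{
    if (*pos * PAGE_SIZE >= dump_size)
        return NULL;
    return module_buffer + *pos * PAGE_SIZE;
}

static void *dump_seq_next(struct seq_file *s, void *v, loff_t *pos)
{
    (*pos)++;
    if (*pos * PAGE_SIZE >= dump_size)
        return NULL;
    return module_buffer + *pos * PAGE_SIZE;
}

static void dump_seq_stop(struct seq_file *s, void *v)
{
    /* nothing to clean up per iteration */
}

static int dump_seq_show(struct seq_file *s, void *v)
{
    char *chunk = v;
    size_t remaining = dump_size - (chunk - module_buffer);

    /* seq_write() buffers the chunk; seq_read() then hands it out in
     * whatever sizes user space asks for. */
    seq_write(s, chunk, min_t(size_t, remaining, PAGE_SIZE));
    return 0;
}

static const struct seq_operations dump_seq_ops = {
    .start = dump_seq_start,
    .next  = dump_seq_next,
    .stop  = dump_seq_stop,
    .show  = dump_seq_show,
};

static int dump_proc_open(struct inode *inode, struct file *file)
{
    return seq_open(file, &dump_seq_ops);
}

static const struct file_operations dump_seq_fops = {
    .owner   = THIS_MODULE,
    .open    = dump_proc_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = seq_release,
};

You would then register dump_seq_fops with proc_create() instead of your current dump_fops.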
Upvotes: 1
Reputation: 4438
A suspicious thing is that in dump_proc_read you are not using the "count" parameter. I would have expected copy_to_user to take "count" as its third argument instead of "dump_size" (and in the subsequent calculations too). The way you do it, dump_size bytes are always copied to user space, regardless of how much data the application asked for. The bigger dump_size is, the larger the user area that gets corrupted.
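Something along these lines (an untested sketch that keeps PDE_DATA and the dump_size global from your question) would let a kernel helper do the count/offset bookkeeping for you:

#include <linux/fs.h>
#include <linux/proc_fs.h>

static ssize_t dump_proc_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *offp)
{
    char *data = PDE_DATA(file_inode(filp));

    /* Copies at most "count" bytes starting at *offp, advances *offp and
     * returns the number of bytes actually copied (0 at end of file). */
    return simple_read_from_buffer(buf, count, offp, data, dump_size);
}

With that, cat receives at most as many bytes per read() call as it asked for, and a clean end of file once *offp reaches dump_size.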
Upvotes: 0