Reputation:
I am writing a program that creates small files (8 KB) based on some input. It goes like this:
#include <iostream>
#include <cstdio>    // FILE, fopen
#include <cstring>   // strerror
#include <cerrno>    // errno
void func()
{
    const char * file_name = NULL;
    // we get file_name from somewhere, e.g. pic.png
    FILE * fp = fopen(file_name, "wb");
    if (fp == NULL)
    {
        std::cout << "failed to open file, " << strerror(errno) << std::endl;
        return; // void function: return without a value
    }
    // some code to deal with the file
    fclose(fp);
}
I created a lot of files (about 4.2 million; I am testing performance), and it always fails at roughly the same count with EFBIG (File too large), even though I am creating a new, unique file each time.
The files are created from 8 threads.
Any idea what explains this?
To clarify: obviously this is not the exact code, but you can think of this function running in 8 threads at high frequency until the error appears. From that point on, the error is reported for every file until the program finishes.
Upvotes: 1
Views: 1771
Reputation: 17454
I can't actually find this documented for Linux anywhere, but since the failure always starts at about the same file count, it sounds like your filesystem cannot handle more than that many files in a single directory. The error "too large" really refers to the directory, not the file you're trying to open.
That's a crazy number of files for a single directory. Just split them across several directories.
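Not from the original post, but one way to do that split is to hash each file name into a fixed number of sub-directories so no single directory ever holds millions of entries. This is a minimal sketch; the helper name `bucketed_path` and the 256-bucket fan-out are just illustrative choices:

#include <cstdio>
#include <string>
#include <functional>
#include <sys/stat.h>   // mkdir (POSIX)

// Map a file name to "base/NNN/name", where NNN is one of 256 hash buckets.
// Spreading 4.2 million files this way leaves ~16,000 entries per directory.
std::string bucketed_path(const std::string &base, const std::string &name)
{
    unsigned bucket = std::hash<std::string>{}(name) % 256;
    char dir[512];
    std::snprintf(dir, sizeof(dir), "%s/%03u", base.c_str(), bucket);
    mkdir(dir, 0755);   // ignore EEXIST; fine for a sketch
    return std::string(dir) + "/" + name;
}

Each thread can call this independently; `mkdir` on an already-existing bucket directory is harmless.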
Also, be careful not to keep gazillions of file handles open at one time; your OS will impose a limit on that (which you can check with ulimit -a). However, you're supposed to get EMFILE, not EFBIG, when you exhaust that supply.
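If you want to check that limit from inside the program rather than via the shell, POSIX exposes it through getrlimit. A small sketch (the helper name is mine):

#include <sys/resource.h>   // getrlimit, RLIMIT_NOFILE (POSIX)

// Return the soft limit on open file descriptors for this process --
// the same number that "ulimit -n" prints -- or -1 on error.
long open_file_limit()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    return (long)rl.rlim_cur;
}

If fopen starts failing once 8 threads collectively hold this many descriptors without fclose-ing them, errno will be EMFILE rather than EFBIG.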
Upvotes: 2