Callahan

Reputation: 474

C: Too many open files using opendir and open

I am reading about 6000 text-files into memory with the following code in a loop:

void readDocs(const char *dir, char **array){
  DIR *dp = opendir(dir);
  struct dirent *ep;
  struct stat st;
  static uint count = 0;
  if (dp != NULL){
    while (ep = readdir(dp)){    // crawl through directory
      char name[strlen(dir) + strlen(ep->d_name) + 2];
      sprintf(name, "%s/%s", dir, ep->d_name);

      if(ep->d_type == DT_REG){  // regular file
        stat(name, &st);
        array[count] = (char*) malloc(st.st_size);
        int f;
        if((f = open(name, O_RDONLY)) < 0) perror("open:  ");
        read(f, array[count], st.st_size);
        if(close(f) < 0) perror("close: ");
        ++count;
      }

      else if(ep->d_type == DT_DIR && strcmp(ep->d_name, "..") && strcmp(ep->d_name, "."))
        // go recursive through sub directories
        readDocs(name, array);
    }
  }
}

In iteration 2826 I get a "Too many open files" error when opening the 2826th file. No error occurred in the close operations up to this point.
Since it always hangs at the 2826th iteration, I do not believe I should have to wait for a file to really be closed after calling close().
I had the same issue using fopen, fread and fclose.
I don't think this has to do with the surrounding context of the snippet, but if you think it does, I will provide it.
Thanks for your time!

EDIT:
I put the program to sleep and checked /proc/<pid>/fd/ (thanks to nos). As you suspected, there were exactly 1024 open file descriptors, which I found is a common default limit.
+ I have included the whole function, which reads documents from a directory and all its subdirectories.
+ The program runs on Linux! Sorry for forgetting to mention that!

Upvotes: 8

Views: 6247

Answers (4)

Anshul garg

Reputation: 233

You should also call closedir(), since opendir() consumes a file descriptor as well. On Linux, /proc/sys/fs/file-max holds the system-wide maximum number of open files; you can increase or decrease that number if needed.

Upvotes: 0

witeX

Reputation: 55

I solved the problem by adding to /etc/security/limits.conf

* soft nofile 40960

* hard nofile 102400

The problem was that after logging in to Debian, ulimit -n showed 40960, but after su to another user it was 1024 again. You need to uncomment one line in /etc/pam.d/su:

session required pam_limits.so

Then the limits are always applied.

Upvotes: 1

Patrick B.

Reputation: 12393

You need to call closedir() after the loop. Opening a directory also consumes a file descriptor.

Upvotes: 12

Matt Arnett

Reputation: 21

You may be hitting the OS limit on the number of open files allowed. Since you haven't said which OS you are using, search for your OS plus "too many open files" to find out how to raise it. Here is one result for Linux: http://lj4newbies.blogspot.com/2007/04/too-many-open-files.html

Upvotes: 2
