Omnifarious

Reputation: 56038

Maximum number of file descriptors for pre-allocating a data structure

I would like to create a data structure to hold information about every file descriptor the process has open, but I would like to size this data structure once and never change it. I also want constant time random access, so this data structure will likely be an array indexed by fd value.

Is there a way to reliably determine what the maximum possible fd value is for a process at runtime? It's alright if this value is not exactly the maximum possible, as long as it changes only in extraordinary circumstances (like root writing a value to a file in /proc/), though it would be nice to know what those circumstances might be.

Upvotes: 2

Views: 179

Answers (1)

Omnifarious

Reputation: 56038

I decided against this design, but the correct answer to this question is to use sysconf(_SC_OPEN_MAX) to determine the current soft-limit on file descriptors.
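For illustration, here is a minimal sketch of sizing such a table once at startup; struct fd_info is just a hypothetical placeholder for whatever per-descriptor record the process actually needs:

    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-descriptor record; a real one would hold
       whatever bookkeeping the process keeps for each open fd. */
    struct fd_info {
        int in_use;
    };

    int main(void)
    {
        /* Current soft limit on open file descriptors for this process. */
        long max_fds = sysconf(_SC_OPEN_MAX);
        if (max_fds < 0) {
            perror("sysconf(_SC_OPEN_MAX)");
            return 1;
        }

        /* One slot per possible fd value, indexed directly by fd. */
        struct fd_info *table = calloc((size_t)max_fds, sizeof *table);
        if (table == NULL) {
            perror("calloc");
            return 1;
        }

        printf("allocated %ld slots\n", max_fds);
        free(table);
        return 0;
    }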

Various other aspects of Unix constrain file handles to all be numbers within the range [0, <max number of file descriptors>). For example, the select call assumes that every file descriptor can be represented as an individual bit in a contiguous bitset, so it is safe to assume that file descriptors fall within this range.
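A small sketch of that bitset assumption using the standard fd_set macros (nothing here beyond the usual select interface):

    #include <sys/select.h>
    #include <stdio.h>

    int main(void)
    {
        fd_set readfds;
        FD_ZERO(&readfds);    /* clear the fixed-size bitset              */
        FD_SET(0, &readfds);  /* mark fd 0 (stdin): one bit per fd value  */

        /* fd_set can only represent descriptors below FD_SETSIZE, which is
           why fd values are expected to stay small and densely packed. */
        printf("FD_SETSIZE = %d\n", FD_SETSIZE);
        return 0;
    }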

Of course, the soft limit can be changed, but that requires the process itself to alter it after it has started, so this is acceptable.
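For example, a process can inspect and raise its own soft limit with getrlimit/setrlimit on RLIMIT_NOFILE; this is exactly the kind of change that would invalidate a table sized earlier, and it is a sketch of the general mechanism rather than something any particular program needs to do:

    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Raise the soft limit to the hard limit; only the process itself
           (or a privileged caller) can do this, and only after it starts. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }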

Upvotes: 1
