Reputation: 616
I am working on a CentOS 6 x86_64 machine. By default, if I start a process, it can open 1024 files (including the standard in/out/error streams), and I am able to extend that limit using the setrlimit() API.
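For reference, a minimal sketch of the kind of setrlimit() call meant here (assuming the hard limit, or sufficient privileges, allows raising the soft limit):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    rl.rlim_cur = rl.rlim_max;            /* raise the soft limit up to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    printf("RLIMIT_NOFILE soft limit is now %llu\n",
           (unsigned long long)rl.rlim_cur);
    return 0;
}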
My problem is that if I start a thread within the process, it shares this limit. For example, assume the parent process has already opened 1024 descriptors; if I then create one thread using pthread_create(), that thread can't open a single file because the parent has already consumed the full limit.
I want the child thread (not a child process) to be able to open 1024 files of its own, like its parent. I know that raising the parent's file descriptor limit to 2048 would let the child open 1024 more files, but I want the parent and child to have individual limits, not a shared one.
I was expecting some attribute in pthread_attr_t that would give the child thread an individual file descriptor table.
Upvotes: 1
Views: 1086
Reputation: 60137
On Linux,
unshare(CLONE_FILES);
(if successful) will give the current thread its own file descriptor table.
It should be usable from an already-spawned, kernel-backed pthread (which all Linux pthreads implementations I know of provide).
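For instance, a minimal sketch, assuming Linux with glibc (compile with -pthread); the worker function is just for illustration:

#define _GNU_SOURCE
#include <sched.h>      /* unshare(), CLONE_FILES */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    if (unshare(CLONE_FILES) == -1) {
        perror("unshare(CLONE_FILES)");
        return NULL;
    }
    /* From here on, descriptors this thread opens or closes no longer
       affect the table shared by the rest of the process. */
    return NULL;
}

int main(void)
{
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL) == 0)
        pthread_join(t, NULL);
    return 0;
}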
There doesn't appear to be even a nonportable pthread attribute that would set this up for you, but you can use the above approach to wrap pthread_create and add the capability yourself.
If you're doing your own threading on Linux, you can control descriptor-table sharing directly via the CLONE_FILES flag to clone (leave the flag out and the new thread starts with its own copy of the table). Otherwise you'll need to make your wrapped_pthread_create wait until the child has made the unshare call (and cancel and reap the thread if the call failed); a sketch follows below.
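A minimal sketch of such a wrapper, assuming Linux with glibc (compile with -pthread). The names trampoline and start_args are invented for illustration, and instead of pthread_cancel the child simply returns on failure so the parent can reap it with pthread_join:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>          /* unshare(), CLONE_FILES */
#include <semaphore.h>
#include <errno.h>

struct start_args {                     /* hypothetical helper type */
    void *(*start_routine)(void *);
    void *arg;
    int   unshare_errno;                /* 0 on success, errno from unshare() otherwise */
    sem_t ready;                        /* posted once unshare() has been attempted */
};

static void *trampoline(void *p)
{
    struct start_args *a = p;
    /* Copy what we need before signalling; the parent's struct may vanish after that. */
    void *(*fn)(void *) = a->start_routine;
    void *arg = a->arg;

    a->unshare_errno = (unshare(CLONE_FILES) == 0) ? 0 : errno;
    int failed = a->unshare_errno;
    sem_post(&a->ready);                /* parent may now return from the wrapper */

    if (failed)                         /* parent reaps us with pthread_join() */
        return NULL;
    return fn(arg);
}

int wrapped_pthread_create(pthread_t *thr, const pthread_attr_t *attr,
                           void *(*start_routine)(void *), void *arg)
{
    struct start_args a = { .start_routine = start_routine, .arg = arg };
    sem_init(&a.ready, 0, 0);

    int rc = pthread_create(thr, attr, trampoline, &a);
    if (rc == 0) {
        sem_wait(&a.ready);             /* block until the child has tried unshare() */
        if (a.unshare_errno) {          /* give up: reap the thread, report the error */
            pthread_join(*thr, NULL);
            rc = a.unshare_errno;
        }
    }
    sem_destroy(&a.ready);
    return rc;
}

The semaphore only makes the parent block until the child has attempted unshare(CLONE_FILES), so a failure can be reported synchronously as the wrapper's return value.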
Upvotes: 4