user2830601

Reputation: 11

Limit the number of file descriptors a process can open over its lifespan

I'm on a Debian-based system. One of the vulnerabilities I'm trying to address is the creation of many zero-byte files, which tends to exhaust the inodes available in the filesystem.

My system allows users to execute code in a sandboxed environment, and all resource ulimits are set (memory, CPU, processes, STDOUT, etc.). One resource limit that I'm unable to set is the total number of files a process can create.

The problem exists because there are a few world-writable directories in the sandbox, and file-creation permission cannot be revoked from the executing process due to other constraints.

  1. ulimit has an open files (-n) option, but this only limits the number of file descriptors a process can have open concurrently, not the total it can create (see the sketch after this list).

  2. I tried exploring disk quotas, but these look like user-specific limits on the number of inodes a user can create. Ideally I'd like this to be a per-process limit rather than a per-user limit.
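
Here is a minimal sketch of point 1 (assuming a world-writable /tmp/sandbox directory as a placeholder): even with RLIMIT_NOFILE lowered to 16, the process can keep creating zero-byte files as long as it closes each descriptor.

```c
/* Sketch: RLIMIT_NOFILE (ulimit -n) caps concurrent descriptors only.
 * /tmp/sandbox is a placeholder world-writable directory; it must exist. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    /* Allow at most 16 descriptors open at the same time. */
    struct rlimit rl = { .rlim_cur = 16, .rlim_max = 16 };
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* Still creates 1000 files over the process lifetime, because each
     * descriptor is closed before the next open(O_CREAT). */
    for (int i = 0; i < 1000; i++) {
        char path[64];
        snprintf(path, sizeof path, "/tmp/sandbox/empty-%d", i);
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        close(fd);
    }
    puts("created 1000 zero-byte files despite a descriptor limit of 16");
    return 0;
}
```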

Q1) Is there no reliable way to limit the number of file descriptors a process can create over its lifespan?

OR

Q2) Is there any low-overhead monitoring tool to keep track of the number of open(O_CREAT) calls made by a process?

Upvotes: 1

Views: 773

Answers (1)

You can limit resources inside a process using setrlimit(2) (which is what the bash ulimit builtin uses).
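
As a rough sketch (the program name ./untrusted and the limit values are placeholders), a wrapper can apply such limits in a child before exec'ing the sandboxed code; note that neither RLIMIT_NOFILE nor RLIMIT_FSIZE bounds the total number of files created over the program's lifetime.

```c
/* Sketch: apply resource limits in a child before exec'ing the sandboxed
 * program -- the C equivalent of running it under a shell wrapper that
 * calls ulimit.  "./untrusted" and the limit values are placeholders. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* At most 64 descriptors open at once ... */
        struct rlimit nofile = { .rlim_cur = 64, .rlim_max = 64 };
        /* ... and no file may grow beyond 0 bytes.  Neither limit stops
         * the program from creating many zero-byte files. */
        struct rlimit fsize = { .rlim_cur = 0, .rlim_max = 0 };
        setrlimit(RLIMIT_NOFILE, &nofile);
        setrlimit(RLIMIT_FSIZE, &fsize);
        execl("./untrusted", "untrusted", (char *) NULL);
        perror("execl");
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```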

You can set up disk quotas on a file system.
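
Per-user inode limits can be set with setquota(8), or programmatically via quotactl(2); a rough sketch (assuming quotas are enabled on the filesystem, root privileges, and the placeholder device /dev/sda1 and uid 1001) could look like this:

```c
/* Sketch: set a per-user inode hard limit via quotactl(2), which is what
 * setquota(8) uses.  Assumes quotas are enabled on the filesystem and that
 * this runs as root; /dev/sda1 and uid 1001 are placeholders. */
#include <stdio.h>
#include <sys/quota.h>
#include <sys/types.h>

int main(void)
{
    struct dqblk dq = { 0 };
    dq.dqb_ihardlimit = 1000;       /* hard cap: 1000 inodes for this user */
    dq.dqb_isoftlimit = 1000;
    dq.dqb_valid = QIF_ILIMITS;     /* only the inode limits are being set */

    uid_t uid = 1001;               /* the sandbox user (placeholder)      */
    if (quotactl(QCMD(Q_SETQUOTA, USRQUOTA), "/dev/sda1", uid,
                 (caddr_t) &dq) != 0) {
        perror("quotactl");
        return 1;
    }
    puts("inode quota set");
    return 0;
}
```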

There are no more fine-grained ways.

> Is there any low-overhead monitoring tool to keep track of the number of open(O_CREAT) calls made by a process?

You could monitor the creation of files using inotify(7) facilities.
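
For example, a small watcher (sketch below; /tmp/sandbox stands in for one of the world-writable directories) can count IN_CREATE events, though it only observes creations after the fact and cannot prevent them:

```c
/* Sketch: count file creations in a directory with inotify(7).
 * /tmp/sandbox is a placeholder for one of the world-writable directories.
 * This observes creations; it cannot block them. */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init1(0);
    if (fd < 0) {
        perror("inotify_init1");
        return 1;
    }
    if (inotify_add_watch(fd, "/tmp/sandbox", IN_CREATE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[4096] __attribute__ ((aligned (__alignof__ (struct inotify_event))));
    long created = 0;
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0)
            break;
        /* A single read may return several events, back to back. */
        for (char *p = buf; p < buf + len;
             p += sizeof (struct inotify_event) + ((struct inotify_event *) p)->len) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if ((ev->mask & IN_CREATE) && ev->len > 0)
                printf("created: %s (total %ld)\n", ev->name, ++created);
        }
    }
    return 0;
}
```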

You might write and use a FUSE filesystem tailored to your needs.

Look also into containers, perhaps Docker.

Upvotes: 0
