clund

Reputation: 99

Overcoming inode limitation

What's the best practice for storing a large (and growing) number of small files on a server without running into inode limitations?

For a project, I am storing a large number of small files on a server with 2 TB of disk space, but my limit is the 2,560,000 allowed inodes. Recently the server used up all its inodes and was unable to write new files. I have since moved some files into databases, but others (images and JSON files) remain on the drive. I am currently back at 58% inode usage, so a solution is needed soon.

The reason for storing the files individually is to limit the number of database calls. Basically, the scripts check whether a file exists and, if so, return results accordingly. Performance-wise this makes sense for my application, but as stated above it has its limits.
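The check-then-read pattern described above can be sketched as follows; `CACHE_DIR` and the `.json` naming are assumptions, not details from the question:

```shell
#!/bin/sh
# Hypothetical lookup: serve the stored JSON file if it exists,
# otherwise fall back to a database query (placeholder here).
CACHE_DIR="/var/data/cache"   # assumed location of the small files
key="$1"                      # identifier passed by the caller

if [ -f "$CACHE_DIR/$key.json" ]; then
    # Cache hit: return the file contents directly, no DB call needed.
    cat "$CACHE_DIR/$key.json"
else
    # Cache miss: this is where the database would be queried.
    echo "cache miss: query database for $key" >&2
fi
```

Each cached result costs one inode under this scheme, which is exactly the pressure described above.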

As I understand it, moving the files into sub-directories does not help, because each inode points to a file (and a directory is itself a file), so I would in fact use up even more inodes.
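For anyone monitoring this situation, inode usage is reported separately from disk space; the `IUse%` column here is the number that runs out, regardless of free bytes:

```shell
# Per-filesystem inode totals, used, free, and usage percentage.
df -i /
```

GNU `du --inodes` can additionally break the count down per directory to find where the inodes are going.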

Alternatively, I might be able to bundle the files together into some kind of archive file, but that would require some sort of indexing.
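A minimal sketch of the bundling idea, using `tar` (file names and paths are made up for illustration): many small files collapse into one archive, one inode. The trade-off is exactly the indexing problem mentioned above, since plain `tar` scans the archive sequentially to find a member; a zip archive, by contrast, carries a central directory that acts as a built-in index for random access.

```shell
#!/bin/sh
# Bundle many small JSON files into a single archive (one inode)...
tar -cf bundle.tar item1.json item2.json

# ...then stream a single member back out to stdout on demand.
# Note: tar has no index, so this is a sequential scan of the archive.
tar -xOf bundle.tar item1.json
```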

Perhaps I am going about this all wrong, so any feedback is greatly appreciated.

Upvotes: 1

Views: 374

Answers (1)

clund

Reputation: 99

On the advice of arkascha I looked into loop devices and found some documentation on losetup. This remains to be tested.
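An untested sketch of that direction, assuming root access and made-up sizes/paths: create a filesystem image with an explicitly high inode count and mount it via a loop device, so the inode budget inside the image is independent of the host filesystem's.

```shell
# Allocate a 10 GiB image file (size and path are assumptions).
dd if=/dev/zero of=/srv/small-files.img bs=1M count=10240

# Format it as ext4 with an explicit inode count (-N), far above
# the default ratio, since the workload is many tiny files.
mkfs.ext4 -N 20000000 /srv/small-files.img

# Mount via a loop device; -o loop runs losetup behind the scenes.
mkdir -p /mnt/small-files
mount -o loop /srv/small-files.img /mnt/small-files
```

Note this costs only one inode (the image file) on the host filesystem, while the ext4 filesystem inside it has its own 20 million.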

Upvotes: 0
