Reputation: 3647
I'm using C++ to back up files from my clients' systems. Files are flooding in from the client machines every second, and I have to get a file descriptor for every file I write on the server. Sometimes I get as many as 10K files within a minute, so how can I actually use file descriptors to write multiple files efficiently?
I have a C++ socket listener; each client machine connects to the server and starts uploading files. Is it possible to write multiple files with a limited number of file descriptors? I have tried writing all the files into one single large file, but then I have to keep track of each file's start and end bytes within the large file, for every single file I write into it. That would be a heck of a lot of work.
I want to be able to write files with high performance and also keep the disk safe. Any ideas to share?
Upvotes: 1
Views: 943
Reputation: 393739
It seems you should probably queue the requests.
That way you can have a limited number of workers (say, 16, depending on the hardware on your system) to actually transfer files, and keep the rest waiting.
This would likely improve the throughput and performance, since it avoids hitting the disks concurrently. On old-fashioned hardware (i.e. spinning disks, most often) concurrent writes can lead to major performance degradation because they cause repeated disk seeks.
Regardless, when writing sequential uploads to shared volumes, bandwidth usage will usually be optimal when
sum(upload rates) ~= effective disk write bandwidth
On the queueing:
- see the backlog parameter to the listen function
- look at producer/consumer implementations (a minimal sketch follows below)
See also searching for "consumer/producer in C++" on Google or other search engines.
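As a rough illustration of the producer/consumer idea, here is a minimal sketch of a fixed pool of writer threads draining a shared queue. The names (FileJob, WriterPool) and the queue layout are illustrative, not from the original post, and the networking side (the accept loop, listen(fd, backlog), reading the upload) is elided; treat it as a sketch of the technique, not a drop-in implementation.

```cpp
// Producer/consumer sketch: network threads submit() jobs, a fixed
// pool of workers pops them and writes to disk, so at most `workers`
// file descriptors are open for writing at any moment.
#include <condition_variable>
#include <cstddef>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct FileJob {              // hypothetical unit of work
    std::string path;         // destination path on the server
    std::vector<char> data;   // bytes received from the client
};

class WriterPool {
public:
    explicit WriterPool(std::size_t workers) {
        for (std::size_t i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~WriterPool() {           // drains remaining jobs, then joins
        {
            std::lock_guard<std::mutex> lock(mtx_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    void submit(FileJob job) {   // called by network threads (producers)
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void run() {              // each worker holds one write fd at a time
        for (;;) {
            FileJob job;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;   // done_ set, nothing left
                job = std::move(queue_.front());
                queue_.pop();
            }
            // Write outside the lock so workers don't serialize on it;
            // error handling elided for brevity.
            std::ofstream out(job.path, std::ios::binary);
            out.write(job.data.data(),
                      static_cast<std::streamsize>(job.data.size()));
        }
    }
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<FileJob> queue_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main() {
    WriterPool pool(16);      // the "limited number of workers" above
    pool.submit({"example.bin", {'h', 'i'}});
}   // destructor drains the queue and joins the workers
```

With a design like this, the number of simultaneously open output descriptors equals the worker count, and the queue absorbs bursts of incoming files. If memory is a concern, bound the queue and make submit() block producers while it is full.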
Upvotes: 1