Reputation: 2281
I'm working on a C project where we have to write some binary files in an Embedded Linux environment (kernel 2.6.37). Normally we can write the files in about 200-300 ms, but occasionally a write takes up to 10 seconds and we have no idea why. The occurrences seem random, with no particular event happening in other parts of the system, such as in the UI app.
Either way, we are revising our method of writing the file. After doing some research on the web (here and here and here) we concluded that writing with native Linux calls would be better than using plain C stdio, even though that may not end up helping much with our problem. For now we are writing in a way similar to this, that is, with these functions:
#include <stdio.h>
#include <stdlib.h>

/* 8M elements of 8 bytes each: one 64 MiB buffer per write. */
#define SIZE (8ULL * 1024ULL * 1024ULL)

static unsigned long long a[SIZE];

int main(void)
{
    FILE *pFile = fopen("file.binary", "wb");
    if (pFile == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    for (unsigned long long j = 0; j < 1024; ++j) {
        /* Some calculations to fill a[] */
        fwrite(a, 1, SIZE * sizeof(unsigned long long), pFile);
    }
    fclose(pFile);
    return EXIT_SUCCESS;
}
Well, what I would like to know is: what would be the native Linux way to do an equivalent operation (and in the fastest way possible)? The links mentioned only cover copying files, not simply writing them, so I suppose there may be more specific functions to use.
Any help is appreciated (as well as any tips regarding the original problem).
Upvotes: 1
Views: 1602
Reputation: 201
It sounds like you are writing to an SD card or microSD card. Not all cards are created equal, so start with the fastest you can find. I recommend the SanDisk Extreme Pro, which claims write speeds of up to 90 MB/s.
Second, the randomness you describe suggests the SD card may be doing wear leveling. The card actually has a small CPU inside that reallocates storage based on sector hits; this is done to extend the life of the card.
If you write a small amount to an SD card, it first goes into an internal RAM buffer on the card itself, usually sized at 512 bytes per block. Some cards have multiple such "cache" blocks, resulting in higher throughput. This gives excellent performance for small writes. Also note that figure of 512: if you can issue your writes in chunks of 512 bytes, you are matching the fastest possible way to transfer to the underlying medium, i.e. never write less than that size ;-)
Upvotes: 2