Cheick

Reputation: 2204

Is opening a file stream a costly operation?

I'd like to provide lazy loading functionality in a class that reads a structured file. Every element of the file has a header and a payload.

The idea is to load only the headers from the file and access the payload data only when the relevant field is accessed.

A prototype of the class would look like this in C#:

public class OneElement
{
    public Header TheHeader { get; private set; }

    private string _FileName;

    private long _StreamPosition; // this value is initialized when the header is read

    private Payload _ThePayload;

    public Payload ThePayload
    {
        get
        {
            if (_ThePayload == null)
            {
                using (var stream = File.OpenRead(_FileName))
                {
                    stream.Seek(_StreamPosition, SeekOrigin.Begin); // seek to the relevant position
                    _ThePayload = ReadPayload(stream); // this method reads the payload from the current position
                }
            }
            return _ThePayload;
        }
    }
}

Will the operation of opening the file and getting the payload be costly, especially in a context where the payload represents audio or video data?

Upvotes: 1

Views: 2952

Answers (2)

Foxfire

Reputation: 5765

Opening a file does cost some resources, but if your payload is audio or video, reading that data will be far more resource-intensive than opening the file.

So if you are trying to cache the contents just to save yourself a single file open, forget about that idea.

Upvotes: 1

Jon Skeet

Reputation: 1502476

If you're reading audio/video then I'd expect you to be reading rather a lot of data. That would dwarf the cost of just opening the file. Reading large amounts of data from disk is generally a costly operation though.

On the other hand, if you were just reading a few bytes at a time then repeatedly opening/closing the file wouldn't be a good idea - it would be better to read large chunks and cache them appropriately.

Do you have a benchmark for this? How does it perform at the moment? If it's working okay, do you have any reason to try to think of a more complicated system?
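To illustrate the chunk-and-cache idea above, here is a minimal sketch (not from the original question): instead of re-reading a few bytes on every access, read one large chunk at the element's offset and keep it. `ChunkSize`, the field names, and `ReadChunk` are all illustrative, not part of the questioner's API.

```csharp
using System;
using System.IO;

public class OneElementChunked
{
    private const int ChunkSize = 64 * 1024; // illustrative chunk size

    private readonly string _fileName;
    private readonly long _streamPosition;
    private byte[] _cachedChunk; // null until first access

    public OneElementChunked(string fileName, long streamPosition)
    {
        _fileName = fileName;
        _streamPosition = streamPosition;
    }

    // Reads one large chunk starting at the element's offset,
    // caching it so the file is opened at most once per element.
    public byte[] ReadChunk()
    {
        if (_cachedChunk == null)
        {
            using (var stream = File.OpenRead(_fileName))
            {
                stream.Seek(_streamPosition, SeekOrigin.Begin);
                var buffer = new byte[ChunkSize];
                int bytesRead = stream.Read(buffer, 0, buffer.Length);
                Array.Resize(ref buffer, bytesRead); // trim near end of file
                _cachedChunk = buffer;
            }
        }
        return _cachedChunk;
    }
}
```

Whether the chunk cache is worth the memory depends entirely on access patterns, which is why benchmarking the simple version first is the sensible starting point.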

Upvotes: 4
