Alexander Shukaev

Reputation: 17019

C: How Efficient Are Output Routines in Terms of Buffering?

I can't find any information on whether buffering is already done implicitly, out of the box, when one writes a file with either fprintf or fwrite. I understand that this may be an implementation- or platform-dependent feature. What I'm interested in is whether I can at least expect it to be implemented efficiently on modern popular platforms such as Windows, Linux, or Mac OS X.

AFAIK, buffering for I/O routines is usually done at two levels:

  1. Library level: this could be the C standard library, the Java SDK (BufferedOutputStream), etc.;
  2. OS level: modern platforms extensively cache/buffer I/O operations.

My question is about #1, not #2 (which I already know to be true). In other words, can I expect C standard library implementations on all modern platforms to take advantage of buffering?

If not, then is manually creating a buffer (of a cleverly chosen size) and flushing it on overflow a good solution to the problem?

Conclusion

Thanks to everyone who pointed out functions like setbuf and setvbuf. They are exactly the evidence I was looking for to answer my question. A useful extract:

All files are opened with a default allocated buffer (fully buffered) if they are known to not refer to an interactive device. This function can be used to either set a specific memory block to be used as buffer or to disable buffering for the stream.

The default streams stdin and stdout are fully buffered by default if they are known to not refer to an interactive device. Otherwise, they may either be line buffered or unbuffered by default, depending on the system and library implementation. The same is true for stderr, which is always either line buffered or unbuffered by default.

Upvotes: 0

Views: 177

Answers (4)

Alexander L. Belikoff

Reputation: 5721

In most cases, buffering for stdio routines is tuned to match the typical block size of the operating system in question, in order to minimize the number of I/O operations in the default case. Of course, you can always change it with the setbuf()/setvbuf() routines.

Unless you are doing something special, you should stick to the default buffering, as you can be quite sure it's close to optimal on your OS (for the typical scenario).

The only case that justifies changing it is when you use the stdio library to interact with I/O channels that are not geared towards it, in which case you might want to disable buffering altogether. But I don't see such cases too often.

Upvotes: 3

AProgrammer

Reputation: 52334

The C I/O library lets you control how buffering is done (inside the application, before anything the OS does) with setvbuf. If you don't specify anything, the standard requires that "when opened, a stream is fully buffered if and only if it can be determined not to refer to an interactive device." The same requirement holds for stdin and stdout, while stderr is not fully buffered even when it can be determined to refer to a non-interactive device.

Upvotes: 1

Mike Dunlavey

Reputation: 40699

As @David said, you can expect sensible buffering (at both levels).

However, there can be a huge difference between fprintf and fwrite, because fprintf has to interpret a format string. If you stack-sample it, you may find a significant percentage of time spent converting doubles into character strings, and the like.

Upvotes: 1

David Schwartz

Reputation: 182875

You can safely assume that standard I/O is sensibly buffered on any modern system.

Upvotes: 1
