Reputation: 137
It is well known that ios_base::sync_with_stdio(false) helps the performance of cin and cout in <iostream> by preventing synchronization between C and C++ I/O. However, I am curious whether it makes any difference at all for <fstream>.
I ran some tests with GNU C++11 and the following code (with and without the ios_base::sync_with_stdio(false) line):
#include <fstream>
#include <iostream>
#include <chrono>
using namespace std;

ofstream out("out.txt");

int main() {
    auto start = chrono::high_resolution_clock::now();
    long long val = 2;
    long long x = 1 << 22;
    ios_base::sync_with_stdio(false);   // line removed for the second set of trials
    while (x--) {
        val += x % 666;
        out << val << "\n";
    }
    auto end = chrono::high_resolution_clock::now();
    chrono::duration<double> diff = end - start;
    cout << diff.count() << " seconds\n";
    return 0;
}
The results are as follows:
With sync_with_stdio(false): 0.677863 seconds (average of 3 trials)
Without sync_with_stdio(false): 0.653789 seconds (average of 3 trials)
Is this to be expected? Is there a reason the speed is nearly identical, or even slightly slower, with sync_with_stdio(false)?
Thank you for your help.
Upvotes: 5
Views: 1555
Reputation: 153840
The idea of sync_with_stdio() is to allow mixing input and output on the standard stream objects (stdin, stdout, and stderr in C; std::cin, std::cout, std::cerr, and std::clog as well as their wide-character counterparts in C++) without any need to worry about characters sitting in the buffers of any of the involved objects. Effectively, with std::ios_base::sync_with_stdio(true) the C++ IOStreams can't use their own buffers. In practice that normally means buffering at the std::streambuf level is entirely disabled. Without a buffer, IOStreams are rather expensive, though, as they process individual characters, potentially involving multiple virtual function calls per character. Essentially, the speed-up you get from std::ios_base::sync_with_stdio(false) comes from allowing both the C and the C++ library to use their own buffers.
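For example, the kind of mixing this setting is meant to support looks roughly like this (a minimal sketch; whether out-of-order output is actually observable depends on the implementation and on when each buffer happens to be flushed):

#include <cstdio>
#include <iostream>

int main() {
    // With the default, synchronized setting, C and C++ output to the
    // standard streams may be freely interleaved and appears in program order.
    // Uncommenting the next line gives each library its own buffer instead;
    // the standard only guarantees a defined effect if it is called before
    // the first I/O operation.
    // std::ios_base::sync_with_stdio(false);
    std::printf("from printf ");
    std::cout << "from cout ";
    std::printf("from printf again\n");
    // When unsynchronized, the printf output and the cout output sit in
    // separate buffers and may reach the terminal out of order.
    return 0;
}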
An alternative approach would be to share the buffer between the C and C++ library facilities, e.g., by building the C library facilities on top of the more powerful C++ library facilities (before people complain that this would be a terrible idea because it would make C I/O slower: that is actually not true at all with a proper implementation of the standard C++ library IOStreams). I'm not aware of any non-experimental implementation which does this. With such a setup, std::ios_base::sync_with_stdio(value) wouldn't have any effect at all.
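Purely as a thought experiment, the shared-buffer idea amounts to routing the C-level calls through the same std::streambuf the C++ streams use (the function name below is made up for illustration):

#include <iostream>
#include <streambuf>

// Hypothetical: a C-level putc-style call that writes through the same
// std::streambuf the C++ stream uses, so there is only one buffer and
// nothing left to synchronize.
int shared_putc(int c, std::streambuf* sb) {
    return sb->sputc(static_cast<char>(c));
}

int main() {
    shared_putc('C', std::cout.rdbuf());          // the "C side" writes...
    std::cout << "++ share one buffer\n";         // ...and the C++ side continues.
    return 0;
}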
Typical implementations of IOStreams use different stream buffers for the standard stream objects than for file streams. Part of the reason is probably that the standard stream objects are normally not opened using a name but via some other entity identifying them, e.g., a file descriptor on UNIX systems, and it would require a "back door" interface to allow using a std::filebuf for the standard stream objects. However, at least early implementations of Dinkumware's standard C++ library, which shipped (ships?), e.g., with MSVC++, used std::filebuf for the standard stream objects. That std::filebuf implementation was just a wrapper around FILE*, i.e., it literally implemented what the C++ standard says rather than implementing it semantically. That was already a terrible idea to start with, but it was made worse by inhibiting std::streambuf-level buffering for all file streams when std::ios_base::sync_with_stdio(true) was in effect, as that setting also affected file streams. I do not know whether this performance problem has been fixed since. Old issues of the C/C++ Users Journal and/or P.J. Plauger's "The [draft] Standard C++ Library" should contain a discussion of this implementation.
tl;dr: According to the standard, std::ios_base::sync_with_stdio(false) only changes the constraints on the standard stream objects to make their use faster. Whether it has other effects depends on the IOStreams implementation, and there was at least one (Dinkumware's) where it made a difference for file streams as well.
Upvotes: 1