Reputation: 1178
I imagine reading byte by byte would be very inefficient, but reading in bulk would almost always read more than needed, requiring the leftover input to be stored in some shared context for all subsequent read operations to find. What am I missing?
Upvotes: 1
Views: 140
Reputation: 148870
That is what buffered I/O is for. Long story short, the C library reads a full buffer, whose size is implementation-dependent but rather large, and then the getline
function looks for the first newline in that memory buffer, leaving the read position set for the next access.
Upvotes: 2
Reputation: 399703
The prototype is:
ssize_t getline(char **lineptr, size_t *n, FILE *stream);
So it clearly operates on a FILE *, which is already buffered. Reading character by character from it is therefore not inefficient at all.
Upvotes: 3