Shah Fahad

Reputation: 53

Can an output stream in C that is fully buffered be flushed automatically even before the buffer is completely filled?

Consider this code:

#include <stdio.h>
int main()
{
    char buffer[500];
    int n = setvbuf(stdout, buffer, _IOFBF, 100);
    
    printf("Hello");
    while(1 == 1)
        ;
    return 0;
}

When run on Linux, the "Hello" message appears on the output device immediately, and the program then hangs indefinitely. Shouldn't the output instead be buffered until stdout is flushed or closed, either manually or at normal program termination? That's what seems to happen on Windows 10, and also what happens on Linux if the buffer size is specified as 130 bytes or more. I am using VS Code on both systems.

What am I missing? Am I wrong about the Full Buffering Concept?

Upvotes: 5

Views: 357

Answers (2)

John Bollinger

Reputation: 181094

What am I missing? Am I wrong about the Full Buffering Concept?

You are not wrong about the concept. There is wiggle room in the wording of the specification, as @WilliamPursell observes in his answer, but your program's observed behavior does not exhibit full buffering according to the specification's express intent. Moreover, I read that wiggle room as letting implementations conform even when they are, for one reason or another, incapable of implementing the intent, not as a free pass for implementations that reasonably can implement the intent to do something different at will.

I tested this variation on your program against Glibc 2.22 on Linux:

#include <stdio.h>

int main() {
    static char buffer[BUFSIZ] = { 0 };
    int n = setvbuf(stdout, buffer, _IOFBF, 100);

    if (n != 0) {
        perror("setvbuf");
        return 1;
    }

    printf("Hello");
    puts(buffer);
    return 0;
}

The program exited with status 0 and did not print any error output, so I conclude that setvbuf returned 0, indicating success. However, the program printed "Hello" only once, showing that in fact it did not use the specified buffer. If I increase the buffer size specified to setvbuf to 128 bytes (= 2⁷) then the output is "HelloHello", showing that the specified buffer is used.

The observed behavior, then, seems to be¹ that this implementation of setvbuf silently sets the stream to unbuffered when the provided buffer is specified to be smaller than 128 bytes. That is consistent with the behavior of your version of the program, too, but inconsistent with my reading of the function's specifications:

[...]The argument mode determines how stream will be buffered, as follows: _IOFBF causes input/output to be fully buffered [...]. If buf is not a null pointer, the array it points to may be used instead of a buffer allocated by the setvbuf function and the argument size specifies the size of the array; otherwise, size may determine the size of a buffer allocated by the setvbuf function. The contents of the array at any time are indeterminate.

The setvbuf function returns zero on success, or nonzero if an invalid value is given for mode or if the request cannot be honored.

(C17, 7.21.5.6/2-3)

As I read the specification, setvbuf is free to use the specified buffer or not, at its discretion, and if it chooses not to do so then it may or may not use a buffer of the specified size, but it must either set the specified buffering mode or fail. It is inconsistent with those specifications for it to change the buffering mode to one that is different from both the original mode and the requested mode, and it is also inconsistent to fail to set the requested mode and nevertheless return 0.

Inasmuch as I conclude that this Glibc version's setvbuf is behaving contrary to the language specification, I'd say you've tripped over a glibc bug.


¹ But it should be noted that the specifications say that the contents of the buffer at any time are indeterminate. Therefore, by accessing the buffer after asking setvbuf to assign it as a stream buffer, this program invokes undefined behavior, hence, technically, it does not prove anything.

Upvotes: 2

William Pursell

Reputation: 212454

Given the lack of specificity in the standard, I would argue that such behavior is not prohibited.

According to https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_05:

When a stream is "unbuffered", bytes are intended to appear from the source or at the destination as soon as possible; otherwise, bytes may be accumulated and transmitted as a block. When a stream is "fully buffered", bytes are intended to be transmitted as a block when a buffer is filled. When a stream is "line buffered", bytes are intended to be transmitted as a block when a <newline> is encountered. Furthermore, bytes are intended to be transmitted as a block when a buffer is filled, when input is requested on an unbuffered stream, or when input is requested on a line-buffered stream that requires the transmission of bytes. Support for these characteristics is implementation-defined, and may be affected via setbuf() and setvbuf().

Upvotes: 0
