poinda

Reputation: 13

php flushing stdout/stderr

I have a PHP script which runs on my Ubuntu server. The script runs in an infinite loop, running queries and gathering statistics; it is a monitor for some other services. Every 60 seconds the gathered stats are dumped as a JSON object to a .js file. The .js file is monitored elsewhere and is unimportant for the purposes of my question.

When I run my script I redirect both stdout and stderr to a log file. For the most part there is no output, either standard or error, so the log remains empty.

On occasion, though, there will be output. The problem I experienced last night was that the script failed for some reason and the log file ended up filling the whole partition, which affected the stability of the server.

What I have done, or tried to do, is add a simple check: every 60 seconds (when the stats are dumped) the script checks the size of the log file, and if it is greater than X MB it empties the file and allows logging to continue. The intention is that if the script does experience a problem again, the log file won't fill up the whole partition.

I can successfully detect the log file size and clear it when it reaches my set limit, but then on the next write to either STDOUT or STDERR, all the previous data is dumped back into the file in one go.

Any assistance with this would be greatly appreciated.


I'm calling the script with:

php checker.php &> log

but I have also tried

php checker.php > log 2>&1

and also redirecting the streams at the start of the script:

fclose(STDOUT);
fclose(STDERR);
$STDOUT = fopen($LOG_FILE_LOCATION, 'wb');
$STDERR = fopen($LOG_FILE_LOCATION, 'wb');

Each way produces the same result.

Code for clearing the log file:

function CheckLogFilesize($file)
{
    global $LOG_FILE_LOCATION;
    global $MAX_LOG_SIZE_MB;
    $max_log_size_bytes = $MAX_LOG_SIZE_MB * 1024 * 1024;

    $size = exec('stat -c %s ' . $file);

    // If log file size is too large.. reset it.
    if (intval($size) > $max_log_size_bytes)
    {
        flush();

        // Clear the log
        $fp = fopen($file, 'w');
        fwrite($fp, '');
        fclose($fp);
    }
}

Thanks

Upvotes: 1

Views: 3819

Answers (3)

sberry

Reputation: 131978

EDIT

Figured it out:

function get_log_size() {
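    // clearstatcache() so filesize() reports the current size rather than a cached value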
    clearstatcache();
    return filesize("/tmp/log.log");
}

foreach (range(1, 1000000) as $interval) {
    echo $interval . "\n";
    usleep(10000);
    if ($interval % 50 == 0) {
        fwrite(STDERR, "mod 50 fail test");
    }
    $size = get_log_size();
    echo $size . " bytes\n";
    if ($size > 100) {
        $file = "/tmp/log.log";
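        // opening with 'w' truncates the file to zero length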
        $fp = fopen($file, 'w');
        fclose($fp);
        print "Truncated\n";
    }

}

Started with

php test.php >> /tmp/log.log 2>&1

Note the >> instead of >. You have to use append mode: with a plain > the writing process keeps its own file offset, so truncating the file does not reset that offset and the next write extends the file straight back to where it was (the old length reappears as a gap of null bytes). With >>, every write goes to the current end of the file, so the truncation sticks.

Example output of tail -f /tmp/log.log:

λ ~/ tail -f /tmp/log.log

1
2 bytes
2
12 bytes
3
23 bytes
4
34 bytes
5
45 bytes
6
56 bytes
7
67 bytes
8
78 bytes
9
89 bytes
10
101 bytes
Truncated
11
13 bytes
12
25 bytes
13
37 bytes
14
49 bytes
15
61 bytes
16
73 bytes
17
85 bytes
18
97 bytes
19
109 bytes
Truncated
20
13 bytes
21
25 bytes
22
37 bytes
23
49 bytes
24
61 bytes
25
73 bytes
26
85 bytes
27
97 bytes
28
109 bytes
Truncated

Upvotes: 1

bart s

Reputation: 5100

If I had to do such a thing, I would detect the log file size and, if it is larger than the threshold, move the file to another filename (e.g. old-log.txt). That way you have at most 2 log files with a total size of 2 times the threshold.

The advantage is that you still have some old log entries available just after the threshold is reached. If you just clear the file, you will find yourself looking for information in an empty file.
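A minimal sketch of that rotation idea in PHP; the paths and the 10 MB limit below are illustrative placeholders, not from the answer:

<?php
// Rotate the log once it grows past a threshold, keeping a single old copy.
// $logFile, $oldFile and the 10 MB limit are placeholders.
$logFile  = '/var/log/checker/log';
$oldFile  = '/var/log/checker/old-log.txt';
$maxBytes = 10 * 1024 * 1024;

clearstatcache(); // make sure filesize() reports the current size, not a cached one
if (is_file($logFile) && filesize($logFile) > $maxBytes) {
    rename($logFile, $oldFile); // overwrites any previous old-log.txt
    // Caveat: a writer that keeps its descriptor open (e.g. a shell redirection)
    // will continue writing into the renamed file until the log is reopened,
    // so this works best when the script opens and reopens the log itself.
}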

EDIT:

Take a look at the ob_* commands in PHP. They may help you clear the buffer(s).

See ob_start(); in the comments on that manual page you can find some more info about stdout. Some pseudo code:

...
ob_start();
// keep writing log until done
ob_end_clean(); // or use ob_end_flush();

Upvotes: 1

Ja͢ck

Reputation: 173542

If you're redirecting stdout into a log file, you can even try to unlink() that file, but it won't help: your stdout is still connected to it, so the data keeps being written (and the disk space isn't freed) until the process closes the descriptor.

One way is to clear the log file before you start the process and have the process stop when the file goes beyond a certain size. Then just run your script inside a shell script loop that only terminates when the process exit code is 0 (i.e. no errors); when the script stops with an error, it will get launched again immediately.

Normally, daemons open their own log files, so you can send them a HUP signal to reopen their log files, thus avoiding a complete restart.
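A minimal sketch of that signal-based reopening in PHP, assuming the pcntl extension is available and that the script manages its own log handle (the path below is a placeholder, not from the answer):

<?php
// Reopen the log on SIGHUP so an external job can rotate it without a restart.
// Requires the pcntl extension; the log path is an illustrative placeholder.
$logPath = '/var/log/checker/log';
$log = fopen($logPath, 'ab');

pcntl_async_signals(true); // PHP 7.1+; older versions need declare(ticks=1)
pcntl_signal(SIGHUP, function () use (&$log, $logPath) {
    fclose($log);
    $log = fopen($logPath, 'ab'); // a fresh file if the old one was renamed away
    fwrite($log, date('c') . " log reopened after SIGHUP\n");
});

while (true) {
    fwrite($log, date('c') . " still monitoring...\n");
    sleep(60);
}

An external rotation job would then rename the full log and send the script a HUP (kill -HUP <pid>) to make it switch to a new file.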

Upvotes: 1
