Noé Malzieu

Reputation: 2600

Logging to a non-blocking named pipe?

I have a question, and I couldn't find help anywhere on Stack Overflow or the web.

I have a program (the Celery distributed task queue) running multiple instances (workers), each with its own logfile (celery_worker1.log, celery_worker2.log).

Important errors are stored in a database, but I'd like to tail these logs from time to time when running new operations, to make sure everything is OK (their loglevel is lower).

My problem: these logs take up a lot of disk space. What I would like: to be able to "watch" the logs (tail -f) only when I need to, without them consuming disk space the rest of the time.

My idea so far:

Is there a way to have a non-blocking named pipe, which would deliver log lines to whoever is tailing it, and silently discard them (as if to /dev/null) when nobody is?

Or are there technical difficulties with such a pipe? If so, what are they?

Thank you for your answers!

Upvotes: 6

Views: 3347

Answers (2)

event

Reputation: 379

You could try a shared memory region (see man 7 shm_overview), or perhaps several of them. Organise them as circular buffers, so they store only the last N KB of your log; whenever you read one with a reader tool, it outputs everything to your console. This approach is used by busybox's syslog/logread suite (see logread.c).
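For concreteness, here is a minimal Python sketch of that circular-buffer idea (the name celery_logring and the sizes are illustrative; unlike busybox's implementation, it omits the locking that concurrent writers would need):

    from multiprocessing import shared_memory
    import struct

    RING_SIZE = 64 * 1024          # keep only the last 64 KiB of log output
    HDR = struct.Struct("Q")       # an 8-byte write cursor stored up front

    def open_ring(name="celery_logring", create=False):
        shm = shared_memory.SharedMemory(name=name, create=create,
                                         size=HDR.size + RING_SIZE)
        if create:
            HDR.pack_into(shm.buf, 0, 0)    # cursor starts at zero
        return shm

    def write(shm, data):
        (pos,) = HDR.unpack_from(shm.buf, 0)
        for b in data:                      # wrap byte-by-byte, for clarity
            shm.buf[HDR.size + pos % RING_SIZE] = b
            pos += 1
        HDR.pack_into(shm.buf, 0, pos)

    def read_all(shm):
        (pos,) = HDR.unpack_from(shm.buf, 0)
        if pos <= RING_SIZE:                # buffer has not wrapped yet
            return bytes(shm.buf[HDR.size:HDR.size + pos])
        ring = bytes(shm.buf[HDR.size:HDR.size + RING_SIZE])
        start = pos % RING_SIZE             # oldest byte still in the ring
        return ring[start:] + ring[:start]

A reader tool then just attaches with create=False and prints read_all(shm), which is morally what logread does against busybox syslogd's buffer.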

Upvotes: 1

pilcrow

Reputation: 58589

Have each worker log to stdout, but connect each stdout to a utility that automatically spools and rotates logs based on size or time. multilog and svlogd are examples of such utilities. With those programs, you'd merely tail the "current" log file.
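As a hedged sketch of that wiring in Python (assuming daemontools' multilog is installed; the worker command line is illustrative):

    import subprocess

    # Spooler: multilog keeps at most five 1 MiB files under ./log/worker1,
    # writing the live one as "current" (that's the file you'd tail).
    spooler = subprocess.Popen(
        ["multilog", "t", "s1048576", "n5", "./log/worker1"],
        stdin=subprocess.PIPE,
    )

    # Worker: stdout and stderr feed the spooler instead of a growing logfile.
    worker = subprocess.Popen(
        ["celery", "worker", "--loglevel=INFO"],
        stdout=spooler.stdin,
        stderr=subprocess.STDOUT,
    )

    spooler.stdin.close()   # drop our copy so multilog sees EOF on worker exit
    worker.wait()
    spooler.wait()

Since multilog rotates by renaming "current", tail -F (capital F) is the right way to follow the log across rotations.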

You're right that logrotate is not quite the right solution for the problem you have.

Named pipes won't work as you want. At best, your writers could fill up their pipes and then discard subsequent logs, which is the inverse of the behavior you want.
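To make the difficulty concrete, here is a minimal sketch (the FIFO path is hypothetical, and the FIFO is assumed to already exist) of what a non-blocking FIFO writer runs into: with no reader, the open itself fails, and with a slow reader, the writes do:

    import errno
    import os

    FIFO = "/tmp/worker1.fifo"   # hypothetical path, created with os.mkfifo

    def try_log(line):
        """Attempt one non-blocking write; return False if the line is lost."""
        try:
            fd = os.open(FIFO, os.O_WRONLY | os.O_NONBLOCK)
        except OSError as e:
            if e.errno == errno.ENXIO:   # nobody has the FIFO open for reading
                return False
            raise
        try:
            os.write(fd, line)
            return True
        except OSError as e:
            if e.errno in (errno.EAGAIN, errno.EPIPE):  # pipe buffer full, or
                return False                            # the reader went away
            raise
        finally:
            os.close(fd)

So the writer ends up dropping whatever doesn't fit the moment the pipe backs up, i.e., exactly the fresh lines you were hoping to watch.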

Upvotes: 1
