Mike Trpcic

Reputation: 25659

Why are logs stored in flat files, rather than a database (SQL)?

Why is it that web servers and other technology use flat files for logging, rather than a database of some sort, whether it be SQL or some kind of KVS or "NoSQL" solution?

Is there a benefit (speed, latency, write-times, etc) to using flat files, or am I simply missing something?

Upvotes: 6

Views: 1056

Answers (5)

Matt Grande

Reputation: 12157

Here's why we stopped using DB logs at my job.

try
{
    tx.Begin();
    // Exception here!
    tx.Commit();
}
catch(Exception ex)
{
    LogToDB(ex);
    tx.Rollback();
}

Every time we had an exception inside a database transaction, the log entry was rolled back along with everything else.

(I suppose I shouldn't say always... just when the rollback occurred after the logging. Still, that was confusing for a while!)
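One workaround (a rough sketch only; `tx` is the same transaction object as in the snippet above, and `LogToFile` is a hypothetical logger that appends to a flat file instead of the database) is to roll back first and then log somewhere the transaction can't touch:

try
{
    tx.Begin();
    // Exception here!
    tx.Commit();
}
catch (Exception ex)
{
    // Roll back first, then log outside the transaction so the
    // log entry cannot be undone along with the data changes.
    tx.Rollback();
    LogToFile(ex); // hypothetical flat-file logger, not LogToDB
}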

Upvotes: 1

Merlyn Morgan-Graham

Reputation: 59131

While the rest of the answers here are true (the KISS principle, etc.), I've seen a project where logging filled up the server's hard drive, and the team had to build automation to clean up the logs. To solve this, you might have to implement a rolling backup / maximum log size feature, or set up a scheduled task (cron) to move or delete the logs (a rough sketch of the size-cap approach is below).

There is no free lunch.
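A minimal sketch of that size cap, in C# for consistency with the other answer's snippet; the 10 MB limit and file names are assumptions for illustration, not details from the project above:

using System;
using System.IO;

static class SizeCappedLog
{
    const long MaxBytes = 10 * 1024 * 1024; // assumed 10 MB cap
    const string LogPath = "app.log";

    public static void Write(string line)
    {
        // If the active file is over the cap, move it aside so the
        // log can never grow without bound and fill the disk.
        var info = new FileInfo(LogPath);
        if (info.Exists && info.Length > MaxBytes)
        {
            File.Move(LogPath, LogPath + "." + DateTime.UtcNow.Ticks);
        }

        File.AppendAllText(LogPath,
            DateTime.UtcNow.ToString("o") + " " + line + Environment.NewLine);
    }
}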

Upvotes: 3

jches

Reputation: 4527

There's an eternal principle that says the fewer moving parts, the fewer points where something can go wrong. You minimize dependencies, thereby minimizing the places where bugs can occur. It's also fast because it's simple, and on a high-load server you don't want logging to bog down the system.

Interestingly, it's the same principle that made Charles Lindbergh's single-engine plane successful in flying non-stop across the Atlantic when so many other, bigger planes had failed before him. Keep it simple :)

Upvotes: 3

Jeff Ferland

Reputation: 18312

  • First and foremost, it's easy to write.
  • It's simple... writing to a file is one of the least likely things to go wrong.
  • Because of the simplicity, it is fast. This applies both to disk writes and to the CPU time for performing the operations.

A lot of this is also legacy. While database servers and machines with hundreds of gigabytes (if not terabytes) of storage and gigs of RAM are abundant, if not the norm, most seasoned server software hails from a time when system resources were fully consumed by much lower loads.

But mostly it's just easy. Why rely on SQLite (a simple embedded SQL engine), making sure its license is compatible, keeping appropriate versions maintained, and trusting that it has no underlying bugs or security issues... in order to do nothing but sequential inserts?
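To make the "sequential inserts" point concrete, here is a rough comparison, again in C#; the file name, table schema, and the Microsoft.Data.Sqlite package are assumptions chosen purely for illustration:

using System;
using System.IO;
using Microsoft.Data.Sqlite; // already an extra dependency, just to log

class LogComparison
{
    static void Main()
    {
        string entry = DateTime.UtcNow.ToString("o") + " GET /index.html 200";

        // Flat file: one sequential append, nothing else to manage.
        File.AppendAllText("access.log", entry + Environment.NewLine);

        // Embedded SQL: connection, schema, statement, parameters --
        // all to persist the same single line.
        using (var conn = new SqliteConnection("Data Source=access.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "CREATE TABLE IF NOT EXISTS log(entry TEXT)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO log(entry) VALUES ($entry)";
                cmd.Parameters.AddWithValue("$entry", entry);
                cmd.ExecuteNonQuery();
            }
        }
    }
}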

KISS. Log analysis tools shouldn't be part of log writing.

Upvotes: 3

user226555

Reputation:

You're not guaranteed to have a DB on the server.

If the DB is part of the problem, how do you look at the logs?

If the DB is not part of the problem, it's still simpler to look at the logs with any old text editor.

Why use something complicated when something simple works? When Apache (etc.) was first developed, open-source (free, ubiquitous) DBs were not as readily available.

Etc. Etc.

Upvotes: 10
