Reputation: 36404
Our PHP & MySQL based application writes custom logs of user actions to a MySQL database. We mainly did this for ease of searching, and because the app was already using MySQL for persistent storage it just made sense.
Our log now contains 17.6 million rows and is 2 GB in size, which is not very friendly when we need to move it around.
I was wondering what the community might suggest as a better, more efficient way to store logs.
One obvious option is to keep only a week's worth of all logs, delete the non-critical entries after that, and split the table in two so that historic critical logs (payments and the like) live in their own table.
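If the data stays in MySQL, native range partitioning would make that kind of weekly rotation cheap, since dropping a whole partition avoids a huge DELETE. A rough sketch of the idea; table, column, and partition names here are assumptions, and MySQL requires the partitioning column to be part of every unique key on the table:

    <?php
    // One-time DDL (sketch): partition the log table by week so old data
    // can be removed with DROP PARTITION instead of a slow DELETE.
    // Assumes a DATETIME column `created_at` on the (hypothetical)
    // player_log table.
    $db = new mysqli('localhost', 'user', 'pass', 'app');

    $db->query("
        ALTER TABLE player_log
        PARTITION BY RANGE (TO_DAYS(created_at)) (
            PARTITION w01    VALUES LESS THAN (TO_DAYS('2013-01-07')),
            PARTITION w02    VALUES LESS THAN (TO_DAYS('2013-01-14')),
            PARTITION future VALUES LESS THAN MAXVALUE
        )
    ");

    // Weekly rotation: drop the oldest partition once critical rows
    // (e.g. payments) have been copied to an archive table.
    $db->query("ALTER TABLE player_log DROP PARTITION w01");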
In general we write to the log through a function such as

    playerlog($id, $message, $cash, $page, $ip, $time);

That's a fairly simplified version; we're also using MySQL's INSERT DELAYED, as the logs are not critical for page loads.
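For context, a minimal sketch of what such a function might look like; the table and column names are assumptions, not our actual schema. Note that INSERT DELAYED only works with MyISAM/MEMORY/ARCHIVE tables and was deprecated in MySQL 5.6:

    <?php
    // Sketch of the logging function (hypothetical table/column names).
    function playerlog(mysqli $db, $id, $message, $cash, $page, $ip, $time)
    {
        // INSERT DELAYED returns immediately and queues the row
        // server-side, so page loads don't wait on the write.
        $sql = sprintf(
            "INSERT DELAYED INTO player_log
                 (player_id, message, cash, page, ip, created_at)
             VALUES (%d, '%s', %.2f, '%s', '%s', FROM_UNIXTIME(%d))",
            $id,
            $db->real_escape_string($message),
            $cash,
            $db->real_escape_string($page),
            $db->real_escape_string($ip),
            $time
        );
        $db->query($sql);
    }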
Upvotes: 0
Views: 212
Reputation: 11129
If you're interested in doing this with MongoDB (which I assume from the tag), you might want to take a look here: http://docs.mongodb.org/manual/use-cases/storing-log-data/
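To make that concrete, here is a minimal sketch along the lines of that use case, using the legacy PHP Mongo driver (pecl mongo >= 1.3, MongoClient); the database, collection, and field names are illustrative. A capped collection plus unacknowledged writes plays roughly the same "fire and forget" role as your INSERT DELAYED:

    <?php
    $m  = new MongoClient();
    $db = $m->selectDB('app');

    // Optional: a capped collection is a fixed-size ring buffer, so the
    // oldest log entries age out automatically (here capped at 2 GB).
    $db->command(array('create' => 'playerlog', 'capped' => true, 'size' => 2147483648));

    // Unacknowledged write (w => 0): fire-and-forget, like INSERT DELAYED.
    $db->playerlog->insert(
        array(
            'player_id' => 42,
            'message'   => 'payment received',
            'cash'      => 9.99,
            'page'      => '/shop/checkout',
            'ip'        => '203.0.113.7',
            'ts'        => new MongoDate(),
        ),
        array('w' => 0)
    );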
Upvotes: 2
Reputation: 7111
You should clarify what the logs are needed for. As a second step after inserting, you could set up a job that works on the log data, i.e. reads the logs and processes them (which effectively degrades your DBMS to a sort of messaging middleware). That might mean storing some parts (like payments) in an archive that never gets deleted, or writing authentication logs to a place where they are removed after a specified retention time. But this all depends on your use case.
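For example, a nightly cron job along these lines could archive the critical rows and purge the rest; the table names, the payment-matching rule, and the 7-day retention window are all assumptions you would adapt:

    <?php
    // Nightly cron job (sketch): archive critical rows, purge the rest.
    $db = new mysqli('localhost', 'user', 'pass', 'app');

    // Copy payment entries older than the retention window into a
    // permanent archive table before they are purged below.
    $db->query("
        INSERT INTO player_log_archive
        SELECT * FROM player_log
        WHERE message LIKE 'payment%'
          AND created_at < NOW() - INTERVAL 7 DAY
    ");

    // Then delete everything past the retention window from the live table.
    $db->query("
        DELETE FROM player_log
        WHERE created_at < NOW() - INTERVAL 7 DAY
    ");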
Depending on what you plan to analyze, or the way you have to query the data, you could even store the logs outside of MySQL.
Some possibilities:
Upvotes: 0