Reputation: 11876
I have 8 load-balanced web servers powered by NGINX and PHP. Each of these web servers posts data to a central MySQL database server. Each web server also appends the same data (slightly reformatted) line by line to a text file on a separate log server, i.e. one database insert = one line in the log file.
The active code of the PHP file doing the logging looks something like this:
file_put_contents($pathToLogFile, $singleLineOfText, FILE_APPEND | LOCK_EX);
The problem I'm having is scaling this to 5,000 or so log writes per second. The operation takes multiple seconds to complete and slows the log server down considerably.
I'm looking for a way to speed things up dramatically. I looked at the following article: Performance of Non-Blocking Writes via PHP.
However, from the tests it looks like the author has the benefit of access to all the log data prior to the write. In my case, each write is initiated randomly by the web servers.
Is there a way I can speed up the PHP writes considerably? Or should I just log to a database table and then dump the data to the text file at timed intervals?
Just for your info: I'm not using the text file in the traditional 'logging' sense; it's a CSV file that I'm going to be feeding to Google BigQuery later.
Upvotes: 0
Views: 1067
Reputation: 11
Since you're writing all the logs to a single server, have you considered implementing the logging service as a simple socket server? That way you would only have to fopen the log file once, when the service starts up, and write out to it as the log entries come in. You would also get the added benefit that the web server clients don't need to wait for the write to complete: they can simply connect, post their data, and disconnect.
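A minimal sketch of the idea, with both ends collapsed into one script for illustration (the address, port, and field names are assumptions, and I've used a UDP datagram socket so the client genuinely doesn't wait; a real deployment would run the server half as a long-lived daemon looping on `stream_socket_recvfrom`):

```php
<?php
// "Log server" side: bind the socket once, open the CSV file once.
$addr    = 'udp://127.0.0.1:19999';   // assumed address/port for the demo
$server  = stream_socket_server($addr, $errno, $errstr, STREAM_SERVER_BIND);
$logFile = tempnam(sys_get_temp_dir(), 'csvlog');  // stand-in for the real CSV path
$fh      = fopen($logFile, 'a');      // opened once, reused for every entry

// "Web server" side: one line per database insert, fire-and-forget.
$client = stream_socket_client($addr, $errno, $errstr);
fwrite($client, "2016-01-01 12:00:00,web1,insert,ok\n");
fclose($client);

// Body of the server's receive loop: take a datagram and append it.
// No per-line fopen and no LOCK_EX needed, since only this process writes.
$line = stream_socket_recvfrom($server, 65536);
fwrite($fh, $line);
fflush($fh);

echo file_get_contents($logFile);
```

Since only the one daemon ever touches the file, the per-call `LOCK_EX` from the original `file_put_contents` approach goes away too, which is a large part of the cost at thousands of writes per second.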
Upvotes: 1