Reputation: 2405
What do I have to do to make 20k MySQL inserts per second possible during peak hours (around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.
The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes that will normalize it and insert it into another MySQL table.
How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so whatever ways of going at it you smart people can think of, please let me know :)
Upvotes: 7
Views: 3169
Reputation: 727
Since you're tracking impressions, what if you try only saving, say, one in every 5? Then you still have a completely "random" sample, and you can just apply the percentages to the bigger dataset.
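A minimal sketch of that, assuming a mysqli connection in `$db` and a hypothetical `hits` table:

```php
<?php
// Keep roughly 1 request in 5; scale the stored counts back up when reporting.
$sampleRate = 5;

if (mt_rand(1, $sampleRate) === 1) {
    $siteId = (int) ($_GET['site'] ?? 0);      // hypothetical request parameter
    $url    = $_SERVER['REQUEST_URI'] ?? '';

    // $db is an existing mysqli connection; 'hits' is a hypothetical table.
    $stmt = $db->prepare('INSERT INTO hits (site_id, url, hit_time) VALUES (?, ?, NOW())');
    $stmt->bind_param('is', $siteId, $url);
    $stmt->execute();
}
```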
Upvotes: 0
Reputation: 15832
I'd recommend memcaching, too.
Write your data into a memcache and have a periodically running job aggregate it and do the inserts.
Writing to an actual file would probably DECREASE your performance, since file system access is generally slower than talking to a database, which can handle write access much more efficiently.
Upvotes: 1
Reputation: 10637
Writing to a file is great, but you still need to synchronize your file writes, which puts you back to square one.
Suggestions:
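If you do try the flat-file route anyway, an exclusive flock() around each append is the usual way to keep concurrent writers from interleaving; a minimal sketch, assuming a hypothetical local log path:

```php
<?php
// Append one tracking line; LOCK_EX makes concurrent writers queue up
// instead of interleaving their output.
$line = date('c') . "\t" . ($_SERVER['REQUEST_URI'] ?? '') . "\n";

$fh = fopen('/var/log/tracker/hits.log', 'a');   // hypothetical path
if ($fh !== false && flock($fh, LOCK_EX)) {
    fwrite($fh, $line);
    fflush($fh);
    flock($fh, LOCK_UN);
}
if ($fh !== false) {
    fclose($fh);
}
```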
Upvotes: 0
Reputation: 2368
PHP is not well-suited to high-volume web traffic, IMHO. However, the database will likely bog you down before PHP performance does, especially with PHP's connection model (a new connection is opened for every request).
I have two suggestions for you:
SQL Relay effectively allows PHP to take advantage of connection pooling, and that will give much better performance for a high-volume database application.
PHP accelerators (generally speaking) cache the PHP opcodes, which saves the overhead of interpreting the PHP code on every request.
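If setting up SQL Relay is too much, a cheaper way to cut the per-request connect cost is mysqli's persistent connections; a minimal sketch (host and credentials are placeholders):

```php
<?php
// The 'p:' host prefix tells mysqli to reuse an already-open connection for this
// host/user/database combination instead of doing a fresh handshake per request.
$db = new mysqli('p:127.0.0.1', 'tracker_user', 'secret', 'tracker_db');
if ($db->connect_error) {
    http_response_code(500);
    exit('database unavailable');
}
```

This is not real pooling the way SQL Relay does it (each PHP worker still holds its own connection), but it removes the per-request handshake mentioned above.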
Good Luck!
Upvotes: 1
Reputation: 625037
A couple of ways:
Firstly, you will reach a point where you need to partition or shard your data to split it across multiple servers. This could be as simple as A-C on server1, D-F on server2, and so on (sketched below).
Secondly, defer writing to the database. Instead, write to a fast memory store using either beanstalkd or memcached directly. Have another process collect those stats and write aggregated data to the database. Periodically amalgamate those records into summary data.
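On the sharding point, a minimal sketch of routing a site to a server by its first letter, with hypothetical host names:

```php
<?php
// Route a site to one of several database servers by the first letter of its
// name, along the lines of the A-C / D-F split above (hosts are hypothetical).
function shardFor(string $siteName): string
{
    $shards = [
        'server1.example.com' => ['a', 'c'],   // A-C
        'server2.example.com' => ['d', 'f'],   // D-F
        'server3.example.com' => ['g', 'z'],   // everything else alphabetic
    ];

    $first = strtolower(substr($siteName, 0, 1));
    foreach ($shards as $host => [$lo, $hi]) {
        if ($first >= $lo && $first <= $hi) {
            return $host;
        }
    }
    return 'server3.example.com';   // fallback for digits, punctuation, etc.
}
```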
Upvotes: 5
Reputation: 3060
This is not a problem you can handle in PHP alone.
If you have 20,000 requests a second hitting your "low-budget" server (as I understood from the undertone of your question), it will reach its limit before most of them reach the PHP processor (and, eventually, MySQL).
If you have a traffic tracker script, you'll very likely cause problems for all the sites you track too.
Upvotes: 1
Reputation: 59973
How do the big dogs do it?
Multiple servers. Load balancing.
How do the little dogs do it?
Multiple servers. Load balancing.
You really want to save up inserts and push them to the database in bulk. 20k individual inserts a second is a crapton of overhead, and simplifying that down to one big insert each second eliminates most of that.
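For illustration, a minimal sketch of that batching step, assuming a PDO connection, a hypothetical `hits` table, and rows buffered somewhere (memcache, a queue, or just an array in a long-running process):

```php
<?php
// Flush a batch of buffered hits with a single multi-row INSERT instead of
// thousands of individual statements. $rows and the `hits` table are hypothetical.
function flushHits(PDO $pdo, array $rows): void
{
    if ($rows === []) {
        return;
    }
    // One "(?, ?, ?)" placeholder group per buffered row.
    $placeholders = implode(', ', array_fill(0, count($rows), '(?, ?, ?)'));
    $sql = "INSERT INTO hits (site_id, url, hit_time) VALUES $placeholders";

    $params = [];
    foreach ($rows as $row) {
        $params[] = $row['site_id'];
        $params[] = $row['url'];
        $params[] = $row['hit_time'];
    }

    $pdo->prepare($sql)->execute($params);
}
```

In practice you would cap each batch at a few thousand rows per statement so the packet stays under MySQL's max_allowed_packet.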
Upvotes: 5
Reputation: 14234
That's impressive. Most of my data has come from massive inserts at once. One thing I find is that bulk inserts do a lot better than individual inserts. Also, the design of your tables, indexes, etc. has a lot to do with insert speed. The problem with using cron and bulk inserting is the edge cases (when it actually goes to do the inserts).
Additionally, with flat files you can easily run into concurrency issues when writing the inserts to the file. If you are writing 1k+ inserts a second, you'll quickly run into conflicts and data loss when there are problems with the file writing.
Upvotes: 2