Reputation: 69
Considering moving my MySQL architecture to AWS DynamoDB. My application requires 1,000 read/write requests per second. How does this play with PHP updates? Having 1,000 workers process DynamoDB reads/writes seems like it will take a higher toll on CPU/memory than the equivalent MySQL reads/writes.
I have thought about using a log file to store the updated information, then creating scripts to process the log file so as to remove DB load. However, I'm stuck on file locking: 300 separate scripts would be writing to a single log file, which would then be flushed to the DB every minute. Not sure how this could be implemented without lock contention, and would be curious if anyone has ideas on implementing it. The server scripts are written in PHP.
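One common pattern for this is advisory locking on append plus an atomic rename for the consumer, so writers never block on the minute-by-minute processing job. Below is a minimal sketch in Python (PHP's `flock()` wraps the same OS mechanism, so the pattern translates directly); the file path and JSON-lines record format are assumptions:

```python
# Concurrent writers append one JSON line each under an exclusive
# advisory lock; a periodic consumer atomically renames the file away
# and processes it, so writers and the consumer never share a file.
import fcntl
import json
import os
import time

LOG_PATH = "/tmp/pending_updates.log"  # hypothetical path

def append_record(record: dict) -> None:
    """Append one JSON line; the exclusive lock serializes writers."""
    line = json.dumps(record) + "\n"
    with open(LOG_PATH, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until other writers finish
        try:
            f.write(line)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

def drain_log() -> list:
    """Per-minute consumer: atomically take the file, then process it."""
    work_path = LOG_PATH + "." + str(int(time.time()))
    try:
        os.rename(LOG_PATH, work_path)  # rename(2) is atomic on POSIX
    except FileNotFoundError:
        return []                        # nothing queued this interval
    with open(work_path) as f:
        records = [json.loads(line) for line in f if line.strip()]
    os.remove(work_path)
    return records
```

Writers call `append_record()`; the cron-style job calls `drain_log()` and batch-inserts the result into the DB. Because the consumer renames the log before reading it, writers simply recreate the file on their next append and never wait on the flush.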
Current setup: MySQL database (RDS on AWS)
Table A updates around 1,000 records per second; updated/added rows are then queued for indexing into SOLR search.
Would appreciate some much needed advice to lower costs. Are there hidden costs or other solutions I should be aware of before starting development?
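On costs: DynamoDB's provisioned model bills per capacity unit, where 1 WCU covers one 1 KB write per second and 1 RCU covers one strongly consistent 4 KB read per second (two eventually consistent). A back-of-envelope sketch, assuming the 1,000 requests split evenly between reads and writes and items stay under 1 KB (both assumptions, and hourly unit prices change, so check current AWS pricing):

```python
# Estimate provisioned capacity units needed to sustain a request rate.
# Unit sizes are DynamoDB's published definitions; the 500/500 split and
# 1 KB item size are assumptions for illustration.
import math

def capacity_units(requests_per_sec: int, item_kb: float, unit_kb: int) -> int:
    """Units needed so the table sustains the given request rate."""
    return requests_per_sec * math.ceil(item_kb / unit_kb)

writes_per_sec = 500   # assumed split of the 1,000 r/w per second
reads_per_sec = 500
item_kb = 1            # assumed item size

wcu = capacity_units(writes_per_sec, item_kb, 1)  # 1 KB per WCU
rcu = capacity_units(reads_per_sec, item_kb, 4)   # 4 KB per strong-read RCU
print(wcu, rcu)  # 500 500
```

Multiplying those unit counts by the current per-hour unit price gives the steady-state table cost; burstier traffic than the provisioned rate gets throttled rather than billed extra, which is a different failure mode than RDS.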
Upvotes: 0
Views: 218
Reputation: 13166
I'm afraid the scope for performance improvement for your DB is just too broad. Even if each text field is under 500 chars, don't underestimate the text workload. That said, adding 200k records/day is actually not much for today's RDBMS, and even 1,000 IOPS typically only happens in bursts. If querying is the heaviest part, then that is the part you need to optimize.
Upvotes: 1