user1516606

Reputation: 69

Move from MySQL to AWS DynamoDB? 5m rows, 4 tables, 1k writes p/s

Considering moving my MySQL architecture to AWS DynamoDB. My application has a requirement of 1,000 read/write requests per second. How does this play with PHP updates? Having 1,000 workers process DynamoDB reads/writes seems like it will take a higher toll on CPU/memory than MySQL reads/writes.

I have thought about using a log file to store the updated information, then creating scripts to process the log files to remove DB load. However, I am stuck on file locking: 300 separate scripts would be writing to a single log file, which could then be processed into the database every minute. I am not sure how this could be implemented without lock contention. The server script is written in PHP, so I would be curious if anyone has ideas on implementing this.
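Many writers appending to one log is workable if each append takes an exclusive lock, and the minutely consumer renames the file before processing so writers never block on it. Below is a minimal Python sketch of that pattern (the path and function names are hypothetical; in PHP the equivalent append is `file_put_contents($path, $line, FILE_APPEND | LOCK_EX)`):

```python
import fcntl
import os

LOG_PATH = "/tmp/update_log.txt"  # hypothetical location

def append_record(line: str) -> None:
    # Open in append mode; take an exclusive advisory lock so the
    # ~300 concurrent writers serialize their writes instead of
    # interleaving partial lines.
    with open(LOG_PATH, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(line + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)

def rotate_for_processing() -> str:
    # The cron consumer renames the log first. os.rename is atomic on
    # POSIX, so writers simply create a fresh log on their next append
    # while the consumer batches the rotated file into the database.
    work_path = LOG_PATH + ".processing"
    os.rename(LOG_PATH, work_path)
    return work_path
```

The rename-then-process step is what keeps lock hold times short: writers only ever contend on a brief append, never on the minute-long batch insert.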

Current setup: MySQL database (RDS on AWS)

Table A receives around 1,000 record updates per second; updated/added rows are then queued for indexing in SOLR search.

Would appreciate some much needed advice to lower costs. Are there hidden costs or other solutions I should be aware of before starting development?

Upvotes: 0

Views: 218

Answers (1)

mootmoot

Reputation: 13166

I'm afraid the scope for performance improvement for your DB is just too broad to cover fully, but here are some starting points:

  • IOPS. Some DevOps teams provision 200 GB of storage (200 × 3 = 600 baseline IOPS) rather than buying "provisioned IOPS" for a smaller volume (say they only need 50 GB, then purchase provisioned IOPS on top). You need to open a spreadsheet to find the pricing/performance sweet spot.
  • You might need to create a "denormalised table" from table A if you frequently select from it without traversing the whole text (< 500 chars). Don't underestimate the text workload.
  • Index, index, index.
  • If you deal with tons of non-linear searches, consider copying the part of the data you think will benefit to DynamoDB (test it first), but maintain the RDBMS structure.
  • There is no one-size-fits-all solution. Also look into using a message queue if required.
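The first bullet's trade-off can be put into numbers. A short sketch of the comparison (the per-GB and per-IOPS prices below are placeholders, not current AWS rates; check the RDS pricing page before deciding):

```python
# Two ways to reach ~600 IOPS on RDS storage, compared on cost.
# All prices are ASSUMED placeholders for illustration only.
GP2_PER_GB = 0.115    # $/GB-month, General Purpose (gp2), assumed
IO1_PER_GB = 0.125    # $/GB-month, Provisioned IOPS (io1), assumed
IO1_PER_IOPS = 0.10   # $/provisioned-IOPS-month, assumed

def gp2_iops(gb: int) -> int:
    # gp2 baseline throughput scales with size: 3 IOPS per GB
    return gb * 3

def gp2_cost(gb: int) -> float:
    return gb * GP2_PER_GB

def io1_cost(gb: int, iops: int) -> float:
    # With io1 you pay for storage AND for every provisioned IOPS
    return gb * IO1_PER_GB + iops * IO1_PER_IOPS

# Option A: over-provision 200 GB of gp2 and get 600 baseline IOPS
# Option B: buy only the 50 GB actually needed, plus 600 provisioned IOPS
option_a = gp2_cost(200)
option_b = io1_cost(50, 600)
```

With these illustrative rates, over-provisioning gp2 storage is substantially cheaper than provisioned IOPS for the same 600 IOPS target, which is exactly the spreadsheet exercise the bullet recommends.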

Adding 200k records/day is actually not much for today's RDBMS, and 1,000 IOPS only happens in bursts. If querying is the heaviest part, that is the part you need to optimize.

Upvotes: 1
