Alex Mathew

Reputation: 1554

how to do fast read data and write data in mysql?

Hi friends, I am using a MySQL database for one of my products. About 250 schools have signed up for it so far, which works out to roughly 1,500,000 insertions per hour and about 12,000,000 insertions per day, and the read volume is about the same as the write volume. I think my current setup, just a single server, may crash within hours. The main problem I am facing now is that both reading and writing data are slow. How can I make the database server crash-free and overcome the slowness? It is very difficult for me to find a solution on my own. Which is a good model for solving this?

Upvotes: 1

Views: 1838

Answers (3)

c0rnh0li0

Reputation: 149

My laid-back advice is:

  • Build your application lightweight. Don't use a high-level database abstraction layer like Active Record; they scale poorly.
  • Learn a lot about MySQL performance.
  • Learn about MySQL replication.
  • Learn about load balancing.
  • Learn about in-memory caches (memcached).
  • Hire an administrator (with decent MySQL knowledge) or a web app performance guru/consultant.

The concrete strategy depends on your application and how it is used. MySQL replication may or may not be appropriate (the same applies to the sharding strategy mentioned in another answer). But it is a rather simple way to achieve some scaling, because it doesn't impact your application design too much. In-memory caches can keep some load away from your databases, but they need some work to apply and involve trade-offs. In the end you need a good overall understanding of how to handle a database-driven application under heavy load. If you have a tight deadline, add external manpower, because you won't get this right within six weeks without experience.
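To make the in-memory cache suggestion concrete, here is a minimal cache-aside sketch in Python. A plain dict stands in for memcached (a real deployment would use a memcached client over the network), and the database query is a hypothetical stand-in; all names are illustrative.

```python
# Cache-aside pattern: check the cache first, and only hit the database
# on a miss, populating the cache with the result for subsequent reads.

cache = {}          # stand-in for memcached
db_queries = []     # records each time the "database" is actually hit

def fetch_student_from_db(student_id):
    # Hypothetical slow query; in reality this would be a SELECT against MySQL.
    db_queries.append(student_id)
    return {"id": student_id, "name": f"student-{student_id}"}

def get_student(student_id):
    key = f"student:{student_id}"
    row = cache.get(key)
    if row is None:                          # cache miss
        row = fetch_student_from_db(student_id)
        cache[key] = row                     # populate for future reads
    return row

get_student(7)   # miss: queries the database once
get_student(7)   # hit: served entirely from the cache
```

The trade-off the answer alludes to: cached rows can be stale until they are evicted or invalidated, so this suits read-heavy data that tolerates slight staleness.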

Upvotes: 0

Gintautas Miliauskas

Reputation: 7892

At roughly 400-500 inserts per second, this is indeed nothing to sneeze at.

For a flexible solution, you may want to implement some sort of sharding. Probably the easiest solution is to separate schools into groups upfront and store data for different groups of schools on different servers. E.g., data for schools 1-10 is stored on server A, schools 11-20 on server B, etc. This is almost infinitely scalable, assuming that there are few relationships between data from different schools.
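The fixed-range sharding described above can be sketched in a few lines of Python. The server names and shard size are illustrative, not prescriptive:

```python
# Route each school to a shard by a fixed ID range:
# schools 1-10 -> server A, 11-20 -> server B, and so on.

SHARD_SIZE = 10
SHARDS = ["server_a", "server_b", "server_c"]  # hypothetical connection names

def shard_for_school(school_id):
    """Return the server that stores all data for the given school."""
    index = (school_id - 1) // SHARD_SIZE
    if index >= len(SHARDS):
        raise ValueError(f"no shard configured for school {school_id}")
    return SHARDS[index]

shard_for_school(3)    # -> "server_a"
shard_for_school(17)   # -> "server_b"
```

Every query for a given school touches exactly one server, which is why this scales so well; the cost is that cross-school queries now require fanning out to every shard.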

Also you could just try throwing more horsepower at the problem and invest into a RAID of SSD drives and, assuming that you have enough processing power, you should be OK. Of course, if it's a huge database, the capacity of SSD drives may not be enough.

Finally, see if you can cut down on the number of insertions, for example by denormalizing the database. Say, instead of storing attendance for each student in a separate row put attendance of the entire class as a vector in a single row. Of course, such changes will heavily limit your querying capabilities.
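As a sketch of the attendance denormalization idea, one row per class per day could pack presence into a single string column, one character per student. The encoding and helper names here are illustrative, assuming a fixed class roster:

```python
# Instead of one row per student per day, store the whole class's
# attendance as a '1'/'0' string in a single column of a single row.

def pack_attendance(present_student_ids, class_roster):
    """Encode attendance of an entire class as one character per student."""
    present = set(present_student_ids)
    return "".join("1" if sid in present else "0" for sid in class_roster)

def was_present(packed, class_roster, student_id):
    """Decode a single student's attendance from the packed row."""
    return packed[class_roster.index(student_id)] == "1"

roster = [101, 102, 103, 104]
packed = pack_attendance([101, 103], roster)   # -> "1010"
```

This turns thirty inserts into one, but as the answer warns, SQL can no longer filter or aggregate on individual students' attendance; the application must unpack the column itself.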

Upvotes: 3

Mark Byers

Reputation: 838106

It is difficult to get both fast reads and writes simultaneously. To get fast reads you need to add indexes. To get fast writes you need to have few indexes. And to get both to be fast they must not lock each other.

Depending on your needs, one solution is to have two databases: write new data to your live database, and every so often, when it is quiet, synchronize the data to a second database where you run your queries. The disadvantage of this approach is that the data you read will be slightly stale. That may or may not be a problem, depending on what you need to do.
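A minimal sketch of this read/write split, with two dicts standing in for the two MySQL servers; a real setup would use two connections and a replication or batch-copy job. All names are illustrative:

```python
# Writes always go to the live database; reads are served from a reporting
# copy that lags behind until the next synchronization pass.

live_db = {}       # receives all writes
report_db = {}     # serves all reads; stale until sync() runs

def write(key, value):
    live_db[key] = value

def read(key):
    return report_db.get(key)   # may return stale data, or nothing yet

def sync():
    report_db.update(live_db)   # run during quiet periods, e.g. nightly

write("attendance:2024-01-15", 27)
read("attendance:2024-01-15")   # -> None, not yet synchronized
sync()
read("attendance:2024-01-15")   # -> 27
```

This is the essence of the approach: the live server keeps few indexes so writes stay fast, while the reporting copy can carry heavy indexes for queries without locking the writers.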

Upvotes: 3
