Nitish Jha

Reputation: 23

Optimal and efficient way to store 200 fields inside a table in mysql

I am working on a new project in which the requirement is to store around 200 fields in a table. What would be the most efficient way to do this? The table would contain around 1,000,000 rows, and that number will keep increasing.

So far I have created two tables and divided the fields between them based on how frequently they are updated and selected.

Upvotes: 1

Views: 309

Answers (2)

Martijn

Reputation: 16103

I have never seen a table that genuinely had to contain 200 columns. Especially not with a million+ rows; it will grind to a halt due to size alone, and every insert/update/delete will carry a huge amount of overhead.

Even if there is nothing left to normalize (I'm assuming you already have; if not, study that first!), you can still split the table into 1-to-1 relations. For example:

Instead of one table with:
mainTable: id, name, something, else, another, column, just, as, demo

You can also break it down into many small tables:

mainTable:        id, something, else
descriptionStuff: id, main_id, name
relatedToX:       id, main_id, another, column
relatedToY:       id, main_id, just, as, demo

Here main_id is the link between them. Inserts are now much faster!
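A minimal sketch of that split in MySQL DDL might look like the following. The table and column names are just the demo placeholders from above and the types are guesses; reserved words such as `else` and `column` are backtick-quoted:

CREATE TABLE mainTable (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    something VARCHAR(100),
    `else`    VARCHAR(100)
);

CREATE TABLE descriptionStuff (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    main_id INT UNSIGNED NOT NULL,           -- links back to mainTable
    name    VARCHAR(100),
    FOREIGN KEY (main_id) REFERENCES mainTable (id)
);

CREATE TABLE relatedToX (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    main_id  INT UNSIGNED NOT NULL,          -- links back to mainTable
    another  VARCHAR(100),
    `column` VARCHAR(100),
    FOREIGN KEY (main_id) REFERENCES mainTable (id)
);

-- relatedToY would follow the same pattern.

Each write then touches only the narrow table whose columns actually change, instead of one 200-column row.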

Upvotes: 2

Hektor

Reputation: 1945

You should bear in mind that with a database requiring 200 fields, optimisation means not just the speed of the SQL engine, but also how quickly users can access the information they need and how easily developers can make sense of the database contents, especially if they need to expand functionality down the line.

A good guide to database normalization without confusing terminology can be found at https://www.essentialsql.com/get-ready-to-learn-sql-database-normalization-explained-in-simple-english/, but you have already taken the first step by starting to build your database.

See how quickly the database performs as-is. It may be that performance optimization will come from stored procedures, fixed-length fields and other programming tricks rather than from the underlying SQL engine. Articles like this one can help: https://jaxenter.com/6-ways-optimize-sql-database-136448.html
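As a small, purely illustrative example of the fixed-length idea (the table and column names below are made up, not taken from the question), a CHAR column is always stored at its declared width, while a VARCHAR stores only what is needed plus a length prefix:

CREATE TABLE profile_example (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    country_code CHAR(2) NOT NULL,     -- fixed width: always stored at 2 characters
    note         VARCHAR(255)          -- variable width: length prefix plus actual data
);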

Ultimately, remember that you can scale up performance-wise once the database is built, whether that means caching, optimising your most likely queries, or simply using a distributed database architecture; see https://blog.sqlauthority.com/2014/10/06/mysql-how-to-create-a-distributed-relational-sql-database/ for a helpful guide.

As you mention in your question, knowing the likely usage patterns of your customers will be important for the choices you make, but the obvious basics are a good starting point: separate tables for users (logins, account information, password hashes, etc.) and for administrative functions (statistics, performance metrics, unanswered queries, etc.), and then tables for the main data.
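A minimal sketch of that kind of split, with hypothetical table and column names (none of these come from the question):

CREATE TABLE users (
    id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    login         VARCHAR(64) NOT NULL UNIQUE,
    password_hash CHAR(60)    NOT NULL,                          -- e.g. a bcrypt hash
    created_at    DATETIME    NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE admin_stats (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    metric_name  VARCHAR(64) NOT NULL,
    metric_value BIGINT,
    recorded_at  DATETIME    NOT NULL DEFAULT CURRENT_TIMESTAMP
);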

The simplest approach to dividing up the data is to group related information into a single table, so that everything likely to be needed for a single (i.e. simple) query lives in the same table. More complicated queries will have to access multiple tables; those are prime candidates for stored procedures and the like. Organizing the tables this way will make it much, much easier to refine and optimize the database structure later, as you learn more about how it is actually used. Remember that use of the database also includes entering data, and not all of your tables will contain millions of rows.
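For the multi-table case, a stored procedure can hide the join from the caller. This is a sketch only, reusing the demo tables from the first answer (mainTable and descriptionStuff) purely for illustration:

DELIMITER //
CREATE PROCEDURE get_main_with_description (IN p_id INT UNSIGNED)
BEGIN
    -- one call returns the main row together with its related description row
    SELECT m.id, m.something, d.name
    FROM   mainTable m
    JOIN   descriptionStuff d ON d.main_id = m.id
    WHERE  m.id = p_id;
END //
DELIMITER ;

-- usage: CALL get_main_with_description(42);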

Another useful resource can be found here: https://www.sqlshack.com/sql-server-table-structure-overview/

I hope this helps.

Upvotes: 2
