ramachandran natesan

Reputation: 53

Querying huge table in MySQL

I have a table with around 1 million rows (nearly 8 GB on physical disk, since it has a text column), and any transaction against it takes a long time. SELECT queries in particular are extremely slow; for example, a count query with no condition, i.e. select count(*) from TestPerformance, takes around 20 minutes.

Table schema is:

Name : TestPerformance

Field       Type            Null    Key     Default     Extra
ID          int(11)         NO      PRI     null
TEXT        text            YES             null
CATEGORY    varchar(100)    YES     MUL     null
DDOMAIN     varchar(100)    YES             null
NETWORK     varchar(100)    YES             null
NODE        varchar(100)    YES             null
ENTITY      varchar(100)    YES     MUL     null
SEVERITY    int(11)         YES             null
TTIME       bigint(20)      YES             null
SOURCE      varchar(255)    NO      MUL     null
HELPURL     varchar(100)    YES             null
WEBNMS      varchar(100)    YES             null
GROUPNAME   varchar(100)    YES             null
OWNERNAME   varchar(25)     NO      PRI     null

and indexes are

Table               Non_unique  Key_name                Seq_in_index    Column_name
TestPerformance     0           PRIMARY                 1               ID
TestPerformance     0           PRIMARY                 2               OWNERNAME
TestPerformance     1           TestPerformance0_ndx    1               ID
TestPerformance     1           TestPerformance1_ndx    1               OWNERNAME
TestPerformance     1           TestPerformance_ndx     1               CATEGORY
TestPerformance     1           TestPerformance_ndx     2               SOURCE
TestPerformance     1           TestPerformance_ndx1    1               ENTITY
TestPerformance     1           TestPerformance_ndx2    1               SOURCE

I have tuned key_buffer_size up to 1 GB, but it made no difference in performance.
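For reference, this is roughly how I applied the setting (a sketch; whether you set it at runtime or in my.cnf may differ):

-- Check the current key buffer size (used for MyISAM index blocks)
SHOW VARIABLES LIKE 'key_buffer_size';

-- Raise it to 1 GB at runtime (also set it in my.cnf to survive restarts)
SET GLOBAL key_buffer_size = 1024 * 1024 * 1024;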

How can I speed up transaction of this table without deleting any data?

I am not a DB expert. Kindly provide your suggestions to improve the performance of the table.

Upvotes: 2

Views: 248

Answers (2)

symcbean

Reputation: 48357

How can I speed up transaction of this table without deleting any data?

1 million rows is not a lot of rows. 8 GB, however, is a sizable amount of data.

Move the text type column into a separate table (with a 1:1 relationship), as sketched below. Reduce the size of those varchar columns to the minimum needed to hold the data (or consider moving any you don't need for filtering into the other table).
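A minimal sketch of what that split could look like (the table name TestPerformanceText is just a placeholder for illustration; adapt it to your schema):

-- Keep only the columns you filter and sort on in the main table,
-- and park the large blob in a 1:1 side table keyed by the same primary key.
CREATE TABLE TestPerformanceText (
    ID        int(11)     NOT NULL,
    OWNERNAME varchar(25) NOT NULL,
    `TEXT`    text,
    PRIMARY KEY (ID, OWNERNAME)
);

INSERT INTO TestPerformanceText (ID, OWNERNAME, `TEXT`)
    SELECT ID, OWNERNAME, `TEXT` FROM TestPerformance;

-- After verifying the copy, the blob can be dropped from the main table:
-- ALTER TABLE TestPerformance DROP COLUMN `TEXT`;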

Do you really need both ID and OWNERNAME in the primary key? I suspect ID alone may be unique. If so, lose TestPerformance0_ndx - it's redundant (ID is already the leading column of the primary key). Indeed, you should start analysing your query logs to see which indexes the DBMS actually needs to service your queries, and amend the schema accordingly.
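A sketch of that cleanup, assuming ID really is unique on its own (verify against your data and workload before running anything like this):

-- The standalone index on ID duplicates the leading column of the primary key.
DROP INDEX TestPerformance0_ndx ON TestPerformance;

-- To see which indexes your real workload needs, log slow statements and review them:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;   -- seconds; pick a threshold that suits your workload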

Upvotes: 3

davidethell

Reputation: 12018

Run EXPLAIN on your query (which you should post for us to see). This will show which indexes the query is actually using and where it falls back to a full table scan.
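For example, against the count query from the question (the output values will depend on your data):

-- Ask the optimizer how it plans to execute the statement; the "key" and "rows"
-- columns show which index it chooses and roughly how many rows it will examine.
EXPLAIN SELECT COUNT(*) FROM TestPerformance;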

Also, don't do select count(*); instead count your primary key column so the count can be satisfied from the index.
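In other words, something along these lines, counting the indexed ID column from your schema:

-- Count an indexed, NOT NULL column (ID is the leading column of the primary key)
SELECT COUNT(ID) FROM TestPerformance;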

Upvotes: 1
