Michael

Reputation: 6405

MySQL slow query

ALTER TABLE customers ADD split INT(1);

10 mil. records... this command has been running for an hour and is still going. Is there any way to make it finish faster?

Upvotes: 3

Views: 2332

Answers (2)

Jon Black

Reputation: 16569

The following is pretty quick: it takes a little over 6 minutes with 10 million rows. The example table has fewer fields and indexes than your production table, so expect it to take somewhat longer in your case if you decide to use it!

Note: the example was done on Windows, so on Linux you'll have to change the pathnames and replace \r\n with \n.

Here's my existing table (InnoDB engine):

drop table if exists customers;
create table customers
(
customer_id int unsigned not null auto_increment primary key,
name varchar(255) not null,
country_id tinyint unsigned not null default 0,
key (country_id)
)
engine=innodb;

mysql> select count(*) from customers;
+----------+
| count(*) |
+----------+
| 10000000 |
+----------+
1 row in set (1.78 sec)

Create a new version of the table which includes the new field you require:

drop table if exists customers_new;
create table customers_new
(
customer_id int unsigned not null auto_increment primary key,
name varchar(255) not null,
country_id tinyint unsigned not null default 0,
split tinyint not null default 0,
key (country_id)
)
engine=innodb;

Find the location of your Uploads folder:

select @@secure_file_priv;
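One caveat worth knowing before relying on this step: SELECT ... INTO OUTFILE and LOAD DATA INFILE are only allowed within this directory, and the variable's value changes the behaviour entirely:

```sql
select @@secure_file_priv;
-- a directory path : import/export restricted to that directory
-- an empty string  : no directory restriction
-- NULL             : file import/export disabled; you would need to set
--                    secure_file_priv in the server configuration and restart
```

If it comes back NULL, the export/import approach below won't work until the server option is changed.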

Export the data in PK order from the old customer table into csv format:

select * into outfile 'C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\customers.dat'
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
from customers order by customer_id;

Query OK, 10000000 rows affected (17.39 sec)

Load the customers.dat file into the new customer table:

truncate table customers_new;

set autocommit = 0;

load data infile 'C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\customers.dat'
into table customers_new
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
(
customer_id,
name,
country_id
); -- the new split field is not in the file, so it takes its default of 0

commit;

Query OK, 10000000 rows affected (6 min 0.14 sec)
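If the load itself is the bottleneck, a common InnoDB bulk-import optimization (my addition, not part of the timing above) is to disable unique and foreign key checks for the session while the data loads, alongside the autocommit setting already shown. Remember to re-enable them afterwards:

```sql
-- session-level settings; restore them once the import has committed
set unique_checks = 0;
set foreign_key_checks = 0;

-- ... run the LOAD DATA INFILE statement here ...

commit;
set foreign_key_checks = 1;
set unique_checks = 1;
```

This skips secondary-index uniqueness and FK validation during the import, so only use it on data you trust, such as a file you just exported from the same server.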

Confirm that the new table looks okay:

select * from customers_new order by customer_id desc limit 1;
+-------------+-------------------+------------+-------+
| customer_id | name              | country_id | split |
+-------------+-------------------+------------+-------+
|    10000000 | customer 10000000 |        218 |     0 |
+-------------+-------------------+------------+-------+
1 row in set (0.00 sec)

insert into customers_new (name, country_id, split) values ('f00',1,1);
Query OK, 1 row affected (0.07 sec)

select * from customers_new order by customer_id desc limit 1;
+-------------+------+------------+-------+
| customer_id | name | country_id | split |
+-------------+------+------------+-------+
|    10000001 | f00  |          1 |     1 |
+-------------+------+------------+-------+
1 row in set (0.00 sec)

Drop the old table and rename the new one:

drop table customers;
Query OK, 0 rows affected (0.18 sec)

rename table customers_new to customers;
Query OK, 0 rows affected (0.05 sec)
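As a variation (my suggestion, not part of the original steps), RENAME TABLE accepts multiple pairs and applies them atomically, so you can swap the tables without a window in which customers does not exist, and keep the old data around until you've verified the new table:

```sql
rename table customers to customers_old,
             customers_new to customers;

-- once you're satisfied the new table is correct:
drop table customers_old;
```

This is safer on a live system than drop-then-rename, at the cost of temporarily holding both copies on disk.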

select * from customers order by customer_id desc limit 1;
+-------------+------+------------+-------+
| customer_id | name | country_id | split |
+-------------+------+------------+-------+
|    10000001 | f00  |          1 |     1 |
+-------------+------+------------+-------+
1 row in set (0.00 sec)

That's all folks !

Upvotes: 7

Nick Rolando

Reputation: 26177

Probably not through code alone. Close all other apps, make sure nothing else is consuming your CPU, and check for malware; a faster machine would also help.
It would help if you could tell us about the environment you're working with. The cause could involve your network, your server, and so on.

Upvotes: 1
