Gunnar

Reputation: 759

MySQL Temporary Table is Full

I am trying to create and load a temporary MySQL table into memory using the following syntax, but am running into a "Table is full" error:

CREATE TEMPORARY TABLE IF NOT EXISTS tmpHistory ENGINE=MEMORY SELECT * FROM history ORDER BY date ASC;

My original history InnoDB table has about 3M rows and is about 300MB. I've increased both of the following server variables from their 16MB default values:

max_heap_table_size = 536870912

tmp_table_size = 536870912
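If a server restart is inconvenient, these limits can also be raised for the current session only. A sketch, reusing the 512MB value from the settings above:

```sql
-- Session-scoped equivalents of the server variables above.
-- Raising both matters for implicit temporary tables, since MySQL
-- uses the smaller of the two as the in-memory limit.
SET SESSION max_heap_table_size = 536870912;
SET SESSION tmp_table_size = 536870912;
```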

I'm running MySQL on an AWS r3.xlarge, which is a 4-core box with 30.5GB of RAM.

I've reviewed this SO guidance but am still running into the Table is Full error. I'm new to using the Memory Engine, so any suggestions are appreciated.

Upvotes: 4

Views: 4241

Answers (2)

Rick James

Reputation: 142433

max_heap_table_size, not tmp_table_size, controls the maximum size of any subsequently created MEMORY table.

MEMORY has several quirks. Perhaps this one bit you: VARCHARs are turned into CHARs, so a VARCHAR(255) takes 765 bytes per row if it is CHARACTER SET utf8.
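This padding explains how a 300MB InnoDB table can blow past a 512MB MEMORY limit. A rough worst-case estimate, assuming for illustration that history has a single utf8 VARCHAR(255) column:

```sql
-- Hypothetical estimate: MEMORY pads VARCHAR(255) to CHAR(255),
-- and utf8 reserves 3 bytes per character, so 765 bytes per row
-- for that one column alone.
SELECT COUNT(*) * 765 / 1024 / 1024 AS padded_mb FROM history;
-- ~3M rows * 765 bytes ≈ 2.1GB, far above the 512MB max_heap_table_size
```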

Why do you want to copy a perfectly good InnoDB table into a MEMORY table? To avoid disk hits? No... If innodb_buffer_pool_size is big enough, the InnoDB table will effectively live in RAM. To speed things up? Not necessarily, because InnoDB has row locks but MEMORY has only table locks.
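To check whether the table can already live in RAM, compare the buffer pool to the table size. A sketch; the 20G figure below is a hypothetical value, roughly 70% of the 30.5GB box, not a measured recommendation:

```sql
-- How big is the buffer pool now? (default is often only 128M)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Hypothetical my.cnf setting for a server dedicated to MySQL:
-- innodb_buffer_pool_size = 20G
```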

Please provide SHOW CREATE TABLE; there may be other things to argue against what you are doing and/or explain why you got the error.
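For reference, the requested output would come from something like:

```sql
SHOW CREATE TABLE history\G  -- \G gives vertical output in the mysql client
```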

Upvotes: 3

MontyPython

Reputation: 2994

Find out which tables you have created with ENGINE=MEMORY:

SELECT table_name,
       ROUND(((data_length + index_length) / 1024 / 1024), 2) AS `Size in MB`
FROM information_schema.tables
WHERE table_schema = 'db1' AND engine = 'MEMORY';
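A variant that sums MEMORY-engine usage across all schemas; note that TEMPORARY tables may not appear in information_schema.tables, so this mainly catches permanent MEMORY tables:

```sql
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS `Total MB`
FROM information_schema.tables
WHERE engine = 'MEMORY'
GROUP BY table_schema;
```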

Use the top command to see how much memory each process is consuming. For very large tables, avoid the MEMORY engine; consider a column store such as Amazon Redshift instead.

Upvotes: 1
