Reputation: 175
I have a problem with my MySQL database: when I run a huge INSERT ... ON DUPLICATE KEY UPDATE statement, I receive a 1118 error.
The mysterious thing about this problem is that it depends on the total length of the statement. If I delete one character from any row (it doesn't matter which row), the statement works. If I add one character (again, it doesn't matter which row), I receive the 1118 error.
I have tried increasing many values in my my.ini, e.g. innodb_log_file_size, innodb_buffer_pool_size and innodb_log_buffer_size. Nothing worked for me.
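To rule out that the changed settings were simply not picked up, the values the running server actually uses can be checked with something like:

SHOW VARIABLES LIKE 'innodb_%size';

The pattern matches innodb_log_file_size, innodb_buffer_pool_size and innodb_log_buffer_size, among others.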
Where is the limitation?
I can run the following statement, but the three columns defined as VARCHAR(1) get truncated. If I change VARCHAR to TEXT, I receive the error.
Because of the Stack Overflow character limit I have uploaded my statement here: http://pastie.org/private/yriegkyurqavnqezzbsqta
Between the empty quotes '' you can fill in around 12,569 random characters; it doesn't matter where you put them. From 12,570 characters on, I receive the error.
This is the create table code:
CREATE TABLE `produktliste_icecat` (
`EAN` VARCHAR(50) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Fehler` VARCHAR(200) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Bild_high` VARCHAR(250) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Bild_low` VARCHAR(250) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Bild_thumb` VARCHAR(250) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Name` VARCHAR(200) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Titel` VARCHAR(200) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Quality` VARCHAR(50) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Anleitung` VARCHAR(100) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Datenblatt` VARCHAR(100) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_URL` VARCHAR(500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Garantie` VARCHAR(2500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Beschreibung_kurz` VARCHAR(700) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Beschreibung_lang` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Zusammenfassung_kurz` VARCHAR(700) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Zusammenfassung_lang` VARCHAR(1500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Titel` VARCHAR(1500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Name` VARCHAR(1) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Wert` VARCHAR(1) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Kategorie_Zuordner` VARCHAR(1) NOT NULL COLLATE 'utf8_general_ci',
`Icecat_Eigenschaft_Titel_ID` VARCHAR(700) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_alternative_EAN` VARCHAR(2500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Wert_presentation` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Wert_local` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Kategorie` VARCHAR(100) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Kategorie_Eigenschaft_suchbar` VARCHAR(2500) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Titel_Prio` VARCHAR(3300) NOT NULL COLLATE 'utf8_unicode_ci',
`Icecat_Eigenschaft_Wert_Prio` VARCHAR(700) NOT NULL COLLATE 'utf8_unicode_ci',
UNIQUE INDEX `EAN` (`EAN`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB;
Here is my my.ini:
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "Z:/xampp_sql/data"
pid_file = "mysql.pid"
key_buffer = 1G
max_allowed_packet = 500M
sort_buffer_size = 4M
net_buffer_length = 4M
read_buffer_size = 50M
read_rnd_buffer_size = 1024K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
innodb_buffer_pool_size = 1G
query_cache_type = 1
innodb_log_file_size = 20000M
innodb_log_buffer_size=1250M
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=0
innodb_file_per_table
bind-address="0.0.0.0"
plugin_dir = "C:/xampp/mysql/lib/plugin/"
skip-federated
server-id = 1
innodb_data_home_dir = "C:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data"
innodb_additional_mem_pool_size = 2M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
Does anyone have an idea how I could run statements with more than 12,570 characters?
Thank you in advance!
Upvotes: 0
Views: 4117
Reputation: 24960
With your CREATE TABLE and your INSERT statement from the pastie URL (having removed the db name reference), this is what I get:
Error 1406: Data too long for column Icecat_Eigenschaft_Wert_presentation
which is a VARCHAR(1000).
Fix that and move on to the next error message (presumably). Fix that. Rinse and repeat.
That is, I simply get a different error message than you do, as seen above. I am running MySQL 5.6.24.
See the manual page Limits on InnoDB Tables:
Although InnoDB supports row sizes larger than 65,535 bytes internally, MySQL itself imposes a row-size limit of 65,535 for the combined size of all columns:
mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
-> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
-> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the
used table type, not counting BLOBs, is 65535. You have to change some
columns to TEXT or BLOBs
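As a rough cross-check against that 65,535-byte limit (assuming the utf8 character set your table declares, where one character can take up to 3 bytes), the combined maximum byte size of the VARCHAR columns can be read from information_schema:

SELECT SUM(CHARACTER_OCTET_LENGTH) AS varchar_bytes
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'produktliste_icecat'
  AND DATA_TYPE = 'varchar';

With the posted definition that should come out to roughly 18,853 declared characters × 3 bytes ≈ 56,559 bytes, i.e. under the 65,535 cap, but not by a huge margin.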
Though looking at your CREATE TABLE, that particular limit does not seem to be the issue at the moment. But it is something to keep in mind. How to solve your problem? Perhaps the below, also from that manual page:
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 14.12.2, “File Space Management”.
The row size for BLOB columns that are chosen for external off-page storage should not exceed 10% of the combined redo log file size. If the row size exceeds 10% of the combined redo log file size, InnoDB could overwrite the most recent checkpoint which may result in lost data during crash recovery. (Bug#69477).
So when you reach a certain threshold, start converting to TEXT datatypes and the like.
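For example (just a sketch, reusing column names from your posted table and the same column-attribute style as your CREATE; pick whichever long VARCHAR columns actually carry the bulk of the data):

ALTER TABLE produktliste_icecat
  MODIFY Icecat_Eigenschaft_Titel_Prio TEXT NOT NULL COLLATE 'utf8_unicode_ci',
  MODIFY Icecat_Kategorie_Eigenschaft_suchbar TEXT NOT NULL COLLATE 'utf8_unicode_ci',
  MODIFY Icecat_alternative_EAN TEXT NOT NULL COLLATE 'utf8_unicode_ci',
  MODIFY Icecat_Garantie TEXT NOT NULL COLLATE 'utf8_unicode_ci';

TEXT and BLOB columns are counted very differently from VARCHAR in the row-size check, which is why moving the largest VARCHAR columns over is the usual way out of this kind of limit.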
Upvotes: 0