Reputation: 197
I created a dictionary in ClickHouse using the following script:
CREATE DICTIONARY IF NOT EXISTS default.testDICT
(
-- attributes
)
PRIMARY KEY DATETIME, SOMEID, SOMEID2
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' PASSWORD '' DB 'default' TABLE 'test'))
LIFETIME(MIN 0 MAX 300)
LAYOUT(COMPLEX_KEY_HASHED())
The table test has approximately 19 000 000 rows.
When I try to execute a select,
SELECT * FROM testDICT
which, if I understood correctly, also loads the dictionary, I get the following error:
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from clickhouse-server:9000
Connecting to clickhouse-server:9000 as user default.
Code: 210. DB::NetException: Connection refused (clickhouse-server:9000)
Do you know what this error means and how I can correct it?
Upvotes: 2
Views: 11549
Reputation: 4212
As suggested in this blog post, try running the following before inserting:
set max_insert_threads=32;
I got the same error at first, but after changing max_insert_threads I successfully inserted almost 200 GB of data.
https://altinity.com/blog/clickhouse-and-redshift-face-off-again-in-nyc-taxi-rides-benchmark
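A minimal sketch of that workflow, assuming a bulk load into the question's default.test table (the source file name and format here are illustrative, not from the original post):
-- Raise insert parallelism for this session before the large load.
SET max_insert_threads = 32;
-- Hypothetical bulk insert; replace the file name and format with your own.
INSERT INTO default.test
SELECT * FROM file('data.csv', 'CSVWithNames');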
Upvotes: 0
Reputation: 13310
19 000 000 rows is too many for a hashed dictionary; it will probably require 10-20 GB of RAM.
So your ClickHouse server crashed or was killed by the OOM killer. Check sudo dmesg | tail -100
Try a cache dictionary layout, which loads only part of the 19 000 000 rows into memory at once.
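Since your dictionary uses a composite key, the matching layout is COMPLEX_KEY_CACHE. A minimal sketch based on the DDL from the question; the SIZE_IN_CELLS value is an illustrative starting point you would tune to your RAM budget:
CREATE DICTIONARY IF NOT EXISTS default.testDICT
(
-- attributes
)
PRIMARY KEY DATETIME, SOMEID, SOMEID2
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' PASSWORD '' DB 'default' TABLE 'test'))
LIFETIME(MIN 0 MAX 300)
-- Keep at most ~1 000 000 keys in memory; other keys are fetched
-- from the source table on demand when a lookup misses the cache.
LAYOUT(COMPLEX_KEY_CACHE(SIZE_IN_CELLS 1000000))
The trade-off is lookup latency: cache misses go back to the source table, so this suits workloads that touch a hot subset of keys rather than full scans of the dictionary.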
Upvotes: 3