Marta Karas

Reputation: 5165

Why does Cassandra not load all values from a CSV file?

I have just started working with Cassandra (single-node setup, version 2.0.9). I tried to load data into a COLUMNFAMILY from a CSV file, but noticed that it actually loaded only 2 of the 239595 rows. I cannot understand why. I would appreciate any hint.

cqlsh console output:

load data from CSV

cqlsh:keyspace_test1> COPY invoices (date, product_id, customer_id, quantity, sales) FROM '/home/martakarass/Desktop/invoices.csv';
239595 rows imported in 1 minute and 52.766 seconds.

notice that SELECT displays only 2 rows

cqlsh:keyspace_test1> SELECT * FROM invoices limit 10; 

 date     | customer_id | product_id    | quantity | sales
----------+-------------+---------------+----------+--------
 2/1/2015 |  Client_100 | Product_15702 |        6 | 123.42
 1/9/2015 |  Client_998 | Product_43550 |     3000 | 15.368

(2 rows)

check with count that not all rows have been loaded

cqlsh:keyspace_test1> SELECT count(*) FROM invoices; 

 count
-------
     2

(1 rows)

cqlsh:keyspace_test1> 

(updated) table details:

cqlsh:keyspace_test1> DESCRIBE COLUMNFAMILY keyspace_test1.invoices; 

CREATE TABLE invoices (
  date text,
  customer_id text,
  product_id text,
  quantity int,
  sales float,
  PRIMARY KEY ((date))
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.100000 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.000000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

Upvotes: 2

Views: 808

Answers (2)

giasuddin

Reputation: 133

This was a cqlsh bug, and it has since been resolved.

You can solve your problem as follows: first, go to your home directory, where you will find a .cassandra folder. Inside that folder you will find the file

cqlshrc

Add these two lines:

[csv]
field_size_limit=1000000000

Then close the file, restart cqlsh, and your problem will be solved.

Upvotes: 0

Aaron

Reputation: 57758

I am going to venture a guess that dates are not unique in your invoices.csv file. When I create a similar table:

CREATE TABLE stackoverflow.invoices (
    date timestamp PRIMARY KEY,
    amount bigint,
    id bigint
);

And I use a CSV file that has 4 rows like this:

date|id|amount
2015-03-30 00:00:00-0500|1|4500
2015-03-31 00:00:00-0500|2|5500
2015-03-31 00:00:00-0500|3|6600
2015-03-31 00:00:00-0500|4|7500

Next, I import them with COPY FROM:

aploetz@cqlsh:stackoverflow> COPY invoices (date, id, amount) FROM 
    '/home/aploetz/invoices.csv' WITH DELIMITER='|' AND HEADER=true;

4 rows imported in 0.035 seconds.

I should have 4 rows, right? Wrong.

aploetz@cqlsh:stackoverflow> SELECT * FROM invoices;

 date                     | amount | id
--------------------------+--------+----
 2015-03-30 00:00:00-0500 |   4500 |  1
 2015-03-31 00:00:00-0500 |   7500 |  4

(2 rows)

Cassandra PRIMARY KEYs are unique, and every write is an upsert: each imported row with an existing key silently overwrites the previous one. So if you import 239595 rows from a file but there are really only two unique dates, then 2 rows is all you will have.
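If that is what is happening, one possible fix is to widen the primary key so that each CSV row maps to a distinct row in the table. A minimal sketch, assuming the combination of date, customer_id, and product_id is unique in your file (the table name invoices_by_date is just for illustration):

CREATE TABLE invoices_by_date (
    date text,
    customer_id text,
    product_id text,
    quantity int,
    sales float,
    -- date stays the partition key; customer_id and product_id become
    -- clustering columns, so rows sharing a date no longer overwrite each other
    PRIMARY KEY ((date), customer_id, product_id)
);

With that schema, the same COPY command would keep one row per unique (date, customer_id, product_id) combination; any rows that still collide on all three columns would still be upserted into a single row.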

Upvotes: 4
