Marina

Reputation: 4064

Cassandra 2.1 CQL error creating table with tuple: no viable alternative at input '>'

It's probably something really stupid... but I can't create a table with the new tuple type:

cqlsh:ta> CREATE TABLE tuple_test (k int PRIMARY KEY, v frozen <tuple<int, int>> );
Bad Request: line 1:68 no viable alternative at input '>'
cqlsh:ta> 

I've pretty much copied the table creation statement from the DataStax docs... What am I missing?

thanks!

Update - based on help from BryceAtNetwork23 and RossS:

Yes, you are right - I had DataStax Enterprise, which came with Cassandra 2.0.

I have installed DataStax Community with Cassandra 2.1 and all worked fine!

One note: omitting the 'frozen' keyword does not work with the DSC Cassandra distribution, but including frozen does work. Thanks for your help!

[cqlsh 4.1.1 | Cassandra 2.1.2 | DSE  | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cqlsh> CREATE TABLE ta.tuple_test (k int, v tuple<int, int>,PRIMARY KEY(k) );
Bad Request: Non-frozen tuples are not supported, please use frozen<>
cqlsh> CREATE TABLE ta.tuple_test (k int, v frozen <tuple<int, int>>,PRIMARY KEY(k) );
cqlsh> 
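
As a quick sanity check after the fix, here is a minimal sketch of writing and reading a tuple value against the ta.tuple_test table above (the values are just illustrative); tuple literals are written as parenthesized value lists:

cqlsh> INSERT INTO ta.tuple_test (k, v) VALUES (1, (3, 7));
cqlsh> SELECT k, v FROM ta.tuple_test WHERE k = 1;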

Upvotes: 1

Views: 2361

Answers (1)

Aaron

Reputation: 57808

That is weird; I get the same error. I did manage to get it to work with a slight modification or two, and then ran a desc just to make sure that it was created ok:

aploetz@cqlsh> CREATE TABLE stackoverflow.tuple_test (k int, v tuple<int, int>,PRIMARY KEY(k) );

aploetz@cqlsh> use stackoverflow ;
aploetz@cqlsh:stackoverflow> desc table tuple_test ;

CREATE TABLE stackoverflow.tuple_test (
    k int PRIMARY KEY,
    v frozen<tuple<int, int>>
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE'

The main thing is that I didn't specify frozen in my CREATE, but when you desc the table, you can see that Cassandra knew to put it there.

Edit- Here is my cqlsh spec:

[cqlsh 5.0.1 | Cassandra 2.1.0-rc5-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]

And here is the Cassandra version you reported:

Cassandra 2.0.11.83

Hmm...based on this, I don't know that you're actually on Cassandra 2.1. And I'm pretty sure that the tuple type is a 2.1-and-higher feature. Double-check your Cassandra version. Also, if you're on DSE (which means you have support), I'd open up a ticket with them describing the error that you're seeing.
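
If it helps, here's a minimal sketch of checking the server version from within cqlsh itself; SHOW VERSION re-prints the connection banner, and system.local stores the node's release_version:

cqlsh> SHOW VERSION;
cqlsh> SELECT release_version FROM system.local;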

Edit- FYI, I have upgraded my 2.1.0-rc5 version to 2.1.2 and run your original CREATE, and it works as-is:

Connected to PermanentWaves at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
aploetz@cqlsh> use stackoverflow ;
aploetz@cqlsh:stackoverflow> CREATE TABLE tuple_test (k int PRIMARY KEY, v frozen <tuple<int, int>> );
aploetz@cqlsh:stackoverflow> desc table tuple_test ;

CREATE TABLE stackoverflow.tuple_test (
    k int PRIMARY KEY,
    v frozen<tuple<int, int>>
)...

Upvotes: 2
