Reputation: 35
I am using Cassandra 3.10 and am trying to follow best practice by having a table per query, so I am using the batch insert principle to insert into multiple tables as a single transaction. However, I get the following error in the Cassandra log:
Batch for [zed.payment, zed.trade_party_b_ref, zed.trade_product_type, zed.trade, zed.fx_variance_swap, zed.trade_party_a_ref, zed.trade_party_b_trade_id, zed.market_value] is of size 5.926KiB, exceeding specified threshold of 5.000KiB by 0.926KiB.
Upvotes: 0
Views: 1022
Reputation: 35
Thanks for the info, the parameter in cassandra.yaml is
batch_size_warn_threshold_in_kb: 5, which is in KB, not MB, so my batch statement is really 6 KB, not 6 MB.

After 30 years working with Oracle, this is my first venture into Cassandra, so I have tried to follow the guideline of having a separate table for each query: where I have a financial trade table that has to be queried in up to 8 different ways, I have 8 tables. That implies that an insert into those tables must be done in a batch, to create what would be a single transaction in Oracle. The master table of the eight also has a significant number of sibling tables which must be included in the batch.

So here is my point: if Cassandra does not support transactions but relies on the batch functionality to achieve the same effect, it must not impose a limit on the size of the batch. If that is not possible, then Cassandra is really limited to applications with VERY simple data structures.
Upvotes: 0
Reputation: 2643
The log is saying that you are sending a batch of almost 6 KB (5.926 KiB) when the warning threshold is 5 KB.
You should send smaller batches of data to avoid going over that batch size limit.
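One way to keep each batch under the threshold is to greedily pack statements into batches by their approximate serialized size. The sketch below is illustrative only: the statement strings and the 1 KiB sizes are placeholders, not real driver types, and splitting a logged batch this way does sacrifice the all-or-nothing atomicity of a single batch.

```python
# Assumed to mirror batch_size_warn_threshold_in_kb: 5 (5 KiB).
WARN_THRESHOLD_BYTES = 5 * 1024

def split_into_batches(mutations, limit=WARN_THRESHOLD_BYTES):
    """Greedily pack (statement, approx_size_bytes) pairs into batches
    whose combined payload stays at or under `limit` bytes."""
    batches, current, current_size = [], [], 0
    for stmt, size in mutations:
        # Start a new batch if adding this statement would exceed the limit.
        if current and current_size + size > limit:
            batches.append(current)
            current, current_size = [], 0
        current.append(stmt)
        current_size += size
    if current:
        batches.append(current)
    return batches

# Example: nine ~1 KiB inserts split into two batches under a 5 KiB limit.
mutations = [(f"INSERT #{i}", 1024) for i in range(9)]
batches = split_into_batches(mutations)
print([len(b) for b in batches])  # → [5, 4]
```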
You can also raise the threshold (batch_size_warn_threshold_in_kb) in cassandra.yaml, but I would not recommend changing it: large batches put extra pressure on the coordinator node.
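For reference, these are the two relevant settings in cassandra.yaml (values shown are the 3.x defaults):

```yaml
# Log a warning for any batch whose serialized size exceeds this (KB).
batch_size_warn_threshold_in_kb: 5
# Reject outright any batch whose serialized size exceeds this (KB).
batch_size_fail_threshold_in_kb: 50
```

Your 5.926 KiB batch only triggered the warning; batches over the fail threshold are rejected entirely.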
Upvotes: 1