CQL: Escape single quote in a map<int,text> attribute - cassandra

For text attributes, CQL allows escaping a single quote with an additional single quote. But applying the same syntax to a map<int,text> attribute does not work.
CREATE query:
CREATE TABLE university.test (id int, data map<int, text>, PRIMARY KEY (id));
INSERT query:
insert into university.test (id, data) values(1, {1:'I''m newb'});
On running the aforementioned INSERT query, Cassandra inserts and returns {1: 'I''m newb'}, while the expected result is {1: 'I'm newb'}.
How to escape single quote ' while inserting in a map<int,text> attribute?
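For reference, a minimal sketch of the doubled-quote escape on a plain text column, using a hypothetical table university.note that is not part of the original post:
CREATE TABLE university.note (id int PRIMARY KEY, body text);
INSERT INTO university.note (id, body) VALUES (1, 'I''m newb');
SELECT body FROM university.note WHERE id = 1;
-- cqlsh displays: I'm newb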

Related

How do I escape the ampersand character (&) in cql?

I am inserting a statement into a table that looks something like this:
insert into db.table (field1, field2) values (1, 'eggs&cheese')
but when I later query this value on our servers, my query returns:
eggs\u0026cheese instead.
Not sure whether to use \ or '
If anyone can help, that would be great. Thank you!
This doesn't appear to be a problem with CQL but with the way your app displays the value.
For example, if the CQL column type is text, the unicode character is encoded as a UTF-8 string.
Using this example schema:
CREATE TABLE unicodechars (
id int PRIMARY KEY,
randomtext text
)
cqlsh displays the ampersand as expected:
cqlsh> SELECT * FROM unicodechars ;

 id | randomtext
----+-------------
  1 | eggs&cheese
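To reproduce against this example schema, the insert from the question would look like this (a sketch):
INSERT INTO unicodechars (id, randomtext) VALUES (1, 'eggs&cheese');
-- the ampersand is stored and displayed literally by cqlsh; the \u0026 escaping
-- happens in whatever layer serializes the value (e.g. JSON) on the application side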

nested map in cassandra data modelling

I have the following requirement for my dataset and need to understand what datatype I should use and how to save my data accordingly:
CREATE TABLE events (
id text,
evntoverlap map<text, map<timestamp,int>>,
PRIMARY KEY (id)
)
evntoverlap = {
'Dig1': {'2017-10-09 04:10:05': 0},
'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0},
'Dig3': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0, '2017-10-09 04:11:05': 0}
}
This gives an error:
Error from server: code=2200 [Invalid query] message="Non-frozen collections are not allowed inside collections: map<text, map<timestamp, int>>"
How should I store this type of data in a single column? Please suggest a datatype and an insert command for the same.
Thanks,
This is a limitation of Cassandra: you can't nest a collection (or UDT) inside a collection without making it frozen. So you need to freeze one of the collections, either the nested one:
CREATE TABLE events (
id text,
evntoverlap map<text, frozen<map<timestamp,int>>>,
PRIMARY KEY (id)
);
or the top-level one:
CREATE TABLE events (
id text,
evntoverlap frozen<map<text, map<timestamp,int>>>,
PRIMARY KEY (id)
);
See the documentation for more details.
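A minimal sketch of an insert against the first (frozen nested map) variant, using the values from the question written as CQL map literals; the id 'event1' is made up:
INSERT INTO events (id, evntoverlap) VALUES ('event1', {
'Dig1': {'2017-10-09 04:10:05': 0},
'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0}
});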
CQL collections are limited to 64KB, and if you put things like maps in maps you might push that limit. Especially with frozen maps, you are deserializing the entire map, modifying it, and re-inserting it. You might be better off with:
CREATE TABLE events (
id text,
evnt_key text,
value map<timestamp, int>,
PRIMARY KEY ((id), evnt_key)
)
Or even:
CREATE TABLE events (
id text,
evnt_key text,
evnt_time timestamp,
value int,
PRIMARY KEY ((id), evnt_key, evnt_time)
)
This would be more efficient and safer, while giving additional benefits like being able to read the evnt_time values in ascending or descending order.
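A minimal sketch of how the last schema might be used; the id 'event1' and the sample rows are made up for illustration:
INSERT INTO events (id, evnt_key, evnt_time, value) VALUES ('event1', 'Dig2', '2017-10-09 04:11:05', 0);
INSERT INTO events (id, evnt_key, evnt_time, value) VALUES ('event1', 'Dig2', '2017-10-09 04:15:05', 0);
-- rows come back clustered by evnt_key, then evnt_time (ascending by default);
-- the whole partition can also be read in reverse clustering order:
SELECT * FROM events WHERE id = 'event1' ORDER BY evnt_key DESC, evnt_time DESC;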

Cassandra: Inserting value in UDT

I am trying to insert values into a UDT but am getting this error message:
message="unconfigured columnfamily my_object"
Below is my statement:
INSERT INTO home.my_object (id, type, quantity, critical, page_count, stock, outer_envelope) VALUES ('3.MYF', 'COM', 'D', 'A', 'VV', '', '');
What am I doing wrong?
That error means that the keyspace "home" exists, but does not contain a table (column family) called "my_object". I also noticed that your insert statement does not contain a UDT literal.
UDTs define a type, but you must also define a table with a column of that type before inserting any data. I assume your UDT is called "my_object". Try this:
create table home.test (key int primary key, object frozen<my_object>);
insert into home.test (key, object) values (0, {id: 'value', type: 'othervalue'});
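For completeness, the UDT itself has to be created before the table that uses it. A hypothetical definition matching the field names from the original INSERT (the column types are guesses):
CREATE TYPE home.my_object (
id text,
type text,
quantity text,
critical text,
page_count text,
stock text,
outer_envelope text
);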

Cassandra throws Bad Request: Batch with conditions cannot span multiple tables

The Cassandra docs and blog say that batches with conditional update statements work at the granularity of the partition, and the partition is defined by the first key in the primary key.
So, I have the following tables:
CREATE TABLE SOCIAL_PROFILE (
soc_net_type text,
soc_net_user_id text,
user_prof_id text,
PRIMARY KEY (soc_net_type, soc_net_user_id));
CREATE TABLE SOCIAL_PROFILE_CONTACT (
soc_prof_soc_net_type text,
soc_prof_soc_net_user_id text,
soc_net_user_id text,
PRIMARY KEY (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id));
And the insert statement:
BEGIN BATCH
INSERT INTO social_profile (soc_net_type, soc_net_user_id, user_prof_id) VALUES ('vk', '1', '100') IF NOT EXISTS;
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '2');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '3');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '4');
APPLY BATCH;
And social_profile_contact.soc_prof_soc_net_type = social_profile.soc_net_type: they both have the same values and should be in the same partition, but Cassandra throws:
Bad Request: Batch with conditions cannot span multiple tables
I haven't found a word about tables in the docs. What am I doing wrong?
cqlsh 4.1.1 | Cassandra 2.0.11 | CQL spec 3.1.1 | Thrift protocol 19.39.0
"Batch with conditions cannot span multiple tables"
Two different column families (tables) using the same primary key are still two different partitions.
You need to move the statement with IF NOT EXISTS into an independent batch (or run it as a standalone statement):
INSERT INTO social_profile (soc_net_type, soc_net_user_id, user_prof_id) VALUES ('vk', '1', '100') IF NOT EXISTS;
A batch with IF NOT EXISTS conditions can only contain statements for a single table.
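A minimal sketch of that split: run the conditional insert above on its own, then batch the unconditional inserts from the question:
BEGIN BATCH
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '2');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '3');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '4');
APPLY BATCH;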

How to insert a string in SQLite database without conversion?

The table:
CREATE TABLE configuration(Key STRING, Value STRING, PRIMARY KEY (Key) );
Here is what I tried:
insert into configuration(Key,Value) values(42,cast('0042' as text));
Here is the dump:
INSERT INTO "configuration" VALUES(42,42);
What I wanted:
INSERT INTO "configuration" VALUES(42,'0042');
If you create the table with:
CREATE TABLE configuration(Key STRING, Value TEXT, PRIMARY KEY (Key) );
(there is no storage penalty for doing this with SQLite) then you'll get the leading zeroes preserved, even if you use the very simplest form of INSERT. This is because STRING is not a real SQLite type, so the column gets NUMERIC affinity and '0042' is converted to the number 42, whereas a TEXT column stores the string unchanged.
I'm not sure what you are trying, but essentially, it should be:
INSERT INTO configuration VALUES(42,'0042');
