Cassandra: Inserting value in UDT

I am trying to insert values in a UDT but I am getting this error message:
message="unconfigured columnfamily my_object"
Below is my statement:
INSERT INTO home.my_object (id, type, quantity, critical, page_count, stock, outer_envelope) VALUES ('3.MYF', 'COM', 'D', 'A', 'VV', '', '');
What am I doing wrong?

That error means that the keyspace "home" exists, but does not contain a table (column family) called "my_object". I also noticed that your insert statement does not contain a UDT literal.
UDTs define a type, but you must also define a table with a column of that type before inserting any data. I assume your UDT is called "my_object". Try this:
create table home.test (key int primary key, object frozen<my_object>);
insert into home.test (key, object) values (0, {id: 'value', type: 'othervalue'});
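For completeness: if the my_object type itself doesn't exist yet, it has to be created before the table. A sketch using the columns from your original INSERT (all field types here are assumptions):
-- field types are assumed; adjust them to your real definition
CREATE TYPE home.my_object (
id text,
type text,
quantity text,
critical text,
page_count text,
stock text,
outer_envelope text
);
INSERT INTO home.test (key, object) VALUES (1, {id: '3.MYF', type: 'COM', quantity: 'D', critical: 'A', page_count: 'VV', stock: '', outer_envelope: ''});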

Related

Convert single UDT object to list of UDT in Cassandra table

I have two user-defined types in Cassandra. The first one uses the second as a frozen field inside it.
CREATE TYPE my_keyspace.test (
testid text,
testdate text,
testdata frozen<testdata>
);
CREATE TYPE my_keyspace.testdata (
subject text,
metadata text
);
Now my requirement is to convert this single object to a list of UDTs, something like this:
CREATE TYPE my_keyspace.test (
testid text,
testdate text,
testdata list<frozen<testdata>>
);
Is it possible to change the single object to a list of objects in Cassandra? What options are available to update the schema?
The only way to do it is to add another field of the required type using ALTER TYPE and start using this new field, migrating existing data with some code. Cassandra doesn't allow changing the type of existing fields, and you also can't drop a field from a UDT. So your type should look something like this:
CREATE TYPE my_keyspace.test (
testid text,
testdate text,
testdata frozen<testdata>,
testdata_lst list<frozen<testdata>>
);
Existing data could be migrated into the list, and the old field then set to null to free the space.
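The schema change itself is a single ALTER TYPE; copying the old field into the new list then has to be done by application code that rewrites the existing rows:
ALTER TYPE my_keyspace.test ADD testdata_lst list<frozen<testdata>>;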

Checking if key exists in Presto value map

I am new to Presto, and can't quite figure out how to check if a key is present in a map. When I run a SELECT query, this error message is returned:
Key not present in map: element
SELECT value_map['element']
FROM mytable
WHERE name = 'foobar'
Adding AND contains(value_map, 'element') does not work
The data type is a string array
SELECT typeof('value_map') FROM mytable
returns varchar(9)
How would I only select records where 'element' is present in the value_map?
You can look up a value in a map, and check that the key is present, with element_at, like this:
SELECT element_at(value_map, 'element')
FROM ...
WHERE element_at(value_map, 'element') IS NOT NULL
element_at is ambiguous in that case -- it'll return NULL either when there's no such key or when the key does exist but has NULL associated with it. A guaranteed approach is contains(map_keys(my_map), 'mykey'), which admittedly should be a bit slower than the original variant.
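Put together with the column names from the question, the guaranteed check would look something like this:
SELECT value_map['element']
FROM mytable
WHERE name = 'foobar'
AND contains(map_keys(value_map), 'element')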

Cassandra: Inserting column in cassandra table

I am trying to add a new column to a Cassandra table.
Column name: feedback_map
Column type: map of text, feedback_details
And I am trying to execute this statement:
ALTER TABLE my_table ADD feedback_map map<text, frozen<feedback_details>>;
but I am getting an "unknown type" error on feedback_details.
I am pretty new to Cassandra.
I have a custom feedback_details user type with two fields:
@Field(name = "field_mappings")
private Map<String, String> fieldMappings;
@Field(name = "field_defaults")
private Map<String, String> fieldDefaults;
No, I have not defined it yet. Do I need to add this user type first?
First, make sure that your User Defined Type (UDT) exists. If not, you'll need to create it:
CREATE TYPE feedback_details (
field_mappings MAP<TEXT, TEXT>,
field_defaults MAP<TEXT, TEXT>);
Then you'll be able to add it as a column on your table (as you have above):
ALTER TABLE my_table ADD feedback_map map<text, frozen<feedback_details>>;
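Once the column is in place, writes to it use a UDT literal as the map value. A minimal sketch, assuming my_table has a primary key column named id (both id and the map key are hypothetical):
UPDATE my_table
SET feedback_map = feedback_map + {'initial': {field_mappings: {'a': 'b'}, field_defaults: {'c': 'd'}}} -- 'initial' and id are placeholders
WHERE id = 1;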

Cassandra throws Bad Request: Batch with conditions cannot span multiple tables

The Cassandra docs and blog say that batches with conditional update statements work at the granularity of the partition, and that the partition is defined by the first key in the primary key.
So, I have the following tables:
CREATE TABLE SOCIAL_PROFILE (
soc_net_type text,
soc_net_user_id text,
user_prof_id text,
PRIMARY KEY (soc_net_type, soc_net_user_id));
CREATE TABLE SOCIAL_PROFILE_CONTACT (
soc_prof_soc_net_type text,
soc_prof_soc_net_user_id text,
soc_net_user_id text,
PRIMARY KEY (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id));
And insert statement:
BEGIN BATCH
INSERT INTO social_profile (soc_net_type, soc_net_user_id, user_prof_id) VALUES ('vk', '1', '100') IF NOT EXISTS;
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '2');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '3');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '4');
APPLY BATCH;
And social_profile_contact.soc_prof_soc_net_type = social_profile.soc_net_type; they both have the same values and should be in the same partition, but Cassandra throws:
Bad Request: Batch with conditions cannot span multiple tables
I haven't found a word about tables in the docs. What am I doing wrong?
cqlsh 4.1.1 | Cassandra 2.0.11 | CQL spec 3.1.1 | Thrift protocol 19.39.0
"Batch with conditions cannot span multiple tables"
Two different column families (tables) using the same primary key are still two different partitions.
You need to move the statement with IF NOT EXISTS into an independent batch.
INSERT INTO social_profile (soc_net_type, soc_net_user_id, user_prof_id) VALUES ('vk', '1', '100') IF NOT EXISTS;
A batch with IF NOT EXISTS conditions can only contain statements for a single table.
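One way to split it, keeping the values from the question, is to run the conditional insert on its own and batch only the unconditional inserts:
INSERT INTO social_profile (soc_net_type, soc_net_user_id, user_prof_id) VALUES ('vk', '1', '100') IF NOT EXISTS;
BEGIN BATCH
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '2');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '3');
INSERT INTO social_profile_contact (soc_prof_soc_net_type, soc_prof_soc_net_user_id, soc_net_user_id) VALUES ('vk', '1', '4');
APPLY BATCH;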

How to insert a string in SQLite database without conversion?

The table:
CREATE TABLE configuration(Key STRING, Value STRING, PRIMARY KEY (Key) );
Here is what I tried:
insert into configuration(Key,Value) values(42,cast('0042' as text));
Here is the dump:
INSERT INTO "configuration" VALUES(42,42);
What I wanted:
INSERT INTO "configuration" VALUES(42,'0042');
If you create the table with:
CREATE TABLE configuration(Key STRING, Value TEXT, PRIMARY KEY (Key) );
(there is no storage penalty for doing this with SQLite) then you'll get the leading zeroes preserved, even if you use the very simplest form of INSERT. This is because STRING is not a real SQLite type name, so the column gets NUMERIC affinity and '0042' is converted to the number 42; a TEXT column keeps the string exactly as given.
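A quick way to see the difference, using the same names as above; with the TEXT column in place, a plain insert keeps the leading zeroes:
CREATE TABLE configuration(Key STRING, Value TEXT, PRIMARY KEY (Key));
INSERT INTO configuration(Key, Value) VALUES (42, '0042');
-- a subsequent .dump now shows: INSERT INTO "configuration" VALUES(42,'0042');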
I'm not sure what you are trying to do, but essentially, it should be:
INSERT INTO configuration VALUES(42,'0042');
