I have a table that looks like this:
(id, a, b, mapValue)
I want to update if the row exists, or insert if it doesn't, with VALUES(id, a, b, mapValue), where mapValue is the combination of the old mapValue with the new one, replacing the value of each key that was already there.
For example if the old mapValue was {1:c, 2:d} and the new one is {2:e, 3:f} the result would be {1:c, 2:e, 3:f}.
I want to do this in a query that also updates/inserts id, a, b in VALUES(id, a, b, mapValue).
How can I achieve this?
I've found this guide about updating maps but it doesn't say anything about updating them while dealing with other values in the table. I need to do this at the same time.
In Cassandra, there is no difference between INSERT & UPDATE - everything is an UPSERT, so when you do an UPDATE and the data doesn't exist, it's inserted, and if you do an INSERT and the data already exists, it's updated.
Regarding the map update, you can use the + and - operations on the corresponding column when doing an UPDATE. For example, I have a table:
CREATE TABLE test.m1 (
id int PRIMARY KEY,
i int,
m map<int, text>
);
and I can do the following to update an existing row:
cqlsh:test> insert into test.m1 (id, i, m) values (1, 1, {1:'t1'});
cqlsh:test> select * from test.m1;
id | i | m
----+---+-----------
1 | 1 | {1: 't1'}
(1 rows)
cqlsh:test> update test.m1 set m = m + {2:'t2'}, i = 4 where id = 1;
cqlsh:test> select * from test.m1;
id | i | m
----+---+--------------------
1 | 4 | {1: 't1', 2: 't2'}
(1 rows)
and I can use a similar UPDATE command to insert completely new data:
cqlsh:test> update test.m1 set m = m + {6:'t6'}, i = 6 where id = 6;
cqlsh:test> select * from test.m1;
id | i | m
----+---+--------------------
1 | 4 | {1: 't1', 2: 't2'}
6 | 6 | {6: 't6'}
(2 rows)
Usually, if you know that no data existed before for a given primary key, then UPDATE with + is the better way to insert data into a set or map, because it doesn't generate the tombstone that is created when you do an INSERT or an UPDATE without + on the collection column.
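Applied to the table from the question, everything fits in one statement. A minimal sketch, assuming the table is named mytable, id is the primary key, and a/b are text columns (all names invented, since the question doesn't give the schema):
UPDATE mytable SET a = 'newA', b = 'newB', mapValue = mapValue + {2: 'e', 3: 'f'} WHERE id = 1;
This single upsert sets a and b and merges the map, overwriting the value for key 2 and adding key 3, exactly as in the {1:c, 2:e, 3:f} example above.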
P.S. You can find more information on using collections in the following document.
Related
I have a table with an AUTO_INCREMENT column in a Vertica DB, and I am using this column as a foreign key for some other table. For that I need the last inserted value of the AUTO_INCREMENT column.
CREATE TABLE orders.order_test
(
order_id AUTO_INCREMENT(1,1,1) PRIMARY KEY,
order_type VARCHAR(255)
);
I found this function, but I'm not sure how it works across multiple sessions:
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Functions/VerticaFunctions/LAST_INSERT_ID.htm
The above link says this:
Returns the last value of an AUTO_INCREMENT/IDENTITY column. If multiple sessions concurrently load the same table with an AUTO_INCREMENT/IDENTITY column, the function returns the last value generated for that column.
It is per session.
Let's test it.
Two command-line windows, starting vsql in both.
The transcripts below show both sessions in full.
Transaction 1:
sbx=> select export_objects('','id1',false);
CREATE TABLE dbadmin.id1
(
id IDENTITY ,
num int
);
[. . .]
sbx=> select * from id1;
id | num
--------+-----
250001 | 1
sbx=> \pset null NULL
Null display is "NULL".
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
NULL
-- insert a row ...
sbx=> INSERT INTO id1 (num) VALUES(2);
OUTPUT
--------
1
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
500001
Transaction 2:
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
NULL
-- now insert another row ...
sbx=> INSERT INTO id1 (num) VALUES(3);
OUTPUT
--------
1
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
750001
Now, back to Transaction 1:
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
500001
Still at the old value ...
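So for the foreign-key use case from the question, it is safe to read the generated key right after the insert within the same session. A small sketch against the question's table (the 'online' value is made up):
INSERT INTO orders.order_test (order_type) VALUES ('online');
-- scoped to this session, unaffected by concurrent loaders:
SELECT LAST_INSERT_ID();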
I have a Cassandra table with data in it.
I added three new columns: country as text, and lat and long as double.
When these columns are added, null values appear for the rows already present in the table. However, null appears as text in the country column and as a value in the lat and long columns.
Is this the default behavior, and can I add null as a value under the newly created text columns?
Cassandra uses null to show that a value is missing, not that it was explicitly inserted. In your case, when you add new columns, they are just added to the table's specification stored in Cassandra itself; the existing data (stored in SSTables) is not modified, so when Cassandra reads the old data it doesn't find values for those columns in the SSTable and outputs null instead.
But you can get the same behavior without adding new columns: just don't insert a value for a specific regular column (you must have non-null values for the primary key columns!). For example:
cqlsh> create table test.abc (id int primary key, t1 text, t2 text);
cqlsh> insert into test.abc (id, t1, t2) values (1, 't1-1', 't2-1');
cqlsh> insert into test.abc (id, t1) values (2, 't1-2');
cqlsh> insert into test.abc (id, t2) values (3, 't3-3');
cqlsh> SELECT * from test.abc;
id | t1 | t2
----+------+------
1 | t1-1 | t2-1
2 | t1-2 | null
3 | null | t3-3
(3 rows)
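If you do want to store a null explicitly, you can: binding null in an INSERT or setting a column to null in an UPDATE both work, but unlike simply omitting the column they write a tombstone. A small sketch against the same table:
cqlsh> insert into test.abc (id, t1, t2) values (4, 't1-4', null);
cqlsh> update test.abc set t2 = null where id = 2;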
What would be the easiest way to migrate an int to a bigint in Cassandra? I thought of creating a new column of type bigint and then running a script to basically set the value of that column = the value of the int column for all rows, and then dropping the original column and renaming the new column. However, I'd like to know if someone has a better alternative, because this approach just doesn't sit quite right with me.
You could ALTER your table and change your int column to a varint type. Check the documentation about ALTER TABLE, and the data types compatibility matrix.
The only other alternative is what you said: add a new column and populate it row by row. Dropping the original column is entirely optional: if you don't assign values to it when performing inserts, everything will stay as it is, and new records won't consume space.
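A minimal sketch of that approach, assuming a hypothetical table mytable with primary key id and an int column value (names invented for illustration):
ALTER TABLE mytable ADD value_big bigint;
-- then, from a driver script, for every row:
-- UPDATE mytable SET value_big = <old int value> WHERE id = <id>;
-- finally, once everything is copied (optional):
ALTER TABLE mytable DROP value;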
You can ALTER your table to store values bigger than int in Cassandra with varint. See the example:
cassandra#cqlsh:demo> CREATE TABLE int_test (id int, name text, primary key(id));
cassandra#cqlsh:demo> SELECT * FROM int_test;
id | name
----+------
(0 rows)
cassandra#cqlsh:demo> INSERT INTO int_test (id, name) VALUES ( 215478936, 'abc');
cassandra#cqlsh:demo> SELECT * FROM int_test ;
id | name
-----------+------
215478936 | abc
(1 rows)
cassandra#cqlsh:demo> ALTER TABLE demo.int_test ALTER id TYPE varint;
cassandra#cqlsh:demo> INSERT INTO int_test (id, name) VALUES ( 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999, 'abcd');
cassandra#cqlsh:demo> SELECT * FROM int_test ;
id | name
------------------------------------------------------------------------------------------------------------------------------+------
215478936 | abc
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999 | abcd
(2 rows)
cassandra#cqlsh:demo>
I'm trying to model some time series data in Cassandra, which I was able to do with the older Thrift client, but CQL seems to be throwing me off.
I want to add a NEW column to my row IF a specific column value matches.
My table definition is:
CREATE TABLE TestTable (
key int,
base uuid,
ts int, // Timestamp (column name)
val text, // Timestamp value (column value)
PRIMARY KEY (key, ts)
) WITH CLUSTERING ORDER BY (ts DESC);
What I'm guessing it'd look like is:
Row | UUID | TS | TS | TS
--- | ---- | -- | -- | --
1 | id1 | 1 | 2 | 3
2 | id2 | 1 | 5 | 6
So essentially, I can have a bunch of Timestamps for a given row and a SINGLE UUID for a row.
The UUID needs to be updated for each new insert of a TS column.
So inserts in a row work just fine:
insert into TestTable(key, base, ts, val) values (1, dfb63886-91a4-11e6-ae22-56b6b6499611, 50, 'one')
But I'm failing to figure out a way, using CQL, to INSERT a new column in a row using Cassandra transactions (CAS).
This one fails:
insert into TestTable(key, base, ts, val) values (1, dfb63886-91a4-11e6-ae22-56b6b6499611, 70, 'four') if base = dfb63886-91a4-11e6-ae22-56b6b6499611;
with the error:
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:106 mismatched input 'base' expecting K_NOT (..., 70, 'four') if [base] =...)">
And the query:
update TestTable set val = 'four', ts=70 where key = 1 if base = dfb63886-91a4-11e6-ae22-56b6b6499611;
fails with the error:
InvalidRequest: code=2200 [Invalid query] message="PRIMARY KEY part ts found in SET part"
I'm trying to figure out how to model the data properly so that I only have one UUID per row and can have multiple columns without having to explicitly define them during table creation, since it can vary quite a bit.
IIRC, it was easy doing this with the thrift client but using that isn't an option =/
There is a nice tutorial regarding data series here.
In a nutshell, your composite key will be your unique identifier (like the UUID you were proposing) plus a timestamp, so you will be able to add as many events/values as you like associated with a UUID:
CREATE TABLE IF NOT EXISTS TestTable (
base uuid,
ts timestamp, // Timestamp (column name)
value text, // Timestamp value (column value)
PRIMARY KEY (base, ts)
) WITH CLUSTERING ORDER BY (ts DESC);
Inserting values uses the same UUID with different times:
INSERT INTO TestTable (base, ts, value)
VALUES (467286c5-7d13-40c2-92d0-73434ee8970c, dateof(now()), 'abc');
INSERT INTO TestTable (base, ts, value)
VALUES (467286c5-7d13-40c2-92d0-73434ee8970c, dateof(now()), 'def');
cqlsh:test> SELECT * FROM TestTable WHERE base = 467286c5-7d13-40c2-92d0-73434ee8970c;
base | ts | value
--------------------------------------+---------------------------------+-------
467286c5-7d13-40c2-92d0-73434ee8970c | 2016-10-14 04:13:42.779000+0000 | def
467286c5-7d13-40c2-92d0-73434ee8970c | 2016-10-14 04:12:50.551000+0000 | abc
(2 rows)
Updates can be done on any of the columns except the ones used as keys. The errors displayed by your statements were caused by the IF clause, and by trying to set ts, which is part of the composite key.
INSERT INTO TestTable (base, ts, value)
VALUES (ffb0bb8e-3d67-4203-8c53-046a21992e52, dateof(now()), 'bananas');
SELECT * FROM TestTable WHERE base = ffb0bb8e-3d67-4203-8c53-046a21992e52 AND ts < dateof(now());
base | ts | value
--------------------------------------+---------------------------------+---------
ffb0bb8e-3d67-4203-8c53-046a21992e52 | 2016-10-14 04:17:26.421000+0000 | bananas
(1 rows)
UPDATE TestTable SET value = 'apples' WHERE base = ffb0bb8e-3d67-4203-8c53-046a21992e52 AND ts = '2016-10-14 04:17:26.421+0000';
SELECT * FROM TestTable WHERE base = ffb0bb8e-3d67-4203-8c53-046a21992e52 AND ts < dateof(now());
base | ts | value
--------------------------------------+---------------------------------+---------
ffb0bb8e-3d67-4203-8c53-046a21992e52 | 2016-10-14 04:17:26.421000+0000 | apples
(1 rows)
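If you still want the compare-and-set behavior from the question, a lightweight transaction does work on an UPDATE once the full primary key is given in the WHERE clause. A small sketch reusing the row above (the 'pears' value is made up):
UPDATE TestTable SET value = 'pears'
WHERE base = ffb0bb8e-3d67-4203-8c53-046a21992e52 AND ts = '2016-10-14 04:17:26.421+0000'
IF value = 'apples';
The update applies only when the current value matches; that is the closest CQL equivalent of the conditional insert that failed in the question.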
I want Cassandra to write System.currentTimeMillis() into the table for each column. For example
writeToCassandra(name, email)
in the Cassandra table:
--------------------------------
name | email | currentMilliseconds
Can Cassandra populate the currentMilliseconds column automatically, like auto increment?
BR!
Cassandra has some columnar-database taste inside. If you read the docs on how columns are stored inside SSTables, you'll notice that each column has its own write timestamp appended (used for conflict resolution, as in the last-write-wins strategy). You can query that timestamp using the writetime() function:
cqlsh:so> create table ticks ( id text primary key, value int);
cqlsh:so> insert into ticks (id, value) values ('foo', 1);
cqlsh:so> insert into ticks (id, value) values ('bar', 2);
cqlsh:so> insert into ticks (id, value) values ('baz', 3);
cqlsh:so> select id, value from ticks;
id | value
-----+-------
bar | 2
foo | 1
baz | 3
(3 rows)
cqlsh:so> select id, writetime(value) from ticks;
id | writetime(value)
-----+------------------
bar | 1448282940862913
foo | 1448282937031542
baz | 1448282945591607
(3 rows)
As you requested, I've not explicitly inserted a write timestamp into the DB, but I am still able to query it. Note that you cannot use the writetime() function on primary key columns.
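Note also that writetime() returns microseconds since the epoch, not milliseconds. On Cassandra 4.0 or later (an assumption about your version; 4.0 introduced arithmetic on selections) you could convert in the query itself:
cqlsh:so> select id, writetime(value) / 1000 as write_ms from ticks;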
You can try with: dateof(now())
e.g.
INSERT INTO YOUR_TABLE (NAME, EMAIL, DATE)
VALUES ('NAME', 'EMAIL', dateof(now()));
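Note that dateof() is deprecated since Cassandra 2.2 in favor of toTimestamp(), so on newer versions the equivalent is:
INSERT INTO YOUR_TABLE (NAME, EMAIL, DATE)
VALUES ('NAME', 'EMAIL', toTimestamp(now()));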