Does last_insert_id() give a session-specific value or a global value across all sessions (multi-client environment)? - auto-increment

I have a table with an AUTO_INCREMENT column in a Vertica database and I am using this column as a foreign key for another table. For that I need the last inserted value of the AUTO_INCREMENT column.
CREATE TABLE orders.order_test
(
order_id AUTO_INCREMENT(1,1,1) PRIMARY KEY,
order_type VARCHAR(255)
);
I found this function, but I am not sure how it works with multiple sessions:
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Functions/VerticaFunctions/LAST_INSERT_ID.htm
The above link says this:
Returns the last value of an AUTO_INCREMENT/IDENTITY column. If multiple sessions concurrently load the same table with an AUTO_INCREMENT/IDENTITY column, the function returns the last value generated for that column.

It is per session.
Let's test it: two command-line windows, starting vsql in both. The transcripts below are the complete sessions.
Session 1:
sbx=> select export_objects('','id1',false);
CREATE TABLE dbadmin.id1
(
id IDENTITY ,
num int
);
[. . .]
sbx=> select * from id1;
id | num
--------+-----
250001 | 1
sbx=> \pset null NULL
Null display is "NULL".
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
NULL
-- insert a row ...
sbx=> INSERT INTO id1 (num) VALUES(2);
OUTPUT
--------
1
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
500001
Session 2:
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
NULL
-- now insert another row ...
sbx=> INSERT INTO id1 (num) VALUES(3);
OUTPUT
--------
1
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
750001
Now, back to Session 1:
sbx=> SELECT LAST_INSERT_ID();
LAST_INSERT_ID
----------------
500001
Still at the old value generated in this session, not the 750001 generated in Session 2. So LAST_INSERT_ID() is session-specific.
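To connect this back to the question: because the value is per session, you can read LAST_INSERT_ID() right after inserting the parent row and use it to populate the foreign key in the child table. A minimal sketch follows; the child table orders.order_items and its columns are hypothetical, and the value is read with a separate SELECT and then written literally into the child insert:
-- hypothetical child table referencing the AUTO_INCREMENT key
CREATE TABLE orders.order_items
(
order_id INT REFERENCES orders.order_test (order_id),
item_name VARCHAR(255)
);
-- the parent insert generates the AUTO_INCREMENT value in this session
INSERT INTO orders.order_test (order_type) VALUES ('online');
-- read the value generated by this session ...
SELECT LAST_INSERT_ID();
-- ... and use it (written literally here, e.g. 2) as the foreign key
INSERT INTO orders.order_items (order_id, item_name) VALUES (2, 'first item');
COMMIT;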

Related

Why is the ResultSet not the same for insert and select queries

When I use session.execute in Cassandra, I notice that the ResultSet's structure is different for the same table. If I query the table to get a record using WHERE, the ResultSet contains the data fetched from the table.
val resultSet = session.execute(whereClause)
gives
ResultSet[ exhausted: false, Columns[year(bigint), month(bigint),
creation_time_hour(bigint), creation_time_minute(bigint),
question_id(uuid), question_description(varchar)]]
But if I use Insert, I get something totally different.
ResultSet[ exhausted: false, Columns[[applied](boolean)]]
Is this expected behavior? Is there a way to get the data "inserted" by Cassandra into the table in the ResultSet returned by the execute method?
Usually, an INSERT doesn't return the inserted values back to the user. The exception is an insert that triggers a lightweight transaction, i.e. when you use IF NOT EXISTS. In this case, it may return:
a single row with a single column [applied] with a true value - this means that the data was inserted:
cqlsh:test> insert into test.u2(id,u) values(5, {id:1, t1:3}) if not exists;
[applied]
-----------
True
a single row with all values of the corresponding row of the table, plus the column [applied] with a false value - this happens when a row with the given primary key already exists:
cqlsh:test> insert into test.u2(id,u) values(1, {id:1, t1:2});
cqlsh:test> insert into test.u2(id,u) values(1, {id:1, t1:3}) if not exists;
[applied] | id | u
-----------+----+----------------
False | 1 | {id: 1, t1: 2}
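For completeness: the table test.u2 used in these snippets is not shown in the question. A plausible definition that makes them runnable (an assumption; the UDT name pair is hypothetical) would be:
cqlsh:test> create type test.pair (id int, t1 int);
cqlsh:test> create table test.u2 (id int primary key, u frozen<pair>);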

Cassandra altering table to add new columns adds null as text

I have a Cassandra table with data in it.
I add three new columns: country as text, and lat and long as double.
When these columns are added, null values appear for the rows already present in the table. However, null appears as text in the country column and as a value in the lat and long columns.
Is this the default behavior, and can I add null as a value under the newly created text columns?
Cassandra uses null to show that a value is missing, not that it was explicitly inserted. In your case, when you add new columns, they are just added to the table's specification stored in Cassandra itself; the existing data (stored in SSTables) is not modified, so when Cassandra reads old data it doesn't find values for those columns in the SSTables and outputs null instead.
But you can get the same behavior without adding new columns - just don't insert a value for a specific regular column (you must have non-null values for the columns of the primary key!). For example:
cqlsh> create table test.abc (id int primary key, t1 text, t2 text);
cqlsh> insert into test.abc (id, t1, t2) values (1, 't1-1', 't2-1');
cqlsh> insert into test.abc (id, t1) values (2, 't1-2');
cqlsh> insert into test.abc (id, t2) values (3, 't3-3');
cqlsh> SELECT * from test.abc;
id | t1 | t2
----+------+------
1 | t1-1 | t2-1
2 | t1-2 | null
3 | null | t3-3
(3 rows)

Why does an (upserted) row disappear after updating the column to null? (But not when it has been inserted)

My understanding of inserts and updates in Cassandra was that they were basically the same thing. That is also what the documentation says ( https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlUpdate.html?hl=upsert )
Note: Unlike the INSERT command, the UPDATE command supports counters. Otherwise, the UPDATE and INSERT operations are identical.
So aside from support for counters they should be the same.
But then I ran across a problem where rows that were created via update would disappear if I set columns to null, whereas this doesn't happen if the rows are created with insert.
cqlsh:test> CREATE TABLE IF NOT EXISTS address_table (
... name text PRIMARY KEY,
... addresses text
... );
cqlsh:test> insert into address_table (name, addresses) values ('Alice', 'applelane 1');
cqlsh:test> update address_table set addresses = 'broadway 2' where name = 'Bob' ;
cqlsh:test> select * from address_table;
name | addresses
-------+-------------
Bob | broadway 2
Alice | applelane 1
(2 rows)
cqlsh:test> update address_table set addresses = null where name = 'Alice' ;
cqlsh:test> update address_table set addresses = null where name = 'Bob' ;
cqlsh:test> select * from address_table;
name | addresses
-------+-----------
Alice | null
(1 rows)
The same thing happens if I skip the separate step of first creating a row. With insert I can create a row with a null value, but if I use update the row is nowhere to be found.
cqlsh:test> insert into address_table (name, addresses) values ('Caroline', null);
cqlsh:test> update address_table set addresses = null where name = 'Dexter' ;
cqlsh:test> select * from address_table;
name | addresses
----------+-----------
Caroline | null
Alice | null
(2 rows)
Can someone explain what's going on?
We're using Cassandra 3.11.3
This is expected behavior. See details in https://issues.apache.org/jira/browse/CASSANDRA-14478
INSERT adds a row marker, while UPDATE does not. What does this mean? Basically, an UPDATE requests that individual cells of the row be added, but not that the row itself be added; so if one later deletes the same individual cells with DELETE, the entire row goes away. However, an INSERT not only adds the cells, it also requests that the row be added (this is implemented via a "row marker"). So if later all the row's individual cells are deleted, an empty row remains behind (i.e., the primary key of the row, which now has no content, is still remembered in the table).
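One way to see the row marker in action with the question's own schema: re-inserting just the primary key of the vanished row writes a new row marker, so the row becomes visible again with a null value. A sketch continuing the cqlsh session above (expected output shown):
cqlsh:test> insert into address_table (name) values ('Bob');
cqlsh:test> select * from address_table where name = 'Bob';
 name | addresses
------+-----------
  Bob |      null
(1 rows)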

Cassandra migrate int to bigint

What would be the easiest way to migrate an int to a bigint in Cassandra? I thought of creating a new column of type bigint and then running a script to basically set the value of that column = the value of the int column for all rows, and then dropping the original column and renaming the new column. However, I'd like to know if someone has a better alternative, because this approach just doesn't sit quite right with me.
You could ALTER your table and change your int column to a varint type. Check the documentation about ALTER TABLE, and the data types compatibility matrix.
The only other alternative is what you said: add a new column and populate it row by row. Dropping the old column is entirely optional: if you don't assign values to it when performing inserts, everything will stay as it is, and new records won't consume space for it.
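A rough sketch of that approach, with hypothetical names (a table events whose int column val is not part of the primary key); the row-by-row copy has to be driven by an external script or job that reads every row:
ALTER TABLE events ADD val_big bigint;
-- the script copies the value for every primary key it reads, e.g.:
UPDATE events SET val_big = 12345 WHERE id = 'some-key';
-- once the application has switched over, dropping the old column is optional:
ALTER TABLE events DROP val;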
You can ALTER your table to store bigint-sized values in Cassandra with varint. See the example:
cassandra#cqlsh:demo> CREATE TABLE int_test (id int, name text, primary key(id));
cassandra#cqlsh:demo> SELECT * FROM int_test;
id | name
----+------
(0 rows)
cassandra#cqlsh:demo> INSERT INTO int_test (id, name) VALUES ( 215478936541111, 'abc');
cassandra#cqlsh:demo> SELECT * FROM int_test ;
id | name
---------------------+---------
215478936541111 | abc
(1 rows)
cassandra#cqlsh:demo> ALTER TABLE demo.int_test ALTER id TYPE varint;
cassandra#cqlsh:demo> INSERT INTO int_test (id, name) VALUES ( 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999, 'abcd');
cassandra#cqlsh:demo> SELECT * FROM int_test ;
id | name
------------------------------------------------------------------------------------------------------------------------------+---------
215478936541111 | abc
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999 | abcd
(2 rows)
cassandra#cqlsh:demo>

SQLite Help: Need the date field in one table separated into three fields in another table

I am using SQLite3 on a Linux workstation. In one table (A), there is a date field (YYYY-MM-DD) and in the other table (B), I need that same date to be separated into three fields (day, month, year).
CREATE TABLE A (
ID char(6),
Date date
);
CREATE TABLE B (
ID char(6),
Date_Day int(2),
Date_Month int(2),
Date_Year int(4)
);
INSERT INTO A (ID, Date) VALUES (1,'2016-07-13');
INSERT INTO A (ID, Date) VALUES (2,'2013-06-01');
INSERT INTO A (ID, Date) VALUES (3,'2015-05-20');
INSERT INTO B (ID) VALUES (1);
INSERT INTO B (ID) VALUES (2);
INSERT INTO B (ID) VALUES (3);
The following script produces an "Unable to get property 'substring' of undefined or null reference" error. If I remove the INSERT INTO and just run the SELECT portion of the following script, I get NULL values, so either way it's not working.
INSERT INTO B (
Date_Day,
Date_Month,
Date_Year
)
SELECT ID,strftime('%d',Date),
strftime('%m',Date),
strftime('%Y',Date)
FROM A
WHERE A.ID = B.ID;
Any suggestions would be appreciated. TIA
Your INSERT INTO B with the following SELECT referencing B.ID does not make sense; what you probably want to do is UPDATE B instead (since you already inserted the rows with corresponding ids beforehand).
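For example, a correlated UPDATE along these lines (a sketch against the tables from the question) fills the rows already inserted into B:
UPDATE B
SET Date_Day   = (SELECT strftime('%d', Date) FROM A WHERE A.ID = B.ID),
    Date_Month = (SELECT strftime('%m', Date) FROM A WHERE A.ID = B.ID),
    Date_Year  = (SELECT strftime('%Y', Date) FROM A WHERE A.ID = B.ID);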
Instead of doing this manually, though, I would suggest you use a trigger on insert into A to insert the corresponding values into B. Actually, B is completely redundant so I would question whether or not it actually makes sense to use B at all -- it might be better to calculate the values in B on the fly when you need them (or define B as a view etc.).
Anyway, if you really need B, like I said, you could use a trigger like this:
> CREATE TABLE A (id INTEGER PRIMARY KEY, date);
> CREATE TABLE B (
~ id INTEGER PRIMARY KEY REFERENCES A ON DELETE CASCADE,
~ day,
~ month,
~ year
~ );
> CREATE TRIGGER insert_a
~ AFTER INSERT ON A
~ BEGIN
~ INSERT INTO B (id, day, month, year)
~ VALUES (
~ NEW.id,
~ strftime("%d", NEW.date),
~ strftime("%m", NEW.date),
~ strftime("%Y", NEW.date)
~ );
~ END;
> INSERT INTO A (date) VALUES ('2016-07-13');
> select * from A;
id date
---------- ----------
1 2016-07-13
> select * from B;
id day month year
---------- ---------- ---------- ----------
1 13 07 2016
If you ever update values in A you'd also need to define a trigger for updates.
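A possible sketch of that update trigger, mirroring the insert trigger above:
CREATE TRIGGER update_a
AFTER UPDATE OF date ON A
BEGIN
    UPDATE B
    SET day   = strftime('%d', NEW.date),
        month = strftime('%m', NEW.date),
        year  = strftime('%Y', NEW.date)
    WHERE id = NEW.id;
END;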
Assuming the date string always has the format yyyy-mm-dd, you can just use this statement:
INSERT INTO B (ID, Date_Year, Date_Month, Date_Day)
SELECT ID, SUBSTR(date,1,4)*1, SUBSTR(date,6,2)*1, SUBSTR(date,9,2)*1 FROM A;
The substr function is documented here:
SQL As Understood By SQLite / Core Functions
The multiplication by 1 forces SQLite to convert the strings to integer values.
