The table:
CREATE TABLE configuration(Key STRING, Value STRING, PRIMARY KEY (Key) );
Here is what I tried:
insert into configuration(Key,Value) values(42,cast('0042' as text));
Here is the dump:
INSERT INTO "configuration" VALUES(42,42);
What I wanted:
INSERT INTO "configuration" VALUES(42,'0042');
If you create the table with:
CREATE TABLE configuration(Key STRING, Value TEXT, PRIMARY KEY (Key) );
(there is no storage penalty for doing this with SQLite) then you'll get the leading zeroes preserved, even if you use the very simplest form of INSERT. This is because STRING is not a real SQLite type name, so a column declared STRING falls back to NUMERIC affinity, which converts '0042' to the integer 42; a column declared TEXT has TEXT affinity and stores the string unchanged.
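You can see the affinity difference directly in the sqlite3 shell (throwaway tables for illustration):

-- STRING is not a recognized type name, so the column gets NUMERIC affinity
CREATE TABLE t_string(v STRING);
-- TEXT is a recognized type name, so the column gets TEXT affinity
CREATE TABLE t_text(v TEXT);
INSERT INTO t_string VALUES('0042');
INSERT INTO t_text   VALUES('0042');
SELECT typeof(v), v FROM t_string;  -- integer|42
SELECT typeof(v), v FROM t_text;    -- text|0042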
I'm not sure what you are trying to do, but essentially the insert should just be:
INSERT INTO configuration VALUES(42,'0042');
From what I've seen so far, it seems impossible with Cassandra, but I thought I'd give it a shot:
How can I select a value of a json property, parsed from a json object string, and use it as part of an update / insert statement in Cassandra?
For example, I'm given the json object:
{
  "id": 123,
  "some_string": "hello there",
  "mytimestamp": "2019-09-02T22:02:24.355Z"
}
And this is the table definition:
CREATE TABLE IF NOT EXISTS myspace.mytable (
id text,
data blob,
PRIMARY KEY (id)
);
The thing to know at this point is that, for reasons outside this question, the data field is set to the whole JSON string. In other words, there is no 1:1 mapping between the given JSON and the table columns; the data column holds the entire JSON object as a kind of blob value.
... Is it possible to parse the timestamp value of the given json object as part of an insert statement?
Pseudo-code example of what I mean, which obviously doesn't work ($myJson is a placeholder for the JSON object string above):
INSERT INTO myspace.mytable (id, data)
VALUES (123, $myJson)
USING timestamp toTimeStamp($myJson.mytimestamp)
The quick answer is no, it's not possible to do that with CQL.
The norm is to parse the elements of the JSON object within your application to extract the corresponding values to construct the CQL statement.
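For illustration, a minimal Node.js sketch of that approach could look like this (assuming client is a connected cassandra-driver Client; the variable names are made up):

// Parse the JSON client-side, then derive the write time from it
const json = '{"id":"123","some_string":"hello there","mytimestamp":"2019-09-02T22:02:24.355Z"}';
const obj = JSON.parse(json);
// USING TIMESTAMP expects microseconds since the epoch
const writeTime = new Date(obj.mytimestamp).getTime() * 1000;
client.execute(
  'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
  [obj.id, Buffer.from(json, 'utf8'), writeTime],
  { prepare: true });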
As a side note, I would discourage using the CQL blob type due to possible performance issues should the blob size exceed 1 MB. If it's JSON, consider storing it as the CQL text type instead. Cheers!
Worth mentioning that CQL can do a limited amount of JSON parsing on its own, albeit not as detailed as what you're asking for here (e.g. the USING timestamp part).
But something like this works:
> CREATE TABLE myjsontable (
... id TEXT,
... some_string TEXT,
... PRIMARY KEY (id));
> INSERT INTO myjsontable JSON '{"id":"123","some_string":"hello there"}';
> SELECT * FROM myjsontable WHERE id='123';
id | some_string
-----+-------------
123 | hello there
(1 rows)
In your case you'd either have to redesign the table or the JSON payload so that they match. But as Erick and Cédrick have mentioned, the USING timestamp part would have to happen client-side.
What you detailed is doable with Cassandra.
Timestamp
To insert a timestamp in a query, it should be formatted as an ISO 8601 string; the accepted formats are listed in the CQL documentation. In your code, you may have to convert the incoming value to the expected type and format.
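For example (illustrative table and column names, not from your schema):

-- timestamp literals are ISO 8601 strings
INSERT INTO myspace.sample_events (id, created_at)
VALUES ('123', '2019-09-02T22:02:24.355Z');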
Blob:
A blob is meant to store binary data, so it cannot be put ad hoc as a string into a CQL query. (You can use the text type instead if you are willing to encode the data, e.g. as base64.)
When you need to insert binary data you need to provide the proper type as well. For instance, if you are working with JavaScript you need to provide a Buffer, as described in the driver documentation. Then, when you execute your query, you externalize your parameters:
const sampleId = '123';                                  // the id column is text
const sampleData = Buffer.from('hello world', 'utf8');   // blob values are passed as Buffers
const sampleTimestamp = Date.now() * 1000;               // USING TIMESTAMP takes microseconds since the epoch
client.execute(
  'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
  [sampleId, sampleData, sampleTimestamp],
  { prepare: true });
I have two user-defined types in Cassandra. The first one uses the second one as a frozen object inside it.
CREATE TYPE my_keyspace.test (
testid text,
testdate text,
testdata frozen<testdata>
);
CREATE TYPE my_keyspace.testdata (
subject text,
metadata text
);
Now my requirement is to convert this single object to a list of UDTs, something like this:
CREATE TYPE my_keyspace.test (
testid text,
testdate text,
testdata list<frozen<testdata>>
);
Is it possible to change a single object to a list of objects in Cassandra? What options are available to update the schema?
The only way to do it is to add another field with the required type using ALTER TYPE, start using this new field, and migrate the existing data with some code. Cassandra doesn't allow changing the type of existing fields, and you also can't drop a field from a UDT. So your type would end up looking like this:
CREATE TYPE my_keyspace.test (
  testid text,
  testdate text,
  testdata frozen<testdata>,
  testdata_lst list<frozen<testdata>>
);
Existing data can be migrated into the list field, and the old field then set to null to free the space.
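For reference, the ALTER statement that adds such a field would be (testdata_lst being the new field name proposed above):

ALTER TYPE my_keyspace.test ADD testdata_lst list<frozen<testdata>>;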
I want to use Node.js with Sequelize and SQLite. I have the following model for users:
const User = sequelize.define('user', {
rowid: {
type: 'INTEGER',
primaryKey: true,
},
// other properties
});
If I now execute const newUser = await User.create({ /* properties */ }), I would expect to be able to access the ID of the new user with newUser.rowid. But this is not the case; it only works if I add autoIncrement: true to the specification. Following https://www.sqlite.org/autoinc.html, I don't want to do this. Are there any other possibilities?
Edit
As it turns out, this is only possible by creating the table without autoIncrement: true and only afterward adding it to the column definition. The much more practical way is probably to just use autoIncrement; the performance decrease won't matter for most small applications.
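For completeness, a sketch of the pragmatic variant (assuming DataTypes is imported from sequelize):

const { DataTypes } = require('sequelize');

const User = sequelize.define('user', {
  rowid: {
    type: DataTypes.INTEGER,
    primaryKey: true,
    autoIncrement: true, // with the sqlite dialect this becomes INTEGER PRIMARY KEY AUTOINCREMENT
  },
  // other properties
});
// after: const newUser = await User.create({ ... })
// the generated key is available as newUser.rowid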
You should not have to use autoincrement to access rowid in the User table. I would expect to see it as User.rowid, not newUser.rowid as in the example, since the table name is (apparently) User.
Also from the sqlite doc:
if a rowid table has a primary key that consists of a single column
and the declared type of that column is "INTEGER" in any mixture of
upper and lower case, then the column becomes an alias for the rowid.
Such a column is usually referred to as an "integer primary key". A
PRIMARY KEY column only becomes an integer primary key if the declared
type name is exactly "INTEGER".
And finally, you might consider a different name than rowid for the PK, since sqlite already has a rowid.
Except for WITHOUT ROWID tables, all rows within SQLite tables have a 64-bit signed integer key that uniquely identifies the row within its table. This integer is usually called the "rowid". The rowid value can be accessed using one of the special case-independent names "rowid", "oid", or "_rowid_" in place of a column name. If a table contains a user defined column named "rowid", "oid" or "_rowid_", then that name always refers to the explicitly declared column and cannot be used to retrieve the integer rowid value.
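A quick sqlite3 sketch of the alias behaviour described in the quoted docs (illustrative table):

CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users(name) VALUES('alice');
SELECT id, rowid FROM users;  -- 1|1: id is an alias for the rowid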
I have the following requirement for my dataset; I need to understand what datatype I should use and how to save my data accordingly:
CREATE TABLE events (
id text,
evntoverlap map<text, map<timestamp,int>>,
PRIMARY KEY (id)
)
evntoverlap = {
  'Dig1': {'2017-10-09 04:10:05': 0},
  'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0},
  'Dig3': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0, '2017-10-09 04:11:05': 0}
}
This gives an error:
Error from server: code=2200 [Invalid query] message="Non-frozen collections are not allowed inside collections: map<text, map<timestamp, int>>"
How should I store this type of data in a single column? Please suggest a datatype and an insert command for the same.
Thanks,
This is a limitation of Cassandra: you can't nest a collection (or UDT) inside a collection without making it frozen. So you need to freeze one of the collections, either the nested one:
CREATE TABLE events (
id text,
evntoverlap map<text, frozen<map<timestamp,int>>>,
PRIMARY KEY (id)
);
or the top-level one:
CREATE TABLE events (
id text,
evntoverlap frozen<map<text, map<timestamp,int>>>,
PRIMARY KEY (id)
);
See the documentation for more details.
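With the first (nested-frozen) variant, an insert for your sample data would look something like this:

INSERT INTO events (id, evntoverlap)
VALUES ('event1', {
  'Dig1': {'2017-10-09 04:10:05': 0},
  'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0}
});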
CQL collections are limited to 64KB, and by putting things like maps in maps you might push that limit. Especially with frozen maps, you end up deserializing the entire map, modifying it, and re-inserting it. You might be better off with:
CREATE TABLE events (
  id text,
  evnt_key text,
  value map<timestamp, int>,
  PRIMARY KEY ((id), evnt_key)
)
Or even:
CREATE TABLE events (
  id text,
  evnt_key text,
  evnt_time timestamp,
  value int,
  PRIMARY KEY ((id), evnt_key, evnt_time)
)
This would be more efficient and safer, while giving additional benefits like being able to order the evnt_time values in ascending or descending order.
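For instance, with the second schema you could read one key's events newest-first (hypothetical values):

SELECT evnt_time, value
FROM events
WHERE id = '123' AND evnt_key = 'Dig2'
ORDER BY evnt_time DESC;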
I'd like to switch an existing system that imports data from CSV files into a PostgreSQL 9.5 database over to a more efficient approach.
I'd like to use the COPY statement because of its good performance. The problem is that I need to have one field populated that is not in the CSV file.
Is there a way to have the COPY statement add a static field to all the rows inserted?
The perfect solution would look like this:
COPY data(field1, field2, field3='Account-005')
FROM '/tmp/Account-005.csv'
WITH DELIMITER ',' CSV HEADER;
Do you know a way to have that field populated in every row?
My server is running Node.js, so I'm open to any cost-efficient solution to complete the files using Node before COPYing them.
Use a temp table to import into. This allows you to:
add/remove/update columns
add extra literal data
delete or ignore records (such as duplicates)
before inserting the new records into the actual table.
-- target table
CREATE TABLE data
( id SERIAL PRIMARY KEY
, batch_name varchar NOT NULL
, remote_key varchar NOT NULL
, payload varchar
, UNIQUE (batch_name, remote_key)
-- or::
-- , UNIQUE (remote_key)
);
-- temp table
CREATE TEMP TABLE temp_data
( remote_key varchar -- PRIMARY KEY
, payload varchar
);
COPY temp_data(remote_key, payload)
FROM '/tmp/Account-005.csv'
WITH DELIMITER ',' CSV HEADER;
-- The actual insert
-- (you could also filter out or handle duplicates here)
INSERT INTO data(batch_name, remote_key, payload)
SELECT 'Account-005', t.remote_key, t.payload
FROM temp_data t
;
BTW, it is possible to automate the above: put it into a function (or maybe a prepared statement), using the filename/literal as an argument; a rough sketch follows.
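A hypothetical plpgsql sketch of that idea (function and argument names are made up; COPY FROM a file still requires the appropriate server-side file permissions):

CREATE OR REPLACE FUNCTION import_batch(_batch varchar, _path varchar)
RETURNS void AS $$
BEGIN
  -- reuse the temp table across calls within the session
  CREATE TEMP TABLE IF NOT EXISTS temp_data(remote_key varchar, payload varchar);
  TRUNCATE temp_data;
  -- COPY does not take bind parameters, so the filename is spliced in safely with %L
  EXECUTE format('COPY temp_data(remote_key, payload) FROM %L WITH DELIMITER '','' CSV HEADER', _path);
  INSERT INTO data(batch_name, remote_key, payload)
  SELECT _batch, t.remote_key, t.payload
  FROM temp_data t;
END;
$$ LANGUAGE plpgsql;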
Set a default for the column:
alter table data
alter column field3 set default 'Account-005'
Do not mention it in the COPY command:
COPY data(field1, field2) FROM...