sqlite3.OperationalError: default DATE value of column is not constant - python-3.x

I am creating a sqlite3 table that accepts records from a server. There should be one date/text column with a datetime DEFAULT value, so I can sync any record whose timestamp differs from the server's record.
I found a solution in another question on this forum. The problem is that it gives me the following error on executing the table creation script: sqlite3.OperationalError: default value of column [updated_at] is not constant.
The table is created like this:
cur.execute('CREATE TABLE IF NOT EXISTS emp_tb(\
emp_id INTEGER PRIMARY KEY NOT NULL,\
emp_names TEXT NOT NULL,\
emp_number TEXT NOT NULL UNIQUE,\
ent_id INTEGER NOT NULL,\
active INTEGER NOT NULL DEFAULT "0",\
updated_at TEXT NULL DEFAULT (datetime("now", "localtime")),\
syncstatus INTEGER NOT NULL DEFAULT "0")')
Should I create a trigger? Or how else can I get a default value in the format "YYYY-MM-DD HH:MM:SS.SSS" in case the update misses a spot?

Use single quotes (') for the datetime options. As mentioned in the comments, they will have to be escaped (because the query is delimited with single quotes).
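For example, a minimal sketch of the corrected DDL (inside the single-quoted Python string the inner quotes would be escaped as \', or the Python string could be delimited with double quotes instead):
CREATE TABLE IF NOT EXISTS emp_tb(
emp_id INTEGER PRIMARY KEY NOT NULL,
emp_names TEXT NOT NULL,
emp_number TEXT NOT NULL UNIQUE,
ent_id INTEGER NOT NULL,
active INTEGER NOT NULL DEFAULT 0,
updated_at TEXT NULL DEFAULT (datetime('now', 'localtime')),
syncstatus INTEGER NOT NULL DEFAULT 0);
SQLite prefers to treat double-quoted tokens as identifiers, so datetime("now", "localtime") reads "now" as a column reference, which is what makes the original DEFAULT expression non-constant; the numeric defaults are written unquoted for the same reason.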

Related

Postgres column contains string values which need to be pivoted

The details column has string data in the following format (enclosed in braces as shown) -
{"id":"350876","Time":"Aug 22 2022, 12:41:57 PM" ,"Session":"NO","teamPercentage":89}
How do I add these id, Time, Session, teamPercentage values as new columns in the same row?
My thought process:
Do I need to do a pivot of some sort? I don't know if a pivot can be done with string values, and more specifically how to detect id, Time, Session, etc. and pivot these values.
This seems to be a JSON value, so just treat it as one. Then you can use the JSON functions to extract the values for the keys:
select details::json ->> 'id' as id,
details::json ->> 'Time' as time,
details::json ->> 'Session' as session,
details::json ->> 'teamPercentage' as percentage
from the_table;
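Note that the ->> operator returns text; if teamPercentage should be numeric, cast the extracted value, e.g.:
select (details::json ->> 'teamPercentage')::int as percentage
from the_table;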

How to store double colon values in an Oracle database table

I have an Excel file which I am trying to import into an Oracle database table.
Some of the values in Excel are colon-separated, for example 14:39.5. What data type should I give the column in the Oracle table to store this value?
Currently I have given it the VARCHAR data type, and the import throws this error:
Conversion error! Value: "00:12:01.615518000" to data type: "Number". Row ignored! Value is '00:12:01.615518000'. Cannot be converted to a decimal number object. Valid format: 'Unformatted'
You can store it as an INTERVAL DAY(0) TO SECOND(9) data type:
CREATE TABLE table_name (
time INTERVAL DAY(0) TO SECOND(9)
);
Then you can use TO_DSINTERVAL passing your value with '0 ' prepended to the start:
INSERT INTO table_name (time)
VALUES ( TO_DSINTERVAL('0 ' || '00:12:01.615518000') );
If it is part of a date/time stamp then you could store it as a DATE or TIMESTAMP, provided you can add the date component; Oracle does not have a standalone TIME data type.
If you can't add a date component to it, then assuming it is a time interval you could convert it to seconds or microseconds (lose the colons) and store it as a NUMBER.
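For example, a minimal sketch of the seconds conversion (assuming a hypothetical table timings with a NUMBER column duration_seconds):
INSERT INTO timings (duration_seconds)
VALUES ( EXTRACT(HOUR FROM TO_DSINTERVAL('0 00:12:01.615518000')) * 3600
+ EXTRACT(MINUTE FROM TO_DSINTERVAL('0 00:12:01.615518000')) * 60
+ EXTRACT(SECOND FROM TO_DSINTERVAL('0 00:12:01.615518000')) );
Here EXTRACT(SECOND FROM ...) keeps the fractional seconds, so nothing is lost apart from the formatting.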
If you want to maintain the exact formatting as shown, your only option is to store it as text using VARCHAR2 or something similar.

SQLite returns int instead of real

I'm working on a website in Node.js and am using SQLite as a database for the first time.
I want to use REAL for some form data, but I noticed that every REAL in my database is returned as an integer once the query is made.
To visualize the database I am using DB Browser, and I checked that the columns are defined as REAL, which they are.
If I try to query a value stored as 0.1 in my DB I get this:
sqlite> select step_variable
from variables
where id=38;
0.0
After trying the suggested command TYPEOF(step_variable), it returned:
0.0|real
In the SQLite CREATE TABLE command, one defines a data type affinity, not a data type. SQLite supports the following five column affinities: TEXT, NUMERIC, INTEGER, REAL, NONE.
Thus the data type you specify when creating a table does not enforce a particular storage type. You can supply any data type you want, or even omit it entirely.
CREATE TABLE table1(
column1 ABC,
column2 Others,
column3 WHATEVER);
CREATE TABLE table2(column1, column2, column3);
Populate tables:
INSERT INTO table1 VALUES( 1, 'my text', 123.45);
INSERT INTO table2 VALUES( 1, 'my text', 123.45);
Now let us check what SQLite made out of it:
SELECT column1, TYPEOF(column1) from table1
SELECT column2, TYPEOF(column2) from table1
SELECT column3, TYPEOF(column3) from table1
With the results:
column     TYPEOF(column)
---------  --------------
1          INTEGER
my text    TEXT
123.45     REAL
When you go through a query result, e.g. by using sqlite3_step, you can use the sqlite3_column_type function to confirm the column type - unless you know the result anyway and simply cast the result to the expected data type.
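If you need a guaranteed REAL regardless of how a value was stored, a cast in the query itself is a simple option (a sketch against the asker's table):
SELECT CAST(step_variable AS REAL) AS step_variable
FROM variables
WHERE id = 38;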
Martin
I found the solution: it was simply that I didn't save my file after modifying it.

Boolean in Cassandra

I see an issue with the Cassandra boolean data type.
I have a table with one field as boolean:
CREATE TABLE keyspace.issuetable (
"partitionId" text,
"name" text,
"field" text,
"testboolean" boolean,
PRIMARY KEY ("partitionId", "name"));
Now when I insert into the table, I don't include the boolean 'testboolean':
INSERT into keyspace.issuetable("partitionId", "name", "field")
VALUES ('testpartition', 'cluster1_name','testfiled');
Issue:
1) If the boolean entry (say the testboolean entry) is not included in the INSERT query, then as per the data type it should be 'false', but it is added as null:
SELECT * FROM issuetable ;
partitionId | name | field | testboolean
---------------+---------------+-----------+-------------
testpartition | cluster1_name | testfiled | null
Could someone explain why? Also let me know how to solve this; I expect 'false', not 'null'.
Cassandra is not like the traditional SQL databases. It does not store rows in tables. The best way to think about Cassandra's data model is to imagine a sortedMap<rowKey, map<columnKey, value>>.
This means that any particular row is not required to have the same fields/columns as any other one. In your example the inserted row simply does not have a property named testboolean.
And no, you cannot set a default value for a column (or rather, you can do it only on the application side).
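So if you want false rather than null, write it explicitly at insert time (a sketch using the asker's table):
INSERT INTO keyspace.issuetable("partitionId", "name", "field", "testboolean")
VALUES ('testpartition', 'cluster1_name', 'testfiled', false);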

HIVE rendered timestamp column data as NULL

I am trying to create an external table using Hive. Below is the Hive query I ran:
create external table trips_raw
(
VendorID int,
tpep_pickup_datetime timestamp,
tpep_dropoff_datetime timestamp
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' location '/user/taxi_trips/';
When I looked at the output from the 'trips_raw' table created by the query above, I saw that both the 'tpep_pickup_datetime' and 'tpep_dropoff_datetime' columns are 'NULL' in all rows. I have seen other threads saying the reason is that the '1/1/2018 11:13:00 AM' timestamp format is not accepted by Hive, but the problem is that's the timestamp format I have in my csv source data.
I could specify those 2 timestamp columns as 'string' and Hive would render them correctly, but I still want those 2 columns to be of 'timestamp' type, so 'string' is not a viable option here.
I had also tried the following technique using recommendation from this site (https://community.hortonworks.com/questions/55266/hive-date-time-problem.html) but had no success:
Create the 'trips_raw' table using 'string' as type for the 2 timestamp columns. This allows the resulting table to render the timestamps correctly, albeit in 'string' type. The Hive command I used is shown below:
create external table trips_raw
(
VendorID int,
tpep_pickup_datetime string,
tpep_dropoff_datetime string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' location '/user/taxi_trips/';
When I look at the resulting table, the dates are rendered correctly, but as strings.
But as I mentioned earlier, I want the time columns to be of timestamp type, not string type. Therefore in the next 2 steps I tried to create a blank table and then insert the data from the table created in Step 1, converting the string to a timestamp this time.
Create an external blank table called 'trips_not_raw' using the following Hive commands:
create external table trips_not_raw
(VendorID int,
tpep_pickup_datetime timestamp,
tpep_dropoff_datetime timestamp
);
Insert data from 'trips_raw' table (which was mentioned earlier in this question), using the Hive commands below:
insert into table trips_not_raw
select vendorid,
from_unixtime(unix_timestamp(tpep_pickup_datetime, 'MM/dd/yyyy HH:mm:ss aa')) as tpep_pickup_datetime,
from_unixtime(unix_timestamp(tpep_dropoff_datetime, 'MM/dd/yyyy HH:mm:ss aa')) as tpep_dropoff_datetime
from trips_raw;
Doing this inserts the rows into the blank table 'trips_not_raw', but the results in the 2 timestamp columns still showed as 'Null'.
Is there a simple way to store the 2 time columns as 'timestamp' type and not 'string', but still be able to render them correctly in the output without seeing 'Null/None'?
I'm afraid you need to parse the timestamp column and then cast the string as a timestamp. For example:
select cast(regexp_replace('1/1/2018 11:13:00 AM', '(\\d{1,2})/(\\d{1,2})/(\\d{4})\\s(\\d{2}:\\d{2}:\\d{2}) \\w{2}', '$3-$1-$2 $4') as timestamp)
You can create and use a macro function for convenience, e.g.,
create temporary macro parse_date (ts string)
cast(regexp_replace(ts, '(\\d{1,2})/(\\d{1,2})/(\\d{4})\\s(\\d{2}:\\d{2}:\\d{2}) \\w{2}', '$3-$1-$2 $4') as timestamp);
then use it as follows
select parse_date('1/1/2018 11:13:00 AM');
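Applied to the insert from the question, the macro would look something like this (a sketch combining it with the asker's tables):
insert into table trips_not_raw
select vendorid,
parse_date(tpep_pickup_datetime) as tpep_pickup_datetime,
parse_date(tpep_dropoff_datetime) as tpep_dropoff_datetime
from trips_raw;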
