How to add a default for datetime datatype in memsql - singlestore

I want to create a table in MemSQL with one of the columns as DATETIME, whose value will be the current date-time. I tried giving now(), current_time(), and current_date() as the default value, but nothing works:
singlestore> create table y (col2 datetime default now());
ERROR 1067 (42000): Invalid default value for 'col2'
OR
singlestore> create table y (col2 datetime default current_timestamp());
ERROR 1067 (42000): Invalid default value for 'col2'
OR
singlestore> create table y (col2 datetime default current_date());
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'current_date())' at line 1
Server version: 5.5.58 MemSQL source distribution (compatible; MySQL Enterprise & MySQL Commercial)
Thank you.

That is the right syntax. Which version of SingleStoreDB are you using?
singlestore [test]> create table y (col2 datetime default now());
Query OK, 0 rows affected (1.691 sec)
edit: I believe this requires at least version 7.0 of SingleStoreDB. Older versions only support DEFAULT NOW() on TIMESTAMP columns, not DATETIME.
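On an older version, a common workaround is to supply the timestamp from the client instead of relying on a column default. A minimal sketch, assuming a DB-API style cursor (e.g. from pymysql; the connection itself is not shown, and the `execute` callable stands in for `cursor.execute`):

```python
from datetime import datetime, timezone

def insert_with_client_time(execute, table="y"):
    """Insert a row, supplying the current UTC time explicitly instead of
    relying on DEFAULT NOW() (which older versions reject on DATETIME)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    sql = f"INSERT INTO {table} (col2) VALUES (%s)"
    execute(sql, (now,))  # parameterized, so the value is quoted safely
    return now
```

This keeps the column a plain DATETIME while still getting "current time" semantics on every insert.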

Related

How to typecast timestamp_ntz to CST format using Spark/PySpark while writing to Snowflake

Snowflake has three timestamp types:
TIMESTAMP_NTZ
TIMESTAMP_LTZ
TIMESTAMP_TZ
When writing a timestamp to a Snowflake table, it is stored as TIMESTAMP_NTZ by default.
How can Snowflake store the timestamp in the CST timezone when writing to a Snowflake table?
First, it's important to know which timezone has been set as the default for your account/session:
SHOW PARAMETERS LIKE 'TIMEZONE';
Then change the default for your session to CST
ALTER SESSION SET TIMEZONE = 'America/Chicago';
Thereafter, any select of CURRENT_TIMESTAMP will provide the data in the right timezone:
SELECT CURRENT_TIMESTAMP;
This is a great article for reference:
Snowflake timestamp datatype ref
Assuming you have control over the precise column type in your table, I found that TIMESTAMP_TZ is how you want to define it. Here's a working example of everything I did:
alter session set timezone = 'America/Los_Angeles';
create or replace table ts_test(rn number, ts timestamp_tz);
insert into ts_test values(1, current_timestamp());
insert into ts_test values(2, '2019-12-10 07:50:00 -06:00');
insert into ts_test values(3, CONVERT_TIMEZONE('America/Chicago', CURRENT_TIMESTAMP()));
select * from ts_test;
If the timestamp is being generated in code, make sure you include the UTC offset when inserting (rn 2). If you're using current_timestamp(), which here is LA time, make sure you convert to CST (rn 3).
If the table is being generated and you don't control the timezone default, then issue this first:
alter session set timestamp_type_mapping = timestamp_tz;
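If the timestamp literal is built in application code, the offset logic above can be sketched in Python. This uses a fixed UTC-6 offset for CST for simplicity; real Central time alternates between -6 and -5 with DST, so zoneinfo's 'America/Chicago' would be more accurate:

```python
from datetime import datetime, timedelta, timezone

CST = timezone(timedelta(hours=-6))  # fixed offset; ignores DST

def to_cst_literal(ts: datetime) -> str:
    """Format an aware datetime as a literal with an explicit UTC offset,
    suitable for inserting into a TIMESTAMP_TZ column."""
    return ts.astimezone(CST).strftime("%Y-%m-%d %H:%M:%S %z")

ts = datetime(2019, 12, 10, 13, 50, tzinfo=timezone.utc)
print(to_cst_literal(ts))  # 2019-12-10 07:50:00 -0600
```

The output carries its own offset, so it is unambiguous regardless of the session timezone, just like the rn 2 insert above.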

How can we write pandas dataframe to a Netezza Database directly using pyodbc?

I have a Netezza database on a remote server and I am trying to write to it using pyodbc.
The connection works while reading from the database. However, when trying to write, I am not able to write to the Netezza database. It shows the following error:
"Error: ('HY000', '[HY000] ERROR: Column 4 : Field cannot contain null values (46) (SQLExecDirectW)')"
On further inspecting Column 4, I found no null values in the data.
The snippet of code I am using to write to the database is as follows:
for row in Full_Text_All.itertuples():
    srows = str(row[1:]).strip("()")
    query2 = "insert into MERGED_SOURCES values(" + srows + ")"
where,
Full_Text_All is the name of the dataframe
MERGED_SOURCES is the name of the table.
It might be that Column 4 was defined as NOT NULL when the table was created.
If you have access to the table's DDL, you should be able to check this.
If the NOT NULL option was specified for Column 4, I suggest you double-check the data you are trying to insert: the values corresponding to Column 4 must not be null.
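One way to find the offending rows before inserting is to screen the data for None values (a pandas NaN becomes NULL on insert) and to use a parameterized query rather than string concatenation. A sketch with plain tuples standing in for the dataframe rows; `conn` is a hypothetical pyodbc connection:

```python
rows = [(1, "a", "b", "x"), (2, "c", "d", None)]  # sample data

def null_columns(row):
    """Return the 1-based indices of columns that are None."""
    return [i + 1 for i, v in enumerate(row) if v is None]

bad = [(r, null_columns(r)) for r in rows if null_columns(r)]
print(bad)  # [((2, 'c', 'd', None), [4])] -- column 4 is null, matching the error

clean = [r for r in rows if not null_columns(r)]
query = "insert into MERGED_SOURCES values (?, ?, ?, ?)"
# cursor = conn.cursor()
# cursor.executemany(query, clean)
```

The `?` placeholders also avoid the quoting problems of building the VALUES list by hand.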

Cassandra Timestamp behavior with Select query

I have a column "postingdate" with datatype timestamp in Cassandra. I am using Spring Data Cassandra to save the current date/time in this column when a posting happens (Instant.now()). This inserts the date/time in UTC.
I have to select records posted on "2018-11-06". The table has one record posted on this date, and the postingdate column shows it as "2018-11-07 04:25:24+0000" in UTC.
I am running following query -
select * from mytable where id='5' and postingdate >=
'2018-11-06 00:00:00' and postingdate <= '2018-11-06 23:59:59';
Running this query in the DevCenter console (or cqlsh) gives me the same results irrespective of timezone. I tried it in PST as well as IST and got the same result. Is Cassandra doing a PST -> UTC or IST -> UTC conversion before executing the query? If so, how?
Per documentation:
When timezone is excluded, it's set to the client or coordinator timezone.
You can configure default timezone for CQLSH either by setting the TZ environment variable, or by specifying the timezone parameter in the cqlshrc configuration file.
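The effect is easy to reproduce outside Cassandra: the same timestamp literal, interpreted in different client timezones, denotes different UTC instants. A sketch using fixed offsets (PST = UTC-8, IST = UTC+5:30) rather than the full tz database:

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8))
IST = timezone(timedelta(hours=5, minutes=30))

naive = datetime.strptime("2018-11-06 23:59:59", "%Y-%m-%d %H:%M:%S")

# The same literal maps to different UTC instants depending on which
# timezone the client applies before sending the query.
as_pst = naive.replace(tzinfo=PST).astimezone(timezone.utc)
as_ist = naive.replace(tzinfo=IST).astimezone(timezone.utc)
print(as_pst)  # 2018-11-07 07:59:59+00:00
print(as_ist)  # 2018-11-06 18:29:59+00:00
```

So if two clients see identical results for the same naive range, they are most likely applying the same timezone (e.g. both defaulting to UTC), not converting per-locale.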

Cassandra timeuuid to datetime

I want to get a datetime from a Cassandra timeuuid using PHP.
I tried:
$timeuuid = 'ebadad30-d625-11e5-bab2-b9c665f5f7cd';
$date = new Cassandra\Timeuuid::Time($timeuuid);
You can use the built-in toDate() function in Cassandra.
For example, SELECT toDate(columTimeUUID) FROM YourTable will give you a date formatted as YYYY-MM-DD.
Note that this works only in Cassandra 2.2 and later. To get a full timestamp instead of a date, use:
SELECT toTimestamp(columTimeUUID) FROM YourTable
For more details visit : https://docs.datastax.com/en/cql/3.3/cql/cql_reference/timeuuid_functions_r.html
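If you would rather convert in application code than in CQL, the timestamp can be extracted from any version-1 UUID directly; a Python sketch of the arithmetic (PHP's DateTime could do the same):

```python
import uuid
from datetime import datetime, timedelta, timezone

def timeuuid_to_datetime(s: str) -> datetime:
    """Convert a version-1 (time-based) UUID to a UTC datetime.
    UUID.time counts 100-ns intervals since 1582-10-15 (the Gregorian epoch)."""
    u = uuid.UUID(s)
    epoch = datetime(1582, 10, 15, tzinfo=timezone.utc)
    return epoch + timedelta(microseconds=u.time // 10)

print(timeuuid_to_datetime("ebadad30-d625-11e5-bab2-b9c665f5f7cd"))
# 2016-02-18 09:56:46.083000+00:00
```

This is the same conversion toTimestamp() performs server-side, so the two should agree to millisecond precision.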

custom cassandra / cqlsh time_format

System:
CentOS 6.7 x86_64
cqlsh 5.0.1 | Cassandra 2.2.1 | CQL spec 3.3.0
I'm having a problem inserting (COPY from a CSV file) a timestamp field with the format '%d-%m-%Y %H:%M:%S'.
This format is not supported by default, so I created it manually in the ~/.cassandra/cqlshrc file:
[ui]
time_format = %d-%m-%Y %H:%M:%S
and restarted cqlsh, but I'm still unable to insert:
system#cqlsh> insert into nir.nir_test (END_DATE) values ('01-09-2015 18:55:50');
InvalidRequest: code=2200 [Invalid query] message="Unable to coerce '01-09-2015 18:55:50' to a formatted date (long)"
Any advice?
The [ui] configuration in cqlshrc only affects the output format. It gets applied when you query a timestamp column. For example:
select END_DATE from nir.nir_test;
Might output:
end_date
---------------------
01-09-2015 18:55:50
But for insertion, you need to use one of the specified formats. For example:
insert into nir.nir_test (END_DATE) values ('2015-09-01 18:55:50');
This probably means that you'll need to convert the timestamps in the CSV file before trying to insert them.
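That pre-conversion step can be sketched as a small script that rewrites each timestamp from '%d-%m-%Y %H:%M:%S' to the '%Y-%m-%d %H:%M:%S' form cqlsh accepts (the column index 0 and the file names are assumptions; adjust for your CSV layout):

```python
import csv
from datetime import datetime

def convert_row(row, col=0):
    """Rewrite the timestamp in row[col] from DD-MM-YYYY to YYYY-MM-DD form."""
    ts = datetime.strptime(row[col], "%d-%m-%Y %H:%M:%S")
    row[col] = ts.strftime("%Y-%m-%d %H:%M:%S")
    return row

print(convert_row(["01-09-2015 18:55:50"]))  # ['2015-09-01 18:55:50']

# Typical usage over a whole file (paths are placeholders):
# with open("in.csv") as f, open("out.csv", "w", newline="") as g:
#     writer = csv.writer(g)
#     for row in csv.reader(f):
#         writer.writerow(convert_row(row))
```

Round-tripping through strptime/strftime also catches malformed dates early, instead of failing row by row inside COPY.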