Convert Current Timestamp UTC to CET - presto

I'm trying to find a way to convert the current_timestamp into a different time zone.
The function current_timestamp in Simba Athena gives me the timestamp in UTC by default.
I would like to get this same timestamp in the CET timezone. Please note that I would like to automate this conversion in my query.
I tried the convert_timezone('CET', current_timestamp) function, but it's not recognized in AWS Athena.
In simpler words, I'm looking for a Presto substitute for the SQL Server function GETUTCDATE(), except that I would like to get the CET timestamp. Any suggestions?

current_timestamp in Trino (formerly known as Presto SQL) returns a timestamp with time zone value in the current session time zone. For instance, if the client is in the America/Los_Angeles time zone:
trino> select current_timestamp;
_col0
---------------------------------------------
2020-08-26 09:14:43.259 America/Los_Angeles
(1 row)
You can check the session's current time zone with current_timezone():
trino> select current_timezone();
_col0
---------------------
America/Los_Angeles
(1 row)
To convert a timestamp with time zone to another time zone, you can use the AT TIME ZONE syntax:
trino> select current_timestamp at time zone 'America/New_York';
_col0
------------------------------------------
2020-08-26 12:18:37.901 America/New_York
(1 row)
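For comparison, the same conversion can be sketched in Python with zoneinfo (the instant is pinned near the one in the example output above; zoneinfo requires Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timestamp with time zone, analogous to Trino's current_timestamp
# (here pinned to a fixed instant instead of "now").
ts_utc = datetime(2020, 8, 26, 16, 18, 37, tzinfo=timezone.utc)

# Equivalent of: select <ts> at time zone 'America/New_York'
# Same instant, New York wall clock (EDT, UTC-4, in August).
ts_ny = ts_utc.astimezone(ZoneInfo("America/New_York"))
print(ts_ny)  # 2020-08-26 12:18:37-04:00
```

As in Trino, the conversion changes only how the instant is displayed, not the instant itself.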

Related

How can I fetch timestamp data in my timezone?

I am using Cassandra 3.11.13 and I have a table with a timestamp column. My data are stored in the +0 timezone, i.e. 2022-10-14 07:51:00.000000+0000, but I am hosting in Kazakhstan (GMT+6).
I want to export certain rows for a certain period of time. When I export to CSV, I get a file with timezone +0.
I tried a query like select * from table_name where primary_key = 'smth' and timestamp > '2022-10-14T06:30:00+0600' and timestamp < '2022-10-14T23:59:59+0600', but it changed nothing.
Question is: how can I fetch timestamps in the correct timezone?
The CQL timestamp data type is encoded as the number of milliseconds since Unix epoch (Jan 1, 1970 00:00 GMT) so its value is encoded in UTC timezone. Clients also display timestamps with a UTC timezone by default.
If you want the data to be displayed in your timezone, you need to configure your app or client to a specific timezone. For example, you can configure cqlsh to use a different timezone by specifying it in the cqlshrc file:
;; Display timezone
timezone = Australia/Melbourne
You can find a sample copy of cqlshrc here. Note that you will need to install the pytz Python library to use different timezones with cqlsh.
For details, see Cassandra CQL shell. Cheers!
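To make the storage-vs-display split concrete, here is a small Python sketch (Asia/Almaty stands in for the asker's GMT+6 zone, which it was in 2022; zoneinfo requires Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# CQL timestamps are stored as milliseconds since the Unix epoch (UTC).
stored = datetime(2022, 10, 14, 7, 51, tzinfo=timezone.utc)
millis = int(stored.timestamp() * 1000)

# Display is purely a client-side concern: the same stored instant,
# rendered in the client's configured timezone.
local = datetime.fromtimestamp(millis / 1000, tz=ZoneInfo("Asia/Almaty"))
print(local)  # 2022-10-14 13:51:00+06:00
```

The stored value never changes; only the rendering timezone (the cqlshrc `timezone` setting above) does.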

How to typecast timestamp_ntz to CST format using Spark/PySpark while writing to Snowflake

As we know, Snowflake has three timestamp types:
TIMESTAMP_NTZ
TIMESTAMP_LTZ
TIMESTAMP_TZ
When writing a timestamp to a Snowflake table, it is taken as TIMESTAMP_NTZ by default.
How can Snowflake store the timestamp in the CST timezone when writing to the table?
First it's important to know what timezone has been set as the default for your account/session:
SHOW PARAMETERS LIKE 'TIMEZONE';
Then change the default for your session to CST (America/Chicago observes US Central time):
ALTER SESSION SET TIMEZONE = 'America/Chicago';
Thereafter, any SELECT of CURRENT_TIMESTAMP will return data in that timezone:
SELECT CURRENT_TIMESTAMP;
This is a great article for reference:
Snowflake timestamp datatype ref
Assuming you have control over the precise column type in your table, I found that TIMESTAMP_TZ is how you want to define it. Here's a working example of everything I did:
alter session set timezone = 'America/Los_Angeles';
create or replace table ts_test(rn number, ts timestamp_tz);
insert into ts_test values(1, current_timestamp());
insert into ts_test values(2, '2019-12-10 07:50:00 -06:00');
insert into ts_test values(3, CONVERT_TIMEZONE('America/Chicago', CURRENT_TIMESTAMP()));
select * from ts_test;
If the timestamp is being generated in code, then make sure you include the UTC offset when inserting (rn 2). If you're using current_timestamp(), which in this session returns Los Angeles time, then make sure you convert to CST (rn 3).
If the table is being generated and you don't control the timezone default, then issue this first:
alter session set timestamp_type_mapping = timestamp_tz;
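The key point in rows 2 and 3 above is that the UTC offset travels with the value, so the instant is unambiguous. A Python sketch of the same idea:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Parsing a literal with an explicit UTC offset, as in row 2 above.
ts = datetime.strptime("2019-12-10 07:50:00 -06:00", "%Y-%m-%d %H:%M:%S %z")
assert ts.utcoffset() == timedelta(hours=-6)

# Rough analogue of CONVERT_TIMEZONE('America/Chicago', ...): same
# instant, Chicago wall clock (CST, UTC-6, in December).
chicago = ts.astimezone(ZoneInfo("America/Chicago"))
print(chicago)  # 2019-12-10 07:50:00-06:00
```

Because the literal's offset already matches Chicago's December offset, the wall-clock time is unchanged; the conversion attaches the named zone.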

Cassandra Timestamp behavior with Select query

I have a column "postingdate" with datatype timestamp in Cassandra. I am using spring data Cassandra to save current date/time in this column when posting happens (Instant.now()). This is inserting date/time in UTC.
I have to select records which got posted on "2018-11-06". In table I have one record posted on this date and postingdate column is showing that as "2018-11-07 04:25:24+0000" in UTC.
I am running following query -
select * from mytable where id='5' and postingdate >=
'2018-11-06 00:00:00' and postingdate <= '2018-11-06 23:59:59';
Running this query in the DevCenter console (or cqlsh) gives me the same results irrespective of timezone. I tried it in PST as well as IST and got the same result. Is Cassandra doing a PST -> UTC or IST -> UTC conversion before executing the query? If yes, then how?
Per documentation:
When timezone is excluded, it's set to the client or coordinator timezone.
You can configure default timezone for CQLSH either by setting the TZ environment variable, or by specifying the timezone parameter in the cqlshrc configuration file.
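This is why the same query can match different rows from different clients: an offset-free literal is resolved against the client's timezone. A quick Python illustration of how one such literal becomes two different instants (America/Los_Angeles and Asia/Kolkata stand in for PST and IST):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timestamp literal without an offset is ambiguous: the same
# wall-clock string maps to different instants depending on the
# timezone assumed by the client.
naive = datetime(2018, 11, 6, 0, 0, 0)

as_pst = naive.replace(tzinfo=ZoneInfo("America/Los_Angeles"))
as_ist = naive.replace(tzinfo=ZoneInfo("Asia/Kolkata"))

print(as_pst.astimezone(timezone.utc))  # 2018-11-06 08:00:00+00:00
print(as_ist.astimezone(timezone.utc))  # 2018-11-05 18:30:00+00:00
```

Including an explicit offset in the literal, as in the question above, removes the ambiguity.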

Cassandra inserts timestamp in UTC time

I have JSON logs with timestamps (UTC time) in them. I map keys and values to Cassandra table columns and insert the record. However, Cassandra converts the already-UTC timestamps to UTC again by subtracting 5 hours from the timestamp. The local timezone here is GMT+5.
cqlsh> INSERT INTO myTable (id, time) VALUES ('abc123', '2018-01-12T12:32:31');
The time is already UTC, yet this still inserts a timestamp from 5 hours ago.
How can I resolve this?
If you're using cqlsh to insert data, then you can specify default timezone in the cqlshrc file using the timezone parameter (see default cqlshrc as example).
If you insert dates programmatically, then you need to convert your time into the corresponding type matching Cassandra's timestamp type (java.util.Date for Java, for example). In your case the change could be simple: just append Z to the timestamp string, as pointed out by Ralf.
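As a sketch of why the Z suffix fixes it, here is the same parse in Python (using the equivalent '+00:00' suffix, which older Python versions also accept in fromisoformat):

```python
from datetime import datetime

# Without a zone marker, the string is "naive" and the driver falls
# back to a local-timezone assumption (GMT+5 here, hence the 5-hour
# shift). Appending 'Z' / '+00:00' pins the string to UTC.
raw = "2018-01-12T12:32:31"
utc_ts = datetime.fromisoformat(raw + "+00:00")
print(utc_ts)  # 2018-01-12 12:32:31+00:00
```

The resulting value is offset-aware, so no client or coordinator timezone can change the instant it denotes.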

Cassandra parsing date

I have a table created using following script:
CREATE TABLE "TestTable2" (
id uuid,
timestamp timestamp,
msg text,
priority int,
source text,
PRIMARY KEY (id, timestamp)
);
Now I'm inserting one row:
INSERT INTO "TestTable2" (id, timestamp, msg, source) values (uuid(), '2002-03-31 02:36:10', 'asdas dasdasd', 'system1');
and I get an error:
Unable to execute CQL script on 'UdcCluster':Unable to coerce '2002-03-31 02:36:10' to a formatted date (long)
If I change the day of the month to the 30th, or the hour to 22, the statement executes successfully.
Can you please explain to me what is wrong with the date?
PS.
Same error repeats for '1998-03-29 02:12:13', '1987-03-29 02:55:21' and '1984-03-25 02:45:25'. In all cases it's 2 am at the end of March...
You're trying to get from a specific local time to a DateTime instance, and you want that to be robust against daylight saving time. These timestamps fall into the DST "spring forward" gap: at the end of March, CET clocks jump from 02:00 straight to 03:00, so local times such as 02:36:10 on those dates do not exist.
To avoid this, specify the timezone in the pattern: yyyy-mm-dd HH:mm:ssZ, where Z is the RFC-822 4-digit time zone expressing the difference from UTC. For example, Jan 2, 2003, at 04:05:00 AM GMT is written as '2003-01-02 04:05:00+0000'. Per the DataStax documentation:
If no time zone is specified, the time zone of the Cassandra coordinator node handling the write request is used. For accuracy, DataStax recommends specifying the time zone rather than relying on the time zone configured on the Cassandra nodes.
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/timestamp_type_r.html
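The failing local times sit in the daylight-saving spring-forward gap. A minimal Python sketch of that gap, assuming the node's zone is Europe/Berlin (any CET zone behaves the same):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 2002-03-31 02:36:10 falls in the CET spring-forward gap: clocks
# jump from 02:00 CET straight to 03:00 CEST on that date.
gap = datetime(2002, 3, 31, 2, 36, 10, tzinfo=ZoneInfo("Europe/Berlin"))

# Round-tripping through UTC lands on a different wall-clock time,
# showing that 02:36:10 never existed on that day.
round_trip = gap.astimezone(timezone.utc).astimezone(ZoneInfo("Europe/Berlin"))
print(round_trip)  # 2002-03-31 03:36:10+02:00
```

Python silently normalizes the nonexistent time; Cassandra's parser instead rejects it, which is the coercion error above.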
