Generating Total Time in Milliseconds in SQL Developer - Linux

I need to replicate the Linux command "date +%s%3N" in SQL Developer. I have tried the code sample below, but it returns a different value. I have also searched Google extensively with no luck.
select to_char((extract(day from (systimestamp - timestamp '1970-01-01 00:00:00')) * 86400000
+ extract(hour from (systimestamp - timestamp '1970-01-01 00:00:00')) * 3600000
+ extract(minute from (systimestamp - timestamp '1970-01-01 00:00:00')) * 60000
+ extract(second from (systimestamp - timestamp '1970-01-01 00:00:00')) * 1000) * 1000) unix_time
from dual;
The date +%s%3N command returns something like:
1475615656692870653
Whereas the above code sample returns something like:
1475594089419116
The date command returns a longer, larger number than the code sample, even though it was run before the code sample. The ideal solution would be a direct utility in Oracle, if one exists; if not, invoking the date command from within Oracle might work.

Try this one:
CREATE OR REPLACE FUNCTION GetUnixTime RETURN INTEGER IS
  dsInt INTERVAL DAY(9) TO SECOND;
  res   NUMBER;
BEGIN
  dsInt := CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC';
  res   := EXTRACT(DAY FROM dsInt)*24*60*60
         + EXTRACT(HOUR FROM dsInt)*60*60
         + EXTRACT(MINUTE FROM dsInt)*60
         + EXTRACT(SECOND FROM dsInt);
  RETURN ROUND(1000*res);
END GetUnixTime;
ROUND(1000*res) returns Unix time in milliseconds. From your question it is not clear whether you want milliseconds, microseconds, or even nanoseconds, but it is obvious how to scale the result to the desired unit.
This function accounts for your local time zone, since the Unix epoch is defined in UTC.
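Once created, calling it is a one-liner, and switching units only changes the scale factor (a small sketch against the function above):

-- Usage: milliseconds since the Unix epoch
SELECT GetUnixTime FROM dual;

-- For microseconds, change the function's last line to:
--   RETURN ROUND(1000000 * res);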
If you don't want a function, you can of course write it as a plain query:
SELECT
  ROUND((EXTRACT(DAY FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*24*60*60
       + EXTRACT(HOUR FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60*60
       + EXTRACT(MINUTE FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60
       + EXTRACT(SECOND FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC'))
      * 1000) AS unix_time
FROM dual;

I ended up using OS commands through the method described at http://www.orafaq.com/scripts/plsql/oscmd.txt. The solutions below were a step in the right direction; however, other parts of the script we were writing kept running into issues, and using OS commands solved all of them. With the method described in the link, I was simply able to type
l_time := oscomm('/bin/date +%s%3N');
to get the correct number.
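For reference, a minimal anonymous block using that approach might look like this (a sketch, assuming oscomm has been installed from the linked script and returns the command's output as a string):

-- Enable output first with: SET SERVEROUTPUT ON
DECLARE
  l_time VARCHAR2(64);
BEGIN
  -- Shell out to date(1); %s%3N prints epoch seconds followed by milliseconds
  l_time := oscomm('/bin/date +%s%3N');
  DBMS_OUTPUT.PUT_LINE(l_time);
END;
/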

Related

Influxdb time range query

I am trying to form a SQL query for the below condition in a where clause, but it is giving an error.
date(\"time\") BETWEEN date_trunc('month', current_date - INTERVAL '1 months') \
AND (date_trunc('MONTH', current_date - INTERVAL '1 months' + INTERVAL '1 MONTH - 1 day'))
The InfluxDB query I am trying, which gives the error, is below.
SELECT * FROM "series" WHERE time >= (now() - 30d) AND time < (now() - (30d + 30d - 1d))
But the above query gives a bad request error. Could you please help me correct this query?
The time clause in the InfluxQL query is unsatisfiable: the lower and upper time bounds appear to be reversed (or the comparison operators are, depending on your coding preferences). now() - (30d + 30d - 1d) is 59 days ago, which is earlier than now() - 30d, so no point in time can satisfy both conditions.
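For example, if the intent was "from 60 days ago up to 30 days ago", swapping the bounds makes the window satisfiable (a sketch; InfluxQL has no date_trunc, so exact calendar-month boundaries would need to be computed client-side):

-- Lower bound must be the earlier instant, upper bound the later one
SELECT * FROM "series"
WHERE time >= now() - 60d AND time < now() - 30d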

How to pass param inside single quotes to a postgres query?

In my Node app, I need to run the query below, passing parameters dynamically. But it's not picking up the parameters, since they are referenced inside single quotes. Please suggest a solution.
const text = `UPDATE glacier_restore_progress
SET
status='completed',
restore_end=CURRENT_TIMESTAMP,
restore_expire=DATE_TRUNC('minutes', current_timestamp + interval '$1 minutes')
WHERE file_path = '$2' AND date_trunc('minutes', current_timestamp - interval '$1 minutes') <= restore_start`;
const values = [restoreDuration, fileKey];
await pool.query(text, values);
and the error I get is:
"bind message supplies 2 parameters, but prepared statement \"\" requires 0"
You are absolutely right; parameters cannot appear inside quotes. Taking the similarly broken example SELECT * FROM employees WHERE CURRENT_TIMESTAMP - start_date < INTERVAL '$1 years';, there are a few ways to solve this:
1. Have the client submit the "full" value: SELECT * FROM employees WHERE CURRENT_TIMESTAMP - start_date < INTERVAL $1;
2. Construct the string in the query: SELECT * FROM employees WHERE CURRENT_TIMESTAMP - start_date < INTERVAL ($1 || ' years');
3. (interval specific) Use the fact that the unit can be specified as its own keyword: SELECT * FROM employees WHERE CURRENT_TIMESTAMP - start_date < INTERVAL $1 MINUTE
My preference in this case is 1; however, the most faithful to your question is 3. Be careful to use MINUTE and not MINUTES. Option 2, using the concatenation operator, is a good hack to have in your toolbelt. All answers were tested on Postgres 13.
In addition, your parameter is likely going to be bound as a number, which will get you an error. You may need to cast it to text, like so: $1::TEXT
Your full query would be:
UPDATE glacier_restore_progress
SET
status = 'completed',
restore_end = CURRENT_TIMESTAMP,
restore_expire = DATE_TRUNC('minutes', CURRENT_TIMESTAMP + INTERVAL $1::TEXT MINUTE)
WHERE file_path = $2
AND DATE_TRUNC('minutes', current_timestamp - INTERVAL $1 MINUTE) <= restore_start
Use this:
const text = `UPDATE glacier_restore_progress
SET
status='completed',
restore_end=CURRENT_TIMESTAMP,
restore_expire=DATE_TRUNC('minutes', current_timestamp + interval '1 minutes' * $1)
WHERE file_path = $2 AND date_trunc('minutes', current_timestamp - interval '1 minutes' * $1) <= restore_start`;
const values = [restoreDuration, fileKey];
await pool.query(text, values);
This convention is mentioned in the reference.
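The trick works because Postgres defines multiplication of an interval by a number, so the parameter can stay outside the quoted literal. A quick illustration:

-- interval '1 minutes' * $1 with $1 = 90 yields an hour and a half
SELECT interval '1 minute' * 90;  -- 01:30:00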

DATEDIFF overflow

I am using the following code in Azure SQL Data Warehouse:
SELECT CAST(DATEDIFF(ms, CAST(Start AS datetime2), CAST(EndTime AS datetime2)) AS float) AS [total] -- difference to be calculated in milliseconds
FROM systable
but I am coming across this error:
"The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart."
My requirement is to have the difference in milliseconds; if that is changed, it will affect other results. I would appreciate some help.
This happens because the DATEDIFF() function returns an integer, which only allows values up to 2,147,483,647. In this case, the difference is more than ~2.1 billion milliseconds, causing the data type to overflow. You would ideally use the DATEDIFF_BIG() function, which returns a bigint that allows values up to 9,223,372,036,854,775,807, roughly 9.2 quintillion. DATEDIFF_BIG() isn't supported in SQL Data Warehouse / Azure Synapse Analytics (as of Jan 2020).
You can vote for the feature here: (https://feedback.azure.com/forums/307516/suggestions/14781627)
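For instance, a one-month span already overflows at millisecond precision (a quick repro; the dates are arbitrary):

-- Fails: 31 days is about 2.68 billion ms, above the INT maximum of 2,147,483,647
SELECT DATEDIFF(MILLISECOND, '2020-01-01', '2020-02-01');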
Testing DATEDIFF(), you can see that you can get ~24 days and 20 hours of difference between dates before you run out of integers at millisecond precision. Some sample code is below.
DECLARE @startdate DATETIME2 = '01/01/2020 00:00:00.0000';
DECLARE @enddate DATETIME2 = '01/01/2020 00:00:02.0000';
-- Supported spans before an INT overflows:
-- MILLISECOND: ~24 days 20 hours
-- MICROSECOND: ~35 minutes
-- NANOSECOND:  ~2 seconds
SELECT
      DATEDIFF(DAY, @startdate, @enddate) [day]
    , DATEDIFF(HOUR, @startdate, @enddate) [hour]
    , DATEDIFF(MINUTE, @startdate, @enddate) [minute]
    , DATEDIFF(SECOND, @startdate, @enddate) [second]
    , DATEDIFF(MILLISECOND, @startdate, @enddate) [millisecond]
    , DATEDIFF(MICROSECOND, @startdate, @enddate) [microsecond]
    , DATEDIFF(NANOSECOND, @startdate, @enddate) [nanosecond]
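With those declarations (two seconds apart), the output, computed by hand, should be:

day  hour  minute  second  millisecond  microsecond  nanosecond
0    0     0       2       2000         2000000      2000000000

Note the nanosecond value is already close to the INT maximum of 2,147,483,647, which matches the ~2 second limit above.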
In the interim, you could calculate the ticks since 0001-01-01 for each value and then take the difference. For a DATETIME2, you can calculate ticks like this:
CREATE FUNCTION dbo.DATEDIFF_TICKS(@date DATETIME2)
RETURNS BIGINT
AS
BEGIN
    RETURN
        -- 864,000,000,000 ticks (100 ns each) per day
          (DATEDIFF(DAY, '01/01/0001', CAST(@date AS DATE)) * 864000000000.0)
        -- 10,000,000 ticks per second within the day
        + (DATEDIFF(SECOND, '00:00', CAST(@date AS TIME(7))) * 10000000.0)
        -- sub-second part: one tick per 100 ns
        + (DATEPART(NANOSECOND, @date) / 100.0);
END
GO
You can then just run the function and determine the ticks and the difference between ticks.
DECLARE @startdate DATETIME2 = '01/01/2020 00:00:00.0000';
DECLARE @enddate DATETIME2 = '01/30/2020 00:00:00.0000';
SELECT
    dbo.DATEDIFF_TICKS(@startdate) [start_ticks],
    dbo.DATEDIFF_TICKS(@enddate) [end_ticks],
    dbo.DATEDIFF_TICKS(@enddate) - dbo.DATEDIFF_TICKS(@startdate) [diff];
Here is a sample running 500 years of differences:
DECLARE @startdate DATETIME2 = '01/01/2000 00:00:00.0000';
DECLARE @enddate DATETIME2 = '01/01/2500 00:00:00.0000';
SELECT
    dbo.DATEDIFF_TICKS(@startdate) [start_ticks],
    dbo.DATEDIFF_TICKS(@enddate) [end_ticks],
    dbo.DATEDIFF_TICKS(@enddate) - dbo.DATEDIFF_TICKS(@startdate) [diff];
The results:
start_ticks          end_ticks            diff
-------------------- -------------------- --------------------
630822816000000000   788608224000000000   157785408000000000

BigQuery convert unix timestamp struct to struct of datetime

I have a BigQuery table that contains a struct column called daySliderTimes in the following form:
daySliderTimes STRUCT<_field_1 STRUCT<_seconds INT, _nanoseconds INT>, _field_2 STRUCT<_seconds INT, _nanoseconds INT>>.
_field_1 and _field_2 represent two different timestamps. _seconds and _nanoseconds represent time since the unix epoch.
I want to convert the data into a new STRUCT with the following form:
daySlidertimes STRUCT<startTime DATETIME, endTime DATETIME>
If you want to create a new table from the old one in the format daySlidertimes STRUCT<startTime DATETIME, endTime DATETIME>, you can convert the data to microseconds and then turn it into a TIMESTAMP with the TIMESTAMP_MICROS function; check this link to see the full set of timestamp functions [1].
An example of the query should look something like this:
CREATE TABLE `project.dataset.new_table` AS
SELECT
  searchDocId,
  STRUCT(
    TIMESTAMP_MICROS(CAST(
      (daySliderTimes._field_1._seconds * 1e+6)
      + ROUND(daySliderTimes._field_1._nanoseconds * 0.001) AS INT64)) AS startTime,
    TIMESTAMP_MICROS(CAST(
      (daySliderTimes._field_2._seconds * 1e+6)
      + ROUND(daySliderTimes._field_2._nanoseconds * 0.001) AS INT64)) AS endTime
  ) AS daySliderTimes,
  enabledDaySliders
FROM `project.dataset.old_table`
[1] https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#parse_timestamp
You can use the TIMESTAMP_SECONDS() function, which converts seconds since the Unix epoch to a TIMESTAMP.
You can therefore transform daySliderTimes._field_1._seconds with TIMESTAMP_SECONDS(), do the same for _field_2, and then aggregate the two into a new struct.
During the creation of the view or table, your select can do the following:
WITH table_newStruct AS (
  SELECT
    # Select all the desired fields
    searchDocId,
    STRUCT(TIMESTAMP_SECONDS(daySliderTimes._field_1._seconds) AS startTime,
           TIMESTAMP_SECONDS(daySliderTimes._field_2._seconds) AS endTime) AS new_daySlidertimes
  FROM `table_source`)
SELECT searchDocId, new_daySlidertimes
FROM table_newStruct
In addition, the returned TIMESTAMP will look like 1970-01-01 00:00:00 UTC. You can format it using the FORMAT_TIMESTAMP() function.
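For instance (a small sketch; the format string follows the BigQuery docs):

-- Render an epoch-seconds value as a readable UTC string
SELECT FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_SECONDS(0)) AS formatted;
-- 1970-01-01 00:00:00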

How to convert Java timestamp stored as bigint to timestamp in Presto?

I've had little luck searching for this over a couple days.
If my Avro schema for data in a Hive table is:
{
"type" : "record",
"name" : "messages",
"namespace" : "com.company.messages",
"fields" : [ {
"name" : "timeStamp",
"type" : "long",
"logicalType" : "timestamp-millis"
}, {
…
and I use Presto to query it, I do not get formatted timestamps.
select "timestamp", typeof("timestamp") as type,
current_timestamp as "current_timestamp", typeof(current_timestamp) as current_type
from db.messages limit 1
timestamp      type    current_timestamp                      current_type
-------------  ------  -------------------------------------  ------------------------
1497210701839  bigint  2017-06-14 09:32:43.098 Asia/Seoul     timestamp with time zone
I thought it would be a non-issue then to convert them to timestamps with millisecond precision, but I'm finding I have no clear way to do that.
select cast("timestamp" as timestamp) from db.messages limit 1
line 1:16: Cannot cast bigint to timestamp
Also, timestamp casting has been changed to always assume the source is in seconds (see https://issues.apache.org/jira/browse/HIVE-3454).
So if I used from_unixtime() I have to chop off the milliseconds or else it gives me a very distant date:
select from_unixtime("timestamp") as "timestamp" from db.messages limit 1
timestamp
+49414-08-06 07:15:35.000
Surely someone who works with Presto more often knows how to express this conversion properly. (I can't restart the Presto or Hive servers to force the time zone to UTC either, by the way.)
I didn't find a direct conversion from a Java timestamp (number of milliseconds since 1970) to a timestamp, but one can be done with from_unixtime and adding the milliseconds as an interval:
presto> with t as (select cast('1497435766032' as bigint) a)
-> select from_unixtime(a / 1000) + parse_duration(cast((a % 1000) as varchar) || 'ms') from t;
_col0
-------------------------
2017-06-14 12:22:46.032
(1 row)
(admittedly cumbersome, but works)
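Depending on your Presto version, from_unixtime also accepts a double, so dividing by a double keeps the milliseconds in the fractional part (a shorter variant of the same idea; verify the precision on your cluster):

-- 1e3 forces double division, preserving the millisecond fraction
select from_unixtime(cast('1497435766032' as bigint) / 1e3);
-- e.g. 2017-06-14 12:22:46.032 (rendered in the session time zone)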
The same approach works for values stored in microseconds, with extra scaling to recover the millisecond remainder:
select from_unixtime(cast(event_time as bigint) / 1000000)
  + parse_duration(cast((cast(event_time as bigint) % 1000000 / 1000) as varchar) || 'ms')
from TableName limit 10;
