I am trying to form a SQL query for the below condition in the WHERE clause, but it is giving an error.
date(\"time\") BETWEEN date_trunc('month', current_date - INTERVAL '1 months') \
AND (date_trunc('MONTH', current_date - INTERVAL '1 months' + INTERVAL '1 MONTH - 1 day'))
The InfluxDB query I am trying, which gives an error, is as below.
SELECT * FROM "series" WHERE time >= (now() - 30d) AND time < (now() - (30d + 30d - 1d))
But the above query gives a bad request error. Could you please help me correct this query?
The time clause in the InfluxQL query is unfulfillable; the lower and upper time bounds seem reversed. Or the comparison operators are, depending on your coding preferences.
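For reference, a minimal sketch of the corrected bounds, assuming the intent is the 30-day approximation of "last month" used above (InfluxQL has no calendar-aware date_trunc, so exact month boundaries are not directly expressible):
-- the lower bound must be the older timestamp and the upper bound the newer one
SELECT * FROM "series" WHERE time >= now() - 59d AND time < now() - 30d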
I have a Cosmos Db instance with > 1 Million JSON Documents stored in it.
I am trying to pull data for a certain time frame, based on when the document was created, using the _ts property, which is auto-generated when the document is inserted and represents the UNIX timestamp of that moment.
I am unable to understand, why both these queries produce drastically different results:
Query 1:
Select *
from c
where c._ts > TimeStamp1
AND c._ts < TimeStamp2
Produces 0 results
Query 2:
Select *
from c
where c._ts > TimeStamp1
AND c._ts < TimeStamp2
order by c._ts desc
Produces the correct number of results.
What I have tried?
I suspected that it might be because of the default Cosmos DB index on the data, so I rewrote the index policy to index only that variable. Still the same problem.
Since my end purpose is to group the returned data, I then tried to use GROUP BY with ORDER BY, alone or in a subquery. Surprisingly, according to the docs, Cosmos DB doesn't yet support using GROUP BY with ORDER BY.
What I need help on?
Why am I observing such a behavior?
Is there a way to index the DB in such a way that the rows are returned?
Beyond this, is there a way to still use GROUP BY and ORDER BY together? (Please don't link this question to another one because of this point; I have gone through them and their answers are not valid in my case.)
@Andy and @Tiny-wa, thanks for replying.
I was able to understand the unintended behavior; it was showing up because of the GetCurrentTimestamp() used to calculate the timestamps. The documentation states:
This system function will not utilize the index. If you need to
compare values to the current time, obtain the current time before
query execution and use that constant string value in the WHERE
clause.
Although I don't fully understand what this means, I was able to solve this by creating a stored procedure in which the timestamp is fetched before the SQL API query is formed and executed, and with that I was able to get the rows as expected.
The stored procedure pseudocode for that looks like:
function FetchData() {
    ..
    ..
    ..
    // Capture the current time once, before the query string is built, so the
    // WHERE clause compares _ts against constant values (per the docs above).
    var Current_TimeStamp = Date.now();
    // c._ts is in seconds; * 10000000 converts it to ticks (100-ns units).
    // Date.now() is in milliseconds; * 10000 converts it to ticks as well.
    var CDbQuery =
        `Select *
        FROM c
        where (c._ts * 10000000) > DateTimeToTicks(DateTimeAdd("day", -1, TicksToDateTime(` + Current_TimeStamp + ` * 10000)))
        AND (c._ts * 10000000) < (` + Current_TimeStamp + ` * 10000)`;
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        CDbQuery,
        function (err, feed, options) {
            ..
            ..
            ..
        });
}
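For comparison, a hedged sketch of the same filter written with query parameters rather than string concatenation, assuming the epoch-second bounds are computed client-side before the query is issued (@start and @end are hypothetical parameter names supplied by the caller):
-- @start and @end are UNIX-second constants captured before execution (hypothetical names)
SELECT * FROM c WHERE c._ts > @start AND c._ts < @end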
I'm trying to dynamically insert timestamps with varying hour intervals into Postgres (12.1) using Node.js 10.15.3 and Knex.
Inserting directly into Postgres via Postico, the NOW() + INTERVAL 'n hours' format works fine:
insert into users (first_name, updated_at)
values ('Bob', NOW() + INTERVAL '2 hours');
When inserting via Knex,
row.updated_at = `NOW() + INTERVAL '2 hours'`;
I'm getting the error:
invalid input syntax for type timestamp with time zone: "NOW() + INTERVAL '2 hours'"
Outputting the query via query.toString(), I see the interval has been converted to
'NOW() + INTERVAL ''2 hours'''
How can I insert this in the correct format?
Knex is converting your input into a string and passing it to the database, expecting the database to parse it as a valid timestamp. To avoid Knex wrapping your input in quotes, you need to pass it as a literal expression, and the way to do that is to use raw:
row.updated_at = knex.raw("NOW() + INTERVAL '2 hours'");
where knex is the variable you used to instantiate the connection to the database.
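For contrast, a sketch of the statement shapes Postgres ends up receiving in each case, assuming the users table from the question:
-- with knex.raw: the expression reaches Postgres unquoted and is evaluated server-side
insert into users (first_name, updated_at) values ('Bob', NOW() + INTERVAL '2 hours');
-- without raw: the whole expression arrives as a quoted string and fails to cast to timestamptz
insert into users (first_name, updated_at) values ('Bob', 'NOW() + INTERVAL ''2 hours''');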
I am querying data from HDFS using Impala in a Python script via the Impyla library. The specific data is proxy data, and there is tons of it. I have a script that runs daily to pull the previous day's data and run statistics. Currently I am using the devicereceipttime field for this query, which is stored as a timestamp.
from impala.dbapi import connect
from impala.util import as_pandas
import pandas as pd

# Open the connection and get a cursor (host/port are placeholders)
conn = connect(host='impala-host', port=21050)
cursor = conn.cursor()

# Pull desired features from the proxy_realtime_p table
cursor.execute('select request, count(*) as count \
from default.proxy_realtime_p \
where devicereceipttime BETWEEN concat(to_date(now() - interval 1 days), " 00:00:00") and concat(to_date(now() - interval 1 days), " 23:59:59") \
group by request \
order by count desc')
This query takes a little while, and I would like to speed it up if possible. Given the fields below, is my query the most efficient?
devicereceipttime (timestamp)
year (int)
month (int)
day (int)
hour (int)
minute (int)
seconds (int)
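If year, month, and day are the table's partition columns (an assumption; the schema isn't shown), filtering on them directly usually lets Impala prune partitions instead of scanning every devicereceipttime value. A hedged sketch of yesterday's window expressed that way (whether the planner folds the now() expressions for pruning can depend on the Impala version, so hard-coding the values computed in Python beforehand is the safest option):
-- sketch: restrict the scan to yesterday's partition
select request, count(*) as count
from default.proxy_realtime_p
where year = year(now() - interval 1 days)
  and month = month(now() - interval 1 days)
  and day = day(now() - interval 1 days)
group by request
order by count desc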
I have the following Carts table schema in OrientDB. All I want to do is select those records where
(CurrentTime - timeStamp) >= expiration
I have also tried to achieve my goal by converting to a Unix timestamp, with the following queries:
SELECT * FROM Carts WHERE eval("('+new Date().getTime()+' timeStamp.asLong())/1000") >= expiration
And also with the following technique, but when the :parameter is passed into the eval function it is converted into '?' and the required data is not returned.
db.query(
'SELECT eval("'+new Date().getTime()+' - timeStamp.asLong()") as DIFF, :nowTimeStamp as NOW, timeStamp.asLong() as THEN FROM Carts ',
{
params: {
nowTimeStamp: new Date().getTime()
}
}).then(callback);
Try this query:
SELECT * FROM Carts WHERE eval("SYSDATE().asLong() / 1000 - timeStamp.asLong() / 1000") >= expiration
Hope it helps.
I need to replicate the Linux command date +%s%3N in SQL Developer. I have tried the below code sample, but it returns a different value. I have also done extensive searching on Google with no luck.
select to_char((extract(day from (systimestamp - timestamp '1970-01-01 00:00:00')) * 86400000
+ extract(hour from (systimestamp - timestamp '1970-01-01 00:00:00')) * 3600000
+ extract(minute from (systimestamp - timestamp '1970-01-01 00:00:00')) * 60000
+ extract(second from (systimestamp - timestamp '1970-01-01 00:00:00')) * 1000) * 1000) unix_time
from dual;
The date +%s%3N command returns something like:
1475615656692870653
Whereas the above code sample returns something like:
1475594089419116
The date command returns a longer and larger number than the code sample even though it was run before the code sample. The ultimate solution would be a direct utility in Oracle if possible. If not, possibly invoking the date command within Oracle would work.
Try this one:
CREATE OR REPLACE FUNCTION GetUnixTime RETURN INTEGER IS
dsInt INTERVAL DAY(9) TO SECOND;
res NUMBER;
BEGIN
dsInt := CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC';
res:= EXTRACT(DAY FROM dsInt)*24*60*60
+ EXTRACT(HOUR FROM dsInt)*60*60
+ EXTRACT(MINUTE FROM dsInt)*60
+ EXTRACT(SECOND FROM dsInt);
RETURN ROUND(1000*res);
END GetUnixTime;
ROUND(1000*res) will return Unix time in milliseconds. From your question it is not clear whether you would like to get milliseconds, microseconds, or even nanoseconds, but it is quite obvious how to change the result to the desired value.
This function accounts for your local time zone and the time zone of the Unix epoch (which is always UTC).
If you don't want a function, you can of course write it as a query:
SELECT
  ROUND((EXTRACT(DAY FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*24*60*60
       + EXTRACT(HOUR FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60*60
       + EXTRACT(MINUTE FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60
       + EXTRACT(SECOND FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC'))
       * 1000) AS unix_time
FROM dual;
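For example, a hedged sketch of the microsecond variant (the 16-digit shape in the question's sample output); nanosecond output is not practically obtainable this way, since CURRENT_TIMESTAMP/SYSTIMESTAMP typically resolve to microseconds at best:
SELECT
  ROUND((EXTRACT(DAY FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*24*60*60
       + EXTRACT(HOUR FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60*60
       + EXTRACT(MINUTE FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC')*60
       + EXTRACT(SECOND FROM CURRENT_TIMESTAMP - TIMESTAMP '1970-01-01 00:00:00 UTC'))
       * 1000000) AS unix_time_micros
FROM dual;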
I ended up using OS commands through the method described in this link: http://www.orafaq.com/scripts/plsql/oscmd.txt. The other solutions were a step in the right direction; however, other parts of the script we were writing were running into issues, and using OS commands solved all of them. With the method mentioned in the link, I was simply able to type
l_time := oscomm('/bin/date +%s%3N');
to get the correct number.