I'm pretty sure the following query used to work for me on Presto:
select segment, sum(count)
from modeling_trends
where segment='2557172' and date = '2016-06-23' and count_time between '2016-06-23 14:00:00.000' and '2016-06-23 14:59:59.000'
group by 1;
Now when I run it (on Presto 0.147 on EMR), I get an error about assigning a varchar to a date/timestamp.
I can make it work using:
select segment, sum(count)
from modeling_trends
where segment='2557172' and date = cast('2016-06-23' as date) and count_time between cast('2016-06-23 14:00:00.000' as TIMESTAMP) and cast('2016-06-23 14:59:59.000' as TIMESTAMP)
group by segment;
but it feels dirty...
is there a better way to do this?
Unlike some other databases, Presto doesn't automatically convert between varchar and other types, even for constants. The cast works, but a simpler way is to use the type constructors:
WHERE segment = '2557172'
AND date = date '2016-06-23'
AND count_time BETWEEN timestamp '2016-06-23 14:00:00.000' AND timestamp '2016-06-23 14:59:59.000'
You can see examples for various types here: https://prestosql.io/docs/current/language/types.html
Just a quick thought: have you tried omitting the dashes in your date? Try 20160623 instead of 2016-06-23.
I encountered something similar with SQL Server, but I haven't used Presto.
Related
I have a DynamoDB table that has two string time attributes: one is when the item was created and the other is when it was updated (processed). I am checking whether the item was updated within 2 days of creation using the following conversions, and then just using an if statement to compare updateTime against the shifted creation time. But is this the best way to go about this?
import datetime

# note: '%H:%M' parses only hour and minute; adjust to match the stored format
creationTime = datetime.datetime.strptime(acc.createdAt, '%H:%M')
modcreationTime = creationTime + datetime.timedelta(days=2)
updateTime = datetime.datetime.strptime(acc.updatedAt, '%H:%M')
updatedWithinTwoDays = updateTime <= modcreationTime  # updated within 2 days of creation?
Creating the metrics in CloudWatch using the data turned out to be the best solution. It simplified the code to a great extent and made the data easy to work with.
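The answer doesn't include code; as a rough illustration of that approach (a sketch, not the asker's actual code), publishing the update lag as a CloudWatch metric with boto3 might look like this, with a made-up namespace and metric name:

import boto3

cloudwatch = boto3.client("cloudwatch")

# updateTime and creationTime as computed in the question's snippet
lag_seconds = (updateTime - creationTime).total_seconds()

cloudwatch.put_metric_data(
    Namespace="AccountProcessing",         # hypothetical namespace
    MetricData=[{
        "MetricName": "UpdateLagSeconds",  # hypothetical metric name
        "Value": lag_seconds,
        "Unit": "Seconds",
    }],
)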
I am using tuple<timestamp, text> to store timestamp and zone information in the Cassandra database.
I want to filter data based on timestamps.
Is there any way I can use this tuple in a where clause for comparison in CQL?
I have tried the following CQL query, but it is not giving me the proper results:
SELECT extid, time_created_ from d_account where time_created_ >= ('2021-04-06 7:09:06', '+05:30') allow filtering;
Thanks in advance.
Posting this answer so that someone might get help in the future.
The following query worked for me and returned the expected results.
SELECT extid, time_created_ from d_account where time_created_ < ('2021-04-06 07:24:10.347+0000', '+05:30') allow filtering;
Assuming I have a timestamp like one obtained from the current_timestamp() UDF inside Spark, how can I specify a time zone when using a function like hour() or minute()?
I believe that https://issues.apache.org/jira/browse/SPARK-18350 introduced support for it, but I can't get it to work. It is similar to the last comment on the page:
session.read.schema(mySchema)
.json(path)
.withColumn("year", year($"_time"))
.withColumn("month", month($"_time"))
.withColumn("day", dayofmonth($"_time"))
.withColumn("hour", hour($"_time", $"_tz"))
Having a look at the definition of the hour function, it uses an Hour expression which can be constructed with an optional timeZoneId. I have been trying to create an Hour expression, but this is a Spark-internal construct and the API forbids using it directly. I guess providing a function hour(t: Column, tz: Column) along with the existing hour(t: Column) would not be a satisfying design.
I am stuck trying to pass a specific time zone to the default built-in time UDFs.
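The thread doesn't show a resolution. One common workaround (a sketch under my own assumptions, not from the thread) is to shift the timestamp itself with from_utc_timestamp and then apply the zone-less hour() to the result. A PySpark version, assuming _time is a UTC timestamp column and _tz holds zone IDs such as 'Europe/Berlin' (passing a Column as the zone needs Spark 2.4+):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.json("events.json")  # hypothetical input with _time and _tz columns

df = (df
    .withColumn("hour_utc", F.hour(F.col("_time")))
    # reinterpret the UTC instant in the row's own zone, then take the hour
    .withColumn("hour_local", F.hour(F.from_utc_timestamp(F.col("_time"), F.col("_tz")))))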
I am stuck on a problem and I am not sure what the best way to solve it is. I have a date column that I want to select, and I want to fetch it as a string. Which is great: the node-oracledb module has this option with fetchAsString. But it fetches the date like, for example, 10-JAN-16, and I want to fetch it like 10-01-2016. Is there a way to do that from the node-oracledb module, or should I modify the date after I get the result from the query?
UPDATE: I mean a solution without to_char in the query and without query modifications.
Check out this section of my series on Working with Dates in JavaScript, JSON, and Oracle Database:
https://dzone.com/articles/working-with-dates-using-the-nodejs-driver
The logon trigger shows an example of using alter session to set the default date format. Keep in mind that there are NLS_DATE_FORMAT, NLS_TIMESTAMP_FORMAT, and NLS_TIMESTAMP_TZ_FORMAT.
I only show NLS_TIMESTAMP_TZ_FORMAT because I convert to that type in the examples that follow as I need to do some time zone conversion for the date format I'm using.
Another way to set the NLS parameters is to use environment variables of the same name. Note that this method will not work unless you set the NLS_LANG environment variable as well.
I have declared a date column in Postgres as date.
When I write the value with node's pg module, the Postgres tool pgAdmin displays it correctly.
When I read the value back using pg, instead of a plain date, a date-time string comes back with the wrong day.
e.g.:
Date inserted: 1975-05-11
Date displayed by pgAdmin: 1975-05-11
Date returned by node's pg: 1975-05-10T23:00:00.000Z
Can I prevent node's pg from applying a time zone to date-only data? It is intended for a date of birth, and IMHO time zones have no relevance here.
EDIT: Issue response from the developer on GitHub
The node-postgres team decided long ago to convert dates and datetimes without timezones to local time when pulling them out. This is consistent with some documentation we've dug up in the past. If you root around through old issues here you'll find the discussions.
The good news is it's trivially easy to override this behavior and return dates however you see fit.
There's documentation on how to do this here: https://github.com/brianc/node-pg-types
There's probably even a module somewhere that will convert dates from postgres into whatever timezone you want (UTC I'm guessing). And if there's not... that's a good opportunity to write one & share with everyone!
Original message
Looks like this is an issue in the pg module.
I'm a beginner in JS and Node, so this is only my interpretation.
When dates (without a time part) are parsed, local time is assumed.
pg\node_modules\pg-types\lib\textParsers.js
if(!match) {
  dateMatcher = /^(\d{1,})-(\d{2})-(\d{2})$/;
  match = dateMatcher.test(isoDate);
  if(!match) {
    return null;
  } else {
    //it is a date in YYYY-MM-DD format
    //add time portion to force js to parse as local time
    return new Date(isoDate + ' 00:00:00');
  }
}
But when the JS date object is converted back to a string, getTimezoneOffset is applied.
See function dateToString(date) in pg\lib\utils.js.
Another option is to change the data type of the column:
You can do this by running the command:
ALTER TABLE table_name
ALTER COLUMN column_name_1 [SET DATA] TYPE new_data_type,
ALTER COLUMN column_name_2 [SET DATA] TYPE new_data_type,
...;
as described here.
I had the same issue; I changed the column type to text.
Just override the node-postgres parser for the date type (OID 1082) and return the value without parsing it:
import pg from 'pg'

// 1082 is the OID of the Postgres date type; return the raw 'YYYY-MM-DD' string
pg.types.setTypeParser(1082, value => value)