Convert and insert date format in Node.js into Oracle timestamp

let date = moment(new Date()).format("YYYY-MM-DD hh:mm:ss.000000000 A");
// when I tried to insert the date into the table, it ended up null
// the TIMESTAMP format in Oracle DB is 14-03-22 3:53:08.901008000 PM
INSERT INTO STUDENT (join_date) VALUES ('14-03-22 3:53:08.901008000 PM');

How can I get a date format like YYYY-MM-DD HH:MM:SS.FF3 AM/PM, since Oracle supports this kind of timestamp?
In Oracle, a TIMESTAMP is a binary data type that consists of 7 - 13 bytes (century, year-of-century, month, day, hour, minute, second and between zero and six bytes for fractional seconds). It ALWAYS contains those components and it is NEVER stored in a particular format.
The client application you are using (e.g. SQL*Plus, SQL Developer, NodeJS, Java, etc.) may choose to DISPLAY the binary value with a default format, but this is a function of the client application and NOT a function of the database. (Some client applications may use the NLS_TIMESTAMP_FORMAT session parameter from the database as their default format model, but the implicit conversion from binary to string for display purposes is still something that the client application does, not the database, and not all clients use the database session variables for their defaults.)
You should either:
Use a timestamp literal:
INSERT INTO STUDENT (join_date) VALUES (TIMESTAMP '2022-03-14 15:53:08.901008000');
Or explicitly convert your formatted string to a timestamp binary data type using the TO_TIMESTAMP function with a format model:
INSERT INTO STUDENT (join_date)
VALUES (
    TO_TIMESTAMP('14-03-22 3:53:08.901008000 PM', 'DD-MM-RR HH12:MI:SS.FF9 AM')
);
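As a side note, when inserting from Node.js you can sidestep the string formatting entirely by binding a JavaScript Date, which the node-oracledb driver converts to an Oracle date/timestamp value for you. A minimal sketch, assuming node-oracledb and hypothetical connection details:
const oracledb = require('oracledb');
async function insertStudent() {
    const connection = await oracledb.getConnection({
        user: 'scott', password: 'tiger', connectString: 'localhost/XEPDB1' // hypothetical
    });
    // binding a Date avoids any string format model on either side
    await connection.execute(
        'INSERT INTO STUDENT (join_date) VALUES (:joinDate)',
        { joinDate: new Date() },
        { autoCommit: true }
    );
    await connection.close();
}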

Related

Is it possible for CQL to parse a JSON object to insert data?

From what I've looked at so far, it seems impossible with Cassandra. But I thought I'd give it a shot:
How can I select a value of a json property, parsed from a json object string, and use it as part of an update / insert statement in Cassandra?
For example, I'm given the json object:
{
    "id": 123,
    "some_string": "hello there",
    "mytimestamp": "2019-09-02T22:02:24.355Z"
}
And this is the table definition:
CREATE TABLE IF NOT EXISTS myspace.mytable (
    id text,
    data blob,
    PRIMARY KEY (id)
);
Now the thing to know at this point is that, for reasons of its own, the data field will be set to the JSON string. In other words, there is no 1:1 mapping between the given JSON and the table columns; rather, the data field contains the whole JSON object as a kind of blob value.
... Is it possible to parse the timestamp value of the given json object as part of an insert statement?
Pseudo code example of what I mean, which obviously doesn't work ($myJson is a placeholder for the json object string above):
INSERT INTO myspace.mytable (id, data)
VALUES (123, $myJson)
USING timestamp toTimeStamp($myJson.mytimestamp)
The quick answer is no, it's not possible to do that with CQL.
The norm is to parse the elements of the JSON object within your application to extract the corresponding values to construct the CQL statement.
As a side note, I would discourage using the CQL blob type due to possible performance issues should the blob size exceed 1 MB. If it's JSON, consider storing it as the CQL text type instead. Cheers!
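To illustrate that client-side approach, here is a minimal Node.js sketch assuming the DataStax cassandra-driver, the JSON payload above, and the text-instead-of-blob suggestion:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
    contactPoints: ['127.0.0.1'], localDataCenter: 'datacenter1', keyspace: 'myspace'
});
const myJson = '{"id":123,"some_string":"hello there","mytimestamp":"2019-09-02T22:02:24.355Z"}';
const parsed = JSON.parse(myJson);
// extract the write time in the application and pass it as microseconds
const writeTimeMicros = new Date(parsed.mytimestamp).getTime() * 1000;
client.execute(
    'INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
    [String(parsed.id), myJson, writeTimeMicros], // id column is text
    { prepare: true }
);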
Worth mentioning: CQL can do a limited amount of JSON parsing on its own, albeit not as detailed as you're asking for here (e.g. USING timestamp).
But something like this works:
> CREATE TABLE myjsontable (
... id TEXT,
... some_string TEXT,
... PRIMARY KEY (id));
> INSERT INTO myjsontable JSON '{"id":"123","some_string":"hello there"}';
> SELECT * FROM myjsontable WHERE id='123';
 id  | some_string
-----+-------------
 123 | hello there
(1 rows)
In your case you'd either have to redesign the table or the JSON payload so that they match. But as Erick and Cédrick have mentioned, the USING timestamp part would have to happen client-side.
What you detailed is doable with Cassandra.
Timestamp:
To insert a timestamp in a query, it should be formatted as an ISO 8601 string. Sample examples can be found here. In your code, you might have to convert the incoming value to the expected type and format.
Blob:
A blob is meant to store binary data, so it cannot be put ad hoc as a string in a CQL query (you can use the text type instead if you want to store base64-encoded data).
When you need to insert binary data, you also need to provide the proper type. For instance, if you are working with JavaScript you need to provide a Buffer, as described in the documentation. Then, when you execute your query, you externalize your parameters:
const sampleId = '123'; // the id column is text in the table definition
const sampleData = Buffer.from('hello world', 'utf8');
const sampleTimeStamp = new Date();
// USING TIMESTAMP takes microseconds since epoch, computed client-side
client.execute('INSERT INTO myspace.mytable (id, data) VALUES (?, ?) USING TIMESTAMP ?',
    [sampleId, sampleData, sampleTimeStamp.getTime() * 1000], { prepare: true });

Query date on datetime stored within jsonfield in Postgres through Django?

I have a Postgres table with a jsonb column containing UTC timestamp data in ISO format like the following:
{
    "time": "2021-04-13T20:14:56Z"
}
The Django model for this table looks like:
class DateModel(models.Model):
    values = models.JSONField(default=dict)
I need to query the table for all records with a timestamp on a certain date (ignoring time)
I'm looking for a solution similar to the following:
DateModel.objects.filter(values__time__date='2021-04-13')
The other solution I have found is to query for records with date greater than the previous day and less than the next one. This works but I am looking for a way to do it with a single query so the code would be more concise.
Any suggestions?
There's a couple of annotations you need to perform on the queryset to extract the time field and convert it to a datetime.
First you need to extract the time string by using django.contrib.postgres.fields.jsonb.KeyTextTransform
from django.contrib.postgres.fields.jsonb import KeyTextTransform
query = DateModel.objects.annotate(time_str=KeyTextTransform('time', 'values'))
Then you need to convert that string to a datetime using Cast
from django.db.models.functions import Cast
from django.db.models import DateTimeField
query = query.annotate(time=Cast('time_str', output_field=DateTimeField()))
Then you can filter by that annotation
query = query.filter(time__date='2021-04-13')

invalid input syntax for type timestamp with time zone

I'm trying to dynamically insert timestamps with varying hour intervals into Postgres (12.1), NodeJS 10.15.3, and Knex.
Inserting directly into Postgres via Postico, the NOW() + INTERVAL 'n hours' format works fine:
insert into users (first_name, updated_at)
values ('Bob', NOW() + INTERVAL '2 hours');
When inserting via Knex,
row.updated_at = `NOW() + INTERVAL '2 hours'`;
I'm getting error:
invalid input syntax for type timestamp with time zone: "NOW() + INTERVAL '2 hours'"
Outputting the query via query.toString(), I see the interval has been converted to
'NOW() + INTERVAL ''2 hours'''
How can I insert this in the correct format?
knex converts your input into a string and passes it to the database, expecting the database to parse it as a valid timestamp. To stop knex from quoting your expression as a string literal, you need to pass it as a raw value, and the way to do that is to use raw:
row.updated_at = knex.raw("NOW() + INTERVAL '2 hours'");
where knex is the variable you used to instantiate the connection to the db.
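For completeness, a minimal sketch of the full insert with that raw expression, assuming the users table from the Postico example:
const row = {
    first_name: 'Bob',
    // knex.raw passes the expression through unquoted, so Postgres evaluates it
    updated_at: knex.raw("NOW() + INTERVAL '2 hours'")
};
await knex('users').insert(row); // inside an async function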

Date type displaying with timezone on node-postgres module

I have stored input data as a date in a Postgres database, but when I show the date in the browser it is displayed with a timezone and converted from UTC. For example, I stored the date in the format 2020-07-16, but when I display it, it becomes 2020-07-15T18:00:00.000Z. I tried select mydate::DATE from table to get only the date, but it still shows the date with a timezone. I am using the node-postgres module in my Node app. I suspect it's some configuration on the node-postgres module? From their docs:
node-postgres converts DATE and TIMESTAMP columns into the local time
of the node process set at process.env.TZ
Is there any way I can configure it to only parse the date? If I query like SELECT TO_CHAR(mydate :: DATE, 'yyyy-mm-dd') from table I get 2020-07-16, but that's a lot of work just to get a date.
You can make your own date and time type parser:
const pg = require('pg');
// 1114 is the type id for TIMESTAMP WITHOUT TIME ZONE
pg.types.setTypeParser(1114, function(stringValue) {
    return stringValue; // return the raw string instead of a Date
});
// 1082 is the type id for DATE
pg.types.setTypeParser(1082, function(stringValue) {
    return stringValue;
});
The type id can be found in the file: node_modules/pg-types/lib/textParsers.js
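With those parsers registered, a query returns the column as the raw string. A minimal sketch (mydate and mytable are the hypothetical names from the question):
const pg = require('pg');
const client = new pg.Client(); // connection settings come from the environment
(async () => {
    await client.connect();
    const res = await client.query('SELECT mydate FROM mytable');
    console.log(res.rows[0].mydate); // '2020-07-16' - no timezone conversion
})();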
It is spelled out here:
https://node-postgres.com/features/types
date / timestamp / timestamptz
console.log(result.rows)
// {
// date_col: 2017-05-29T05:00:00.000Z,
// timestamp_col: 2017-05-29T23:18:13.263Z,
// timestamptz_col: 2017-05-29T23:18:13.263Z
// }
bmc=# select * from dates;
  date_col  |      timestamp_col      |       timestamptz_col
------------+-------------------------+-----------------------------
 2017-05-29 | 2017-05-29 18:18:13.263 | 2017-05-29 18:18:13.263-05
(1 row)

How to store and query a unix 10-digit epoch time into something human readable in Cassandra?

I have a UNIX epoch time, without milliseconds, stored in a set of data I'm trying to import; a row looks like 1006,785,1502054277,8 (the epoch time is the third entry). I noticed I can only store this in Cassandra as a timestamp. However, when I try to convert the time when querying, it comes across as follows using this query:
select player_id, server_id, dateof(mintimeuuid(last_login)) as timestamp, sessions from servers_by_user where server_id = 440 and player_id = 217442
 player_id | server_id | timestamp                       | sessions
-----------+-----------+---------------------------------+----------
    217442 |       440 | 1970-01-18 06:38:03.382000+0000 |        1
That's obviously not right because that epoch time is actually 2017-08-06T21:17:57+00:00.
I tried to store the data as timeuuid but then I get this error presumably because it is not a 13-digit epoch time: Failed to import 1 rows: ParseError - Failed to parse 1502054277 : badly formed hexadecimal UUID string,.
What would be the best way to store a 10-digit UNIX epoch time and to query it back into something that is human-readable?
The problem you notice is that unix timestamps are seconds since epoch - but timestamps in cassandra are stored as milliseconds since epoch instead.
First row is what you actually stored, the second one is what you want:
cqlsh:demo> SELECT id, blobAsBigint(timestampAsBlob(ts)) FROM demo3;
 id                                   | system.blobasbigint(system.timestampasblob(ts))
--------------------------------------+--------------------------------------------------
 b7bac930-7b3e-11e7-a5b3-73178ecf2b4e |                                       1502054277
 bfb37f10-7b3e-11e7-a5b3-73178ecf2b4e |                                    1502054277000
(2 rows)
cqlsh:demo> SELECT id, dateof(mintimeuuid(blobAsBigint(timestampAsBlob(ts)))) FROM demo3;
 id                                   | system.dateof(system.mintimeuuid(system.blobasbigint(system.timestampasblob(ts))))
--------------------------------------+-------------------------------------------------------------------------------------
 b7bac930-7b3e-11e7-a5b3-73178ecf2b4e |                                                            1970-01-18 09:14:14+0000
 bfb37f10-7b3e-11e7-a5b3-73178ecf2b4e |                                                            2017-08-06 21:17:57+0000
(2 rows)
cqlsh:demo>
(Using something like timestampAsBlob() in regular code is not a good idea; it is just a demo here to see what's going on under the hood.)
If you can, do not store unix timestamps in Cassandra; use timestamps if you want the 'magic'. Of course you can deal with the conversion from seconds to timestamp in your code, but using timestamps directly is much more convenient.
Since you note you are importing some data, simply multiply the values by 1000 before importing and you are done.
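For instance, a small preprocessing step in Node.js for a row like the one in the question (field names are hypothetical):
const line = '1006,785,1502054277,8'; // one row from the import file
const [playerId, serverId, lastLoginSecs, sessions] = line.split(',');
const lastLoginMillis = Number(lastLoginSecs) * 1000; // 1502054277 -> 1502054277000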
I can't try it on my cluster right now, but with Cassandra 3.x you can have user-defined functions (UDFs) to do such conversions; they need to be enabled in cassandra.yaml and can be written in Java or JavaScript (others, such as Python, are possible). See https://docs.datastax.com/en/cql/latest/cql/cql_using/useCreateUDF.html.
CREATE FUNCTION IF NOT EXISTS toMilliseconds(input int)
CALLED ON NULL INPUT
RETURNS bigint
LANGUAGE java AS '
    return input * 1000L;
';
Or just convert directly to a timestamp. A blog post from DataStax has more examples: https://www.datastax.com/dev/blog/user-defined-functions-in-cassandra-3-0
