I would like to query my table for the value at midnight (00:00:00) every day. The system records a value every minute, but I only want the value at 00:00:00 so that I can plot daily sales.
Here is what my code looks like so far:
const { Sequelize, Op } = require('sequelize');

const data = await Chart.findAll({
  where: {
    // keep only rows whose TimeStamp falls exactly on midnight
    [Op.and]: [
      Sequelize.where(Sequelize.fn('DATEPART', Sequelize.literal('hour'), Sequelize.col('TimeStamp')), 0),
      Sequelize.where(Sequelize.fn('DATEPART', Sequelize.literal('minute'), Sequelize.col('TimeStamp')), 0),
    ],
  },
});
res.send(data);
};
TimeStamp is the name of the column on the SQL table that contains the time stamp.
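If the function-call syntax gets unwieldy, the same filter can also be written as a raw query. A sketch, assuming sequelize is the Sequelize instance and the underlying table is named Charts (both assumptions, not from the original code):

const [rows] = await sequelize.query(
  "SELECT * FROM Charts WHERE DATEPART(hour, TimeStamp) = 0 AND DATEPART(minute, TimeStamp) = 0"
);
res.send(rows);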
I want to use the Node.js mssql package to bulk insert data from the JSON below:
[
  {
    "name": "Tom",
    "registerDate": "2021-10-10 00:00:00",
    "gender": 0,
    "consumeRecord": [
      {
        "date": "2021-10-11 00:00:00",
        "price": 102.5
      },
      {
        "date": "2021-10-12 00:00:00",
        "price": 200
      }
    ]
  },
  {
    "name": "Mary",
    "registerDate": "2021-06-10 00:00:00",
    "gender": 1,
    "consumeRecord": [
      {
        "date": "2021-07-11 00:00:00",
        "price": 702.5
      },
      {
        "date": "2021-12-12 00:00:00",
        "price": 98.2
      }
    ]
  }
]
I am trying to do an mssql bulk insert of the member records together with their multiple consume records. Is there any way to insert one-to-many data with a bulk insert, like below?
It seems I need to insert into the member table and get the id (primary key) first, then use that id for the related rows in the consume table.
const sql = require('mssql')

// member table
const memberTable = new sql.Table('Member')
memberTable.columns.add('name', sql.VarChar(50), {nullable: false})
memberTable.columns.add('registerDate', sql.VarChar(50), {nullable: false})
memberTable.columns.add('gender', sql.Int, {nullable: false})

// consume record table
const consumeTable = new sql.Table('ConsumeRecord')
consumeTable.columns.add('MemberId', sql.Int, {nullable: false})
consumeTable.columns.add('Date', sql.VarChar(50), {nullable: false})
consumeTable.columns.add('price', sql.Money, {nullable: false})

// fill both tables from the JSON
jsonList.forEach(data => {
  memberTable.rows.add(data.name, data.registerDate, data.gender)
  data.consumeRecord.forEach(record => {
    consumeTable.rows.add(data.memberId, record.date, record.price) // <---- should be the Member table id, which does not exist yet
  })
})

const request = new sql.Request()
request.bulk(memberTable, (err, result) => {})
request.bulk(consumeTable, (err, result) => {})
Expected records:

Member Table

id (auto increment) | name | registerDate        | gender
--------------------|------|---------------------|-------
1                   | Tom  | 2021-10-10 00:00:00 | 0
2                   | Mary | 2021-06-10 00:00:00 | 1

Consume Record Table

id | MemberId | Date                | price
---|----------|---------------------|------
1  | 1        | 2021-10-11 00:00:00 | 102.5
2  | 1        | 2021-10-12 00:00:00 | 200
3  | 2        | 2021-07-11 00:00:00 | 702.5
4  | 2        | 2021-12-12 00:00:00 | 98.2
The best way to do this is to upload the whole thing in one batch to SQL Server and ensure it inserts the correct foreign keys.
You have two options.
Option 1
Upload the main table as a Table-Valued Parameter or JSON blob
Insert with an OUTPUT clause to select the inserted IDs back to the client
Correlate those IDs back to the child table data
Bulk insert that as well (a rough sketch of this follows)
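A rough sketch of Option 1 from Node. It correlates the returned ids by name, so it assumes member names are unique within the batch; the function name is hypothetical:

const sql = require('mssql');

async function bulkInsertOneToMany(pool, jsonList) {
  // 1. Insert the parents; OUTPUT returns the generated ids alongside the names
  const inserted = await pool.request()
    .input('json', sql.NVarChar(sql.MAX), JSON.stringify(jsonList))
    .query(`
      INSERT Member (name, registerDate, gender)
      OUTPUT inserted.Id, inserted.name
      SELECT name, registerDate, gender
      FROM OPENJSON(@json)
      WITH (name varchar(50), registerDate datetime, gender tinyint)`);

  // 2. Correlate the generated ids back to the child rows
  const idByName = new Map(inserted.recordset.map(r => [r.name, r.Id]));

  // 3. Bulk insert the children with the real foreign keys
  const consumeTable = new sql.Table('ConsumeRecord');
  consumeTable.columns.add('MemberId', sql.Int, { nullable: false });
  consumeTable.columns.add('Date', sql.DateTime, { nullable: false });
  consumeTable.columns.add('price', sql.Decimal(9, 2), { nullable: false });
  for (const member of jsonList) {
    for (const record of member.consumeRecord) {
      consumeTable.rows.add(idByName.get(member.name), new Date(record.date), record.price);
    }
  }
  await pool.request().bulk(consumeTable);
}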
Option 2 is a bit easier: do the whole thing in SQL
Upload everything as one big JSON blob
Insert main table with OUTPUT clause into table variable
Insert child table, joining the IDs from the table variable
CREATE TABLE Member (
    Id int IDENTITY PRIMARY KEY,
    name varchar(50),
    registerDate datetime NOT NULL,
    gender tinyint NOT NULL
);

CREATE TABLE ConsumeRecord (
    MemberId int NOT NULL REFERENCES Member (Id),
    Date datetime NOT NULL,
    price decimal(9,2)
);
Note the more sensible data types for the columns.
DECLARE @ids TABLE (jsonIndex nvarchar(5) COLLATE Latin1_General_BIN2 NOT NULL, memberId int NOT NULL);

WITH Source AS (
    SELECT
      j1.[key],
      j2.*
    FROM OPENJSON(@json) j1
    CROSS APPLY OPENJSON(j1.value)
        WITH (
            name varchar(50),
            registerDate datetime,
            gender tinyint
        ) j2
)
MERGE Member m
USING Source s
ON 1 = 0 -- never match
WHEN NOT MATCHED THEN
    INSERT (name, registerDate, gender)
    VALUES (s.name, s.registerDate, s.gender)
OUTPUT s.[key], inserted.Id
INTO @ids (jsonIndex, memberId);

INSERT ConsumeRecord (MemberId, Date, price)
SELECT
    i.memberId,
    j2.date,
    j2.price
FROM OPENJSON(@json) j1
CROSS APPLY OPENJSON(j1.value, '$.consumeRecord')
    WITH (
        date datetime,
        price decimal(9,2)
    ) j2
JOIN @ids i ON i.jsonIndex = j1.[key];
db<>fiddle
Unfortunately, INSERT only allows you to OUTPUT from the inserted table, not from any non-inserted columns, so we need to hack around it with a weird MERGE.
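To run that batch from Node with the mssql package, the JSON can be bound as a parameter rather than concatenated into the SQL. A sketch, where sqlText is assumed to hold the whole DECLARE/MERGE/INSERT batch above:

const sql = require('mssql');

const request = new sql.Request();
// @json in the batch is supplied by this input binding
request.input('json', sql.NVarChar(sql.MAX), JSON.stringify(jsonList));
request.query(sqlText, (err, result) => {
  if (err) console.error(err);
});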
I have a problem trying to query a document using a DateTime object. The document is at path /users/{userId}/payments/{paymentDoc} and has a structure like this:
Document data: {
banking: '',
isPaid: false,
endDate: Timestamp { _seconds: 1592179200, _nanoseconds: 0 },
startDate: Timestamp { _seconds: 1590969600, _nanoseconds: 0 }, // June 1, 2020 at 7:00:00 AM UTC+7
totalIncome: 100
}
Here is how I try to query the document:
firstDayOfMonth = new Date(2020, 05, 01) // maybe wrong here?
userPaymentRef = db.collection('users')
    .doc(userId)
    .collection('payments')
    .where('startDate', '==', firstDayOfMonth)
    .get()
    .then(function (querySnapshot) {
        // a where() query returns a QuerySnapshot, not a single document
        if (!querySnapshot.empty) {
            querySnapshot.forEach(function (doc) {
                console.log("Document data:", doc.data());
            });
            return true;
        } else {
            console.log("No such document!");
            return false;
        }
    }).catch(function (error) {
        console.log("Error getting document:", error);
    });
But the Firebase Functions log says "No such document!"
I logged the timestamp of the stored startDate and the timestamp of the date I want to query, and they are the same, yet it still says "No such document". Is my query wrong, or is the DateTime I am querying with wrong?
Edit:
Functions log: the timestamp of the document I stored and the timestamp from the DateTime match, but the document can't be found.
If you have a timestamp field in a document, it can only be matched exactly by other Date or Timestamp objects; the precision down to the nanosecond must match. Two timestamps on the same day but with different times of day are not equal.
Also bear in mind that Timestamp objects don't encode a time zone - they always use UTC. If you use a Date object, it must be created with the exact same moment in time as the stored timestamp in order to get an equality match. Date objects that don't specify a precise moment in time will use the local computer's time zone, which is definitely not guaranteed to match the time of day on any other computer.
The bottom line is this: if you want two timestamps to be equal, they must both represent the exact same moment in time, measured to the nanosecond.
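In this case the stored startDate is June 1, 2020 00:00:00 UTC (1590969600 seconds), so the query value has to pin down exactly that UTC moment. A minimal sketch of two ways to build it, assuming the Firebase Admin SDK:

// new Date(2020, 5, 1) is interpreted in the server's local time zone;
// Date.UTC pins the moment explicitly (months are 0-based, so 5 = June)
const firstDayOfMonth = new Date(Date.UTC(2020, 5, 1));

// or build the Timestamp directly from the stored epoch seconds
const { Timestamp } = require('firebase-admin/firestore');
const firstDayAsTimestamp = Timestamp.fromMillis(1590969600 * 1000);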
Using Node + mssql, I need to get the offset for the date that is returned from SQL Server for this query:
SELECT getutcdate() AT TIME ZONE 'UTC' AT TIME ZONE 'Mountain Standard Time'
The issue is it comes out of mssql/tedious already parsed as a JavaScript Date() object, set to the time zone of the script/host machine. There is no way that I can see to tell what time zone/offset the original value is actually in.
Other tools, such as Azure Data Studio, correctly show the raw value in the table for DateTimeOffset columns, including showing the stored offset.
Output in Node:
[ { value: 2020-03-21T03:07:54.193Z } ]
Output in Azure Data Studio:
2020-03-20 21:07:22.9970000 -06:00
Code:
const sql = require('mssql');

const query = "SELECT getutcdate() AT TIME ZONE 'UTC' AT TIME ZONE 'Mountain Standard Time' AS value";

sql.connect('mssql://sa:<password>@localhost?useUTC=false')
    .then(() => {
        const request = new sql.Request();
        request.query(query, (err, result) => {
            console.dir(result.recordset);
        });
    });
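One workaround, sketched under the assumption that the textual value with its offset is enough: have SQL Server format the datetimeoffset as a string so the stored offset survives the driver's parsing.

const sql = require('mssql');

sql.connect('mssql://sa:<password>@localhost')
    .then(() => {
        const request = new sql.Request();
        // CONVERT style 126 yields ISO 8601 text including the +/-hh:mm offset
        const query = "SELECT CONVERT(varchar(33), getutcdate() AT TIME ZONE 'UTC' AT TIME ZONE 'Mountain Standard Time', 126) AS value";
        request.query(query, (err, result) => {
            console.dir(result.recordset); // e.g. [ { value: '2020-03-20T21:07:22.9970000-06:00' } ]
        });
    });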
I have 4 tables with almost 770K rows in each, and I am using the COPY TO STDOUT command for data backup. I am using the pg module for the database connection. The following code shows the database client connection:
var client = new pg.Client({user: 'dbuser', database: 'dbname'});
client.connect();
Following is the code for backup:
var rows = new Buffer(0); // all COPY output is accumulated here in memory

var stream1 = client.copyTo("COPY table_name TO STDOUT WITH CSV");
stream1.on('data', function (chunk) {
    rows = Buffer.concat([rows, chunk]);
});
stream1.on('end', function () {
    myStream._write(rows, 'hex', function () {
        rows = new Buffer(0);
        return cb(null, 1);
    });
});
stream1.on('error', function (error) {
    debug('Error: ' + error);
    return cb(error, null);
});
myStream is an instance of a class that inherits from stream.Writable. myStream._write() concatenates the chunks into a buffer, and at the end the buffered data is stored in a file.
This works fine for small amounts of data, but it takes a lot of time for large data.
I am using PostgreSQL 9.3.4 and Node.js v0.10.33.
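A likely culprit is the repeated Buffer.concat: every chunk re-copies everything accumulated so far, so the total work grows quadratically with table size. A sketch of an alternative that streams straight to disk instead of buffering (the destination file name is hypothetical):

var fs = require('fs');

var out = fs.createWriteStream('table_name.csv'); // hypothetical destination file
var stream1 = client.copyTo("COPY table_name TO STDOUT WITH CSV");
// pipe() handles backpressure and never holds more than a chunk or two in memory
stream1.pipe(out);
out.on('finish', function () {
    return cb(null, 1);
});
stream1.on('error', function (error) {
    return cb(error, null);
});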
The create table statement is:
CREATE TABLE table_name
(
id serial NOT NULL,
date_time timestamp without time zone,
dev_id integer,
type integer,
value1 double precision,
value2 double precision,
value3 double precision,
value4 double precision,
message character varying(255),
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL
)
Here is the execution plan:
dbname=# explain (analyze, verbose, buffers) select * from table_name;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Seq Scan on public.table_name (cost=0.00..18022.19 rows=769819 width=105) (actual time=0.047..324.202 rows=769819 loops=1)
Output: id, date_time, dev_id, type, value1, value2, value3, value4, message, created_at, updated_at
Buffers: shared hit=10324
Total runtime: 364.909 ms
(4 rows)
I have saved my date as timestamp, using the below logic:
var timestamp = Math.floor(new Date().getTime()/1000);
timestamp = 145161061
Can anyone help me out with the query? I need to find records between two dates, and my dates are stored in the timestamp format shown above.
If you have specified the type of the field as Date, then even if you store the value as a timestamp, it will be stored as a Date. To do a range query on a Date you can simply do something like this:
db.events.find({"event_date": {
    $gte: ISODate("2010-01-01T00:00:00Z"),
    $lt: ISODate("2014-01-01T00:00:00Z")
}})
But if you have specified it as a Number, then you can simply do a range query on the number, like this:
db.events.find({"event_date": {
    $gte: 145161061,
    $lt: 145178095
}})
You can try a query like this:
var startTime = 145161061;
var endTime = 149161061;
Books.find({
    created_at: {
        $gt: startTime,
        $lt: endTime
    }
});
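Since the stored values were produced with Math.floor(new Date().getTime()/1000), the bounds have to be built the same way or they will not line up. For example (the dates here are placeholders):

// Build the range bounds in epoch seconds, matching how the values were stored
var startTime = Math.floor(new Date('2016-01-01T00:00:00Z').getTime() / 1000);
var endTime = Math.floor(new Date('2016-02-01T00:00:00Z').getTime() / 1000);

Books.find({
    created_at: {
        $gt: startTime,
        $lt: endTime
    }
});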