I have a server that accepts RESTful-style requests. For PUT of a certain class of objects, I want to either insert a new record with the request body data, or update a previous record. For that, I'm using Sequelize upsert.
app.put('/someModel/:id', (req, res, next) => {
  const data = JSON.parse(JSON.stringify(req.body)); // deep copy of the request body
  data.id = req.params.id;
  models.SomeModel.upsert(data).then(() => {
    return models.SomeModel.findByPk(req.params.id);
  }).then(res.send.bind(res)).catch(next);
});
On update, I only want to update the database record if it hasn't been changed in the meantime. That is, if Client A fetches a record, Client B then modifies that record, and Client A subsequently tries to update it, Client A's request should fail, since the record was updated by Client B between Client A's fetch and update. (On the HTTP side, I'm using the If-Unmodified-Since request header.)
Basically, I need a way to:
UPDATE table
SET
  field1 = "value1",
  field2 = "value2"
WHERE
  id = 1234 AND
  updatedAt = "2019-04-16 17:41:10";
That way, if the record has been updated in the meantime, this query won't modify any data.
How can I use Sequelize upsert to generate SQL like this?
At least for MySQL, that wouldn't be valid; upsert results in:
INSERT INTO table (field1) VALUES ('value1') ON DUPLICATE KEY UPDATE field1 = VALUES(field1);
The syntax doesn't allow for a WHERE clause. Another DBMS, maybe?
With raw SQL, you could fake it with the IF function:
INSERT INTO table (field1) VALUES ('value1') ON DUPLICATE KEY UPDATE
field1 = IF(updatedAt = "2019-04-16 17:41:10", values(field1), field1);
but it's kind of ugly.
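If strict upsert semantics aren't essential, another option is to stay in Sequelize and issue a plain conditional UPDATE: Model.update does accept a where clause, so updatedAt can go in it, and the affected-row count tells you whether the precondition held. A minimal sketch reusing the route from the question; the If-Unmodified-Since parsing and the 412 response are illustrative choices, not part of the original code:
app.put('/someModel/:id', async (req, res, next) => {
  try {
    const ifUnmodifiedSince = new Date(req.get('If-Unmodified-Since'));
    const [affected] = await models.SomeModel.update(req.body, {
      where: { id: req.params.id, updatedAt: ifUnmodifiedSince },
    });
    if (affected === 0) {
      // Record missing, or modified since the client's fetch.
      return res.sendStatus(412); // Precondition Failed
    }
    res.send(await models.SomeModel.findByPk(req.params.id));
  } catch (err) {
    next(err);
  }
});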
Related
I am using Sequelize in my Node.js server. I am ending up with validation errors because my code tries to write the record twice, instead of creating it once and then updating it since it's already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;
var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } });
if (metrics) {
  metrics.latitude = latitude;
  .....
} else {
  metrics = models.user_car_metrics.build({
    user_id: userId,
    car_id: carId,
    latitude: latitude
    ....
  });
}
var savedMetrics = await metrics.save(); // was: await metrics(); an instance isn't callable
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both foreign keys on the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues. After looking into it and adding logs, I realized the code above adds duplicates to the table (without the constraint); with the constraint in place, I get validation errors when the code tries to insert the duplicate record.
Question is, how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint, then you can use upsert to either insert or update the record, depending on whether a record already exists with the same primary key value or the same values in the unique constraint's columns.
await models.user_car_metrics.upsert({
  user_id: userId,
  car_id: carId,
  latitude: latitude
  ....
})
See the upsert documentation:
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.
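With the constraint in place, the whole findOne/build/save flow can collapse into that single call. A minimal sketch of the end of the handler; note that in Sequelize v6 upsert resolves to an [instance, created] pair, while older versions and some dialects return different shapes:
const [savedMetrics] = await models.user_car_metrics.upsert({
  user_id: userId,
  car_id: carId,
  latitude: req.body.latitude
});
return res.status(201).json(savedMetrics);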
As far as I know, with PUT one can create a resource if it doesn't exist, or replace the old one with a new one.
I want to create a resource and be able to update it, not create more resources, using Node.js/Express and MongoDB.
So, I wrote this code:
app.put('/entries/:entry_id/type', (req, res) => {
  const entry = new Entry(req.body);
  entry.save();
  res.end();
})
In Postman there is a PUT request with the URL localhost:5000/entries/2/type.
After sending it once, it creates an entry in the database. All good!
But let's try to send the same request again. Now there are 2 entries in the database. I would expect there to be one, because the same request was sent.
In the database they have the same data and the same schema, but each has an extra field,
"_id":{"$oid":"5e8909e60c606c002axxxxxx"}, whose last character differs.
Why are more entries of the same data created, when I was expecting to have only one entry in the database?
Mongo automatically creates a default index named _id on every collection when the collection is created. If you insert a document without specifying an _id, MongoDB will generate a new ObjectId as the _id field.
To get around this you can use findOneAndUpdate with upsert:
Entry.findOneAndUpdate({ entry_id: req.params.entry_id }, { <content> }, { upsert: true })
However, this will update the document if it already exists, instead of creating a new one. If you further wish not to change the document at all when it already exists, you can wrap your <content> in $setOnInsert.
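A minimal sketch of the route rewritten this way, assuming a Mongoose model Entry with an entry_id field (both names taken from the question; with new: true, findOneAndUpdate returns the resulting document):
app.put('/entries/:entry_id/type', async (req, res) => {
  const entry = await Entry.findOneAndUpdate(
    { entry_id: req.params.entry_id },
    // $setOnInsert applies these fields only when the upsert inserts a new doc
    { $setOnInsert: { ...req.body, entry_id: req.params.entry_id } },
    { upsert: true, new: true }
  );
  res.json(entry);
});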
I have an API method where the user can pass in their own query. The field in the collection is simply ns, so the user might pass something like:
v.search = function (query: Object) {
  // query => {ns: {$in: ['foo', 'bar', 'baz']}} // valid!
  // query => {ns: {$in: {}}}                    // invalid!
  // query => {ns: /foo/}                        // valid!
};
Is there some way to do this, like a smoke test that can fail queries that are obviously wrong?
I am hoping that some MongoDB library exports this functionality... but in all likelihood they validate queries only by sending them to the database, which is, in fact, the real arbiter of which queries are valid/invalid.
But I am looking to validate the query before sending it to the DB.
Some modules that are part of MongoDB Compass have been made open source.
There are two modules that may be of use for your use case:
mongodb-language-model
mongodb-query-parser
Although they may not fit your use case 100%, they should get you very close. For example, npm install mongodb-language-model, then:
var accepts = require('mongodb-language-model').accepts;
console.log(accepts('{"ns":{"$in":["foo", "bar", "baz"]}}')); // true
console.log(accepts('{"ns":{"$in":{}}}')); // false
console.log(accepts('{"ns":{"$regex": "foo"}}')); // true
Also possibly of interest: npm install mongodb-query-parser, which parses a string value into a JSON query. For example:
var parse = require('mongodb-query-parser');
var query = '{"ns":{"$in":["foo", "bar", "baz"]}}';
console.log(parse.parseFilter(query)); // {ns:{'$in':['foo','bar','baz']}}
I don't think it's possible other than by reflecting on the query.ns object and checking each of its properties and associated values.
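Failing a library, you can hand-roll a smoke test that rejects the obviously wrong shapes from the examples above. A minimal sketch (the accepted shapes, and the helper name looksValidNsQuery, are assumptions based on the three sample queries):
function looksValidNsQuery(query) {
  var ns = query && query.ns;
  if (ns instanceof RegExp) return true;   // {ns: /foo/}
  if (typeof ns === 'string') return true; // exact string match
  if (ns !== null && typeof ns === 'object') {
    return Array.isArray(ns.$in);          // {$in: ...} must carry an array
  }
  return false;
}
console.log(looksValidNsQuery({ ns: { $in: ['foo', 'bar', 'baz'] } })); // true
console.log(looksValidNsQuery({ ns: { $in: {} } }));                    // false
console.log(looksValidNsQuery({ ns: /foo/ }));                          // true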
I am writing a Lambda function in Node.js to get items from DynamoDB. The table is Employee_Test, where emp_Id is the partition key. Below is the code snippet I am writing:
var table = "Employee_Test";
var emp_Id = event.emp_Id;
var emp_Name = event.emp_Name;
var params = {
  TableName: table,
  KeyConditionExpression: "#eId = :Id",
  ExpressionAttributeNames: {
    "#eId": "emp_Id"
  },
  ExpressionAttributeValues: {
    ":Id": emp_Id
  }
};
The error I am getting is :
"message": "Missing required key 'Key' in params",
"code": "MissingRequiredParameter",
I know the resolution of the error is to add:
Key: {
  "emp_Id": emp_Id
}
to the code. But if I have to query the employees who joined after a particular date, I cannot provide emp_Id as a parameter.
In the AWS release notes I found that we can disable parameter validation (https://aws.amazon.com/releasenotes/6967335344676381). I tried this, but it is also not working.
Can somebody please help?
I was hit with the same error when querying secondary indexes. It turned out I was using the wrong API; I had confused getItem and Query.
I ran into this when I first started with DynamoDB. Such an annoying error. It turned out I had accidentally used the .get method, from a previous working getById example, instead of the .query method.
In short, you may just need to change this ...
const response = await db.get(query).promise();
... to this ...
const response = await db.query(query).promise();
Add a Global Secondary Index to your table to enable lookups by start date. First, change your item creation code (PutItem) to add an attribute representing the month and year an employee joined, like joinYearMonth=201612. Second, scan your table to find items that do not already have this attribute and add it. Third, create a Global Secondary Index with a partition key of joinYearMonth and a sort key of joinTimestamp. This way, you can issue query requests against the GSI for the years and months you need, to find the employees who joined then; a sketch follows below.
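A hedged sketch of querying such a GSI with the DocumentClient; the index name, attribute names, and example values (joinYearMonth-joinTimestamp-index, 201612, the timestamp) are illustrative and depend on how you define the index:
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();
var params = {
  TableName: 'Employee_Test',
  IndexName: 'joinYearMonth-joinTimestamp-index', // assumed GSI name
  KeyConditionExpression: '#ym = :ym AND #ts >= :since',
  ExpressionAttributeNames: { '#ym': 'joinYearMonth', '#ts': 'joinTimestamp' },
  ExpressionAttributeValues: { ':ym': 201612, ':since': 1480550400 }
};
docClient.query(params, function (err, data) {
  if (err) console.error(err);
  else console.log(data.Items); // employees who joined in Dec 2016 on/after :since
});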
I am using IBM Cloudant's update handlers to add a timestamp to documents when they are created/updated. I am able to use the following function to add the timestamp to documents in the update handler's database.
function (doc, req) {
  if (!doc) {
    doc = { _id: req.uuid }; // no existing doc: create one keyed by the request's UUID
  }
  var body = JSON.parse(req.body);
  for (var key in body) { // 'var' added: the original leaked 'key' as a global
    doc[key] = body[key];
  }
  doc.timestamp = +new Date(); // current time as milliseconds since the epoch
  return [doc, JSON.stringify(doc)];
}
However, I would like to keep all the history in another database (say, a HISTORY database). How could I insert a document from the current database's update handler into another database? Thank you.
One potential solution might be to set up continuous replication and define the update handler on the target database. The replication source database would be your HISTORY database containing the original documents, and the target database would store the time-stamped documents.
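A sketch of a _replicator document that could drive such a continuous replication; the document _id, account, and database names are illustrative:
{
  "_id": "history-to-timestamped",
  "source": "https://myaccount.cloudant.com/history",
  "target": "https://myaccount.cloudant.com/timestamped",
  "continuous": true
}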