Use Bookshelf patch to set default values for unreferenced columns - node.js

Here's my goal: check if an entry exists in the Postgres database. If it exists, update some of its columns with data I provide while resetting unreferenced columns to null (or their default values); otherwise, just save a new entry.
I've been trying to understand what the patch option actually does in Bookshelf. I can pass it true or false and nothing changes in the behavior, even though, from what I understand, passing false should update all columns.
Let's say I have a table called table with the columns user_id, timestamp, property1 and property2. The code below should update the entry with the supplied values while setting property2 back to its default value.
Models.table
  .where({user_id: session.userId})
  .fetch()
  .then(tableEntry => {
    let newTableEntry = {
      user_id: session.userId,
      timestamp: new Date(),
      property1: "a string"
    }
    if (tableEntry) {
      return tableEntry.save(newTableEntry, {patch: false})
    }
    else {
      return new Models.table().save(newTableEntry)
    }
  })
The only way I seem to be able to do this is to explicitly set the other columns to null.

Why the patch option seems to make no difference
The reason you're not seeing any difference between true and false in the patch option is that you're already passing most of the attributes to update in the save call (user_id, timestamp and property1), and I assume that property2 already has a default value set in the database. With patch, only those 3 columns are updated, and since you're not passing the fourth one it keeps its current value. Without patch, the update query uses the value of property2 that is already set on the model, which is the same as what's already in the database, so in this case the option makes no visible difference.
The main practical difference patch makes is in the generated queries, which are smaller and more efficient.
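To make that concrete, here is roughly what each variant would send for the save call in the question (illustrative SQL only, assuming the fetched model already holds property2's default value):

tableEntry.save(newTableEntry, {patch: true})
// UPDATE table SET user_id = 1, timestamp = '2018...', property1 = 'a string'
// WHERE user_id = 1
tableEntry.save(newTableEntry, {patch: false})
// UPDATE table SET user_id = 1, timestamp = '2018...', property1 = 'a string',
// property2 = '<value already on the model>' WHERE user_id = 1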
Updating an attribute with NULL
Now, if you explicitly pass an attribute as null in the update it will be set to NULL in the database, but if you don't include that attribute at all it won't be touched:
tableEntry.save({timestamp: new Date(), property1: null}, {patch: true})
// UPDATE my_table SET timestamp = '2018...', property1 = NULL WHERE user_id = 1
tableEntry.save({timestamp: new Date()}, {patch: true})
// UPDATE my_table SET timestamp = '2018...' WHERE user_id = 1
Updating an attribute with a DEFAULT value
There's currently no support for that feature in Bookshelf. You can open a new feature request for it if you want.
However, there is a similar feature that allows you to use some default values in an update statement, although you have to provide those values yourself instead of relying on the database to do it for you:
tableEntry.save({timestamp: new Date()}, {patch: true, defaults: true})
This requires that the model has an attribute with the default values:
var MyModel = bookshelf.Model.extend({
  defaults: {property1: 'foo', property2: 'bar'},
  tableName: 'my_table'
})
Unfortunately this feature isn't properly documented, but it should work as intended.
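If you really need the database-side default instead, one possible workaround (a sketch only, not part of Bookshelf's API; it drops down to the underlying knex instance that Bookshelf exposes as bookshelf.knex and uses PostgreSQL's DEFAULT keyword) would be:

bookshelf.knex('my_table')
  .where({user_id: session.userId})
  .update({
    timestamp: new Date(),
    property1: 'a string',
    property2: bookshelf.knex.raw('DEFAULT') // let Postgres apply the column default
  })

Verify the generated SQL against your knex version before relying on this.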
When to use patch
This option is usually used when you fetch a model from the database, as you did, but are only interested in updating some of its attributes. It is useful because by default Bookshelf generates a query that tries to update all the attributes set on a model, not only those passed to save:
Models.table.forge({user_id: 1}).fetch().then(tableEntry => {
  // tableEntry has all attributes set with a value
  return tableEntry.save()
  // This will generate a query like:
  // UPDATE my_table SET user_id = 1, timestamp = '2018...',
  //   property1 = 'foo', property2 = 'bar' WHERE user_id = 1;
})
Using patch:
Models.table.forge({user_id: 1}).fetch().then(tableEntry => {
  // tableEntry has all attributes set with a value
  return tableEntry.save({property2: 'something'}, {patch: true})
  // This will generate a query like:
  // UPDATE my_table SET property2 = 'something' WHERE user_id = 1;
})

Related

Sequelize INSERT handling an Object that sometimes has NULL as an attribute value

I am trying to insert multiple objects into a Postgres database using the Sequelize ORM. These objects are obtained from another project that I personally can't change or modify, and they have attributes that contain nested objects.
Example object:
{
  id: 1,
  name: 'foo',
  created: {user: {attribute: 'insertattributehere'}, date: 'insertdatehere'},
  modify: {user: {attribute: 'insertattributehere'}, date: 'insertdatehere'}
}
For simplicity, I have created a table that has each attribute of the object as a column with a datatype of String (id as string, name as string, created_user_attribute, created_date, etc.).
So when inserting the object, I would simply run the following INSERT:
const new_user = await ClassName.create({id: object.id, name: object.name, created_user_attribute: object.created.user.attribute, ...})
However, sometimes the attribute that contains another object can be null, for example:
{
  id: 2,
  name: 'bar',
  created: {date: 'insertdatehere'}, // notice that created doesn't have a user attribute
  modify: {user: {attribute: 'insertattributehere'}, date: 'insertdatehere'}
}
This results in a TypeError, since created doesn't have a user attribute. What I want is a way to handle this TypeError and insert a NULL value (or "" for the string) instead.
As a last resort, I could manually check every attribute for a missing value to avoid the TypeError, and then write nested statements that insert an empty string instead. However, that looks very repetitive and inelegant.
Is there a better way to handle this problem? Note that I can't change the objects that I want to insert into my database.
You can use the lodash function omitBy along with the optional chaining operator to pass only the defined props; if the omitted props have no default value in the DB, they will end up as NULL by default:
const _ = require('lodash');

const new_user = await ClassName.create(
  _.omitBy({
    id: object.id,
    name: object.name,
    created_user_attribute: object.created?.user?.attribute
  }, _.isUndefined)
)
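If you'd rather not pull in lodash just for this, a plain-JavaScript equivalent is easy to sketch (omitUndefined is a hypothetical helper, not part of any library):

const omitUndefined = (obj) =>
  Object.fromEntries(Object.entries(obj).filter(([, v]) => v !== undefined));

const new_user = await ClassName.create(omitUndefined({
  id: object.id,
  name: object.name,
  created_user_attribute: object.created?.user?.attribute
}));

Both variants rely on optional chaining (Node 14+) to turn the missing nested object into undefined rather than a TypeError.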

Add lower() to a field in addIndex() in sequelize migration

To create a unique index in a Sequelize migration we can do something like the following:
await queryInterface.addIndex(SCHOOL_TABLE, {
  fields: ['name', 'school_id'],
  unique: true,
  name: SCHOOL_NAME_ID_UNIQUE_INDEX,
  where: {
    is_deleted: false
  },
  transaction,
})
The problem is that it allows duplicates due to case sensitivity.
In the doc here, it is mentioned that fields should be an array of attributes.
How can I apply lower() to the name field so that the index becomes case insensitive?
I am using a workaround for now, with a raw query. I don't think addIndex() supports applying functions to fields.
await queryInterface.sequelize.query(`CREATE UNIQUE INDEX IF NOT EXISTS ${SCHOOL_NAME_ID_UNIQUE_INDEX}
  ON ${SCHEMA}.schools USING btree (lower(name), school_id) WHERE is_deleted = false;
`, { transaction });
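For completeness, here is how that workaround might sit in a full migration file with a matching down step (a sketch assuming the standard Sequelize CLI migration layout; the literal index and table names stand in for the constants above):

module.exports = {
  async up(queryInterface) {
    await queryInterface.sequelize.query(`
      CREATE UNIQUE INDEX IF NOT EXISTS school_name_id_unique_index
      ON schools USING btree (lower(name), school_id) WHERE is_deleted = false;
    `);
  },
  async down(queryInterface) {
    await queryInterface.sequelize.query(
      'DROP INDEX IF EXISTS school_name_id_unique_index;'
    );
  }
};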

Proper Sequelize flow to avoid duplicate rows?

I am using Sequelize in my Node.js server. I am ending up with validation errors because my code tries to write the record twice instead of creating it once and then updating it, since it's already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;
var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } })
if (metrics) {
  metrics.latitude = latitude;
  // ...
} else {
  metrics = models.user_car_metrics.build({
    user_id: userId,
    car_id: carId,
    latitude: latitude
    // ...
  });
}
var savedMetrics = await metrics.save();
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both FKs on the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues, and after looking into it and adding logs I realized the code above adds duplicates to the table (without the constraint). With the constraint in place, I get validation errors when the code tries to insert the duplicate record.
The question is, how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint then you can use upsert, which will either insert or update the record depending on whether a record already exists with the same primary key value or with column values covered by the unique constraint:
await models.user_car_metrics.upsert({
  user_id: userId,
  car_id: carId,
  latitude: latitude
  // ...
})
See upsert
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.
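Applied to the endpoint from the question, the whole flow collapses to a single call (a sketch; in Sequelize v6, upsert resolves to an [instance, created] pair, so the destructuring below assumes that version):

const [savedMetrics] = await models.user_car_metrics.upsert({
  user_id: userId,
  car_id: carId,
  latitude: req.body.latitude
  // ...
});
return res.status(201).json(savedMetrics);

Since the database resolves the conflict atomically via ON CONFLICT, two near-simultaneous requests can no longer race each other into duplicate rows.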

node.js: checking for duplicate values in mongoose

I'd like to save my JSON data with Mongoose, but duplicate values have to be filtered out.
my_json = [
  {"name":"michael","age":21,"sports":"basketball"},
  {"name":"nick","age":31,"sports":"golf"},
  {"name":"joan","age":41,"sports":"soccer"},
  {"name":"henry","age":51,"sports":"baseball"},
  {"name":"joe","age":61,"sports":"dance"},
];
The data already in the database is:
{
"name":"joan","age":41,"sports":"soccer"
}
Is there a specific method to prevent duplicate data from being inserted through Mongoose directly? It should save the 4 other values and skip the "joan" entry.
I tried using a for statement, and it works, but I'd like to make the code simpler:
for (var i = 0; i < my_json.length; i++) {
  // check whether the value already exists
  db.json_model.count({"name": my_json[i].name}, function(err, count) {
    if (count === 0) {
      my_json_vo.savePost(function(err) {
      });
    }
  })
};
As you can see, I need to use the count method to check whether the value is duplicated. I don't want to use count; I'd like something simpler.
Could you give me some advice?
You can mark the field as unique in the mongoose schema:
var schema = new Schema({
  name: {type: String, required: true, unique: true}
  //...
});
Also, you can add a unique index for the name field in your database:
db.js_model.createIndex( {"name": 1}, { unique: true, background: true } );
Then, if a new entity with the same name is saved, Mongo won't save it and will respond with an error.
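With the unique index in place you can catch the duplicate-key error on save; MongoDB reports it with error code 11000. A minimal sketch (JsonModel stands in for your compiled model):

try {
  await JsonModel.create({name: 'joan', age: 41, sports: 'soccer'});
} catch (err) {
  if (err.code === 11000) {
    // duplicate name — skip or handle it here
  } else {
    throw err;
  }
}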
In addition to @Alex's answer about adding a unique key on the name field, you can use the insertMany() method with the ordered parameter set to false, like this:
let my_json = [
  {"name":"michael","age":21,"sports":"basketball"},
  {"name":"nick","age":31,"sports":"golf"},
  {"name":"joan","age":41,"sports":"soccer"},
  {"name":"henry","age":51,"sports":"baseball"},
  {"name":"joe","age":61,"sports":"dance"},
];
User.insertMany(my_json ,{ordered :false});
This query will run successfully and insert the unique documents, and it also produces an error after the successful insertions. That way you know there were duplicate records, but all the records now in the database are unique.
Reference: insertMany() with the ordered parameter
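To see what went through despite the duplicates, inspect the rejection (a sketch; recent Mongoose versions expose writeErrors on the bulk-write error, but check your version):

User.insertMany(my_json, {ordered: false})
  .then(docs => console.log(`inserted ${docs.length} documents`))
  .catch(err => {
    // each entry in err.writeErrors describes one skipped duplicate
    console.log(`skipped ${err.writeErrors.length} duplicates`);
  });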

Does Mongoose upsert operation update/renew default schema values?

Mongoose Schema:
new Schema({
  ...
  createDate: { type: Date, default: Date.now },
  updateDate: { type: Date, default: Date.now }
});
Upsert operation:
const upsertDoc = {
...
}
Model.update({ key: 123 }, upsertDoc, { upsert: true })
When I upsert with update or findOneAndUpdate, the default schema values createDate and updateDate are always renewed, no matter whether the document is inserted or updated. The same happens when I use $set (in which, of course, I don't pass the dates).
I can't find anything saying whether this is expected behavior. I expect the dates to be added only on insert and not on update, unless explicitly set.
If you are looking for "proof" of the expected behavior, look no further than the source code itself, particularly the main definition in schema.js:
    updates.$setOnInsert = {};
    updates.$setOnInsert[createdAt] = now;
  }

  return updates;
};

this.methods.initializeTimestamps = function() {
  if (createdAt && !this.get(createdAt)) {
    this.set(createdAt, new Date());
  }
  if (updatedAt && !this.get(updatedAt)) {
    this.set(updatedAt, new Date());
  }
  return this;
};

this.pre('findOneAndUpdate', _setTimestampsOnUpdate);
this.pre('update', _setTimestampsOnUpdate);
this.pre('updateOne', _setTimestampsOnUpdate);
this.pre('updateMany', _setTimestampsOnUpdate);
}

function _setTimestampsOnUpdate(next) {
  var overwrite = this.options.overwrite;
  this.update({}, genUpdates(this.getUpdate(), overwrite), {
    overwrite: overwrite
  });
  applyTimestampsToChildren(this);
  next();
}
So there you can see the 'pre' middleware handlers being registered for each of the "update" method variants, all pointing at the same function. These essentially modify the $set operator in any "update" you issue to include the updatedAt field, or whatever name you mapped to that key in the schema options.
The actual statement sent with "upsert" actions uses $setOnInsert for the createdAt field or its mapped option name (see the top of the listing). This action only applies when an "upsert" actually occurs, so documents that already exist and merely match one of the "update" methods are never touched by this value.
Those operators are part of how MongoDB works and not really to do with mongoose, but the code shown here shows how mongoose "adjusts" your "update" actions in order to include these additional operations.
For reference, the whole main function in schema.js that works out what to apply currently begins at line #798 with the genUpdates() function, called in the bottom part of the listing shown here; the top part of the listing is the last few lines of that function, where the keys of $setOnInsert get defined.
So in summary: YES, it is intentional that every "update" action assigns the current Date to the updatedAt mapped field, and that the "updates" are modified to include a $setOnInsert action for the createdAt mapped field, which only applies when a new document is created as the result of an "upsert".
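If you want createDate to be set only on insert without relying on the schema defaults, you can spell out the same operators yourself (a sketch of the approach the middleware uses internally; remove the Date.now defaults from the schema first so they don't fire on every upsert):

Model.update(
  { key: 123 },
  {
    $set: { ...upsertDoc, updateDate: new Date() },  // make sure upsertDoc itself doesn't contain createDate
    $setOnInsert: { createDate: new Date() }         // applied only when the upsert inserts
  },
  { upsert: true }
);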
Well, I'd always recommend using the provided and recommended way of managing createdAt and updatedAt in Mongoose: simply pass timestamps: true in the schema options.
This is a best practice and saves you from worrying about such behaviors.
I use it and I have never seen a problem with timestamps when using update or findOneAndUpdate.
Here is how you use it:
new Schema({
  ... // Your schema
}, { timestamps: true })
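And when you occasionally need an update that doesn't touch the timestamps, recent Mongoose versions accept a per-query opt-out (hedged; check that your version supports the timestamps query option):

Model.updateOne({ key: 123 }, upsertDoc, { upsert: true, timestamps: false });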
