Sequelize tutorials say that a single model is defined like this:
const User = sequelize.define('user', {
  firstName: {
    type: Sequelize.STRING
  },
  lastName: {
    type: Sequelize.STRING
  }
});
And then saved (i.e. the table is created) like this:
User.sync().then(() => {
  // do whatever
});
But I expect to do that just once; I only need to create the tables once. So the next time I run the script, how do I just retrieve the models (i.e. tables) that were defined earlier with the code above?
You can pass an option to the sync method to avoid syncing the database tables every time. With this option, your application checks whether the table exists in the database: if it exists, it will not be created again; otherwise it will be created.
User.sync({ force: false }).then(() => {
  // do whatever
});
Let me know if you still face the issue. I am using Sequelize and I am not running into this problem.
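For completeness: if the tables already exist, you can also skip sync entirely; defining the model is enough to query the existing table. A minimal sketch, assuming the table from the tutorial code above was already created:
const User = sequelize.define('user', {
  firstName: { type: Sequelize.STRING },
  lastName: { type: Sequelize.STRING }
});

// no sync() needed: the table already exists, so you can query right away
User.findAll().then(users => {
  console.log(users);
});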
I am using DynamoDB with Node.js for my reservation system, and Dynamoose as the ORM. I have two tables, Table and Reservation. To create a relation between them, I added a tableId attribute to Reservation which is of Model type (of type Table), as mentioned in the Dynamoose docs. Using document.populate I am able to get the Table data through the tableId attribute of the Reservation table. But how can I retrieve all Reservations for a Table? (Reservation and Table have a one-to-many relation.)
These are my models:
Table Model:
const tableSchema = new Schema({
  tableId: {
    type: String,
    required: true,
    unique: true,
    hashKey: true
  },
  name: {
    type: String,
    default: null
  }
});
Reservation Model:
const reservationSchema = new Schema({
  id: {
    type: Number,
    required: true,
    unique: true,
    hashKey: true
  },
  tableId: table, // as per the docs, an attribute of Table (Model) type
  date: {
    type: String
  }
});
This is how I retrieve the table data from the reservation model:
reservationModel.scan().exec()
  .then(posts => {
    return posts.populate({
      path: 'tableId',
      model: 'Space'
    });
  })
  .then(populatedPosts => {
    console.log('pp', populatedPosts);
    return {
      allData: {
        message: "Executed successfully",
        data: populatedPosts
      }
    };
  });
Can anyone help me retrieve all Reservation data for a Table?
As of v2.8.2, Dynamoose does not support this. Dynamoose is focused on one directional simple relationships. This is partly due to the fact that we discourage use of model.populate. It is important to note that model.populate does another completely separate request to DynamoDB. This increases the latency and decreases the performance of your application.
DynamoDB truly requires a shift in how you think about modeling your data compared to SQL. I recommend watching AWS re:Invent 2019: Data modeling with Amazon DynamoDB (CMY304) for a great explanation of how you can model your data in DynamoDB in a highly efficient manner.
At some point Dynamoose might add support for this, but it's really hard to say if we will.
If you truly want to do this, I'd recommend adding a global index to your tableId property in your reservation schema. Then you can run something like the following:
async function code(id) {
  // `id` is the hashKey of a table; requires the global index on `tableId` described above
  const reservations = await reservationModel.query("tableId").eq(id).exec(); // an array of `reservation` documents where `"tableId" = id`. Remember, it is required you add the index for this to work.
}
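For reference, a rough sketch of how that global index could be declared in the reservation schema. Note this declares tableId as a plain String rather than the Model type, and the index name is illustrative; check the Dynamoose schema docs for your version's exact syntax:
const reservationSchema = new Schema({
  id: {
    type: Number,
    hashKey: true
  },
  tableId: {
    type: String,
    index: {
      global: true,              // global secondary index, so reservations can be queried by tableId
      name: 'tableIdGlobalIndex' // illustrative name
    }
  },
  date: {
    type: String
  }
});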
Remember, this will cause multiple calls to DynamoDB and isn't as efficient. I'd highly recommend watching the video I linked above to get more information about how to model your data in a more efficient manner.
Finally, I'd like to point out that your unique: true code does nothing. As seen in the Dynamoose Attribute Settings Documentation, unique is not a valid setting. In your case since you don't have a rangeKey, it's not possible for two items to have the same hashKey, so technically it's already a unique property based on that. However it is important to note that you can overwrite existing items when creating an item. You can set overwrite to false for document.save or Model.create to prevent that behavior and throw an error instead of overwriting your document.
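For example, a sketch of preventing overwrites (the item shape here is illustrative):
// throws instead of silently replacing an existing item with the same hashKey
reservationModel.create({ id: 1, date: "2020-01-01" }, { overwrite: false })
  .then(doc => console.log("created", doc))
  .catch(err => console.error("already exists", err));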
I'm new to Sequelize.js and databases in general. I haven't used migrations before, but I know that they can be used to make changes to the table structure in a non-destructive way.
However, I'm not sure where to declare column options (notNull, references, validate, ENUM values, etc.).
Should I declare such options in the model file, or the migration file? Or both?
Wouldn't adding the options to both the model and the migration cause duplicate code?
(Keep in mind that I'm talking about the initial migrations that create the tables, not the migrations that add columns and such.)
Any help would be appreciated!
I see three options you can take. The first two might be edge cases, but they help to understand.
Destructive option
If you want to prototype a project and you don't mind losing your data, you could skip migration files altogether and synchronize your database according to your models with:
await sequelize.sync({ force: true });
It will execute on all your models:
DROP TABLE IF EXISTS "your_model" CASCADE;
CREATE TABLE IF NOT EXISTS "your_model" (...)
This command can be executed at the start of your application for example.
Static option
Since you mentioned you don't want to add columns and such, this is probably a good option for you.
Now if you don't want to lose data, you could simply use the sync method without the force option:
await sequelize.sync();
It will only generate:
CREATE TABLE IF NOT EXISTS "your_model" (...)
Hence, your tables are created according to your models and you don't have to create migration files.
However, if you want to modify your model (and that is the most frequent use case), the new column will not be added to the table dynamically; that's why you need migration scripts.
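For example, a later migration that adds a column could look like the sketch below (the lastName column and file name are just an illustration):
// migrations/<date>-add-lastname-to-users.js
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.addColumn('Users', 'lastName', {
      type: Sequelize.STRING,
      allowNull: true
    });
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.removeColumn('Users', 'lastName');
  }
};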
Flexible option
You will have to define both migration file and your model. That's what the cli does. Here is an example:
# npx sequelize-cli init or create migrations and models folders
npx sequelize-cli model:generate --name User --attributes firstName:string,email:string
Now you will have two more files:
// migrations/<date>-create-user.js
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('Users', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      firstName: {
        type: Sequelize.STRING
      },
      email: {
        type: Sequelize.STRING
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE
      }
    });
  },
  down: (queryInterface, Sequelize) => {
    // I usually remove this and create the table only if it does not exist
    return queryInterface.dropTable('Users');
  }
};
// models/users.js
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    firstName: DataTypes.STRING,
    email: DataTypes.STRING
  }, {});
  User.associate = function(models) {
    // associations can be defined here
  };
  return User;
};
You could factor the shared code out of the migration and the model; however, it would be rather cumbersome because some migration files will only add one column, so merging all of them into the model would probably be less clear.
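To actually apply or revert the generated migrations, the CLI provides:
npx sequelize-cli db:migrate        # run all pending migrations
npx sequelize-cli db:migrate:undo   # revert the most recent migration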
You should do it in both, because as time goes by your models and initial migration will diverge from each other. So I suggest you determine a final structure in the models and, after that, create an initial migration.
Constraints are defined and enforced at the SQL level, while validations run at the application level. Sequelize supports having both validations and constraints on models; only constraints can be defined in migrations.
My opinion is to put all constraints in migrations and validations in models. That way you have some separation of concerns, since validations run before the query is made to the database, which is where constraints are enforced. You can read more in Sequelize's Validations and Constraints documentation.
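As a quick sketch of that separation (the email attribute is illustrative): the model carries the application-level validation, while the migration carries the SQL-level constraints:
// in the model: validation, checked by Sequelize before the query is sent
email: {
  type: DataTypes.STRING,
  validate: { isEmail: true }
}

// in the migration: constraints, enforced by the database itself
email: {
  type: Sequelize.STRING,
  allowNull: false, // NOT NULL constraint
  unique: true      // UNIQUE constraint
}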
I have an app that allows users to use their own custom data, so I can't know what the data is. However, I do want to allow them to sort the data.
This can be a significant amount of data, and MongoDB ends up giving me memory errors (32MB limit).
What would be the best way to approach this? How can I allow the user to sort a large amount of data by an unknown field?
MongoDB allows you to design the schema in such a way that it can store objects and object relations, so you can let the user store any kind of information. As @kevinadi said, there is a limit of 32MB. As far as sorting is concerned, it can be done on your server side.
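If the 32MB in-memory sort limit is what's failing, one server-side option is to sort through the aggregation pipeline and let MongoDB spill to disk. A rough sketch (the field and variable names are illustrative):
// sortBy is whatever custom field the user picked, nested under custInfo
const sortField = "custInfo." + req.query.sortBy;

User.aggregate([{ $sort: { [sortField]: 1 } }])
  .allowDiskUse(true) // lets MongoDB use temporary files instead of the 32MB in-memory sort limit
  .exec()
  .then(sortedUsers => { /* return them to the client */ });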
This is an example I tried when storing objects with MongoDB and the Mongoose ORM:
var mongoose = require("mongoose");

var userSchema = new mongoose.Schema({
  email: {
    type: String,
    unique: true,
    required: true,
    lowercase: true,
    trim: true,
    match: [/^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/, "Please fill a valid email address"]
  },
  custInfo: {
    type: Object,
    required: true
  },
  isConfirmed: {
    type: Boolean,
    required: true,
    default: false
  },
  confirmedOn: {
    type: Date,
    required: true,
    default: Date.now // pass the function itself so it is evaluated per document, not once at schema definition
  }
});

module.exports = mongoose.model("user", userSchema);
Since you have tagged this question with Meteor, I assume you have the default Meteor environment, where you can use the lightweight client-side Mongo collections.
This gives you the opportunity to publish (Publication) or return (Method) your data mostly unsorted and let the clients handle this task.
Consider this: just 100 clients asking for a publication that updates on every sort action (because the subscription parameters change, so the publication changes, too) already causes your server to consume a high amount of RAM to keep the observers (OPLOG etc.) running for 100 publications, each querying huge amounts of documents.
Possible performant solutions are described below. Please keep in mind that they are not bound to any front-end and are rather a conceptual description. You will have to add reactivity etc. based on your front-end environment.
Option A - Publish unsorted, let clients sort
server
Meteor.publish('hugeData', function () {
  return MyCollection.find({ ... })
})
client
const handle = Meteor.subscribe('hugeData')

if (handle.ready()) {
  const sortedData = MyCollection.find({ ... }, { sort: { someField: -1 } })
}
A big plus here is that you can inform the clients about the completeness status if you use cursor.observeChanges.
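A rough sketch of what that could look like, using the low-level publish API (collection name and field handling simplified):
Meteor.publish('hugeData', function () {
  const self = this
  const handle = MyCollection.find({ ... }).observeChanges({
    added (id, fields) {
      self.added('myCollection', id, fields)
    }
  })
  // ready() fires once the initial batch has been sent,
  // so clients know the data set is complete
  self.ready()
  self.onStop(() => handle.stop())
})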
Note that if you want to scan backwards (return the newest documents first) you can use the hint option on find:
Meteor.publish('hugeData', function () {
  return MyCollection.find({ ... }, { hint: { $natural: -1 } })
})
This is way more performant than { sort: { fieldName: -1} }.
Option B - return unsorted from Method, let clients sort
Now there may still be a problem with solution A, since it still consumes a lot of RAM if there are lots of subscribers. An alternative (especially if live data changes are not so relevant) is to use Meteor methods:
server
Meteor.methods({
  hugeData () {
    return MyCollection.find({ ... }).fetch()
  }
})
Note that this requires fetching the docs; otherwise an unhandledPromiseRejection is thrown.
client
This requires a local collection on the client that is not in sync with your server-side collection, or you will get problems with document syncing:
const HugeData = new Mongo.Collection(null) // note the null as collection name!

const insertUpdate = document => {
  if (HugeData.findOne(document._id)) {
    const _id = document._id
    delete document._id
    return HugeData.update(_id, { $set: document })
  } else {
    return HugeData.insert(document)
  }
}

Meteor.call('hugeData', (err, data) => {
  data.forEach(insertUpdate)
})
Then you can use the local collection on the client for any projection of the received data.
All in all, it is a good tradeoff to move the load to the clients. As long as you keep them informed when projections take a while, it should be okay.
My current thought is an additional indexed collection holding: 1. the entity id, 2. the field name, 3. the field value. Have that collection indexed, and then pull the ordered entity ids from there, later loading the full relevant documents by ID.
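A sketch of that idea with Mongoose (all names are illustrative):
const sortEntrySchema = new mongoose.Schema({
  entityId: { type: mongoose.Schema.Types.ObjectId, index: true }, // 1. entity id
  fieldName: { type: String },                                     // 2. field name
  fieldValue: {}                                                   // 3. field value (mixed type)
});

// compound index so entity ids can be pulled ordered by any field's value
sortEntrySchema.index({ fieldName: 1, fieldValue: 1 });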
I'm trying to create a basic MEAN stack CRUD API to add shops into my database. I want every shop to have a unique name (to avoid adding duplicates). So far, everything gets saved into the database even if I post the same request 10 times. I went through the code a couple of times and can't figure out what's wrong; if anyone could point me in the right direction I'd be very grateful.
shop model:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var bcrypt = require('bcrypt-nodejs');
//shop schema
var ShopSchema = new Schema({
  name: { type: String, required: true, index: { unique: true } },
  address: { type: String, required: true, index: { unique: true } }
});
module.exports = mongoose.model('Shop', ShopSchema);
post function:
apiRouter.route('/shops')
  //create a shop
  .post(function(req, res) {
    //new instance of shop model
    var shop = new Shop();

    //set the shop information
    shop.name = req.body.name;
    shop.address = req.body.address;

    //save shop and check for errors
    shop.save(function(err) {
      if (err) {
        //duplicate entry
        if (err.code == 11000) {
          return res.json({ success: false, message: 'A shop with that name already exists.' });
        }
        else {
          return res.send(err);
        }
      }
      else {
        res.json({ message: 'Shop created!' });
      }
    });
  });
I do not receive errors of any kind; like I said, everything just gets written into the database.
Thanks for the help.
Basically, your writes haven't finished before the new entries are saved. You can read more about creating unique keys here, but the gist is below. The solution is to create an index over the unique fields ahead of time.
When we declare a property to be unique, we're actually declaring that we want a database-level index on that property. Some database abstraction layers will issue a query to see if there's another record with the same value for the unique property, and if that query comes back empty, it allows the save or update to proceed. If you trust this method, you either have incredibly low traffic or you're about to learn about race conditions, because 2 or more requests could have their checks to the database occur before any writes go out, and you end up with non-unique data in your DB.
In between the time that check query is issued, another insert could come along doing the exact same thing, and you still end up with duplication. Uniqueness can’t be correctly validated at the application level. So it’s good that Mongoose tries to create an index for us.
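To make sure that index actually exists before you rely on it, you can listen for Mongoose's index event on the model (a sketch based on the Shop model above):
var Shop = mongoose.model('Shop', ShopSchema);

// fires once Mongoose has finished building the schema's indexes;
// only after that will duplicate saves fail with error code 11000
Shop.on('index', function (err) {
  if (err) console.error('Index build failed', err);
});

Note that if the collection already contains duplicates, the unique index build will fail until they are removed.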
I have a basic Mongoose model with a Meeting and Participants array:
var MeetingSchema = new Schema({
  description: {
    type: String
  },
  maxNumberOfParticipants: {
    type: Number
  },
  participants: [{
    type: Schema.ObjectId,
    ref: 'User'
  }]
});
Let's say I want to validate that the number of participants added doesn't exceed the maxNumberOfParticipants for that meeting.
I've thought through a few options:
Custom Validator - which I can't do because I have to validate one attribute (participants length) against another (maxNumberOfParticipants).
Middleware - i.e., pre-save. I can't do this either because my addition of participants occurs via a findOneAndUpdate (and these don't get called unless I use save).
Add validation as part of my addParticipants method. This seems reasonable, but I'm not sure how to pass back a validation error from the model.
Note that I don't want to implement the validation in the controller (express, MEAN.js stack) because I'd like to keep all logic and validations on the model.
Here is my addParticipants method:
MeetingSchema.methods.addParticipant = function addParticipant(params, callback) {
  var Meeting = mongoose.model('Meeting');

  if (this.participants.length == this.maxNumberOfParticipants) {
    // since we already have the max length then don't add one more
    return ????
  }

  return Meeting.findOneAndUpdate({ _id: this.id },
    { $addToSet: { participants: params.id } },
    { new: true })
    .populate('participants', 'displayName')
    .exec(callback);
};
Not sure how to return a validation error in this case or even if this pattern is a recommended approach.
I wouldn't think it's common practice for this to be done at the Mongoose schema level. Typically you will have something in between the function being called and the database layer (your schema) that performs some kind of validation (such as checking the max count). You want your database layer to be in charge of just doing simple/basic data manipulation, so that you don't have to worry about any extra dependencies when/if anything else calls it. This may mean you'd need to go with route 1 that you suggested; yes, you would need to perform a database request to find out your current number of participants, but I think in the long run it will help you :)
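A sketch of that approach, reusing the names from the question (error handling simplified; note there is still a race window between the read and the update):
MeetingSchema.methods.addParticipant = function (params, callback) {
  var Meeting = mongoose.model('Meeting');

  Meeting.findById(this.id, function (err, meeting) {
    if (err) return callback(err);

    // validate against the current state in the database
    if (meeting.participants.length >= meeting.maxNumberOfParticipants) {
      return callback(new Error('Meeting is already full'));
    }

    Meeting.findOneAndUpdate(
      { _id: meeting.id },
      { $addToSet: { participants: params.id } },
      { new: true }
    )
      .populate('participants', 'displayName')
      .exec(callback);
  });
};

Passing an Error to the callback lets the controller decide how to surface it, while the check itself stays on the model.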