At which level of the MEAN stack is it best to load bulk data? I have about 200 - 800 entries of 2 - 3 different types (i.e. they would require 2 - 3 different Mongoose schemas).
Here are the options for loading this data (feel free to point out any misunderstandings, I'm new):
Client side: Angular level
Automate lots of user inputs
Server side: Nodejs + Express + Mongoose
Define the schema in Mongoose, create the objects, save each one
Database side: Mongodb
Make a json file with the data, and import it directly into Mongo:
mongoimport -d db_name -c collection_name --jsonArray --file jsonfilename.json
The third way is the purest and perhaps the fastest, but I don't know if it's good practice to work at such a low level.
Which one is the best? If there is not an optimal choice, what would be the advantages and disadvantages of each?
It depends on what you're bulk loading and if you require validations to be done.
Client side: Angular level
If you need the user to do the bulk loading and want human-readable error messages, that's your choice
Server side: Nodejs + Express + Mongoose
You can bulk import from a file
Expose a REST endpoint to trigger bulk import of your data
You can use Mongoose for validation (see validation in mongoose)
Mongoose supports creating multiple documents with one call (see Model.create); a short sketch follows at the end of this answer
Database side: Mongodb
Fast, no code needed
No flexible validation
I'd choose the option that best fits your understanding of the bulk data import: if it requires a UI, your option is 1 combined with 2; if you see this as part of your "business" logic and you're importing data from an external file or want other systems to trigger the import, your option is 2; if you see it as a one-time action to import data, or you don't require any validation or logic related to the import, the best choice is option 3.
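For option 2, here is a minimal sketch of a REST endpoint that bulk-imports documents with Model.create; the Entry model, its fields, the /import route and the connection string are made-up placeholders, not part of the question:

const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

// Hypothetical schema; replace the fields with your own 2 - 3 real schemas.
const Entry = mongoose.model('Entry', new mongoose.Schema({
  name: String,
  value: Number,
}));

// POST a JSON array to this endpoint to trigger the bulk import.
app.post('/import', async (req, res) => {
  try {
    // Model.create accepts an array and runs schema validation on each document.
    const docs = await Entry.create(req.body);
    res.json({ inserted: docs.length });
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});

mongoose.connect('mongodb://localhost/db_name')
  .then(() => app.listen(3000));

Model.insertMany would be faster for larger arrays, but it skips save() middleware; for 200 - 800 documents either call is fine.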
Loading it via the client side will require you to write more code: you'd have to handle the import in Angular, send the data to the back end, and then handle it again in Node.js.
The fastest method of all would be to import the data directly using mongoimport.
Related
It's been a while since I used Node and Express, and I was sure this was possible, but I'm having trouble figuring it out now.
I have a simple postgres database with sequelize. I am building a back end and don't have a populated database yet. I want to be able to provide fake data to use to build the front end and to test with. Is there a way to populate a database when the node server is started? Maybe by reading a json file into the database?
I know that I could point to this fake data using a setting in the environment file, but I don't see how to read in the data on startup. Is there a way to create a local database, read in the data, and point to that?
You can use the faker-factory package; I think it can solve your problem.
https://www.npmjs.com/package/faker-factory
FakerJS provides that solution.
import { faker } from '@faker-js/faker';
const randomName = faker.name.findName();
const randomEmail = faker.internet.email();
With the above, you can run a loop (a for loop, to be specific) to create the desired data you need for your project; a small seeding sketch follows below.
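For instance, a minimal seeding sketch, assuming a hypothetical Sequelize User model with name and email columns; you could call it from your startup code (e.g. after sequelize.sync()), guarded by an environment flag so the fake data only loads in development:

const { faker } = require('@faker-js/faker');

async function seedFakeUsers(User, count = 50) {
  // Build an array of fake records with a plain for loop.
  // (Newer faker versions rename faker.name.findName() to faker.person.fullName().)
  const users = [];
  for (let i = 0; i < count; i++) {
    users.push({
      name: faker.name.findName(),
      email: faker.internet.email(),
    });
  }
  // Insert them all with one query instead of one query per record.
  await User.bulkCreate(users);
}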
Also, check the free web APIs that provide fake or real data to work with.
As per the title, can a Node.js package require a database connection?
For example, I have written a specific piece of middleware functionality that I plan to publish via NPM; however, it requires a connection to a NoSQL database. The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value.
Is this considered bad practice?
It is not a bad practice as long as you require the database you need and state that dependency explicitly in your README.md file. It only becomes a bad practice when you publish the package without comments in your code or a README.md that guides anyone else reading it.
Example:
// require your NoSQL database, e.g. MongoDB via Mongoose
const mongoose = require('mongoose');

// connect to the database; "boy" is the database name
mongoose.connect('mongodb://localhost/boy', function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("Success");
  }
});
You generally have two choices when your module needs a database and wants to remain as independently useful as possible:
You can load a preferred database in your code and use it.
You can provide the developer using your module a means of passing in a database that meets your specification to be used by your module. The usual way would be for the developer to pass the database into your module's constructor function.
In the first case, you may need to allow the developer to specify a disk store path to be used. In the second case, you have to be very specific in your documentation about what kind of database interface is required.
There's also a hybrid option where you offer the developer the option of configuring and passing you a database, but if not provided, then you load your own.
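A minimal sketch of that hybrid option, assuming the module saves its records with Mongoose; the createMyMiddleware name, the options shape, the Entry model and the fallback connection string are all hypothetical:

const mongoose = require('mongoose');

function createMyMiddleware(options = {}) {
  // Use the connection the developer passed in, or fall back to our own.
  const connection =
    options.connection ||
    mongoose.createConnection(options.uri || 'mongodb://localhost/mymodule');

  const Entry = connection.model('Entry', new mongoose.Schema({ payload: Object }));

  // Save the data in our specific format and report success as a boolean,
  // matching the behaviour described in the question.
  return async function saveEntry(payload) {
    try {
      await Entry.create({ payload });
      return true;
    } catch (err) {
      return false;
    }
  };
}

module.exports = createMyMiddleware;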
The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value. Is this considered bad practice?
No, it's not a bad practice. This would be an implementation of option number 1 above. As long as your customers (the developers using your module) don't mind you loading and using Mongoose, then this is perfectly fine.
We always need to perform validation and some business logic on both the server and the client side. As we are using Angular 2 and Node with TypeScript (TS), we have concluded that a vast amount of the code and the class model will be identical, except for the persistence mechanism (sockets from the client to the server, SQL from Node to the database).
Since the objects can persist themselves, we should be able to override just the save() method and use code like the one below:
<input [ngModel]="person.getSomeProperty()" (ngModelChange)="person.setSomeProperty($event)">
...
person.save()
person.save() would send the information to Node, which would rehydrate the class and call the same person.save(), but this time performing the DML against the database. A rough sketch of the idea follows below.
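Here is a rough sketch of that shape in plain JavaScript (the project uses TypeScript, but the structure is the same); ClientPerson, ServerPerson, the /api/person endpoint and the SQL statement are hypothetical illustrations, not part of the question:

class Person {
  constructor(data = {}) { this.data = data; }
  getSomeProperty() { return this.data.someProperty; }
  setSomeProperty(value) { this.data.someProperty = value; }
  validate() {
    // shared business logic that runs on both the client and the server
    return typeof this.data.someProperty === 'string' && this.data.someProperty.length > 0;
  }
  save() { throw new Error('save() is overridden per environment'); }
}

// Browser build: save() just ships the state to Node.
class ClientPerson extends Person {
  save() {
    if (!this.validate()) return Promise.reject(new Error('invalid person'));
    return fetch('/api/person', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(this.data),
    });
  }
}

// Node build: save() rehydrates the class and performs the actual DML.
class ServerPerson extends Person {
  constructor(data, db) { super(data); this.db = db; }
  save() {
    if (!this.validate()) return Promise.reject(new Error('invalid person'));
    // this.db is whatever SQL client you use; the query is only illustrative
    return this.db.query('INSERT INTO person (some_property) VALUES ($1)', [
      this.data.someProperty,
    ]);
  }
}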
From this perspective, I have some question regarding the strategy:
* Does it sound reasonable to reuse code in such a way? The use cases do match.
* Is it correct to create a Node module with all the business logic (say, 20 classes) and export it to be used by the components?
* Can I two-way bind to the same person.someMethod() from different components and views?
I have a few PHP scripts which I am in the process of migrating to Node.js. I am using Sails.js for this and I would like to know how I can change databases for each request based on a request parameter.
Currently I have 3-4 identical PostgreSQL databases. Let's just say that each database corresponds to a different client.
Below is a segment of the current PHP script where the database connection is established:
$database = $_GET['db'];
$conn_details = "host=localhost port=5432 dbname=$database user=****** password=******";
$dbconn = pg_connect($conn_details);
Here you can see that the database name is coming from the request parameter "db".
I would like to have similar functionality in my Sails.js controller. I know that I can declare multiple databases in connections.js and that I can have models use different databases, but what I am after is for the models to stay the same and only the database to change based on each request.
I have found 2 similar questions but they have both stayed unanswered for quite some time now. (Here and here)
I think you are looking for something like sub-apps:
sails-hook-subapps
But it's an experimental module, so I wouldn't recommend using it in production. Another option, also not ideal, is multiplying your models like this:
One main model with all the methods, attributes and "stuff"
Many models that only differ in their connection config
In the 'parent' model you select which model the action should be sent to. For example, write a method:
getModel: function(dbName) {
  return models[dbName];
}
In the models object you store all the models with their different connections. I'm not sure how validators will work in this scenario; you need to test whether it will be necessary to do something like this in the child models:
attributes: parentModel.attributes
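A minimal sketch of that idea as a Sails service, assuming hypothetical Client1User and Client2User models that share the same attributes but point at different connections defined in config/connections.js:

// api/services/ModelSelector.js
module.exports = {
  // Map the ?db=... request parameter to the model bound to that database.
  getModel: function (dbName) {
    var models = {
      client1: Client1User,   // model with connection: 'client1Postgres'
      client2: Client2User    // model with connection: 'client2Postgres'
    };
    return models[dbName];
  }
};

// In a controller action:
// var User = ModelSelector.getModel(req.param('db'));
// User.find().exec(function (err, users) { /* ... */ });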
Good afternoon,
I am new to Node.js and I am trying to develop a command-line-only app.
For this app I need an ORM, and I want to use Waterline standalone, not within the Express framework.
I looked at the example and managed to list my different collections.
// Our collections (i.e. models):
ontology.collections;
console.log(ontology.collections);
// Our connections (i.e. databases):
ontology.connections;
I am stuck after this. I can't find a way to get hold of my models and run queries.
If someone could help me, that would be great.
Thanks
If you initialized Waterline in the ontology variable and the collections loaded successfully as you say, you can now access each collection you loaded (with loadCollection()) like this:
ontology.collections.mycollection
where mycollection is the identity defined in your model.
Then you can run queries:
ontology.collections.mycollection.find(...)
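For example, a small usage sketch, assuming a collection was loaded with the hypothetical identity user and has an active attribute:

// find() returns a deferred object; exec() runs the query on the configured adapter
ontology.collections.user.find({ active: true }).exec(function (err, users) {
  if (err) {
    return console.error(err);
  }
  console.log(users);
});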