sails.js - I want to add DB connection dynamically after sails lift - node.js

During sails lift I don't yet have all the connection information for my DB.
Is there a way to either have config values dependent on promises or dynamically create a connection after sails lift has completed?
I would obviously have to add a policy or hook to handle requests to routes needing the model if it wasn't available yet, but at this point I don't see how to even let Sails lift until I already know the connection info (it must be in the configs).
I'm hoping I'm missing a way to dynamically create connections and wire models to them.

Update: In Sails v1.0 / Waterline v0.13, this can be accomplished by accessing the stateless, underlying driver; e.g. sails.getDatastore().driver. This can be used in any database adapter that supports the new stateless driver interface, including MySQL, PostgreSQL, and MongoDB.
Prior to Sails v1.0, this was not officially supported in Sails or Waterline directly, but depending on your use case there are a couple of good solutions. If your use case is a handful of dynamic connections for development purposes (e.g. in an auto-reload plugin) and you're willing to live on the edge, you can take advantage of a private API as an immediate-term workaround: sails.hooks.orm.reload(). However, you definitely don't want to use that in production, since it literally flushes the entire ORM.
If you are going to be dealing with a larger number (say, more than 10 unique configurations) of runtime-dynamic datastore configurations during the lifetime of the running Node process, that's a different story. In that case, I would recommend using the relevant raw driver (e.g. https://github.com/felixge/node-mysql) to summon/release those dynamic connections from a pool directly via a service. You can still use your normal models for connections which are static; you will just be best off implementing dynamic database connections separately in your service. For example, if you were building a hosted version of phpMyAdmin, you might use a lower-level NPM package to dynamically fetch information about users' tables, but you'd still probably want Account and Database models that refer to tables/collections stored in your own database.
A more integrated solution for Sails is in the works. This ability to tap into the raw connection lifecycle and access it from userland is a prerequisite for built-in transaction support, which is something we expect to land in Sails/Waterline some time in the second half of 2016. In the meantime, if you encapsulate your logic to summon/release connections via service methods as suggested above, you'll have a working solution for now, and your business logic should be more or less future-proof (when you upgrade, you'll just need to swap out the implementation in your service). Hope that helps!
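To make the service approach above concrete, here is a minimal sketch of such a service. The service name, config values, and injectable pool factory are all assumptions for illustration; it caches one pool per database name and releases each connection back to its pool after use, as the answer recommends.

```javascript
// api/services/DynamicDb.js (hypothetical service name, not official Sails API)
// Sketch: cache one pool per database name; summon/release connections per query.

function makeService(createPool) {
  // createPool is injectable for testing; by default it would use the raw
  // mysql driver's factory (assumes the "mysql" package is installed).
  createPool = createPool || function (config) {
    return require('mysql').createPool(config);
  };

  var pools = {}; // dbName -> pool

  return {
    // Get (or lazily create) a pool for the given database name.
    poolFor: function (dbName) {
      if (!pools[dbName]) {
        pools[dbName] = createPool({
          host: 'localhost',     // assumption: same host/credentials everywhere
          user: 'root',
          password: '12345',
          database: dbName,
          connectionLimit: 5
        });
      }
      return pools[dbName];
    },

    // Run a query against a dynamic database, releasing the connection afterwards.
    query: function (dbName, sql, cb) {
      this.poolFor(dbName).getConnection(function (err, conn) {
        if (err) { return cb(err); }
        conn.query(sql, function (err, rows) {
          conn.release(); // always return the connection to the pool
          cb(err, rows);
        });
      });
    }
  };
}

module.exports = makeService;
```

Your normal, static models stay untouched; only the runtime-dynamic databases go through this service.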

Yes; two things in sails.js allow you to do this. One currently exists, and one is upcoming.
https://github.com/sgress454/sails-hook-autoreload. This module watches for config file changes on disk and will re-load your ORM models when a file changes.
I am working on this exact feature right now, and my plan is to publish my work at the end of next week. I agree that it will be very useful.
The API will allow you to define new models and connections on the fly; sails.js lifecycle callbacks handle updating the ORM, adapters, and so forth. It is event-based and will allow you to manually fire events to update the ORM/connections, like so:
sails.emit('hook:dynamic-orm:reload')
Is this what you need?

I have found a workaround for a MySQL DB.
Important: In my case I will be changing databases, but all databases have the same schema; the only differences are their names and the data they contain. Make sure to add any error handling you need.
In config/connections.js:
Disable pooling:
mysql_database: {
  adapter: 'sails-mysql',
  host: 'localhost',
  user: 'root', // optional
  password: '12345', // optional
  database: 'db1', // optional
  pool: false
},
Now navigate to node_modules/sails-mysql/lib/connections/spawn.js and add connectionObject.config = sails.SwitchDbConfig before the connection is created:
connectionObject.config = sails.SwitchDbConfig;
var conn = mysql.createConnection(connectionObject.config);
conn.connect(function (err) {
  afterwards(err, conn);
});
Finally, set sails.SwitchDbConfig from anywhere (a service, a controller, etc.) like so:
sails.SwitchDbConfig = {
  pool: false,
  connectionLimit: 5,
  waitForConnections: true,
  adapter: 'sails-mysql',
  host: 'localhost',
  user: 'root',
  password: '12345',
  database: sails.DB_NAME,
  identity: 'mysql_database'
}
Finally, if you find something wrong or something that needs to be updated, please ping me.

Related

Loopback4 challenges

I am very new to LoopBack 4. When I am setting up my project I am having some setup issues. Below are a few of them.
Environment based datasource loading
There is no direct way to load the datasource based on the environment.
Some configurations/constant variables need to be defined on a JSON file to access into the entire application, again this is also based on the environment.
Not able to connect to a MongoDB Atlas database. In an Express application I am able to connect, but not in LoopBack. Below is the error it returns.
url.dbName || self.settings.database,
^
TypeError: Cannot read property 'dbName' of null
Not able to achieve model relations.
I don't want to return the entire Model in my API response. How can I customize my API response using the Model?
I want to write my business logic in a separate file, not in a controller/repository. Is that a good idea, or where should I write the business logic? What are the best practices?
I can't find proper documentation on LoopBack 4 to solve these issues. Any help would be appreciated.
Let me try and help you with a few of these.
1 - You can do environment-based datasource config loading by adding the below to the constructor of your datasource.ts file.
constructor(
  @inject('datasources.config.pgdb', {optional: true})
  dsConfig: object = config,
) {
  // Override data source config from environment variables
  Object.assign(dsConfig, {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    schema: process.env.DB_SCHEMA,
  });
  super(dsConfig);
}
After this you can use packages like dotenv to keep env vars out of your repo.
2 - Use dotenv. Load dotenv config in application.ts. Add this to the end of application.ts.
dotenv.config();
You may need to import dotenv like this
import * as dotenv from 'dotenv';
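For reference, a hypothetical .env file consumed by the datasource constructor above might look like this (the variable names match the process.env keys used there; the values are placeholders):

```
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=secret
DB_DATABASE=myapp
DB_SCHEMA=public
```

Remember to add .env to your .gitignore so credentials stay out of your repo.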
3 - Not sure about this, but check if it is supported in data source generator here.
4 - There are currently only 3 types of relations supported: belongsTo, hasMany, and hasOne. In my experience, that's enough for most applications. Refer to the docs here for details.
5 - You can return any custom model you want. Just make sure that it extends the Entity class from @loopback/repository. Also, make sure you define property types using the @property decorator.
6 - You can move your business logic to service classes, or create providers as well. We used to keep DB-specific operational logic like custom queries in the repository, and the rest of the business logic inside the controller. But if there is a big, complex piece of logic, create a provider class and do it there. Refer to the docs for providers here.
We also created a boilerplate starter project on GitHub to help community members like you get a kick start with some of the basic stuff. Most of the above-mentioned stuff is implemented there. You can just clone it, change the remote URL, and you're all set to go. Take a look here.

Can a Node package require a DB connection?

As per the title, can a Node.js package require a database connection?
For example, I have written a specific piece of middleware functionality that I plan to publish via NPM; however, it requires a connection to a NoSQL database. The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value.
Is this considered bad practice?
It is not bad practice as long as you state the DB requirement clearly in your README.md file. It only becomes bad practice when you publish it without comments in your code or a README.md that will guide anyone else going through your code.
Example:
// require your NoSQL database, e.g. MongoDB (via Mongoose)
const mongoose = require('mongoose');

// connect to the database; "boy" is the database name
mongoose.connect('mongodb://localhost/boy', function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("Success");
  }
});
You generally have two choices when your module needs a database and wants to remain as independently useful as possible:
You can load a preferred database in your code and use it.
You can provide the developer using your module a means of passing in a database that meets your specification. The usual way is for the developer to pass the database to your module's constructor function.
In the first case, you may need to allow the developer to specify a disk store path to be used. In the second case, you have to be very specific in your documentation about what kind of database interface is required.
There's also a hybrid option where you offer the developer the option of configuring and passing you a database, but if not provided, then you load your own.
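The hybrid option can be sketched as a factory that accepts an optional database in its options and falls back to its own store otherwise. Everything below (the function names, the save/count interface, the in-memory fallback) is a hypothetical illustration, not the asker's actual module:

```javascript
// Sketch of the "hybrid" option: accept an injected database, or fall back
// to a simple in-memory store when the caller doesn't provide one.

function makeInMemoryStore() {
  var docs = [];
  return {
    save: function (doc, cb) { docs.push(doc); cb(null, true); },
    count: function () { return docs.length; }
  };
}

// The middleware module's factory (constructor) function.
function createMiddleware(options) {
  options = options || {};
  // Use the caller-supplied db if it matches our documented interface,
  // otherwise load our own fallback store.
  var db = (options.db && typeof options.db.save === 'function')
    ? options.db
    : makeInMemoryStore();

  return function middleware(req, res, next) {
    // Record the request, then continue down the middleware chain.
    db.save({ path: req.url, at: Date.now() }, function (err) {
      if (err) { return next(err); }
      next();
    });
  };
}

module.exports = createMiddleware;
```

As the answer notes, the key to making injection work is documenting exactly which methods (here, just save) the passed-in database must implement.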
The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value. Is this considered bad practice?
No, it's not a bad practice. This would be an implementation of option number 1 above. As long as your customers (the developers using your module) don't mind you loading and using Mongoose, then this is perfectly fine.

Feathers JS nested Routing or creating alternate services

The project I'm working on uses the Feathers JS framework server-side. Many of the services have hooks (or middleware) that make other calls and attach data before sending back to the client. I have a new feature that needs to query a database, but only for a few specific things, so I don't want to use the already-built-out "find" method for this query: that "find" method runs many unneeded hooks and calls to other databases to fetch data this new feature doesn't need.
My two solutions so far:
I could use the standard "find" query and write if statements in all hooks that check for a specific string parameter passed in from the client side, so those hooks are deactivated on this specific call. But that seems tedious, especially if I find the same need in several other services that have already been built out.
I initialize a second service below my main service so if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
And then attach a whole new set of hooks for that service. Basically it's a whole new service, whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers, I'm not sure whether that is a performant or optimal solution.
Is there a better solution than those options, or is option 1 or option 2 the correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');

app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.

Can I change databases on each request (Sails.js)

I have a few PHP scripts which I am in the process of migrating to Node.js. I am using Sails.js for this and I would like to know how I can change databases for each request based on a request parameter.
Currently I have 3-4 identical PostgreSQL databases. Let's just say that each database corresponds to a different client.
Below is a segment of the current PHP script where the database connection is established:
$database = $_GET['db'];
$conn_details = "host=localhost port=5432 dbname=$database user=****** password=******";
$dbconn = pg_connect($conn_details);
Here you can see that the database name is coming from the request parameter "db".
I would like to have similar functionality in my Sails.js controller. I know that I can declare multiple databases in connections.js and have models use different databases, but what I am after is for the models to stay the same and only the database to change on each request.
I have found 2 similar questions but they have both stayed unanswered for quite some time now. (Here and here)
I think you are looking for something like sub-apps:
sails-hook-subapps
But it's an experimental module, so I wouldn't recommend using it in production. Another option, also not great, is multiplying your models like this:
One main model with all methods, attributes, and "stuff"
Many models with only connection config
In the 'parent' model you select which model to send the action to. For example, write a method:
getModel: function(dbName) {
  return models[dbName];
}
In the models object you store all models with their different connections. I'm not sure how validators will work in this scenario; you need to test whether you'll be required to do something like this in the child models:
attributes: parentModel.attributes
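The parent/child idea above can be sketched as plain JavaScript. All names here (the registry, the model identities, the attribute set) are hypothetical illustrations, not Sails API; the point is that one attribute definition is shared and each child differs only in its connection:

```javascript
// Sketch: one shared attribute definition, one child model per connection,
// and a lookup that picks the right child by a request parameter like ?db=.

var baseAttributes = { name: { type: 'string' } };

// Hypothetical registry, e.g. built at startup from the connections
// declared in config/connections.js (db1, db2, ...).
var models = {
  db1: { identity: 'user_db1', connection: 'db1', attributes: baseAttributes },
  db2: { identity: 'user_db2', connection: 'db2', attributes: baseAttributes }
};

// The "parent" selects which child model receives the action.
function getModel(dbName) {
  var model = models[dbName];
  if (!model) { throw new Error('Unknown database: ' + dbName); }
  return model;
}

module.exports = { getModel: getModel, models: models };
```

A controller would then call getModel(req.param('db')) instead of referencing one fixed model, which keeps the schema in one place while the connection varies per request.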

Limit MongoClient to read only operations

Is it possible to connect to MongoDB in a read-only mode?
I'm currently using the driver for Node.js to create a new client with MongoClient.connect:
require('mongodb').MongoClient.connect(url, {
  // options object
}, function(err, client) {
  // ...
});
I don't see anywhere in the docs how to create a client in read-only mode.
It is possible? how?
Background:
I'm building an app which connects to MongoDB. Other developers on my team extend this app with plugins that consume data. A plugin is supplied with a client object to access the database. I want to prevent other developers from accidentally making changes to the database.
One of the cleverest workarounds for this scenario uses a replica set: simply connecting only to one of the secondary nodes prevents write operations and achieves read-only behaviour.
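The replica-set trick works at the deployment level. As a complementary application-level guard (my own suggestion, not part of the answer above, and assuming plugins receive a collection object), you could hand plugins a wrapper that exposes only the driver's read methods:

```javascript
// Sketch: wrap a MongoDB collection so only read methods are exposed to plugins.
// The method list covers common read operations on the Node.js driver's
// Collection; writes (insertOne, updateOne, deleteOne, ...) are simply absent.

var READ_METHODS = ['find', 'findOne', 'countDocuments', 'aggregate', 'distinct'];

function readOnly(collection) {
  var wrapped = {};
  READ_METHODS.forEach(function (name) {
    if (typeof collection[name] === 'function') {
      // Bind so the method still runs against the real collection.
      wrapped[name] = collection[name].bind(collection);
    }
  });
  return wrapped; // no write methods exist on this object
}

module.exports = readOnly;
```

Note this only guards against accidental writes from well-behaved plugin code; for a hard guarantee, combine it with a database-level restriction such as the secondary-only connection above or a read-only database user.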
