I am using LoopBack to develop an API and am having a problem with per-environment config in LB3. I have a datasources.json file as below:
{
"SSOSystem": {
"host": "x.y.z.o",
"port": 27017,
"url": "mongodb://SSO-System-mongo:U5KckMwrWs9EGyAh#x.y.z.o/SSO-System",
"database": "SSO-System",
"password": "U5KckMwrWs9EGyAh",
"name": "SSOSystem",
"connector": "mongodb",
"user": "SSO-System-mongo"
}
}
and datasources.local.js as below
module.exports = {
SSOSytem: {
connector: 'mongodb',
hostname: process.env.SSO_DB_HOST || 'localhost',
port: process.env.SSO_DB_PORT || 27017,
user: process.env.SSO_DB_USERNAME,
password: process.env.SSO_DB_PASSWORD,
database: process.env.SSO_DB_NAME,
url: `mongodb://${process.env.SSO_DB_USERNAME}:${process.env.SSO_DB_PASSWORD}@${process.env.SSO_DB_HOST}/${process.env.SSO_DB_NAME}`
}
}
but when I run my app with the local environment
NODE_ENV=local node .
LoopBack only loads datasources from the datasources.json file. Did I do something wrong in the datasources config? Has anyone run into the same problem?
Many thanks.
Sorry, this was my typo, not a problem with LoopBack: the key is SSOSystem in datasources.json but SSOSytem in datasources.local.js.
I'm having trouble setting up a project with the config npm package.
My config.js looks like this:
require('dotenv').config();
const { DB_HOST, DB_USERNAME, DB_PASSWORD, SENDGRID_API_KEY } = process.env;
const env = process.env.NODE_ENV;
const config = {
development: {
username: DB_USERNAME,
password: DB_PASSWORD,
database: "database_development",
host: DB_HOST,
dialect: "postgres",
sendgrid: {
base_url: 'https://localhost:3002',
sendgrid_api_key: SENDGRID_API_KEY,
sender_email: '',
enabled: true,
},
},
test: {
username: DB_USERNAME,
password: DB_PASSWORD,
database: "database_test",
host: DB_HOST,
dialect: "postgres",
sendgrid: {
base_url: 'https://localhost:3002',
sendgrid_api_key: SENDGRID_API_KEY,
sender_email: '',
enabled: true,
},
},
production: {
username: DB_USERNAME,
password: DB_PASSWORD,
database: "database_production",
host: DB_HOST,
dialect: "postgres",
sendgrid: {
base_url: 'https://localhost:3002',
sendgrid_api_key: SENDGRID_API_KEY,
sender_email: '',
enabled: true,
},
}
};
module.exports = config[env];
In one service driver file I have the following line:
const dialect = config.get('dialect');
I get the following error: "Error: Configuration property "dialect" is not defined".
I also tried using 'development.dialect' but that doesn't help either.
Is it possible that require('config') doesn't work?
In my Sequelize's index.js file I've got
const config = require(__dirname + '/../config/config.js'); and that seems to work fine.
This GitHub repository provides a good example of using the config package:
https://github.com/basarbk/tdd-nodejs
Notes:
For different configs, you must have a file named after whatever you use for NODE_ENV. For example, for "start": "cross-env NODE_ENV=production node index", create a file called production.js.
Check the config folder in the root directory.
If you are using Windows, you should use cross-env.
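To see why the original error appears, note that config.get('dialect') is the config npm package's API, while the config.js shown above is a hand-rolled module: when NODE_ENV is unset, config[env] evaluates to undefined and every property lookup fails. The following is a simplified, hypothetical emulation of that lookup (not the real config package) to illustrate the failure mode:

```javascript
// Hypothetical mini version of a config lookup, only to illustrate
// why an unset or unknown NODE_ENV breaks property access.
const allConfigs = {
  development: { dialect: 'postgres' },
  test: { dialect: 'postgres' }
};

function getConfigValue(envName, property) {
  const envConfig = allConfigs[envName]; // undefined if envName is not a key
  if (!envConfig || !(property in envConfig)) {
    throw new Error(`Configuration property "${property}" is not defined`);
  }
  return envConfig[property];
}

console.log(getConfigValue('development', 'dialect')); // 'postgres'
// getConfigValue(undefined, 'dialect') would throw, mirroring the error.
```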
I am not sure what to title my question.
It's been an adventure with Node.js, and a helpful person pointed me to ioredis. Currently I have:
var Redis = require("ioredis");
const DBConfig = require(__dirname+'/../Config.json');
var cluster = new Redis.Cluster([
{
port: 6001,
host: "10.0.0.6",
},
{
port: 6002,
host: "10.0.0.5",
},
{
port: 6003,
host: "10.0.0.4",
},
{
port: 6004,
host: "10.0.0.3",
},
{
port: 6005,
host: "10.0.0.2",
},
{
port: 6006,
host: "10.0.0.1",
},
]);
But it seems to me this would be better in a JSON config file like...
Config.json:
{
"configA" : "abc",
"someotherconfigB" : "Stuff",
"foo" : "bar"
}
{
"port": 6001,
"host": "10.0.0.6",
},
{
"port": 6002,
"host": "10.0.0.5",
},
{
"port": 6003,
"host": "10.0.0.4",
},
{
"port": 6004,
"host": "10.0.0.3",
},
{
"port": 6005,
"host": "10.0.0.2",
},
{
"port": 6006,
"host": "10.0.0.1",
},
}
I am new to this and just not sure how to implement it without syntax errors.
var Redis = require("ioredis");
const DBConfig = require(__dirname+'/../Config.json');
var cluster = new Redis.Cluster([DBConfig.redis]);
I am not sure how to implement "var cluster = new Redis.Cluster([DBConfig.redis]);" properly
You should declare those settings as an array under a key:
{
"configA" : "abc",
"someotherconfigB" : "Stuff",
"foo" : "bar",
"redisCluster": [
{
"port": 6001,
"host": "10.0.0.6"
},
{
"port": 6002,
"host": "10.0.0.5"
},
{
"port": 6003,
"host": "10.0.0.4"
}
]
}
Then use that key to access that value inside the required config file.
const DBConfig = require('../Config.json');
const cluster = new Redis.Cluster(DBConfig.redisCluster);
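Since a JSON array becomes a plain JavaScript array after require/JSON.parse, no extra brackets are needed around it. A self-contained sketch (with the hosts from the question as placeholder values, no Redis needed) shows the shape Redis.Cluster expects:

```javascript
// Parsing a config shaped like the one above; hosts are placeholders.
const DBConfig = JSON.parse(`{
  "redisCluster": [
    { "port": 6001, "host": "10.0.0.6" },
    { "port": 6002, "host": "10.0.0.5" }
  ]
}`);

// DBConfig.redisCluster is already an array of { port, host } objects,
// which is exactly what new Redis.Cluster(...) takes. Wrapping it in
// [ ... ] would instead produce a one-element array containing an array.
console.log(Array.isArray(DBConfig.redisCluster)); // true
```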
First, you need to have a proper config file. Your file seems to contain some config information and node information. I would suggest:
Config.json file:
{
"configs": {
"configA": "abc",
"someotherconfigB": "Stuff",
"foo": "bar"
},
"nodes": [
{
"port": 6001,
"host": "10.0.0.6"
},
{
"port": 6002,
"host": "10.0.0.5"
},
{
"port": 6003,
"host": "10.0.0.4"
},
{
"port": 6004,
"host": "10.0.0.3"
},
{
"port": 6005,
"host": "10.0.0.2"
},
{
"port": 6006,
"host": "10.0.0.1"
}
]
}
Then your file should look like:
const Redis = require('ioredis');
const DBConfig = require(__dirname + '/Config.json');
const cluster = new Redis.Cluster(DBConfig.nodes);
Object.entries(DBConfig.configs).map(([key, value]) => {
cluster.set(key, value);
});
DBConfig.nodes is already an array, so there is no need to put brackets around it.
Object.entries(DBConfig.configs) will give you an array of [key, value] pairs of DBConfig.configs's properties.
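As a quick check of that iteration pattern, here is a self-contained sketch using the sample config values from the question (no Redis required):

```javascript
const configs = { configA: 'abc', someotherconfigB: 'Stuff', foo: 'bar' };

// Object.entries turns the object into [key, value] pairs, which the
// map callback can then destructure, just like in the answer's code.
const pairs = Object.entries(configs).map(([key, value]) => `${key}=${value}`);

console.log(pairs); // [ 'configA=abc', 'someotherconfigB=Stuff', 'foo=bar' ]
```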
Resources:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries
I use Node.js, TS, and TypeORM for a back-end project.
I need to connect to a different database in the middleware according to the parameter I send.
And I've got to send the query to the database.
ormconfig
[
{
"name": "default",
"type": "postgres",
"host": "localhost",
"port": 5432,
"username": "postgres",
"password": "12345",
"database": "dbOne"
},
{
"name": "second-connection",
"type": "postgres",
"host": "localhost",
"port": 5432,
"username": "postgres",
"password": "12345",
"database": "dbTwo"
}
]
Those are my connection settings.
After that, I try to connect in the middleware:
const connectionOptions = await getConnectionOptions("second-connection");
const conTwo = await createConnection(connectionOptions);
const managerTwo = getManager("second-connection");
const resultTwo = await managerTwo
.createQueryBuilder(SysCompany, "company")
.getOne();
console.log(resultTwo);
I think I can connect to the database, but I'm having trouble with the repository.
Error
EntityMetadataNotFound: No metadata for "SysCompany" was found.
@Entity()
export class SysCompany extends CoreEntityWithTimestamp {
@Column({ length: 100 })
name: string;
// FK
// SysPersonnel
@OneToMany(type => SysPersonnel, personnel => personnel.sysCompany)
sysPersonnels: SysPersonnel[];
}
Maybe TypeORM cannot find your JavaScript entity. I had that problem some time ago. You can do the following:
Check your destination folder after you built the project. Is your SysCompany.js available?
Set the entities property in the configuration. It must contain the path to your JS entities. The TypeORM docs state that "Each entity must be registered in your connection options".
{
"name": "second-connection",
"type": "postgres",
"host": "localhost",
"port": 5432,
"username": "postgres",
"password": "12345",
"database": "dbTwo"
"entities": ["<path to entities>/**/*.js"]
}
I would also recommend using a JavaScript configuration file. Your ormconfig.js can then use __dirname (the directory name of the current module) to set the path. So if your directories look like this:
project/ormconfig.js
project/dist/entity/SysCompany.js
project/dist/entity/OtherEntity.js
You can use a configuration like this:
import {join} from "path";
...
entities: [
join(__dirname, "dist/entity/**/*.js")
],
...
You could also prevent duplication by using a base configuration object.
import {join} from "path";
const baseOptions = {
type: "postgres",
host: "localhost",
port: 5432,
username: "postgres",
password: "12345",
entities: [
join(__dirname, "dist/entity/**/*.js")
]
}
const defaultConfig = Object.assign({
name: "default",
database: "dbOne",
}, baseOptions);
const secondConfig = Object.assign({
name: "second-connection",
database: "dbTwo",
}, baseOptions);
module.exports = [ defaultConfig, secondConfig ];
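The merge above can be verified in isolation. This sketch uses the same structure with the sample values from the answer to show that each derived config keeps its own name/database while sharing the base options:

```javascript
// Base options shared by both connections (entities path omitted here
// so the sketch stays self-contained).
const baseOptions = {
  type: 'postgres',
  host: 'localhost',
  port: 5432
};

// Object.assign copies baseOptions' properties into each target object;
// since the key sets do not overlap, nothing gets clobbered.
const defaultConfig = Object.assign({ name: 'default', database: 'dbOne' }, baseOptions);
const secondConfig = Object.assign({ name: 'second-connection', database: 'dbTwo' }, baseOptions);

console.log(defaultConfig.database); // 'dbOne'
console.log(secondConfig.database);  // 'dbTwo'
```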
In the file where you open the connection you could then require the exported array and pick the connection you need:
const [ defaultConfig, secondConfig ] = require("<path to file>/ormconfig");
const conTwo = await createConnection(secondConfig);
The simplest way to use multiple databases is to create different connections:
import {createConnections} from "typeorm";
const connections = await createConnections([{
name: "db1Connection",
type: "mysql",
host: "localhost",
port: 3306,
username: "root",
password: "admin",
database: "db1",
entities: [__dirname + "/entity/*{.js,.ts}"],
synchronize: true
}, {
name: "db2Connection",
type: "mysql",
host: "localhost",
port: 3306,
username: "root",
password: "admin",
database: "db2",
entities: [__dirname + "/entity/*{.js,.ts}"],
synchronize: true
}]);
This approach allows you to connect to any number of databases you have and each database will have its own configuration, own entities and overall ORM scope and settings.
For each connection a new Connection instance will be created. You must specify a unique name for each connection you create.
The connection options can also be loaded from an ormconfig file. You can load all connections from the ormconfig file:
import {createConnections} from "typeorm";
const connections = await createConnections();
or you can specify which connection to create by name:
import {createConnection} from "typeorm";
const connection = await createConnection("db2Connection");
When working with connections you must specify a connection name to get a specific connection:
import {getConnection} from "typeorm";
const db1Connection = getConnection("db1Connection");
// you can work with "db1" database now...
const db2Connection = getConnection("db2Connection");
// you can work with "db2" database now...
The benefit of this approach is that you can configure multiple connections with different login credentials, hosts, ports, and even database types. The downside is that you'll need to manage and work with multiple connection instances.
Background
I am creating a boilerplate Express application. I have configured a database connection using pg and Sequelize. When I add the CLI and try to run sequelize db:migrate I get this error:
ERROR: The dialect [object Object] is not supported. Supported
dialects: mssql, mysql, postgres, and sqlite.
Replicate
Generate a new express application. Install pg, pg-hstore, sequelize and sequelize-cli.
Run sequelize init.
Add a config.js file to the /config path that was created from sequelize init.
Create the connection in the config.js file.
Update the config.json file created by sequelize-cli.
Run sequelize db:migrate
Example
/config/config.js
const Sequelize = require('sequelize');
const { username, host, database, password, port } = require('../secrets/db');
const sequelize = new Sequelize(database, username, password, {
host,
port,
dialect: 'postgres',
operatorsAliases: false,
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
});
module.exports = sequelize;
/config/config.json
{
"development": {
"username": "user",
"password": "pass",
"database": "db",
"host": "host",
"dialect": "postgres"
},
"test": {
"username": "user",
"password": "pass",
"database": "db",
"host": "host",
"dialect": "postgres"
},
"production": {
"username": "user",
"password": "pass",
"database": "db",
"host": "host",
"dialect": "postgres"
}
}
Problem
I expect the initial migrations to run but instead get an error,
ERROR: The dialect [object Object] is not supported. Supported
dialects: mssql, mysql, postgres, and sqlite.
Versions
Dialect: postgres
Dialect version: "pg":7.4.3
Sequelize version: 4.38.0
Sequelize-Cli version: 4.0.0
Package Json
"pg": "^7.4.3",
"pg-hstore": "^2.3.2",
"sequelize": "^4.38.0"
Installed globally
npm install -g sequelize-cli
Question
Now that the major rewrite has been released for sequelize, what is the proper way to add the dialect so the migrations will run?
It is important to note that my connection is working fine. I can query the database without problems, only sequelize-cli will not work when running migrations.
I ran into the same problem. There are a few things that you need to change. First, I am not sure why you had two config/config.js files; I assumed the second file is config.json. The reason you run into this problem is that
const sequelize = new Sequelize(database, username, password, {
host,
port,
dialect: 'postgres',
operatorsAliases: false,
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
});
these lines of code are used by the Node server to access the DB, not by sequelize-cli to migrate. You need to follow the sequelize-cli instructions exactly. Here is the link: instruction
My code:
config/db.js
const {sequlize_cli} = require('../config.json');
module.exports = sequlize_cli;
config.json
{
"sequlize_cli":{
"development":{
"username":"root",
"password":"passowrd",
"database":"monitor",
"host":"127.0.0.1",
"dialect": "postgres"
},
"test": {
"username":"root",
"password":"passowrd",
"database":"monitor",
"host":"127.0.0.1",
"dialect": "postgres"
},
"production": {
"username":"root",
"password":"passowrd",
"database":"monitor",
"host":"127.0.0.1",
"dialect": "postgres"
}
}
}
The main point, I guess, is to export the JSON object directly instead of exporting a Sequelize object. In addition, this seems to be a problem only with Postgres; I tested with MySQL, and your code works perfectly there.
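The distinction can be sketched without Sequelize installed: the CLI wants a plain per-environment object with a dialect string, whereas the server wants a constructed Sequelize instance, and exporting one where the other is expected is roughly how the "[object Object]" dialect error arises. The instance below is a stub standing in for a real Sequelize object:

```javascript
// What sequelize-cli expects from its config file: plain data keyed by env.
const cliConfig = {
  development: {
    username: 'root', password: 'password', database: 'monitor',
    host: '127.0.0.1', dialect: 'postgres'
  }
};

// What the question's config/config.js exported instead: a Sequelize
// instance (stubbed here), where the dialect lives inside options.
const sequelizeInstanceStub = { options: { dialect: 'postgres' } };

// The CLI reads config[env].dialect; on the instance that path is
// undefined, so the object itself ends up stringified in the error.
console.log(cliConfig.development.dialect); // 'postgres'
console.log(sequelizeInstanceStub.dialect); // undefined
```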
I'm following the instructions here to use Ghost as an NPM module, and attempting to setup Ghost for production.
I'm running Google cloud sql proxy locally. When I run NODE_ENV=production knex-migrator init --mgpath node_modules/ghost I get this error message:
NAME: RollbackError
CODE: ER_ACCESS_DENIED_ERROR
MESSAGE: ER_ACCESS_DENIED_ERROR: Access denied for user 'root'@'cloudsqlproxy~[SOME_IP_ADDRESS]' (using password: NO)
Running knex-migrator init --mgpath node_modules/ghost works just fine, and I can launch the app locally with no problems. It's only when I try to set up the app for production that I get problems.
EDIT: I can connect to the db via MySQL Workbench, using the same credentials I'm using in the config below
Here's my config.production.json (with private data removed):
{
"production": {
"url": "https://MY_PROJECT_ID.appspot.com",
"fileStorage": false,
"mail": {},
"database": {
"client": "mysql",
"connection": {
"socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
"user": "USER",
"password": "PASSWORD",
"database": "DATABASE_NAME",
"charset": "utf8"
},
"debug": false
},
"server": {
"host": "0.0.0.0",
"port": "2368"
},
"paths": {
"contentPath": "content/"
}
}
}
And app.yaml:
runtime: nodejs
env: flex
manual_scaling:
instances: 1
env_variables:
MYSQL_USER: ******
MYSQL_PASSWORD: ******
MYSQL_DATABASE: ******
# e.g. my-awesome-project:us-central1:my-cloud-sql-instance-name
INSTANCE_CONNECTION_NAME: ******
beta_settings:
# The connection name of your instance on its Overview page in the Google
# Cloud Platform Console, or use `YOUR_PROJECT_ID:YOUR_REGION:YOUR_INSTANCE_NAME`
cloud_sql_instances: ******
# Setting to keep gcloud from uploading not required files for deployment
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?.*\.ts$
- ^(.*/)?config\.development\.json$
The file ghost.prod.config.js isn't something Ghost recognises - I'm not sure where that file name came from, but Ghost < 1.0 used config.js with all environments in one file, and Ghost >= 1.0 uses config.<env>.json with each environment in its own file.
Your config.production.json file doesn't contain your MySQL connection info, and therefore the knex-migrator tool is not able to connect to your DB.
If you merge the contents of ghost.prod.config.js into config.production.json, this should work fine.
Your config.production.json should look something like this:
{
"url": "https://something.appspot.com",
"database": {
"client": "mysql",
"connection": {
"socketPath": "path",
"user": "user",
"password": "password",
"database": "dbname",
"charset": "utf8"
}
}
}
The caveat here is that the new JSON format cannot contain code or logic, only explicit values, e.g. process.env.PORT || "2368" is no longer permitted.
Instead, you'll need to use either arguments or environment variables to provide dynamic configuration. Documentation for how to use environment variables is here: https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
E.g. NODE_ENV=production port=[your port] database__connection__user=[your user] ...etc... knex-migrator init --mgpath node_modules/ghost
You'd need to add an environment variable for every dynamic variable in the config.
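The double-underscore convention maps each variable name onto a nested config path. A small, hypothetical parser (not Ghost's actual implementation, which is built on the nconf library) illustrates the idea:

```javascript
// Hypothetical sketch of how keys like database__connection__user become
// nested config values; Ghost's real loader is nconf-based.
function envToConfig(envVars) {
  const config = {};
  for (const [key, value] of Object.entries(envVars)) {
    const path = key.split('__');             // 'a__b__c' -> ['a', 'b', 'c']
    let node = config;
    while (path.length > 1) {
      const part = path.shift();
      node = node[part] = node[part] || {};   // create intermediate objects
    }
    node[path[0]] = value;                    // set the leaf value
  }
  return config;
}

const parsed = envToConfig({ database__connection__user: 'USER', port: '2368' });
console.log(parsed.database.connection.user); // 'USER'
console.log(parsed.port);                     // '2368'
```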
I figured out the problem.
My config file shouldn't have the "production" property. My config should look like this:
{
"url": "https://MY_PROJECT_ID.appspot.com",
"fileStorage": false,
"mail": {},
"database": {
"client": "mysql",
"connection": {
"socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
"user": "USER",
"password": "PASSWORD",
"database": "DATABASE_NAME",
"charset": "utf8"
},
"debug": false
},
"server": {
"host": "0.0.0.0",
"port": "8080"
},
"paths": {
"contentPath": "content/"
}
}
It now overrides the default config.
The only issue is that you can't use knex-migrator with the "socketPath" property set, but this is needed to run the app in the cloud.