AWS: nodejs app can't connect to pg database - node.js

I am trying to deploy a Node app on AWS.
When I launch the app locally it works perfectly fine because my app can reach Postgres.
However, when I deploy my server, it can't connect to the database.
My app uses loopback.io.
Here is the server/config.json:
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 3000,
  "remoting": {
    "context": false,
    "rest": {
      "handleErrors": false,
      "normalizeHttpPath": false,
      "xml": false
    },
    "json": {
      "strict": false,
      "limit": "100kb"
    },
    "urlencoded": {
      "extended": true,
      "limit": "100kb"
    },
    "cors": false
  },
  "legacyExplorer": false,
  "logoutSessionsOnSensitiveChanges": true
}
And here is /server/datasources.json
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "postgres": {
    "host": "localhost",
    "port": 5432,
    "url": "",
    "database": "postgres",
    "password": "postgresseason",
    "name": "postgres",
    "user": "postgres",
    "connector": "postgresql"
  }
}
I have done some research and I think I have to change the URL so it doesn't look for a "local" address, but I can't manage to make it work.
I tried using the URL postgres://postgres:postgresseason@db:5432/postgres without success.
The errors I am getting are either:
Web server listening at: http://0.0.0.0:8080
Browse your REST API at http://0.0.0.0:8080/explorer
Connection fails: Error: getaddrinfo ENOTFOUND db db:5432
It will be retried for the next request.
Or:
Web server listening at: http://0.0.0.0:3000
Browse your REST API at http://0.0.0.0:3000/explorer
Connection fails: Error: connect ECONNREFUSED 127.0.0.1:5432
It will be retried for the next request.
Any help on how to make it work?
Thanks

You need to make sure the Postgres server is installed and reachable from AWS.
By default it cannot reach your locally installed Postgres (without complicated port forwarding etc.).
If you are using EC2 you can install a Postgres server locally and use localhost.
Or set up Postgres in another AWS service, for example RDS: https://aws.amazon.com/rds/postgresql/
Just make sure the Node.js server/service has the required permissions to reach and query the Postgres instance.

Related

Using KrakenD with local nodejs server

I have an up-and-running Node.js server (with one API) on my local machine.
I created a new Docker container for KrakenD using
docker run -p 8080:8080 -v $PWD:/etc/krakend/ devopsfaith/krakend run --config /etc/krakend/krakend.json
though I had to make some changes to the above command because I am working on Windows.
I created a krakend.json file and its contents are:
{
  "version": 3,
  "timeout": "3s",
  "cache_ttl": "300s",
  "port": 8080,
  "default_hosts": ["http://localhost:3001"],
  "endpoints": [
    {
      "endpoint": "/contacts",
      "output_encoding": "json",
      "extra_config": {
        "qos/ratelimit/router": {
          "max_rate": 5000
        }
      },
      "backend": [
        {
          "host": [
            "http://localhost:3001",
            "http://cotacts:3001"
          ],
          "url_pattern": "/contacts",
          "is_collection": "true",
          "encoding": "json",
          "extra_config": {
            "backend/http": {
              "return_error_details": "backend_alias"
            }
          }
        }
      ]
    }
  ]
}
But when I hit the URL http://localhost:8080/contacts using Postman I get
[KRAKEND] 2022/03/14 - 07:26:30.305 ▶ ERROR [ENDPOINT: /contacts] Get "http://localhost:3001/contacts": dial tcp 127.0.0.1:3001: connect: connection refused
I found a relevant question here:
connection refused error with Krakend api-gateway?
but I don't understand what to change in my case.
Inside the backend you have two hosts in the load balancer. KrakenD will try one and the other in a round-robin fashion.
"host": [
  "http://localhost:3001",
  "http://cotacts:3001"
],
If you started KrakenD as written in your message, then neither of the names is available.
localhost is KrakenD itself (not the host machine starting KrakenD). KrakenD does not have anything listening on port 3001, so it's expected that it cannot connect. You should write your host's IP instead.
I am guessing cotacts:3001 is some outside service. If you need to access this service by name, you need to run both through Docker Compose.
The problem you have is Docker connectivity; it is not related to KrakenD. KrakenD is just complaining that it cannot connect to those services.
Finally, default_hosts is something that does not exist in KrakenD; it has no effect in the configuration, so you can delete that line. If you want a default host without needing to declare it in every backend, use host at the top level instead. In summary, your configuration should look like:
{
  "$schema": "https://www.krakend.io/schema/v3.json",
  "version": 3,
  "timeout": "3s",
  "cache_ttl": "300s",
  "port": 8080,
  "host": [
    "http://1.2.3.4:3001"
  ],
  "endpoints": [
    {
      "endpoint": "/contacts",
      "extra_config": {
        "qos/ratelimit/router": {
          "max_rate": 5000
        }
      },
      "backend": [
        {
          "url_pattern": "/contacts",
          "is_collection": "true",
          "extra_config": {
            "backend/http": {
              "return_error_details": "backend_alias"
            }
          }
        }
      ]
    }
  ]
}
And replace 1.2.3.4 with the IP of the machine running the Node.
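As a side note: on Docker Desktop for Windows and Mac, the special hostname host.docker.internal resolves to the host machine from inside a container. Since the Node server in the question runs directly on the Windows host, the backend host could be written without knowing the host IP (a sketch, assuming Docker Desktop):

```json
"host": [
  "http://host.docker.internal:3001"
]
```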

Failing to run migration Postgres on Heroku

Well, when I enter heroku bash and try to run npx typeorm migration:run, it just throws an error.
What's weird is that locally it works when the database is on localhost, like this in the .env file:
DATABASE_URL=postgres://postgres:docker@localhost:5432/gittin
This is my ormconfig.js:
module.exports = {
  "type": "postgres",
  "url": process.env.DATABASE_URL,
  "entities": ["dist/entities/*.js"],
  "cli": {
    "migrationsDir": "src/database/migrations",
    "entitiesDir": "src/entities"
  }
}
Yes, I added the heroku postgres addon to the app.
PS: If needed, this is the repo of the project: https://github.com/joaocasarin/gittin
As I was discussing with Carlo in the comments, I had to add the ssl property to ormconfig.js, but not simply set it to true when the environment is production. According to this, I had to set it to { rejectUnauthorized: false } in production mode and to false otherwise.
So the ormconfig.js is like this right now:
module.exports = {
  "type": "postgres",
  "ssl": process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
  "url": process.env.DATABASE_URL,
  "entities": ["dist/entities/*.js"],
  "cli": {
    "migrationsDir": "src/database/migrations",
    "entitiesDir": "src/entities"
  }
}

Problem with config env for datasource in loopback 3

I am using LoopBack to develop an API, and I'm having a problem with configuring datasources by env in LB3. I have a datasources.json file as below:
{
  "SSOSystem": {
    "host": "x.y.z.o",
    "port": 27017,
    "url": "mongodb://SSO-System-mongo:U5KckMwrWs9EGyAh@x.y.z.o/SSO-System",
    "database": "SSO-System",
    "password": "U5KckMwrWs9EGyAh",
    "name": "SSOSystem",
    "connector": "mongodb",
    "user": "SSO-System-mongo"
  }
}
and datasources.local.js as below
module.exports = {
  SSOSytem: {
    connector: 'mongodb',
    hostname: process.env.SSO_DB_HOST || 'localhost',
    port: process.env.SSO_DB_PORT || 27017,
    user: process.env.SSO_DB_USERNAME,
    password: process.env.SSO_DB_PASSWORD,
    database: process.env.SSO_DB_NAME,
    url: `mongodb://${process.env.SSO_DB_USERNAME}:${process.env.SSO_DB_PASSWORD}@${process.env.SSO_DB_HOST}/${process.env.SSO_DB_NAME}`
  }
}
but when I run my app with the local env
NODE_ENV=local node .
LoopBack only loads datasources from the datasources.json file. Did I do something wrong in the datasources config? Does anyone have the same problem?
Many thanks.
Sorry, this was my typo, not a problem with LoopBack: the key is SSOSystem in datasources.json but SSOSytem in datasources.local.js.

Sequelize migrate failing due to dialect object Object not supported error

Background
I am creating a boilerplate Express application. I have configured a database connection using pg and Sequelize. When I add the CLI and try to run sequelize db:migrate I get this error:
ERROR: The dialect [object Object] is not supported. Supported dialects: mssql, mysql, postgres, and sqlite.
Replicate
Generate a new Express application. Install pg, pg-hstore, sequelize and sequelize-cli.
Run sequelize init.
Add a config.js file to the /config path that was created by sequelize init.
Create the connection in the config.js file.
Update the config.json file created by sequelize-cli.
Run sequelize db:migrate
Example
/config/config.js
const Sequelize = require('sequelize');
const { username, host, database, password, port } = require('../secrets/db');

const sequelize = new Sequelize(database, username, password, {
  host,
  port,
  dialect: 'postgres',
  operatorsAliases: false,
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
});

module.exports = sequelize;
/config/config.js
{
  "development": {
    "username": "user",
    "password": "pass",
    "database": "db",
    "host": "host",
    "dialect": "postgres"
  },
  "test": {
    "username": "user",
    "password": "pass",
    "database": "db",
    "host": "host",
    "dialect": "postgres"
  },
  "production": {
    "username": "user",
    "password": "pass",
    "database": "db",
    "host": "host",
    "dialect": "postgres"
  }
}
Problem
I expect the initial migrations to run but instead get an error:
ERROR: The dialect [object Object] is not supported. Supported dialects: mssql, mysql, postgres, and sqlite.
Versions
Dialect: postgres
Dialect version: "pg":7.4.3
Sequelize version: 4.38.0
Sequelize-Cli version: 4.0.0
Package Json
"pg": "^7.4.3",
"pg-hstore": "^2.3.2",
"sequelize": "^4.38.0"
Installed globally
npm install -g sequelize-cli
Question
Now that the major rewrite has been released for sequelize, what is the proper way to add the dialect so the migrations will run?
It is important to note that my connection is working fine. I can query the database without problems, only sequelize-cli will not work when running migrations.
I ran into the same problem. There are a few things you need to change. First, I am not sure why you have two config/config.js files; I assume the second file is config.json. The reason you run into this problem is that
const sequelize = new Sequelize(database, username, password, {
  host,
  port,
  dialect: 'postgres',
  operatorsAliases: false,
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
});
these lines of code are used by the Node server to access the DB, not by sequelize-cli to migrate. You need to follow the sequelize-cli instructions exactly. Here is the link: instruction
My code:
config/db.js
const { sequlize_cli } = require('../config.json');
module.exports = sequlize_cli;
config.json
{
  "sequlize_cli": {
    "development": {
      "username": "root",
      "password": "passowrd",
      "database": "monitor",
      "host": "127.0.0.1",
      "dialect": "postgres"
    },
    "test": {
      "username": "root",
      "password": "passowrd",
      "database": "monitor",
      "host": "127.0.0.1",
      "dialect": "postgres"
    },
    "production": {
      "username": "root",
      "password": "passowrd",
      "database": "monitor",
      "host": "127.0.0.1",
      "dialect": "postgres"
    }
  }
}
The main point, I guess, is to export the JSON object directly instead of exporting a Sequelize object. In addition, this is only a problem with Postgres; I tested with MySQL and your code works perfectly there.

Issue using Ghost with Google Cloud SQL

I'm following the instructions here to use Ghost as an NPM module, and attempting to set up Ghost for production.
I'm running the Google Cloud SQL proxy locally. When I run NODE_ENV=production knex-migrator init --mgpath node_modules/ghost I get this error message:
NAME: RollbackError
CODE: ER_ACCESS_DENIED_ERROR
MESSAGE: ER_ACCESS_DENIED_ERROR: Access denied for user 'root'@'cloudsqlproxy~[SOME_IP_ADDRESS]' (using password: NO)
Running knex-migrator init --mgpath node_modules/ghost works just fine, and I can launch the app locally with no problems. It's only when I try to set up the app for production that I get problems.
EDIT: I can connect to the db via MySQL Workbench, using the same credentials I'm using in the config below
Here's my config.production.json (with private data removed):
{
  "production": {
    "url": "https://MY_PROJECT_ID.appspot.com",
    "fileStorage": false,
    "mail": {},
    "database": {
      "client": "mysql",
      "connection": {
        "socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
        "user": "USER",
        "password": "PASSWORD",
        "database": "DATABASE_NAME",
        "charset": "utf8"
      },
      "debug": false
    },
    "server": {
      "host": "0.0.0.0",
      "port": "2368"
    },
    "paths": {
      "contentPath": "content/"
    }
  }
}
And app.yaml:
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
env_variables:
  MYSQL_USER: ******
  MYSQL_PASSWORD: ******
  MYSQL_DATABASE: ******
  # e.g. my-awesome-project:us-central1:my-cloud-sql-instance-name
  INSTANCE_CONNECTION_NAME: ******
beta_settings:
  # The connection name of your instance on its Overview page in the Google
  # Cloud Platform Console, or use `YOUR_PROJECT_ID:YOUR_REGION:YOUR_INSTANCE_NAME`
  cloud_sql_instances: ******
# Setting to keep gcloud from uploading files not required for deployment
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?.*\.ts$
- ^(.*/)?config\.development\.json$
The file ghost.prod.config.js isn't something Ghost recognises - I'm not sure where that file name came from, but Ghost < 1.0 used config.js with all environments in one file, and Ghost >= 1.0 uses config.<env>.json with each environment in its own file.
Your config.production.json file doesn't contain your MySQL connection info, and therefore the knex-migrator tool is not able to connect to your DB.
If you merge the contents of ghost.prod.config.js into config.production.json this should work fine.
Your config.production.json should look something like this:
{
  "url": "https://something.appspot.com",
  "database": {
    "client": "mysql",
    "connection": {
      "socketPath": "path",
      "user": "user",
      "password": "password",
      "database": "dbname",
      "charset": "utf8"
    }
  }
}
The caveat here is that the new JSON format cannot contain code or logic, only explicit values, e.g. process.env.PORT || "2368" is no longer permitted.
Instead, you'll need to use either arguments or environment variables to provide dynamic configuration. Documentation for how to use environment variables is here: https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
E.g. NODE_ENV=production port=[your port] database__connection__user=[your user] ...etc... knex-migrator init --mgpath node_modules/ghost
You'd need to add an environment variable for every dynamic variable in the config.
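For this particular Cloud SQL setup, the full invocation might look like the following sketch (the uppercase values are the same placeholders used in config.production.json above; double underscores map to nesting in the JSON config):

```shell
NODE_ENV=production \
database__client=mysql \
database__connection__socketPath=/cloudsql/MY_INSTANCE_CONNECTION_NAME \
database__connection__user=USER \
database__connection__password=PASSWORD \
database__connection__database=DATABASE_NAME \
knex-migrator init --mgpath node_modules/ghost
```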
I figured out the problem.
My config file shouldn't have the "production" property. My config should look like this:
{
  "url": "https://MY_PROJECT_ID.appspot.com",
  "fileStorage": false,
  "mail": {},
  "database": {
    "client": "mysql",
    "connection": {
      "socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
      "user": "USER",
      "password": "PASSWORD",
      "database": "DATABASE_NAME",
      "charset": "utf8"
    },
    "debug": false
  },
  "server": {
    "host": "0.0.0.0",
    "port": "8080"
  },
  "paths": {
    "contentPath": "content/"
  }
}
It now overrides the default config.
The only issue is that you can't use knex-migrator with the "socketPath" property set, but this is needed to run the app in the cloud.
