Using KrakenD with local Node.js server

I have an up-and-running Node.js server (with one API) on my local machine.
I have created a new Docker container for KrakenD using:
docker run -p 8080:8080 -v $PWD:/etc/krakend/ devopsfaith/krakend run --config /etc/krakend/krakend.json
However, I had to make some changes to the above command because I am working on Windows.
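For reference, this is roughly the equivalent command in PowerShell (a sketch, assuming Docker Desktop; in cmd.exe you would use %cd% instead of ${PWD}):

docker run -p 8080:8080 -v ${PWD}:/etc/krakend/ devopsfaith/krakend run --config /etc/krakend/krakend.json

I have created a krakend.json file, and its contents are: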
{
  "version": 3,
  "timeout": "3s",
  "cache_ttl": "300s",
  "port": 8080,
  "default_hosts": ["http://localhost:3001"],
  "endpoints": [
    {
      "endpoint": "/contacts",
      "output_encoding": "json",
      "extra_config": {
        "qos/ratelimit/router": {
          "max_rate": 5000
        }
      },
      "backend": [
        {
          "host": [
            "http://localhost:3001",
            "http://cotacts:3001"
          ],
          "url_pattern": "/contacts",
          "is_collection": "true",
          "encoding": "json",
          "extra_config": {
            "backend/http": {
              "return_error_details": "backend_alias"
            }
          }
        }
      ]
    }
  ]
}
But when I hit the URL http://localhost:8080/contacts using Postman, I get:
[KRAKEND] 2022/03/14 - 07:26:30.305 ▶ ERROR [ENDPOINT: /contacts] Get "http://localhost:3001/contacts": dial tcp 127.0.0.1:3001: connect: connection refused
I found a relevant question over here:
connection refused error with Krakend api-gateway?
but I am not sure what to change in my case.

Inside the backend you have two hosts in the load balancer. KrakenD will try one and the other in a round-robin fashion.
"host": [
"http://localhost:3001",
"http://cotacts:3001"
],
If you have started KrakenD as you have written in your message, then neither of those names is available.
localhost is KrakenD itself (not the host machine that started KrakenD). KrakenD does not listen on port 3001, so it is expected that it cannot connect. You should write your host's IP instead.
I am guessing cotacts:3001 is some outside service. If you need to access this service by name, you need to run it together with KrakenD through Docker Compose (see the sketch below).
The problem you have is Docker connectivity, and it is not related to KrakenD. KrakenD is just complaining that it cannot connect to those services.
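For illustration, a minimal docker-compose.yml sketch of that idea (the service names node and krakend are assumptions, not taken from your setup); with this, the backend host would be http://node:3001 instead of localhost:

version: "3"
services:
  node:
    build: .                     # your Node.js app, listening on 3001
    ports:
      - 3001:3001
  krakend:
    image: devopsfaith/krakend
    command: run --config /etc/krakend/krakend.json
    volumes:
      - ./:/etc/krakend/        # mounts krakend.json into the container
    ports:
      - 8080:8080
    depends_on:
      - node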
Finally, "default_hosts" is something that does not exist in KrakenD; it has no effect in the configuration, so you can delete that line. If you want a default host without needing to declare it in every backend, use just host. In summary, your configuration should look like this:
{
  "$schema": "https://www.krakend.io/schema/v3.json",
  "version": 3,
  "timeout": "3s",
  "cache_ttl": "300s",
  "port": 8080,
  "host": [
    "http://1.2.3.4:3001"
  ],
  "endpoints": [
    {
      "endpoint": "/contacts",
      "extra_config": {
        "qos/ratelimit/router": {
          "max_rate": 5000
        }
      },
      "backend": [
        {
          "url_pattern": "/contacts",
          "is_collection": true,
          "extra_config": {
            "backend/http": {
              "return_error_details": "backend_alias"
            }
          }
        }
      ]
    }
  ]
}
And replace 1.2.3.4 with the IP of the machine running the Node server. (On Docker Desktop for Windows, http://host.docker.internal:3001 usually resolves to the host machine and can be used instead of a hard-coded IP.)

Related

docker-compose: Connect to mongodb using node

I'm looking for some help on how I can connect to MongoDB using Node when they run in two different containers.
I have three services set up in my docker compose:
- webserver (irrelevant to question)
- nodeJs
- mongo database
The Node.js container is essentially an API which I can use to communicate with MongoDB:
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const app = express();
var MongoClient = require('mongodb').MongoClient;
var mongodb = require('mongodb');

app.use(express.json());
app.use(cors());

app.post('/api/fetch-items', (req, res) => {
    if (req.headers.apikey !== process.env.API_KEY) return res.sendStatus(401);
    // URL is in the format: mongodb://user:pwd@database:27017
    MongoClient.connect(process.env.MONGODB_URL, function(err, db) {
        if (err) return res.status(500).send(err);
        var dbo = db.db("db");
        dbo.collection("col").find({}).toArray(function(err, result) {
            if (err) return res.status(500).send(err);
            db.close();
            return res.status(200).send(result);
        });
    });
});

app.listen(4000);
This all works perfectly fine if I run node as a standalone container (not using docker-compose) and use localhost in the URL.
However, when I use the image in docker-compose I receive the response:
{
"name": "MongoNetworkError"
}
when sending a request to the API.
I am currently using the hostname 'database' in the URL and this does not work. I have also tried using localhost.
There are also no errors as a result of the command node server.
If needed, my Dockerfile for the node server is:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN chown node:node ./package*.json
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 4000
CMD [ "node", "server" ]
My docker-compose.yml file:
version: "3.1"
services:
mongodb:
image: mongo
restart: always
container_name: database
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: xxxxxxxx
# Web server stuff
node:
image: created-node-server
container_name: node
ports:
- 4000:4000
Finally, the output of docker network inspect:
[
    {
        "Name": "network_default",
        "Id": "3e51a90a23f2785cfc405243ad4c73991852f52826fd1cd0b14da5d4eaa180e4",
        "Created": "2021-01-12T01:07:42.656013002Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "418876a06c3f8fa430804ae77c66cca986a49dbc88374266346463f7f448baa7": {
                "Name": "database",
                "EndpointID": "ac08c5a439edd43e612723d269714e9dfbae29dbdb50790b61c66207287d70c8",
                "MacAddress": "02:42:ac:17:00:04",
                "IPv4Address": "172.23.0.4/16",
                "IPv6Address": ""
            },
            "7b6dcbb8f76618575c988a026ac0308075a116f79a2e58d8a146e33fb5d7674c": {
                "Name": "node",
                "EndpointID": "e6beb412a2fe97ae7d04d2484a7ca3634bfa37c82680becc412d1f44502da72f",
                "MacAddress": "02:42:ac:17:00:03",
                "IPv4Address": "172.23.0.3/16",
                "IPv6Address": ""
            },
            "f2ea250bccdb2c6a0c4d7818912ddbf29196eff072dad699e8dbcef466cd38a3": {
                "Name": "webserver",
                "EndpointID": "f6617aab4001032069e68300c5303fa730f3458e2fe0092ace45a9f67e16d7c5",
                "MacAddress": "02:42:ac:17:00:02",
                "IPv4Address": "172.23.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "proj",
            "com.docker.compose.version": "1.27.4"
        }
    }
]
Essentially, I am receiving the MongoNetworkError when trying to communicate with MongoDB through Node, both of which are Docker containers created using docker-compose.
I hope all the above makes sense; sorry if it is a bit wordy, I have tried to include as much info as possible. Comment if you need any more info.
Thanks :)
You just need to include an environment variable under the node service: MONGODB_URL=mongodb://database:27017 (as shown in the sketch below).
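A minimal sketch of how that could look under the node service in docker-compose.yml (the credentials are an assumption, matching the MONGO_INITDB_* values above; depends_on is optional but keeps the start order sane):

  node:
    image: created-node-server
    container_name: node
    ports:
      - 4000:4000
    environment:
      # container-to-container traffic uses the compose service/container name, not localhost
      MONGODB_URL: mongodb://root:xxxxxxxx@database:27017
    depends_on:
      - mongodb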

Issue using Ghost with Google Cloud SQL

I'm following the instructions here to use Ghost as an NPM module, and attempting to set up Ghost for production.
I'm running the Google Cloud SQL proxy locally. When I run NODE_ENV=production knex-migrator init --mgpath node_modules/ghost I get this error message:
NAME: RollbackError
CODE: ER_ACCESS_DENIED_ERROR
MESSAGE: ER_ACCESS_DENIED_ERROR: Access denied for user 'root'@'cloudsqlproxy~[SOME_IP_ADDRESS]' (using password: NO)
Running knex-migrator init --mgpath node_modules/ghost works just fine, and I can launch the app locally with no problems. It's only when I try to set up the app for production that I get problems.
EDIT: I can connect to the db via MySQL Workbench, using the same credentials I'm using in the config below
Here's my config.production.json (with private data removed):
{
    "production": {
        "url": "https://MY_PROJECT_ID.appspot.com",
        "fileStorage": false,
        "mail": {},
        "database": {
            "client": "mysql",
            "connection": {
                "socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
                "user": "USER",
                "password": "PASSWORD",
                "database": "DATABASE_NAME",
                "charset": "utf8"
            },
            "debug": false
        },
        "server": {
            "host": "0.0.0.0",
            "port": "2368"
        },
        "paths": {
            "contentPath": "content/"
        }
    }
}
And app.yaml:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1

env_variables:
  MYSQL_USER: ******
  MYSQL_PASSWORD: ******
  MYSQL_DATABASE: ******
  # e.g. my-awesome-project:us-central1:my-cloud-sql-instance-name
  INSTANCE_CONNECTION_NAME: ******

beta_settings:
  # The connection name of your instance on its Overview page in the Google
  # Cloud Platform Console, or use `YOUR_PROJECT_ID:YOUR_REGION:YOUR_INSTANCE_NAME`
  cloud_sql_instances: ******

# Setting to keep gcloud from uploading files not required for deployment
skip_files:
  - ^(.*/)?#.*#$
  - ^(.*/)?.*~$
  - ^(.*/)?.*\.py[co]$
  - ^(.*/)?.*/RCS/.*$
  - ^(.*/)?\..*$
  - ^(.*/)?.*\.ts$
  - ^(.*/)?config\.development\.json$
The file ghost.prod.config.js isn't something Ghost recognises - I'm not sure where that file name came from, but Ghost < 1.0 used config.js with all environments in one file, and Ghost >= 1.0 uses config.<env>.json with each environment in its own file.
Your config.production.json file doesn't contain your MySQL connection info, and therefore the knex-migrator tool is not able to connect to your DB.
If you merge the contents of ghost.prod.config.js into config.production.json this should work fine.
Your config.production.json should look something like this:
{
    "url": "https://something.appspot.com",
    "database": {
        "client": "mysql",
        "connection": {
            "socketPath": "path",
            "user": "user",
            "password": "password",
            "database": "dbname",
            "charset": "utf8"
        }
    }
}
The caveat here is that the new JSON format cannot contain code or logic, only explicit values, e.g. process.env.PORT || "2368" is no longer permitted.
Instead, you'll need to use either arguments or environment variables to provide dynamic configuration. Documentation for how to use environment variables is here: https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
E.g. NODE_ENV=production port=[your port] database__connection__user=[your user] ...etc... knex-migrator init --mgpath node_modules/ghost
You'd need to add an environment variable for every dynamic variable in the config.
I figured out the problem.
My config file shouldn't have the "production" property. My config should look like this:
{
    "url": "https://MY_PROJECT_ID.appspot.com",
    "fileStorage": false,
    "mail": {},
    "database": {
        "client": "mysql",
        "connection": {
            "socketPath": "/cloudsql/MY_INSTANCE_CONNECTION_NAME",
            "user": "USER",
            "password": "PASSWORD",
            "database": "DATABASE_NAME",
            "charset": "utf8"
        },
        "debug": false
    },
    "server": {
        "host": "0.0.0.0",
        "port": "8080"
    },
    "paths": {
        "contentPath": "content/"
    }
}
It now overrides the default config.
The only issue is that you can't use knex-migrator with the "socketPath" property set, but this is needed to run the app in the cloud.
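One possible workaround, sketched here but not verified against this setup: use the environment-variable overrides mentioned above to point knex-migrator at the Cloud SQL proxy over TCP instead of the socket (this assumes the proxy is listening on 127.0.0.1:3306 locally):

NODE_ENV=production \
  database__connection__host=127.0.0.1 \
  database__connection__port=3306 \
  knex-migrator init --mgpath node_modules/ghost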

AWS: nodejs app can't connect to pg database

I am trying to deploy a Node app on AWS.
When I launch the app locally it works perfectly fine because my app has access to Postgres.
However, when I upload my server, it can't connect to the database.
My app uses loopback.io.
Here is the server/config.json:
{
    "restApiRoot": "/api",
    "host": "0.0.0.0",
    "port": 3000,
    "remoting": {
        "context": false,
        "rest": {
            "handleErrors": false,
            "normalizeHttpPath": false,
            "xml": false
        },
        "json": {
            "strict": false,
            "limit": "100kb"
        },
        "urlencoded": {
            "extended": true,
            "limit": "100kb"
        },
        "cors": false
    },
    "legacyExplorer": false,
    "logoutSessionsOnSensitiveChanges": true
}
And here is /server/datasources.json
{
    "db": {
        "name": "db",
        "connector": "memory"
    },
    "postgres": {
        "host": "localhost",
        "port": 5432,
        "url": "",
        "database": "postgres",
        "password": "postgresseason",
        "name": "postgres",
        "user": "postgres",
        "connector": "postgresql"
    }
}
I have done some research and I think I have to change a URL so it doesn't look for the database locally, but I haven't managed to make it work.
I tried using the URL postgres://postgres:postgresseason@db:5432/postgres without success.
The errors I am getting are either:
Web server listening at: http://0.0.0.0:8080
Browse your REST API at http://0.0.0.0:8080/explorer
Connection fails: Error: getaddrinfo ENOTFOUND db db:5432
It will be retried for the next request.
Or :
Web server listening at: http://0.0.0.0:3000
Browse your REST API at http://0.0.0.0:3000/explorer
Connection fails: Error: connect ECONNREFUSED 127.0.0.1:5432
It will be retried for the next request.
Any help on how to make it work?
Thanks
You need to make sure the Postgres server is installed and reachable from AWS.
By default it cannot reach your locally installed Postgres (without complicated port forwarding, etc.).
If you are using EC2 you can install a Postgres server locally and use localhost.
Or set up Postgres in another AWS service like this one: https://aws.amazon.com/rds/postgresql/
Just make sure the Node.js server/service has the required permissions to reach and query Postgres (see the sketch below).
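For illustration, a sketch of how the postgres entry in /server/datasources.json could point at an RDS instance instead of localhost (the hostname below is a placeholder, not a real endpoint):

{
    "postgres": {
        "name": "postgres",
        "connector": "postgresql",
        "host": "mydb.xxxxxxxxxx.eu-west-1.rds.amazonaws.com",
        "port": 5432,
        "database": "postgres",
        "user": "postgres",
        "password": "postgresseason"
    }
}

Also make sure the RDS security group allows inbound connections on port 5432 from the server running the app.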

Laravel Echo Server can not be authenticated, got HTTP status 500

I've installed both Laravel echo server and Laravel echo client.
Following is the laravel-echo-server.json configuration.
{
    "authHost": "http://taxation.com",
    "authEndpoint": "/broadcasting/auth",
    "clients": [
        {
            "appId": "APP_ID",
            "key": "someKey"
        }
    ],
    "database": "redis",
    "databaseConfig": {
        "redis": {},
        "sqlite": {
            "databasePath": "/database/laravel-echo-server.sqlite"
        }
    },
    "devMode": true,
    "host": "127.0.0.1",
    "port": "3000",
    "protocol": "http",
    "socketio": {},
    "sslCertPath": "",
    "sslKeyPath": "",
    "sslCertChainPath": "",
    "sslPassphrase": ""
}
The following script listens for channel events. It builds fine with npm run dev.
import Echo from 'laravel-echo'

let token = document.head.querySelector('meta[name="token"]');

if (token) {
    window.axios.defaults.headers.common['X-CSRF-TOKEN'] = token.content;
} else {
    console.error('CSRF token not found: https://laravel.com/docs/csrf#csrf-x-csrf-token');
}

window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: '127.0.0.1:3000',
    reconnectionAttempts: 5
});

window.Echo.join('checked-in-1')
    .listen('.user.checked_in', (e) => {
        console.log(e);
    });
When I start laravel-echo-server and try to listen for any event, it keeps throwing Client can not be authenticated, got HTTP status 500.
Note:
I really didn't find anything helpful on laravel-echo-server nor on Google.
Any help will be appreciated a lot.
Laravel V5.4
Thanks
It turned out the issue was caused by the CSRF token: I hadn't passed the token to Echo.

window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: '127.0.0.1:3000',
    reconnectionAttempts: 5,
    csrfToken: token.content // <-- pass the CSRF token
});

Unable to access localhost:8080 after upgrading NodeJS to 8.1.4

Our development group is starting a new React project and I have been trying to use Nightwatch + Selenium to do the e2e testing. I got it to work when running everything using NodeJS 6.9.4. Now we have been forced to upgrade NodeJS to 8.1.4 and I'm facing an issue that is stopping me from proceeding with testing. When using Selenium with Chrome as the browser, I keep getting a 'This site can't be reached' message (but the page can be accessed if I open a Chrome window manually). Any idea what could be going on? Here you have the test result log and my nightwatch.conf.js.
Test Result:
INFO Request: GET /wd/hub/session/fc36e7a7-4909-4dfd-a853-6d769accb085/element/0/text
- data:
- headers: {"Accept":"application/json"}
INFO Response 200 GET /wd/hub/session/fc36e7a7-4909-4dfd-a853-6d769accb085/element/0/text (16ms) { state: 'success',
sessionId: 'fc36e7a7-4909-4dfd-a853-6d769accb085',
hCode: 972983271,
value: 'This site can’t be reached',
class: 'org.openqa.selenium.remote.Response',
status: 0 }
Nightwatch Conf
const SCREENSHOT_PATH = "./screenshots/";
const BIN_PATH = './node_modules/nightwatch/bin/';

// we use a nightwatch.conf.js file so we can include comments and helper functions
module.exports = {
    "src_folders": [
        "__tests__/e2e/specs" // where you are storing your Nightwatch e2e tests
    ],
    "output_folder": "./reports", // reports (test outcome) output by nightwatch
    "selenium": { // downloaded by selenium-download module (see readme)
        "start_process": false, // tells nightwatch to start/stop the selenium process
        "server_path": "./node_modules/nightwatch/bin/selenium.jar",
        "host": "127.0.0.1",
        "port": 4444, // standard selenium port
        "cli_args": { // chromedriver is downloaded by selenium-download (see readme)
            "webdriver.chrome.driver": "./node_modules/nightwatch/bin/chromedriver"
        }
    },
    "test_settings": {
        "default": {
            "screenshots": {
                "enabled": true, // if you want to keep screenshots
                "path": './screenshots' // save screenshots here
            },
            "globals": {
                "waitForConditionTimeout": 5000 // sometimes internet is slow so wait.
            },
            "desiredCapabilities": { // use Chrome as the default browser for tests
                "browserName": "chrome",
                "javascriptEnabled": true, // turn off to test progressive enhancement
                "chromeOptions": {
                    "args": ['--disable-web-security', 'no-sandbox', '--disable-async-dns']
                }
            }
        },
        "chrome": {
            "desiredCapabilities": {
                "browserName": "chrome",
                "javascriptEnabled": true, // turn off to test progressive enhancement
                "chromeOptions": {
                    "args": ['--disable-web-security', 'no-sandbox', '--disable-async-dns']
                }
            }
        }
    },
    "params": {
        "baseUrl": "http://localhost:8080/",
    }
}
Sorry for having the files attached instead of expanded in the comment, but though I have been using StackOverflow for a long time, this is my first question.
Apparently my localhost was not visible to the Selenium Chrome instance that was running. I needed to make my localhost accessible from outside my machine in order to get this running.
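For example, if the app under test is served by webpack-dev-server (an assumption about this setup), binding it to all interfaces instead of only 127.0.0.1 makes it reachable from outside the machine:

webpack-dev-server --host 0.0.0.0 --port 8080

The same idea applies to any dev server: listen on 0.0.0.0 (or the machine's LAN IP) rather than localhost, so the Selenium-controlled browser can actually reach baseUrl.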
