How to pass SSL config such as the truststore location and password? - node.js

Right now I have configured my Kafka server with a self-signed certificate.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - 2181:2181
    hostname: zookeeper
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    command: [start-kafka.sh]
    ports:
      - 9093:9093
    hostname: kafka
    environment:
      KAFKA_LISTENERS: SSL://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: SSL://alfrescokafka.leafycode.com:9093
      KAFKA_SSL_KEYSTORE_LOCATION: /home/amur42s/ssl/kafka.server.keystore.jks
      KAFKA_SSL_KEYSTORE_PASSWORD: oE4KJ9FVMjMXGpgpp0qwLzUDy0uz
      KAFKA_SSL_KEY_PASSWORD: oE4KJ9FVMjMXGpgpp0qwLzUDy0uz
      KAFKA_SSL_TRUSTSTORE_LOCATION: /home/amur42s/ssl/kafka.server.truststore.jks
      KAFKA_SSL_TRUSTSTORE_PASSWORD: 123
      KAFKA_ADVERTISED_HOST_NAME: 116.203.65.132 # docker-machine ip
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_CREATE_TOPICS: ""
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: 'SSL'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/ssl:/home/ssl
    depends_on:
      - "zookeeper"
Unfortunately, I'm unable to connect to it using kafka-node: TimeoutError: Request timed out after 30000ms. It looks like I need to set ssl.truststore.location and ssl.truststore.password. How can I do this?
import kafka from 'kafka-node';

export const kafkaClientOptions = {
  kafkaHost: process.env.KAFKA_PRODUCER_HOST,
  ssl: true,
  sslOptions: {
    rejectUnauthorized: false
  }
};

const client = new kafka.KafkaClient(kafkaClientOptions);
const Producer = kafka.Producer;
const producer = new Producer(client);

You shouldn't get a timeout error because of SSL misconfiguration, but here is the config to set up the Kafka client with SSL:
ssl: true,
sslOptions: {
  key: fileManager.readFile("path/to/key"),
  cert: fileManager.readFile("path/to/cert"),
  ca: fileManager.readFile("path/to/ca"),
  passphrase: "your_passphrase"
}
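Note that kafka-node has no direct equivalent of the JVM-style ssl.truststore.location / ssl.truststore.password settings; sslOptions is handed straight to Node's tls.connect(), so the CA certificate (the "truststore" part) goes into ca, and the client key/certificate (the "keystore" part, needed here because the broker sets KAFKA_SSL_CLIENT_AUTH: 'required') go into key and cert. A minimal sketch, assuming the JKS stores have been exported to PEM files with keytool/openssl; the file paths below are placeholders:

const fs = require('fs');
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({
  kafkaHost: process.env.KAFKA_PRODUCER_HOST, // e.g. alfrescokafka.leafycode.com:9093
  sslOptions: {
    // "truststore": the CA that signed the broker certificate
    ca: [fs.readFileSync('/path/to/ca-cert.pem')],
    // "keystore": client certificate and key, required by KAFKA_SSL_CLIENT_AUTH: 'required'
    key: fs.readFileSync('/path/to/client-key.pem'),
    cert: fs.readFileSync('/path/to/client-cert.pem'),
    passphrase: process.env.KAFKA_SSL_KEY_PASSWORD,
    // keep certificate validation enabled once the CA above is trusted
    rejectUnauthorized: true
  }
});

const producer = new kafka.Producer(client);
producer.on('ready', () => console.log('Producer connected over SSL'));
producer.on('error', (err) => console.error('Producer error', err));

Also make sure kafkaHost points at the advertised SSL listener (alfrescokafka.leafycode.com:9093) rather than a plaintext port.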

Related

Can't connect externally to Neo4j server in AKS cluster using Neo4j browser

I have an AKS cluster with a Node.js server connecting to a Neo4j-standalone instance all deployed with Helm.
I installed an ingress-nginx controller, referenced a default Let's Encrypt certificate, and enabled TCP ports with Terraform as follows:
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
# repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "tcp.7687"
value = "default/cluster:7687"
}
set {
name = "tcp.7474"
value = "default/cluster:7474"
}
set {
name = "tcp.7473"
value = "default/cluster:7473"
}
set {
name = "tcp.6362"
value = "default/cluster-admin:6362"
}
set {
name = "tcp.7687"
value = "default/cluster-admin:7687"
}
set {
name = "tcp.7474"
value = "default/cluster-admin:7474"
}
set {
name = "tcp.7473"
value = "default/cluster-admin:7473"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
I then have an Ingress with paths pointing to the Neo4j services, so at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/ I can get to the browser.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
namespace: default
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
# nginx.ingress.kubernetes.io/rewrite-target: /
# certmanager.k8s.io/acme-challenge-type: http01
nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
ingress.kubernetes.io/ssl-redirect: "true"
# kubernetes.io/tls-acme: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- xxxx.westeurope.cloudapp.azure.com
secretName: tls-secret
rules:
# - host: xxx.westeurope.cloud.app.azure.com #dns from Azure PublicIP
### Node.js server
- http:
paths:
- path: /(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
- http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
##### Neo4j
- http:
paths:
# 502 bad gateway
# /any character 502 bad gatway
- path: /neo4j-tcp-bolt(/|$)(.*)
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-bolt
number: 7687
- http:
paths:
# /browser/ show browser
#/any character shows login to xxx.westeurope.cloudapp.azure.com:443 from https, :80 from http
- path: /neo4j-tcp-http(/|$)(.*)
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-http
number: 7474
- http:
paths:
- path: /neo4j-tcp-https(/|$)(.*)
# 502 bad gateway
# /any character 502 bad gatway
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-https
number: 7473
I can get to the Neo4j Browser at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/, but using the Connect URL bolt+s://server.bolt it won't connect to the server; the error is ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver..
Now I'm guessing that is because the Neo4j bolt connector is not using the certificate used by the ingress-nginx controller.
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe secret tls-secret
Name: tls-secret
Namespace: default
Labels: controller.cert-manager.io/fao=true
Annotations: cert-manager.io/alt-names: xxx.westeurope.cloudapp.azure.com
cert-manager.io/certificate-name: tls-certificate
cert-manager.io/common-name: xxx.westeurope.cloudapp.azure.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group:
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-issuer
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 5648 bytes
tls.key: 1679 bytes
I tried to use it by overriding the chart values, but then the Neo4j driver from the Node.js server won't connect to the server:
ssl:
# setting per "connector" matching neo4j config
bolt:
privateKey:
secretName: tls-secret # we set up the template to grab `private.key` from this secret
subPath: tls.key # we specify the privateKey value name to get from the secret
publicCertificate:
secretName: tls-secret # we set up the template to grab `public.crt` from this secret
subPath: tls.crt # we specify the publicCertificate value name to get from the secret
trustedCerts:
sources: [ ] # a sources array for a projected volume - this allows someone to (relatively) easily mount multiple public certs from multiple secrets for example.
revokedCerts:
sources: [ ] # a sources array for a projected volume
https:
privateKey:
secretName: tls-secret
subPath: tls.key
publicCertificate:
secretName: tls-secret
subPath: tls.crt
trustedCerts:
sources: [ ]
revokedCerts:
sources: [ ]
Is there a way to use it, or should I set up another certificate just for Neo4j? If so, what dnsNames should I set on it?
Is there something else I'm doing wrong?
Thank you very much.
From what I can gather from your information, the problem seems to be that you're trying to expose the bolt port behind an ingress. Ingresses are implemented as an L7 (protocol-aware) reverse proxy and manage load balancing etc. The bolt protocol has its own load balancing and routing for cluster applications, so you will need to expose the network service directly for every instance of Neo4j you are running.
Check out this part of the documentation for more information:
https://neo4j.com/docs/operations-manual/current/kubernetes/accessing-neo4j/#access-outside-k8s
Finally, after a few days of going in circles, I found what the problems were.
First, using a staging certificate will cause the Neo4j bolt connection to fail, as it's not trusted, with the error:
ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
found here https://grishagin.com/neo4j/2022/03/29/neo4j-websocket-issue.html
Then, I had not assigned a general listening address to the bolt connector, as by default it only listens on 127.0.0.1:7687 https://neo4j.com/docs/operations-manual/current/configuration/connectors/
To listen for Bolt connections on all network interfaces (0.0.0.0)
so I added server.bolt.listen_address: "0.0.0.0:7687" to the Neo4j chart values config.
Next, since I'm connecting the default neo4j ClusterIP service TCP ports to the ingress controller's exposed TCP connections through the Ingress, as described here https://neo4j.com/labs/neo4j-helm/1.0.0/externalexposure/ as an alternative to using a LoadBalancer, the Neo4j LoadBalancer service is not needed, so services:neo4j:enabled gets set to "false". In my tests I actually found that if you leave it enabled, bolt won't connect despite everything being set correctly.
Other missing Neo4j config settings were server.bolt.enabled: "true", server.bolt.tls_level: "REQUIRED", dbms.ssl.policy.bolt.client_auth: "NONE" and dbms.ssl.policy.bolt.enabled: "true"; the complete list of config options is here: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
The Neo4j chart's values for the ssl config were fine.
So now I can use the (renamed for brevity) path /neo4j/browser/ to serve the Neo4j Browser app, and use either the /bolt path as the browser Connect URL, or the public IP's <DNS>:<bolt port>.
You are connected as user neo4j
to bolt+s://xxxx.westeurope.cloudapp.azure.com/bolt
Connection credentials are stored in your web browser.
Hope this explanation and the code recap below will help others.
Cheers.
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
namespace = "default"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
set {
name = "version"
value = "4.4.2"
}
### expose tcp connections for neo4j service
### bolt url connection port
set {
name = "tcp.7687"
value = "default/neo4j:7687"
}
### http browser app port
set {
name = "tcp.7474"
value = "default/neo4j:7474"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
namespace: default
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
spec:
ingressClassName: nginx
tls:
- hosts:
- xxx.westeurope.cloudapp.azure.com
secretName: tls-secret
rules:
### Node.js server
- http:
paths:
- path: /(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
- http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
##### Neo4j
- http:
paths:
- path: /bolt(/|$)(.*)
pathType: Prefix
backend:
service:
name: neo4j
port:
# name: tcp-bolt
number: 7687
- http:
paths:
- path: /neo4j(/|$)(.*)
pathType: Prefix
backend:
service:
name: neo4j
port:
# name: tcp-http
number: 7474
Values.yaml (Umbrella chart)
neo4j-db: #chart dependency alias
nameOverride: "neo4j"
fullnameOverride: 'neo4j'
neo4j:
# Name of your cluster
name: "xxxx" # this will be the label: app: value for the service selector
password: "xxxxx"
##
passwordFromSecret: ""
passwordFromSecretLookup: false
edition: "community"
acceptLicenseAgreement: "yes"
offlineMaintenanceModeEnabled: false
resources:
cpu: "1000m"
memory: "2Gi"
volumes:
data:
mode: 'volumeClaimTemplate'
volumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: neo4j-sc-data
resources:
requests:
storage: 4Gi
backups:
mode: 'share' # share an existing volume (e.g. the data volume)
share:
name: 'logs'
logs:
mode: 'volumeClaimTemplate'
volumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: neo4j-sc-logs
resources:
requests:
storage: 4Gi
services:
# A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser, this will create "cluster-neo4j" svc
neo4j:
enabled: false
config:
server.bolt.enabled : "true"
server.bolt.tls_level: "REQUIRED"
server.bolt.listen_address: "0.0.0.0:7687"
dbms.ssl.policy.bolt.client_auth: "NONE"
dbms.ssl.policy.bolt.enabled: "true"
startupProbe:
failureThreshold: 1000
periodSeconds: 50
ssl:
bolt:
privateKey:
secretName: tls-secret
subPath: tls.key
publicCertificate:
secretName: tls-secret
subPath: tls.crt
trustedCerts:
sources: [ ]
revokedCerts:
sources: [ ] # a sources array for a projected volume
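As a side note (not from the original post), here is a minimal sketch of how the Node.js server can connect with the neo4j-driver package once bolt is exposed this way; the hostname, port and credentials below are placeholders:

const neo4j = require('neo4j-driver');

// bolt+s requires the certificate to be trusted, hence the production
// (not staging) Let's Encrypt certificate mentioned above.
const driver = neo4j.driver(
  'bolt+s://xxx.westeurope.cloudapp.azure.com:7687',
  neo4j.auth.basic('neo4j', process.env.NEO4J_PASSWORD)
);

async function ping() {
  const session = driver.session();
  try {
    const result = await session.run('RETURN 1 AS ok');
    console.log('Connected:', result.records[0].get('ok'));
  } finally {
    await session.close();
    await driver.close();
  }
}

ping();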

Cannot Proxy Requests From React Node.js App to Express Backend (Docker)

I'm having issues understanding how to proxy my requests to Express routes in my backend while accounting for both the local development use case and the Docker containerized use case. What I'm trying to set up is a situation in which I have the proxy configured for http://localhost:8080 in my local environment and http://api:8080 in my container. What I have thus far is createProxyMiddleware configured like so...
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
  console.log(process.env.API_URL);
  app.use(
    '/api',
    createProxyMiddleware({
      target: process.env.API_URL,
      changeOrigin: true,
    })
  );
};
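For what it's worth, a minimal sketch (not from the original post) of the same setupProxy.js with a hypothetical localhost fallback, so one file covers both the local and the containerized case:

const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // API_URL is set to http://api:8080 in docker-compose; fall back to the local backend otherwise
  const target = process.env.API_URL || 'http://localhost:8080';
  app.use(
    '/api',
    createProxyMiddleware({
      target,
      changeOrigin: true
    })
  );
};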
And my docker-compose file is configured like so...
version: "3.7"
services:
client:
image: webapp-client
build: ./client
restart: always
environment:
- API_URL=http://api:8080
volumes:
- ./client:/client
- /client/node_modules
labels:
- "traefik.enable=true"
- "traefik.http.routers.client.rule=PathPrefix(`/`)"
- "traefik.http.routers.client.entrypoints=web"
- "traefik.port=3000"
depends_on:
- api
networks:
- webappnetwork
api:
image: webapp-api
build: ./api
restart: always
ports:
- "8080:8080"
volumes:
- ./api:/api
- /api/node_modules
networks:
- webappnetwork
traefik:
image: "traefik:v2.5"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- webappnetwork
networks:
webappnetwork:
external: true
volumes:
pg-data:
pgadmin:
Upon startup, the container logs...
[HPM] Proxy created: / -> http://api:8080
My axios calls look like this...
const <config_name> = {
  method: 'post',
  url: '/<route>',
  headers: {
    'Content-Type': 'application/json'
  },
  data: dataInput
}
As you can see, I set the environment variable and pass it into createProxyMiddleware, but for some reason this config doesn't work and gives a 404 when I try to hit a route. Any help with this would be greatly appreciated!

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an Elasticsearch/Kibana Docker configuration and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("#elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });
/*Check the elasticsearch connection */
async function health() {
let connected = false;
while (!connected) {
console.log("Connecting to Elasticsearch");
try {
const health = await client.cluster.health({});
connected = true;
console.log(health.body);
return health;
} catch (err) {
console.log("ES Connection Failed", err);
}
}
}
health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
warnings: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 3,
aborted: false
}
}
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the gateway IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment it will default to using localhost; outside of Docker, where localhost means "the current host", it will reach the published port of the Elasticsearch container.

How to deploy Express Gateway to Azure

I am able to run an express gateway Docker container and a Redis Docker container locally and would like to deploy this to Azure. How do I go about it?
This is my docker-compose.yml file:
version: '2'
services:
  eg_redis:
    image: redis
    hostname: redis
    container_name: redisdocker
    ports:
      - "6379:6379"
    networks:
      gateway:
        aliases:
          - redis
  express_gateway:
    build: .
    container_name: egdocker
    ports:
      - "9090:9090"
      - "8443:8443"
      - "9876:9876"
    volumes:
      - ./system.config.yml:/usr/src/app/config/system.config.yml
      - ./gateway.config.yml:/usr/src/app/config/gateway.config.yml
    networks:
      - gateway
networks:
  gateway:
And this is my system.config.yml file:
# Core
db:
  redis:
    host: 'redis'
    port: 6379
    namespace: EG
# plugins:
#   express-gateway-plugin-example:
#     param1: 'param from system.config'
crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10
# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false
accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
And this is my gateway.config.yml file:
http:
  port: 9090
admin:
  port: 9876
  hostname: 0.0.0.0
apiEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/apiEndpoints
  api:
    host: '*'
    paths: '/ip'
    methods: ["POST"]
serviceEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/serviceEndpoints
  httpbin:
    url: 'https://httpbin.org/'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - request-transformer
pipelines:
  # see: https://www.express-gateway.io/docs/configuration/gateway.config.yml/pipelines
  basic:
    apiEndpoints:
      - api
    policies:
      - request-transformer:
          - action:
              body:
                add:
                  payload: "'Test'"
              headers:
                remove: ["'Authorization'"]
                add:
                  Authorization: "'new key here'"
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
Mounting the YAML files and then hitting the /ip endpoint is where I am stuck.
According to the configuration file you've posted, I'd say you need to instruct Express Gateway to listen on 0.0.0.0 when run in a container (e.g. a hostname: 0.0.0.0 entry under the http: section, mirroring what you already have under admin:); otherwise it won't be able to listen for external connections.

Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306

I'm trying to get my Node.js application up and running using a Docker container. I have no clue what might be wrong. The credentials seem to be passed correctly when I debug them with the console. Also, firing up Sequel Pro and connecting directly with the same username and password works. When Node starts in the container I get the error message:
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
The application itself is loading correctly on port 3000; however, no data is retrieved from the database. I have also tried adding the environment variables directly to the docker-compose file, but this also doesn't seem to work.
My project code is hosted over here: https://github.com/pietheinstrengholt/rssmonster
The following database.js configuration is used. When I add console.log(config) the correct credentials from the .env file are displayed.
require('dotenv').load();
const Sequelize = require('sequelize');
const fs = require('fs');
const path = require('path');
const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname + '/../config/config.js'))[env];

if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}

module.exports = sequelize;
When I do a console.log(config) inside the database.js I get the following output:
{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'localhost',
  dialect: 'mysql'
}
Following .env:
DB_HOSTNAME=localhost
DB_PORT=3306
DB_DATABASE=rssmonster
DB_USERNAME=rssmonster
DB_PASSWORD=password
And the following docker-compose.yml:
version: '2.3'
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy
    build:
      context: ./
      dockerfile: app.dockerfile
    image: rssmonster/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
      PORT: 3000
      DB_USERNAME: rssmonster
      DB_PASSWORD: password
      DB_DATABASE: rssmonster
      DB_HOSTNAME: localhost
    working_dir: /usr/local/rssmonster/server
    env_file:
      - ./server/.env
    links:
      - mysql:mysql
  mysql:
    container_name: mysqldb
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: "rssmonster"
      MYSQL_USER: "rssmonster"
      MYSQL_PASSWORD: "password"
    ports:
      - "3307:3306"
    volumes:
      - /var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
volumes:
  dbdata:
Error output:
{ SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at Promise.tap.then.catch.err (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:128:19)
app_1 | From previous event:
app_1 | at ConnectionManager.connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:125:13)
app_1 | at sequelize.runHooks.then (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:50)
app_1 | From previous event:
app_1 | at ConnectionManager._connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:8)
app_1 | at ConnectionManager.getConnection (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:247:46)
app_1 | at Promise.try (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:564:34)
app_1 | From previous event:
app_1 | at Promise.resolve.retryParameters (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:464:64)
app_1 | at /usr/local/rssmonster/server/node_modules/retry-as-promised/index.js:60:21
app_1 | at new Promise (<anonymous>)
Instead of localhost, point to mysql, which is the service name (DNS) that Node.js will resolve to the MySQL container:
DB_HOSTNAME: mysql
And
{
  ...
  host: 'mysql',
  ...
}
Inside of the container you should reference the container by the name you gave in your docker-compose.yml file.
In this case you should use
DB_HOSTNAME: mysql
After searching and digging through several Googling attempts, the culprit of the problem soon appeared. In this context, the database server is not on the same machine. In other words, the MySQL database server address is not localhost. So how can the above MySQL database configuration point to the localhost address by default? Well, it seems that if there is no further definition of the host address, it will connect to the localhost address by default. Read the article in this link for further reference about the Sequelize syntax pattern.
So, in order to solve the problem, just modify the file with the right database configuration. The following is the corrected database configuration:
const sequelize = require("sequelize");

const db = new sequelize("db_master", "db_user", "password", {
  host: "10.0.2.2",
  dialect: "mysql"
});

db.sync({});
module.exports = db;
Actually, the Node.js application is running in a virtual server: a guest machine run in a VirtualBox application. The MySQL database server, on the other hand, exists outside the guest machine; it is available on the host machine where the VirtualBox application is running. From the guest's perspective the host machine's IP address is 10.0.2.2, so that is the address to use in order to connect to the MySQL database server on the host machine.
Use your connection string as:
mysql://username:password@mysql:(port_running_on_container)or(exposed_port)/db_name
Answers already exist, but to provide some further explanation:
You can't use 127.0.0.1 (localhost) to access other services/containers, since each container will view that as inside itself. When running docker-compose, all your services will be placed into the same Docker network, and all services inside the same Docker network are able to reach each other by service name.
Hence, as already stated in previous answers: in your configuration, change the DB hostname from localhost to mysql.
Three things to check first:
make sure your service name is mysql
configure DB_HOST as mysql as well
and make sure your backend service depends_on mysql in docker-compose.yml
Here is my working code:
import { Sequelize } from "sequelize";

export const db = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    port: process.env.DB_PORT,
    host: 'mysql',
    dialect: "mysql",
    logging: false,
    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000
    },
  }
);
