Cannot connect to Cloud SQL Proxy via Docker - Error: connect ENOENT - node.js

I can't seem to connect to Cloud SQL from my Docker container.
First, here are my file paths: https://imgur.com/a/Nmx41o6
Dockerfile.dev:
FROM node:14-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
Dockerfile.sql:
FROM node:14-slim
RUN mkdir /cloudsql
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY ./cloud_sql_proxy ./
COPY ./service_acct.json ./
docker-compose.yml:
version: '3.8'
services:
  cloud-sql-proxy:
    build:
      context: .
      dockerfile: Dockerfile.sql
    volumes:
      - /cloudsql:/cloudsql
      - /service_acct.json:/app/service_acct.json
    command: ./cloud_sql_proxy -dir=/cloudsql -instances=test-game-199281:us-east1:testgame -credential_file=/app/service_acct.json
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    env_file:
      - ./.env
    volumes:
      # since we copied the project root into the image in the Dockerfile, we can map the src directory into the app
      - "./src:/app/src"
    ports:
      - "5000:5001"
    command: sh -c "npm run dev"
My Node index.js file. I don't think there is anything wrong with it; maybe I am using the wrong connection string format? The user and password are correct as far as I can tell.
const express = require('express');
const { Pool } = require('pg');
const app = express();
require('dotenv').config({ path: '../.env' });

const pool = new Pool({
  user: 'postgres',
  host: '/cloudsql/test-game-199281:us-east1:testgame',
  database: 'TestDB',
  password: '********',
  port: 5432
});

app.get('/', (req, res) => {
  pool.connect(function (err, client, done) {
    if (err) {
      console.log("not able to get connection " + err);
      res.status(400).send(err);
      return;
    }
    client.query("SELECT * FROM company", (err, result) => {
      done();
      if (err) {
        console.log(err);
        res.status(400).send(err);
        return;
      }
      res.status(200).send(result.rows);
    });
  });
});

app.listen(5001, () => console.log('Hello world listening on port 5001'));
The error I get:
Hello world listening on port 5001
app_1 | Error: connect ENOENT /cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432
app_1 |     at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
app_1 |   errno: -2,
app_1 |   code: 'ENOENT',
app_1 |   syscall: 'connect',
app_1 |   address: '/cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432'
app_1 | }
SOLVED: I switched to TCP. The Unix socket setup was just too confusing.

You've instructed the Cloud SQL Auth proxy to listen on 0.0.0.0:5432 with this flag: -instances=test-game-199281:us-east1:testgame=tcp:0.0.0.0:5432.
But then you've instructed your app to connect to /cloudsql/<INSTANCE_CONNECTION_NAME>, which is a Unix socket.
You need to pick one, and make sure you are consistent between your app and the proxy.
If you use TCP, you'll have to map the port in the container to a port on your machine (or somewhere in your docker-compose network where your app can reach it). You'll also have to update your app to connect to 127.0.0.1 (or whatever the proxy's IP is in the Docker network). You can check out more on docker-compose networking here.
If you use Unix domain sockets, you'll need to volume-share the folder containing the socket so that both apps can access it. So if it's in /cloudsql, you'll need to share /cloudsql between your proxy container and your app container (see the sketch below). You can check out more on docker-compose volumes here.
Cloud SQL's Managing Database Connections page has examples of connecting with both TCP and Unix domain sockets.
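For the Unix-socket route, here is a minimal docker-compose sketch of the volume sharing, assuming the instance connection name from the question; sql-sockets is a named volume invented for this example:

version: '3.8'
services:
  cloud-sql-proxy:
    build:
      context: .
      dockerfile: Dockerfile.sql
    command: ./cloud_sql_proxy -dir=/cloudsql -instances=test-game-199281:us-east1:testgame -credential_file=/app/service_acct.json
    volumes:
      - sql-sockets:/cloudsql   # proxy creates the socket in this shared volume
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    command: sh -c "npm run dev"
    volumes:
      - sql-sockets:/cloudsql   # app sees the same socket at /cloudsql/...
volumes:
  sql-sockets: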

You can try to connect via the service name, cloud-sql-proxy:5432, instead of localhost:5432 when connecting between different containers.
Each container is on an isolated network, so you cannot use localhost; localhost will refer to the container's own network namespace.
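For example, with the proxy listening on TCP (-instances=...=tcp:0.0.0.0:5432), the pool config might look like this; a sketch assuming the user and database from the question, with the password read from a hypothetical env var:

const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  host: 'cloud-sql-proxy', // the docker-compose service name, resolved by Docker's internal DNS
  database: 'TestDB',
  password: process.env.DB_PASS, // hypothetical env var for this sketch
  port: 5432
});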

The ENOENT error means that the connector utility cannot find the host to connect to your database. Here's a good answer that further explains it.
In your docker-compose file, the Cloud SQL Proxy is listening via TCP, but your code is trying to connect via a Unix socket. Your code can't connect to the host because the socket doesn't exist.
The solution is to configure your proxy to create and listen on a Unix socket. Change the command to:
/cloud_sql_proxy -instances=INSTANCE_CONNECTION_NAME -dir=/cloudsql -credential_file=/tmp/keys/keyfile.json
No need to expose any ports to connect via Unix sockets. I also suggest building your pool connection with a config object, like in the above link or as specified by pg-pool, rather than a DB URL, to avoid a possible issue where you cannot connect to a Unix socket using a connectionString URL.
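For illustration, a sketch of that config-object style, assuming the instance connection name, database, and user from the question; when host is a directory path, pg connects to the .s.PGSQL.5432 socket file inside it:

const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  // directory shared with the proxy container; pg appends /.s.PGSQL.5432
  host: '/cloudsql/test-game-199281:us-east1:testgame',
  database: 'TestDB',
  password: process.env.DB_PASS, // hypothetical env var for this sketch
  port: 5432
});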

Related

localhost didn’t send any data on Docker and Nodejs app

I've searched for an answer on the Stack Overflow community and none of the existing ones worked, so I'm asking here.
I have a pretty simple nodejs app that has a server.js file, having the following.
'use strict'
require('dotenv').config();
const app = require('./app/app');

const main = async () => {
  try {
    const server = await app.build({
      logger: true,
      shopify: './Shopify',
      shopifyToken: process.env.SHOPIFY_TOKEN,
      shopifyUrl: process.env.SHOPIFY_URL
    });
    await server.listen(process.env.PORT || 3000);
  } catch (err) {
    console.log(err);
    process.exit(1);
  }
}

main();
If I boot the server locally it works perfectly and I'm able to see JSON in the web browser.
Log of the working server when running locally:
{"level":30,"time":1648676240097,"pid":40331,"hostname":"Erick-Macbook-Air.local","msg":"Server listening at http://127.0.0.1:3000"}
When I run my container, and I go to localhost:3000 I see a blank page with the error message:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I have my Dockerfile like this:
FROM node:16
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node", "server.js"]
This is how I run my container:
docker run -d -it --name proxyservice -p 3000:3000 proxyserver:1.0
And when I run it I see the container log working:
{"level":30,"time":1648758470430,"pid":1,"hostname":"03f5d00d762b","msg":"Server listening at http://127.0.0.1:3000"}
As you can see it boots up fine, but when going to localhost:3000 I see that error message. Any idea what I'm missing/doing wrong?
Thanks!
Can you add 0.0.0.0 as the host in your service,
something like this?
server.listen(3000, '0.0.0.0');
Give it a try.
Since you want your service to be accessible from outside the container, you should bind to 0.0.0.0.
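Applied to the server.js above, it is a one-line change; this sketch assumes app.build returns a Fastify-style server whose listen accepts a host argument:

const main = async () => {
  try {
    const server = await app.build({
      logger: true,
      shopify: './Shopify',
      shopifyToken: process.env.SHOPIFY_TOKEN,
      shopifyUrl: process.env.SHOPIFY_URL
    });
    // bind to all interfaces so Docker's port mapping (-p 3000:3000) can reach the server
    await server.listen(process.env.PORT || 3000, '0.0.0.0');
  } catch (err) {
    console.log(err);
    process.exit(1);
  }
}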

Nodemailer cannot send email from within Docker container

I've been searching/reading/trying everywhere on the Internet for about 3 weeks before posting here ...
Context:
developing little website app
technologies:
Next JS (ReactJs, HTML, CSS) for both frontend and backend (Node)
Linux as host (Ubuntu 20.04 LTS)
Docker container to encapsulate the app (based on the node:alpine image) (Docker version 20.10.6)
the Nodemailer Node module to send email
this is the code using Nodemailer to send the e-mail message:
import type { NextApiRequest, NextApiResponse } from "next";
import * as nodemailer from "nodemailer";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  res.statusCode = 200;

  let transporter = nodemailer.createTransport({
    host: process.env.NM_HOST,
    port: parseInt(process.env.NM_PORT),
    secure: true,
    auth: {
      user: process.env.NM_USER,
      pass: process.env.NM_PASS,
    },
    tls: {
      rejectUnauthorized: false,
    },
  });

  // console.log("User:");
  // console.log(process.env.NM_USER);

  let info = await transporter.sendMail({
    from: "Website <xxx@xxx.com>",
    to: "Website <xxx@xxx.com>",
    subject: "New contact",
    text: "NAME:\n" + req.body.data.name + "\n----------\nEMAIL:\n" + req.body.data.email + "\n----------\nBODY:\n" + req.body.data.body,
  }, function (err, info) {
    if (err) {
      console.log(err);
    } else {
      console.log(info);
    }
  });

  console.log("Message sent: %s", info);

  res.json({
    a: req.body.data.name,
    b: req.body.data.email,
    c: req.body.data.body,
  });
};
Issue:
when I try to send e-mail using Nodemailer, launching my app from the Linux host with "npm run start" or "npm run dev", mails get delivered
when I try to send e-mail using Nodemailer, launching my app from the Docker container, I get the following error (from the app's output itself)
Error: connect ECONNREFUSED 127.0.0.1:465
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ESOCKET',
syscall: 'connect',
address: '127.0.0.1',
port: 465,
command: 'CONN'
}
What I already tried and what I observed:
ping google.com (and many others) works from within the container (using docker exec -ti container-name sh)
starting the container with docker run --dns 8.8.8.8 ... -> same result (error above)
the container's and the host's /etc/resolv.conf differ (but I think this might not be the point, as the ping command resolves correctly; feel free to tell me I'm wrong)
I am not a sysadmin (I am a developer), so I don't know whether iptables or ufw (firewall) may be involved (btw, it's difficult to install packages that aren't pre-installed on node:alpine)
email server authentication is correct (username, hostname, password), as it works when I launch my app with npm run start or npm run dev
switching the container's network between bridge (default), bridge (custom with docker-compose), and host ... same issue (error above)
Anyone willing to help is really appreciated.
Found out what wasn't working: I was running docker-compose WITHOUT the --env-file option.
That way, all the environment variables (e.g. PORT, HOST, PSWD, USR) my app was trying to access were left undefined (the variables weren't baked in during the build step by design; they are read at runtime via process.env).
SOLUTION (change .env file part as suits your situation):
docker-compose --env-file ./.env.production
Useful official resource (docker-compose)
Docker-compose using --env-file option
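To make the flow concrete, here is a minimal sketch assuming the NM_* variable names from the question; docker-compose reads the --env-file, substitutes the values into the service's environment, and the app reads them at runtime via process.env:

# .env.production (placeholder values)
NM_HOST=smtp.example.com
NM_PORT=465
NM_USER=user@example.com
NM_PASS=secret

# docker-compose.yml (service excerpt; the service name 'web' is hypothetical)
services:
  web:
    build: .
    environment:
      NM_HOST: ${NM_HOST}
      NM_PORT: ${NM_PORT}
      NM_USER: ${NM_USER}
      NM_PASS: ${NM_PASS}

# run with:
# docker-compose --env-file ./.env.production up -d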

Docker: Not able to connect to Redis when using docker run instead of docker-compose up

I'm using Docker Toolbox on Windows Home edition.
I'm trying to use Node with Redis using docker-compose. It works well when I run the image using docker-compose up (in the same source directory), but when I try to run it using docker run -it myusername/myimage, my Node app isn't able to connect to Redis.
throwing:
Error: Redis connection to redis-server:6379 failed - getaddrinfo ENOTFOUND redis-server
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'redis-server'
}
which I believe is because my Node app is not able to find Redis. Also, even though the app is running when I use docker-compose up, I'm not able to access it on the respective port, i.e. localhost:3000.
this is my docker-compose.yml
version: '3'
services:
  my_api:
    build: .
    ports:
      - "3000:3000"
    image: my_username/myimage
    links:
      - redis-server:redis-server
  redis-server:
    image: "redis:alpine"
There are two issues I'm facing, and I believe they are interrelated.
EDIT
Could this be because of a virtualization issue with Windows Home edition, since it doesn't use Hyper-V? I've only just started trying Docker so I don't know much about it, but David's answer makes sense: it may be because of the various networks, and I need to connect to the right bridge.
Here is what I get when I do docker network ls:
NETWORK ID     NAME                      DRIVER    SCOPE
5802daa117b1   bridge                    bridge    local
7329d018df1b   collect_api_mod_default   bridge    local
5491bfee5551   host                      host      local
be1353789426   none                      null      local
When you run the whole stack in the same docker-compose.yml file, Compose automatically creates a Docker network for you, and this makes cross-service DNS requests work.
If you are trying to manually docker run a container, and you don't specify a --net option at all, you get a thing Docker calls the default bridge network, which is distinctly less useful. You need to make sure your container is attached to the same Docker-internal network as the Redis server.
You can run docker network ls to get a listing of Docker networks; given that docker-compose.yml file there will probably be one named something like source_directory_default. Take that name and pass it to your docker run command (before the image name)
docker run --net source_directory_default -p 3000:3000 my_username/my_api
Working index.js for the latest version of Node and the latest version of redis, both running in Docker; hope it helps:
const express = require('express');
const redis = require('redis');

const app = express();

// node-redis v4: pass a full URL; the host is the docker-compose service name
const client = redis.createClient({
  url: 'redis://redis-server:6379'
});

client.on('error', (err) => console.log('Redis Client Error', err));
client.on('connect', async () => {
  await client.set('visits', 0);
  console.log('Redis Client Connected');
});

client.connect();

app.get('/', async (req, res) => {
  const value = await client.get('visits');
  await client.set('visits', parseInt(value) + 1);
  res.send('Number of visits: ' + value);
});

app.listen(8081, () => {
  console.log('Listening on port 8081');
});

Connecting to MongoDB in Docker from external app

Is it possible to connect to a docker container running a MongoDB image from an external nodejs application running locally? I've tried connecting via localhost:27017. Here's the docker compose file I'm using:
version: '3'
services:
  mongodb:
    image: 'bitnami/mongodb:3.6.8'
    ports:
      - "27017:27017"
    environment:
      - MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD
      - MONGODB_USERNAME=$MONGODB_USERNAME
      - MONGODB_PASSWORD=$MONGODB_PASSWORD
      - MONGODB_DATABASE=$MONGODB_DATABASE
    volumes:
      - /data/db:/bitnami
I try connecting to it with the following url with no luck:
mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@localhost:27017
EDIT: Connecting via mongodb://localhost:27017 works, but the authentication url errors out. I printed out the result of this string and there's nothing particularly wrong with it. I verified that the username and password match the users inside mongo in the docker container.
app.listen(port, () => {
  console.log(`Example app listening on port ${port}!`);

  const url = (() => {
    if (process.env.MONGODB_USERNAME && process.env.MONGODB_PASSWORD) {
      return `mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@localhost:27017/`;
    }
    console.log('could not find environment vars for mongodb');
  })();

  MongoClient.connect(url, (err, client) => {
    if (err) {
      console.log('DB connection error');
    } else {
      console.log("Connected successfully to server");
      client.close();
    }
  });
});
If the external nodejs application is also running in a docker container, then you need to link the containers. Here is an example of a docker run command that links containers. I added environment variables to illustrate what host name and port you would use from inside the container.
docker run -d -it -e DEST_PORT=27017 -e DEST_HOST='mongodb' --link mongodb external-application:latest
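Inside the linked container, the application can then build its connection string from those variables; a minimal sketch assuming the DEST_HOST and DEST_PORT values from the command above:

const { MongoClient } = require('mongodb');

// DEST_HOST resolves to the linked mongodb container via the --link entry
const url = `mongodb://${process.env.DEST_HOST}:${process.env.DEST_PORT}/`;

MongoClient.connect(url, (err, client) => {
  if (err) {
    console.log('DB connection error', err);
    return;
  }
  console.log('Connected successfully to server');
  client.close();
});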
It's important to always check the output of docker logs <container-name> --tail 25 -f. From my point of view, this looks like an issue with permissions on the '/bitnami/mongodb' directory. Check out sameersbn's comment on how to fix this permission issue.
I'll assume it's the compose specification then. Try the following configuration:
environment:
  MONGODB_ROOT_PASSWORD: $MONGODB_ROOT_PASSWORD
  MONGODB_USERNAME: $MONGODB_USERNAME
  MONGODB_PASSWORD: $MONGODB_PASSWORD
  MONGODB_DATABASE: $MONGODB_DATABASE
volumes:
  - '/data/db:/data/db'
The issue turned out to be that I had changed the password in MONGODB_PASSWORD (it had an @ in it, so I thought it would interfere with the string parsing, and I consequently changed it). The problem is that when the container restarts it references the same volume (as it should), so the users were never updated, and as a result I was logging in with the wrong credentials.
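If you end up in the same state, one way out is to wipe the volume so the image's init scripts re-create the users with the current credentials; a sketch assuming the bind mount from the compose file above (warning: this deletes all database data):

docker-compose down
sudo rm -rf /data/db/*   # clear the host directory backing the volume
docker-compose up -d     # the image re-runs its user setup on an empty volume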

Can't communicate with simple Docker Node.js web app [duplicate]

This question already has answers here:
Containerized Node server inaccessible with server.listen(port, '127.0.0.1')
(2 answers)
Closed 9 months ago.
I'm just trying to learn Node.js and Docker at the same time. I have a very simple Node.js app that listens on a port and returns a string. The Node app itself runs fine when running locally. I'm now trying to get it running in a Docker container but I can't seem to reach it.
Here's my Node app:
const http = require('http');

const hostname = '127.0.0.1';
const port = 3000;
var count = 0;

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end("Here's the current value: " + count);
  console.log('Got a request: ', req.url);
  count++;
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
My Dockerfile:
FROM node:latest
MAINTAINER Jason
ENV PORT=3000
COPY . /var/www
WORKDIR /var/www
EXPOSE $PORT
ENTRYPOINT ["node", "app.js"]
My build command:
docker build -t jason/node .
And my run command:
docker run -p 3000:3000 jason/node
The app.js file and Dockerfile live in the same directory where I'm running the commands. Doing a docker ps shows the app running but I just get a site cannot be reached error when navigating to 127.0.0.1:3000 in the browser. I've also confirmed that app.js was properly added to the image and I get the message "Server running at http://127.0.0.1:3000/" after running.
I think I'm missing something really simple, any ideas?
Omit the hostname or use '0.0.0.0' in the listen call. Make it server.listen(port, '0.0.0.0', () => { console.log('Server running...'); });
If you use Docker on Windows 7/8 you most probably have a docker-machine running; then you would need to access it at something like 192.168.99.100, or whatever IP your docker-machine has.
To see if you are running a docker-machine, just issue the command
docker-machine ls
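And to get the exact IP to use in the browser (assuming the default machine name):
docker-machine ip default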
