Nodemailer cannot send email from within Docker container - node.js

I've been searching/reading/trying everywhere on the Internet for about 3 weeks before posting here ...
Context:
developing a little website app
technologies:
Next.js (React.js, HTML, CSS) for both frontend and backend (Node)
Linux as host (Ubuntu 20.04 LTS)
Docker container to encapsulate the app (based on the node:alpine image) (Docker version 20.10.6)
the Nodemailer Node module to send email
This is the code using Nodemailer to send the e-mail message:
import type { NextApiRequest, NextApiResponse } from "next";
import * as nodemailer from "nodemailer";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  res.statusCode = 200;

  let transporter = nodemailer.createTransport({
    host: process.env.NM_HOST,
    port: parseInt(process.env.NM_PORT),
    secure: true,
    auth: {
      user: process.env.NM_USER,
      pass: process.env.NM_PASS,
    },
    tls: {
      rejectUnauthorized: false,
    },
  });

  // console.log("User:");
  // console.log(process.env.NM_USER);

  try {
    // sendMail returns a promise when called without a callback
    let info = await transporter.sendMail({
      from: "Website <xxx@xxx.com>",
      to: "Website <xxx@xxx.com>",
      subject: "New contact",
      text:
        "NAME:\n" + req.body.data.name +
        "\n----------\nEMAIL:\n" + req.body.data.email +
        "\n----------\nBODY:\n" + req.body.data.body,
    });
    console.log("Message sent: %s", info.messageId);
  } catch (err) {
    console.log(err);
  }

  res.json({
    a: req.body.data.name,
    b: req.body.data.email,
    c: req.body.data.body,
  });
};
Issue:
when I launch my app from the Linux host with "npm run start" or "npm run dev", mails get delivered
when I launch my app from the Docker container, I get the following error (from the app's own output):
Error: connect ECONNREFUSED 127.0.0.1:465
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ESOCKET',
syscall: 'connect',
address: '127.0.0.1',
port: 465,
command: 'CONN'
}
What I already tried and what I observed:
ping google.com (and many others) works from within the container (using docker exec -ti container-name sh)
starting the container with docker run --dns 8.8.8.8 ... -> same result (error above)
the container's and the host's /etc/resolv.conf are different (but I think this might not be the point, as ping resolves correctly; feel free to tell me if I'm wrong)
I am not a sysadmin (I am a developer), so I don't know whether iptables or ufw (firewall) may be involved here (by the way, it's difficult to install packages that aren't pre-installed on node:alpine)
email server authentication is correct (username, hostname, and password), since it works when I launch my app with npm run start or npm run dev
switching the container's network between bridge (default), bridge (custom, with docker-compose), and host ... same issue (error above)
Any help is really appreciated.

Found out what wasn't working: I was using docker-compose WITHOUT the --env-file option.
That way, all the environment variables (e.g. PORT, HOST, PSWD, USR) I was trying to access within my app were left undefined (they weren't baked in during the build step, a design choice, but accessed at runtime with process.env).
SOLUTION (change the .env file part to suit your situation):
docker-compose --env-file ./.env.production up
Useful official resource (docker-compose)
Docker-compose using --env-file option
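For illustration, a minimal sketch of the moving parts (service and file names here are placeholders, not from the original): --env-file only loads the variables for substitution in the compose file, so the service still has to forward them for process.env to see them at runtime.

# .env.production (placeholder values)
#   NM_HOST=smtp.example.com
#   NM_PORT=465
#   NM_USER=user@example.com
#   NM_PASS=secret
version: '3.8'
services:
  web:
    image: my-next-app          # assumed image name
    environment:
      NM_HOST: ${NM_HOST}       # substituted from --env-file at compose time,
      NM_PORT: ${NM_PORT}       # then exposed to the app via process.env
      NM_USER: ${NM_USER}
      NM_PASS: ${NM_PASS}

A quick way to confirm the variables actually made it into the container is docker exec -ti <container> node -e 'console.log(process.env.NM_HOST)'. If that prints undefined, Nodemailer's SMTP transport falls back to its default host of localhost, which matches the ECONNREFUSED 127.0.0.1:465 error above.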

Related

Connecting to azure flexible postgres server via node pg

I am using the free subscription at Azure and have successfully created an Ubuntu Server and a Flexible Postgres Database.
Until recently I accessed the DB directly from my Windows 10 desktop. Now I want to route all access through the Ubuntu Server.
For this I have installed the OpenSSH Client and OpenSSH Server on my Windows 10 machine and set up the necessary local port forwarding with ssh -L 12345:[DB IP]:5432 my_user@[Ubuntu IP]
The connection works; I confirmed it with pgcli on my desktop with pgcli -h 127.0.0.1 -p 12345 -u my_user -d my_db
But when I try to connect via node-pg, I receive the following error:
UnhandledPromiseRejectionWarning: error: no pg_hba.conf entry for host "[Ubuntu IP]", user "my_user", database "my_db", SSL off
I have already added a firewall rule in Azure with the [Ubuntu IP], and the error remains. What bugs me further is that in the Azure Portal of the DB I have enabled "Allow public access from any Azure service within Azure to this server", so the extra firewall rule should not even be necessary for this connection.
For the last week I have been stuck on this, and now the connection is finally established but not accessible from my code. Pretty frustrating. I would be glad about ANY pointers on how to fix this.
Edit #1:
I can't post the pg_hba.conf file: because the Postgres DB is managed by Azure, I do not have access to pg_hba.conf, which makes the situation more difficult to understand.
My Node.js code for testing the connection:
const pg = require("pg");

const passwd = "...";

const client = new pg.Client({
  user: 'admin',
  host: '127.0.0.1',
  database: 'test',
  password: passwd,
  port: 12345
});

client.connect();

// pg.Client emits 'error'; there is no 'uncaughtException' event on the client
client.on('error', function (err) {
  console.error(err.stack);
});

const query = "SELECT * FROM test";
try {
  client.query(query, (err, res) => {
    if (err) {
      console.error(err);
    }
    console.log(res);
  });
} catch (e) {
  console.error(e);
}
The comment by @jjanes helped me in understanding the issue, thank you.
This edited pg.Client config solved my problem:
const client = new pg.Client({
  user: 'admin',
  host: '127.0.0.1',
  database: 'test',
  password: passwd,
  port: 12345,
  ssl: { rejectUnauthorized: false }
});
I found this specific SSL option here: https://node-postgres.com/features/ssl
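A minimal async/await sketch of the fixed client, rewriting the test code above (the hard-coded password is swapped for an env var as an assumption, not from the original):

const pg = require("pg");

async function main() {
  const client = new pg.Client({
    user: 'admin',
    host: '127.0.0.1',                  // local end of the SSH tunnel
    database: 'test',
    password: process.env.PGPASSWORD,   // assumption: password supplied via env
    port: 12345,                        // forwarded port from ssh -L
    ssl: { rejectUnauthorized: false }
  });
  await client.connect();
  const res = await client.query("SELECT * FROM test");
  console.log(res.rows);
  await client.end();
}

main().catch(console.error);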

Cannot connect to Cloud SQL Proxy via Docker - Error: connect ENOENT

I can't seem to connect to Cloud SQL using a Docker container.
Firstly, here are my file paths: https://imgur.com/a/Nmx41o6
Dockerfile.dev:
FROM node:14-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
Dockerfile.sql:
# the FROM line was missing from the snippet; assuming the same base as Dockerfile.dev
FROM node:14-slim
RUN mkdir /cloudsql
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY ./cloud_sql_proxy ./
COPY ./service_acct.json ./
docker-compose.yml:
version: '3.8'
services:
  cloud-sql-proxy:
    build:
      context: .
      dockerfile: Dockerfile.sql
    volumes:
      - /cloudsql:/cloudsql
      - /service_acct.json:/app/service_acct.json
    command: ./cloud_sql_proxy -dir=/cloudsql -instances=test-game-199281:us-east1:testgame -credential_file=/app/service_acct.json
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    env_file:
      - ./.env
    volumes:
      # since we copied root into host in the dockerfile, we can map the whole directory with app.
      - "./src:/app/src"
    ports:
      - "5000:5001"
    command: sh -c "npm run dev"
My Node index.js file. I don't think there is anything wrong with it; maybe I am using the wrong connection string format? The password and user are correct as far as I can tell.
const express = require('express');
const { Pool, Client } = require('pg');
const app = express();
require('dotenv').config({ path: '../.env' });

const pool = new Pool({
  user: 'postgres',
  host: '/cloudsql/test-game-199281:us-east1:testgame',
  database: 'TestDB',
  password: '********',
  port: 5432
});

app.get('/', (req, res) => {
  pool.connect(function (err, client, done) {
    if (err) {
      console.log("not able to get connection " + err);
      res.status(400).send(err);
      return;
    }
    // no placeholders in this query, so no values array is needed
    client.query("SELECT * FROM company", (err, result) => {
      done();
      if (err) {
        console.log(err);
        res.status(400).send(err);
        return;
      }
      res.status(200).send(result.rows);
    });
  });
});
Error I get:
Hello world listening on port 5001
app_1 | Error: connect ENOENT /cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432
app_1 |     at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
app_1 |   errno: -2,
app_1 |   code: 'ENOENT',
app_1 |   syscall: 'connect',
app_1 |   address: '/cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432'
app_1 | }
SOLVED: I switched to TCP instead of a Unix socket; the socket setup was too confusing.
You've instructed the Cloud SQL Auth proxy to listen on 0.0.0.0:5432 with this flag: -instances=test-game-199281:us-east1:testgame=tcp:0.0.0.0:5432.
But then you've instructed your app to connect to /cloudsql/<INSTANCE_CONNECTION_NAME>, which is a Unix socket.
You need to pick one, and make sure you are consistent between your app and the proxy.
If you use TCP, you'll have to map the port in the container to a port on your machine (or somewhere in your docker-compose network where your app can reach it). You'll also have to update your app to connect to 127.0.0.1 (or whatever its Docker IP is in the network). You can check out more on docker-compose networking here.
If you use Unix domain sockets, you'll need to volume-share the folder containing the socket so that both containers can access it. So if it's in /cloudsql, you'll need to share /cloudsql between your proxy container and your app container. You can check out more on docker-compose volumes here.
Cloud SQL's Managing Database Connections page has examples of connecting with both TCP and Unix domain sockets.
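To make the Unix-socket option concrete, here is a minimal docker-compose sketch (reusing the names from the question; the named volume is my addition): both services mount the same volume, so the socket the proxy creates in /cloudsql is visible to the app.

services:
  cloud-sql-proxy:
    # the proxy writes its socket into /cloudsql
    command: ./cloud_sql_proxy -dir=/cloudsql -instances=test-game-199281:us-east1:testgame -credential_file=/app/service_acct.json
    volumes:
      - cloudsql:/cloudsql
  app:
    volumes:
      - cloudsql:/cloudsql   # the app sees the same socket directory
volumes:
  cloudsql: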
You can try connecting via the service name cloud-sql-proxy:5432 instead of localhost:5432 when connecting between containers.
Each container is on an isolated network, so you cannot use localhost; localhost refers to the container's own loopback, not the other container.
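A sketch of the matching pg config for the TCP route (assuming the proxy command ends in =tcp:0.0.0.0:5432 and both services share the compose network; the env-var password is my substitution):

const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  host: 'cloud-sql-proxy',        // compose service name, resolved by Docker's DNS
  database: 'TestDB',
  password: process.env.DB_PASS,  // assumption: password supplied via env
  port: 5432
});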
The ENOENT error means that the connector utility cannot find the host to connect to your database. Here's a good answer that further explains it.
In your docker-compose file, the Cloud SQL Proxy is listening via TCP, but your code is trying to connect via a Unix socket. Your code can't connect to the host because the socket doesn't exist.
The solution is to configure your proxy to create and listen on a Unix socket. Change the command to:
/cloud_sql_proxy -instances=INSTANCE_CONNECTION_NAME -dir=/cloudsql -credential_file=/tmp/keys/keyfile.json
No need to expose any ports to connect via Unix sockets. I also suggest building your pool connection with a config object, as in the above link or as specified by pg-pool, rather than a DB URL, to avoid a possible issue where you cannot connect to a Unix socket using a connectionString URL.
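Following that advice, a config-object sketch for the Unix-socket route (instance name taken from the question; for a socket, host is the directory that contains it; the env-var password is my substitution):

const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  host: '/cloudsql/test-game-199281:us-east1:testgame', // socket directory, not a hostname
  database: 'TestDB',
  password: process.env.DB_PASS,  // assumption: password supplied via env
  port: 5432                      // pg appends /.s.PGSQL.5432 to the socket path
});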

Docker: Not able to connect to Redis when using docker run instead of docker-compose up

I'm using the Docker Toolbox on Windows Home Edition.
I'm trying to use Node with Redis using docker-compose. It works well when I run the image using docker-compose up (in the same source directory), but when I try to run it using docker run -it myusername/myimage, my Node app isn't able to connect to Redis,
throwing:
Error: Redis connection to redis-server:6379 failed - getaddrinfo ENOTFOUND redis-server
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'redis-server'
}
which I believe is because my Node app is not able to find Redis. Also, even though the app is running when I use docker-compose up, I'm not able to access it on the respective port, i.e. localhost:3000.
This is my docker-compose.yml:
version: '3'
services:
  my_api:
    build: .
    ports:
      - "3000:3000"
    image: my_username/myimage
    links:
      - redis-server:redis-server
  redis-server:
    image: "redis:alpine"
There are two issues I'm facing, and I believe both of them are interrelated.
EDIT
Could this be because of a virtualization issue on Windows Home Edition, since it doesn't use Hyper-V? I've only just tried my hand at Docker, so I don't know much about it, but David's answer makes sense: it may be because of the various networks, and I need to connect to the right bridge.
Here is what I get when I run docker network ls:
NETWORK ID     NAME                      DRIVER    SCOPE
5802daa117b1   bridge                    bridge    local
7329d018df1b   collect_api_mod_default   bridge    local
5491bfee5551   host                      host      local
be1353789426   none                      null      local
When you run the whole stack in the same docker-compose.yml file, Compose automatically creates a Docker network for you, and this makes cross-service DNS requests work.
If you are trying to manually docker run a container, and you don't specify a --net option at all, you get a thing Docker calls the default bridge network, which is distinctly less useful. You need to make sure your container is attached to the same Docker-internal network as the Redis server.
You can run docker network ls to get a listing of Docker networks; given that docker-compose.yml file, there will probably be one named something like source_directory_default. Take that name and pass it to your docker run command (before the image name):
docker run --net source_directory_default -p 3000:3000 my_username/my_api
Working index.js for the latest version of Node and the latest version of Redis, both working with Docker. Hope it helps:
const express = require('express');
const redis = require('redis');

const app = express();

// node-redis v4: host and port belong in the connection URL
const client = redis.createClient({
  url: 'redis://redis-server:6379' // redis:// + docker-compose service name + default redis port
});

client.connect();

client.on('error', (err) => console.log('Redis Client Error', err));

client.on('connect', async () => {
  await client.set('visits', '0'); // node-redis v4 expects string values
  console.log('Redis Client Connected');
});

app.get('/', async (req, res) => {
  const value = await client.get('visits');
  await client.set('visits', String(parseInt(value) + 1));
  res.send('Number of visits: ' + value);
});

app.listen(8081, () => {
  console.log('Listening on port 8081');
});

How to address backend host with axios, when frontend and backend are in virtual docker network

I'm building a simple website with login, and my Vue frontend needs to retrieve user data from my Node.js backend, which connects to an SQL database.
I decided to use docker-compose for this, and as I understand it, docker-compose automatically sets up a network for the services mentioned in my docker-compose.yml.
What doesn't seem to work is the way I address the backend in my code.
I suspect that it might be because of the way I use axios to send a request to my backend.
I have inspected the default Docker network and was able to ping from my frontend to my backend using the DNS names I found in the network configuration.
But using the same names inside my code didn't work.
What does work is mapping a host port to my exposed API port and using http://localhost:5000 as the address, but this defeats the purpose of a Docker network.
my docker-compose.yml:
version: '3.3'
services:
  vue-frontend:
    image: flowmotion/vue-js-frontend
    ports:
      - 8070:80
    depends_on:
      - db-user-api
  db-user-api:
    image: flowmotion/user-db-api
    environment:
      - PORT=5000
    ports:
      - 5000:5000 # only needed if docker network connection can't be established
The Vue frontend files in question:
Login.vue
methods: {
    async login() {
      try {
        const response = await authenticationService.login({
          email: this.email,
          password: this.password
        });
        this.$store.dispatch("setToken", response.data.token);
        this.$store.dispatch("setUser", response.data.user);
        this.$router.push({ path: "/" });
      } catch (error) {
        this.showError = true;
        this.error = error.response.data.error;
      }
    }
  }
};
</script>
authenticationService.js
import api from "@/services/api";

export default {
  login(credentials) {
    return api().post("login", credentials);
  }
};
api.js
import axios from 'axios';
import config from '../config/config';

export default () => {
  return axios.create({
    baseURL: config.userBackendServer
  });
};
config.js:
module.exports = {
  userBackendServer: 'http://cl-dashboard_db-user-api_1:5000' // this doesn't seem to work
};
// using 'http://localhost:5000' works if ports are mapped to the host machine.
The expected result would be my backend doing an SQL lookup.
The actual result is that, instead of connecting to my backend, my frontend gives me a 404 status and my backend is never reached.
You are correct in assuming that containers in the Docker network can talk to each other without opening any ports to the outer world.
The point is: your Vue app does not run in any container. It is served from a container as a JS script file to your browser, and the browser is the one sending the requests to your Node backend. Since your browser is not inside the Docker network, you must use the outer port mapping (localhost:5000 in your case) to reach the backend.
Let me know if you have any more questions about that.
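One way to avoid hard-coding localhost (a sketch, not from the answer; assumes the API's host-mapped port stays 5000): derive the base URL from whatever host served the page, since the browser always sits outside the compose network.

// config.js
module.exports = {
  // window.location.hostname is the host the browser loaded the page from,
  // so this works for localhost in dev and for a real domain in production
  userBackendServer: 'http://' + window.location.hostname + ':5000'
};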

Right way to connect to Google Cloud SQL from Node.JS

I followed the example on how to set up Node.js to work with Cloud SQL and generally got it to work, but with some workarounds in how I connect to the SQL server. I am unable to connect in the proper way, passing the INSTANCE_CONNECTION_NAME to the socketPath option of the options object for the createConnection() method. Instead, as a temporary workaround, I currently specify the server's IP address and put my VM's IP address into the server's firewall settings to let it through.
This all works, but I'm now trying to put it together properly before publishing to App Engine.
How can I get it to work?
The following code works fine:
function getConnection() {
  const options = {
    host: "111.11.11.11", // IP address of my Cloud SQL server
    user: 'root',
    password: 'somePassword',
    database: 'DatabaseName'
  };
  return mysql.createConnection(options);
}
But the following code, which I am combining from the Tutorial and from the GitHub page referred to in the Tutorial, is giving errors:
function getConnection() {
  const options = {
    user: 'root',
    password: 'somePassword',
    database: 'DatabaseName',
    socketPath: '/cloudsql/project-name-123456:europe-west1:sql-instance-name'
  };
  return mysql.createConnection(options);
}
Here's the error that I'm getting:
{ [Error: connect ENOENT /cloudsql/project-name-123456:europe-west1:sql-instance-name]
code: 'ENOENT',
errno: 'ENOENT',
syscall: 'connect',
address: 'cloudsql/project-name-123456:europe-west1:sql-instance-name',
fatal: true }
What am I doing wrong? I am also concerned that if I publish the app to App Engine with the IP address, I won't be able to allow the incoming traffic into the SQL server.
I met a similar error while testing Cloud SQL.
Error message: Error: connect ENOENT /cloudsql/xxx-proj:us-central1:xxx-instance
Solution:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
sudo mkdir /cloudsql; sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql &
=> now the Node.js server can connect to MySQL.
Refer to the guide: https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-sql
Are you deploying your App Engine app to the same region (europe-west1) as the SQL database?
The documentation at https://cloud.google.com/sql/docs/mysql/connect-app-engine states "Your application must be in the same region as your Cloud SQL instance."
