NestJS authentication does not work in Docker container - node.js

I developed a full-stack app using Angular as the frontend and NestJS as the backend, organized as a monorepo with Nx. The project works fine on my local machine, including authentication (NestJS's Passport integration).
So I decided to dockerize the app, but when it runs in the Docker container the protected routes of the NestJS backend are not accessible: I get a 401 error even though I am using a JWT issued by the very same instance running in the container.
I migrated the project from bcrypt to bcryptjs because bcrypt failed to build inside the container. What confuses me most is that everything works fine locally; only in the Docker container are the protected routes inaccessible, despite using the JWT token that I got from the backend.
Dockerfile
FROM node:14
ENV PORT=3333
WORKDIR /app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*","nx.json", "./"]
RUN npm install
COPY ./apps .
EXPOSE 3333
CMD npm start
docker-compose.yml
version: '3.4'
services:
  myapp:
    image: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - .:/app
    depends_on:
      - postgres
    environment:
      API_PORT: 3333
      JWT_SECRET: verystrongsecret
      JWT_EXPIRES_IN: 3600
      DB_TYPE: postgres
      DB_PORT: 5432
      DB_HOST: postgres
      DB_USERNAME: user
      DB_PASSWORD: password
      DB_NAME: db
      NODE_ENV: development
      TYPEORM_SYNC: 'true'
    ports:
      - 3333:3333
  postgres:
    image: postgres:10.4
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
How can I solve the authentication issue? What could cause this strange behavior in Docker?
JwtStrategy:
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor(
    @InjectRepository(UserRepository) private userRepository: UserRepository
  ) {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.JWT_SECRET || config.get('jwt.secret'),
    });
  }

  async validate(payload: JwtPayload): Promise<User> {
    const { email } = payload;
    const user = await this.userRepository.findOne({ email });
    if (!user) {
      throw new UnauthorizedException();
    }
    return user;
  }
}
To protect the routes I am using @UseGuards(AuthGuard()), where AuthGuard is from the Passport library.
The AuthModule:
@Module({
  imports: [
    PassportModule.register({ defaultStrategy: 'jwt' }),
    JwtModule.register({
      secret: process.env.JWT_SECRET || jwtConfig.secret,
      signOptions: {
        expiresIn: process.env.JWT_EXPIRES_IN || jwtConfig.expiresIn,
      },
    }),
    TypeOrmModule.forFeature([UserRepository]),
  ],
  controllers: [AuthController],
  providers: [AuthService, JwtStrategy],
  exports: [AuthService, JwtStrategy, PassportModule],
})
export class AuthModule {}

Related

Post request with weird problem in docker [Solved]

Hi, I'm working on a project using Next.js in the frontend and Express in the backend. When I started connecting the applications I hit a weird problem: when axios tries to send a POST request to the API, I receive the following error:
I say weird because GET requests work, my API has a CORS config, I'm using Docker in all projects, and I ran some tests.
server.ts (backend)
import express from 'express'
import { adminJs, adminJsRouter } from './adminjs'
import { sequelize } from './database'
import dotenv from 'dotenv'
import { router } from './routes'
import cors from 'cors'
dotenv.config()
const app = express()
app.use(cors())
app.use(express.static('public'))
app.use(express.json())
app.use(adminJs.options.rootPath, adminJsRouter)
app.use(router)
const PORT = process.env.SERVER_PORT || 3000
app.listen(PORT, () => {
  sequelize.authenticate().then(() => console.log('DB connection successful.'))
  console.log(`Server started successfully at port ${PORT}`)
})
api.ts (frontend)
import axios from "axios";
const baseURL = process.env.NEXT_PUBLIC_BASEURL!
const api = axios.create({baseURL})
export type ErrorType = {
  message: string
}
export default api;
authService.ts (frontend) where problem happen
const authService = {
  register: async (params: Register) => {
    try {
      const res = await api.post<AxiosResponse<Register>>('/auth/register', params)
      console.log(res)
      return res
    } catch (err) {
      if (!axios.isAxiosError<AxiosError<ErrorType>>(err)) throw err
      console.error(JSON.stringify(err))
      return err
    }
  }
}
export default authService
In Docker I tested requests using the container alias and localhost, and got the following results:
using container alias
get request in frontend: works
post request in frontend: problem
post request using curl inside container: works
using http://localhost
get request in frontend: problem
post request in frontend: works
post request using curl inside container: works
post request using postman: works
docker-compose.yml (frontend)
version: '3.9'
services:
  front:
    build:
      context: .
    ports:
      - '3001:3001'
    volumes:
      - .:/onebitflix-front
    command: bash start.sh
    stdin_open: true
    environment:
      - NEXT_PUBLIC_BASEURL=http://api:3000
      - STATIC_FILES_BASEURL=http://localhost:3000
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
docker-compose.yml (backend)
version: '3.8'
services:
  api: # I use this alias in the frontend
    build: .
    command: bash start.sh
    ports:
      - "3000:3000"
    volumes:
      - .:/onebitflix
    environment:
      NODE_ENV: development
      SERVER_PORT: 3000
      HOST: db
      PORT: 5432
      DATABASE: onebitflix_development
      USERNAME: onebitflix
      PASSWORD: onebitflix
      JWT_SECRET: chave-do-jwt
    depends_on:
      - db
    networks:
      - onebitflix-net
  db:
    image: postgres:15.1
    environment:
      POSTGRES_DB: onebitflix_development
      POSTGRES_USER: onebitflix
      POSTGRES_PASSWORD: onebitflix
    ports:
      - "5432:5432"
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
volumes:
  db:
When you connect from the browser to the API, you need to use a URL that's reachable from the browser.
The docker-compose service names are only usable on the docker network, so you can't use api as a hostname from outside the network.
So you need to change
NEXT_PUBLIC_BASEURL=http://api:3000
to
NEXT_PUBLIC_BASEURL=http://localhost:3000
in your docker-compose.yml file
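Since Next.js code can run both server-side (inside the container, where the compose service name api resolves) and in the browser (outside the docker network, where only localhost works), one common pattern is to pick the base URL by runtime. A sketch, where INTERNAL_API_URL is a hypothetical extra variable, not one from the question:

```javascript
// Server-side rendering runs inside the container; the browser does not.
// In Node there is no `window` global, which is the usual runtime check.
const isServer = typeof window === 'undefined';

const baseURL = isServer
  ? process.env.INTERNAL_API_URL || 'http://api:3000'           // docker network name
  : process.env.NEXT_PUBLIC_BASEURL || 'http://localhost:3000'; // browser-reachable

console.log(baseURL);
```

This also explains the asker's test matrix: curl inside the container succeeds against the alias, while browser-initiated requests only succeed against localhost.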

Bull queue not connecting with docker redis service. Error: connect ECONNREFUSED 127.0.0.1:6379

I am trying to establish a Redis connection in NestJS via Docker. I am using ioredis to connect to Redis, but when I start my Nest application I keep getting ECONNREFUSED. It also looks like the Bull queue is not establishing its connection with Redis either.
Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16)
I have gone through many of the solutions offered, but nothing seems to work.
@Module({
  imports: [
    ConfigModule.forRoot({
      load: [redisConfig],
    }),
    BullModule.registerQueueAsync({
      name: 'jobs',
      imports: [ConfigModule.forFeature(bullQueueConfig)],
      useFactory: async (configService: ConfigService) => ({
        redis: {
          ...configService.get('bullQueue'),
        },
      }),
      inject: [ConfigService],
    }),
  ],
  controllers: [ConfigurationController],
  providers: [ConfigurationService, ConfigurationRepository],
  exports: [ConfigurationService],
})
export class ConfigurationModule {}
bull queue config
export default registerAs('bullQueue', () => {
  const redisURL =
    process.env.NODE_ENV === 'local'
      ? {
          host: process.env.BULL_QUEUE_REDIS_HOST,
          port: parseInt(process.env.BULL_QUEUE_REDIS_PORT ?? '6379'),
        }
      : JSON.parse(JSON.stringify(process.env.REDIS_URL));
  const env = {
    ...redisURL,
  };
  return env;
});
I get the ECONNREFUSED error after the configuration module has initialized.
In my .ts file:
this.redisClient = new Redis({
  ...newRedisObj,
});
newRedisObj also holds the correct values
{host: 'redis', port: 6379}
Redis config
export default registerAs('redis', () => {
  const redisURL =
    process.env.NODE_ENV === 'local'
      ? {
          host: process.env.REDIS_HOST,
          port: parseInt(process.env.REDIS_PORT ?? '6379'),
        }
      : JSON.parse(JSON.stringify(process.env.REDIS_URL));
  const env = {
    ...redisURL,
  };
  return env;
});
The config is returning the correct json with
{host: 'redis', port: 6379}
But it still tries to connect to 127.0.0.1:6379, hence the ECONNREFUSED. The docker-compose file also has the correct setup:
redis:
  container_name: redis_container
  image: "bitnami/redis"
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
  restart: unless-stopped
  ports:
    - "6379:6379"
  volumes:
    - "redis_data:/bitnami/redis/data"
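One pattern consistent with these symptoms: if the object actually spread into the client is empty or undefined at construction time (for instance because the config namespace lookup missed), ioredis silently falls back to its defaults, 127.0.0.1:6379, which matches the ECONNREFUSED seen here even though the config file looks correct. A plain-object sketch with hypothetical values:

```javascript
// ioredis defaults when no host/port is supplied
const defaults = { host: '127.0.0.1', port: 6379 };

// e.g. configService.get(...) missed the key and returned undefined
const fromConfig = undefined;

// Spreading an empty/undefined config leaves the defaults untouched
const effective = { ...defaults, ...(fromConfig ?? {}) };
console.log(effective); // { host: '127.0.0.1', port: 6379 }
```

So it is worth logging the value at the exact point where the client is constructed inside the container, not just in the config factory.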
I have this setup for Redis and Redis Commander. Try it with docker compose up:
version: '3'
services:
  redis:
    image: 'redis:alpine'
    ports:
      - '6379:6379'
    volumes:
      - 'redis-data:/data'
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - '8081:8081'
    depends_on:
      - redis
volumes:
  redis-data:
    external: false
I guess it's because you have to use:
export default registerAs('redis', () => {
  const redisURL =
    process.env.NODE_ENV === 'local'
      ? {
          host: process.env.REDIS_HOST,
          port: parseInt(process.env.REDIS_PORT ?? '6379'),
        }
      : JSON.parse(JSON.stringify(process.env.REDIS_URL));
  // look, here it is wrapped in a redis prop
  const env = {
    redis: { ...redisURL },
  };
  return env;
});
nestjs bull screenshot
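For reference, Nest's registerAs namespaces the factory's return value under the given token, so the shape you return determines the path you read it back with. A stand-in sketch of that behavior (hardcoded values for illustration; the real config reads process.env):

```javascript
// Minimal stand-in for Nest's registerAs + ConfigService#get namespacing
const registry = {};
const registerAs = (ns, factory) => { registry[ns] = factory(); };

registerAs('redis', () => ({ host: 'redis', port: 6379 }));

// get() supporting dotted paths, like ConfigService#get
const get = (path) =>
  path.split('.').reduce((obj, key) => (obj ?? {})[key], registry);

console.log(get('redis'));      // { host: 'redis', port: 6379 }
console.log(get('redis.host')); // 'redis'
console.log(get('bullQueue'));  // undefined -- a missed key spreads to nothing
```

Note the last line: the module above reads configService.get('bullQueue'), so if only the 'redis' namespace was registered, the spread produces an empty object and the client falls back to 127.0.0.1.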

Postgres isn't accepting any connection from other containers using docker-compose

I'm trying to connect my nodejs app with postgres using docker-compose.
Here's my docker-compose.yml
version: "3.9"
services:
  db:
    container_name: db
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: agoodusername
      POSTGRES_PASSWORD: astrongpassword
      POSTGRES_DB: dbname
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - 5433:5432
    networks:
      - postgres
  pgadmin:
    container_name: pgadmin4
    platform: linux/amd64
    image: dpage/pgadmin4
    restart: always
    depends_on:
      - db
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
    ports:
      - 5050:80
    networks:
      - postgres
  be:
    container_name: be
    depends_on:
      - db
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 3333:3333
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
(Note that I have tried with and without networks)
My index.ts:
import { Entity, Column, PrimaryGeneratedColumn, createConnection } from "typeorm";
import express from 'express';

@Entity()
class Photo {
  @PrimaryGeneratedColumn()
  id?: number;

  @Column()
  name?: string;

  @Column()
  description?: string;

  @Column()
  filename?: string;

  @Column()
  views?: number;

  @Column()
  isPublished?: boolean;
}

const createCon = async () => {
  let retries = 5;
  while (retries) {
    try {
      const connection = await createConnection({
        type: 'postgres',
        url: 'postgres://agoodusername:astrongpassword@db:5433/dbname',
        synchronize: false,
        entities: [Photo]
      });
      console.log(connection.isConnected);
      break;
    } catch (e) {
      console.log(e);
      retries -= 1;
      await new Promise(res => setTimeout(res, 5000));
    }
  }
}

const app = express();
app.listen(3333, '0.0.0.0', () => {
  createCon();
  console.log("Server is running at port 3333");
})
Dockerfile:
FROM node:12.18.1-alpine
WORKDIR /app
COPY . .
RUN yarn
CMD ["yarn", "start"]
EXPOSE 3333
When I ran postgres in its own Docker container and node in another container (without docker-compose), everything worked just fine.
Also, the pgadmin container can't connect to the postgres container, even though I provided the correct hostname (db in this case) and the correct host address (obtained by running docker inspect db | grep IPAddress).
Here are the logs from the nodejs container:
yarn run v1.22.4
$ ts-node index.ts
Server is running at port 3333
Error: connect ECONNREFUSED 172.20.0.2:5433
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '172.20.0.2',
port: 5433
}
In case you want a full project, check this repo.
You are using the incorrect port. Port 5433 is exposed to the outside, so it's only open on the host; when connecting over the Docker network you should use the port inside the running container, which is 5432.
As @crack_iT said in the comments, port "5433" is exposed to the host machine, not to other containers, so to interact with the other container you should use the port exposed by the image (which in this case is 5432).
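Concretely, that means fixing one character range in the connection string from the question: keep the service name db as host, but use the container-side port from ports: 5433:5432. A small sketch of the correction:

```javascript
// From another container on the same compose network, use the service name
// and the *container-side* port (the right-hand side of `5433:5432`),
// not the host-mapped 5433.
const broken = 'postgres://agoodusername:astrongpassword@db:5433/dbname';

const fixed = new URL(broken);
fixed.port = '5432'; // 5433 is only reachable from the host machine

console.log(fixed.href); // postgres://agoodusername:astrongpassword@db:5432/dbname
```

The host-mapped 5433 is still the right port when connecting from the host itself, e.g. from psql or pgAdmin running outside Docker.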

Not able to access value from .env file in strapi database.js file

I am dockerizing a Strapi application with a MongoDB Atlas hosted database. The image works fine when I hardcode the database credentials inside the config/database.js file, but I want to read those credentials from a .env file. According to the Strapi docs I can access those variables in database.js without using the dotenv package:
https://strapi.io/documentation/developer-docs/latest/setup-deployment-guides/configurations.html#environment-variables
But this shows me the following error:
error Error connecting to the Mongo database. URI does not have hostname, domain name and tld
I tried to use dotenv and process.env to get the variables, but it still shows me the same error. Any idea how I can resolve this?
database connection code
require('dotenv').config()
const {
  DATABASE_HOST,
  DATABASE_USERNAME,
  DATABASE_PASSWORD
} = process.env;

module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'mongoose',
      settings: {
        host: env('DATABASE_HOST', 'open-jade-cms-0.r07jc.mongodb.net'),
        srv: env.bool('DATABASE_SRV', true),
        port: env.int('DATABASE_PORT', 27017),
        database: env('DATABASE_NAME', 'open-jade-cms-dev'),
        username: env('DATABASE_USERNAME', 'open-jade-data-admin'),
        password: env('DATABASE_PASSWORD', 'uppERH7xmydTpXI8')
      },
      options: {
        authenticationDatabase: env('AUTHENTICATION_DATABASE', null),
        ssl: env.bool('DATABASE_SSL', true),
      },
    },
  },
});
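For debugging what database.js actually resolves to, a rough stand-in for Strapi's env() helper can be run outside Strapi (an assumption about its behavior: it reads process.env and falls back to the provided default, with .int() and .bool() casting):

```javascript
// Approximate stand-in for Strapi's env() helper (illustration only)
const env = (key, defaultValue) =>
  process.env[key] !== undefined ? process.env[key] : defaultValue;

env.int = (key, defaultValue) =>
  process.env[key] !== undefined ? parseInt(process.env[key], 10) : defaultValue;

env.bool = (key, defaultValue) =>
  process.env[key] !== undefined ? process.env[key] === 'true' : defaultValue;

// With DATABASE_PORT_DEMO unset in the container, the default wins:
console.log(env.int('DATABASE_PORT_DEMO', 27017)); // 27017
```

Printing the resolved settings object inside the container is a quick way to see whether the .env values ever reached the process, or whether everything silently fell back to defaults.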
Dockerfile
FROM strapi/base
COPY ./ ./
RUN npm install
RUN npm install dotenv
RUN npm run build
CMD ["npm","run", "start:develop"]
You won't need to install the dotenv package. Just make sure you have a .env file in place, something like this:
DATABASE_CLIENT=mongo
DATABASE_NAME=strapi
DATABASE_HOST=mongoexample
DATABASE_PORT=27017
DATABASE_USERNAME=strapi
DATABASE_PASSWORD=password
MONGO_INITDB_ROOT_USERNAME=strapi
MONGO_INITDB_ROOT_PASSWORD=password
It is a good idea to use docker-compose when running strapi locally
version: "3"
services:
  strapiexample:
    image: strapi/strapi
    container_name: strapiexample
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_CLIENT: ${DATABASE_CLIENT}
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_HOST: ${DATABASE_HOST}
      DATABASE_PORT: ${DATABASE_PORT}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - strapi-app-network
    volumes:
      - ./app:/srv/app
    ports:
      - "1337:1337"
The above has been taken from the Strapi blog.

Deploying Nuxt with Docker, env variables not registering and unexpect API call?

I am re-re-re-reading the docs on environment variables and am a bit confused.
MWE repo: https://gitlab.com/SumNeuron/docker-nf
I made a plugin /plugins/axios.js which creates a custom axios instance:
import axios from 'axios'
const apiVersion = 'v0'
const api = axios.create({
baseURL: `${process.env.PUBLIC_API_URL}/api/${apiVersion}/`
})
export default api
and accordingly added it to nuxt.config.js
import colors from 'vuetify/es5/util/colors'
import bodyParser from 'body-parser'
import session from 'express-session'

console.log(process.env.PUBLIC_API_URL)

export default {
  mode: 'spa',
  env: {
    PUBLIC_API_URL: process.env.PUBLIC_API_URL || 'http://localhost:6091'
  },
  // ...
  plugins: [
    //...
    '@/plugins/axios.js'
  ]
}
I set PUBLIC_API_URL to http://localhost:9061 in the .env file. Oddly, the log statement is correct (port 9061) but when trying to reach the site there is an api call to port 6091 (the fallback)
System setup
project/
|-- backend (flask api)
|-- frontend (npx create-nuxt-app frontend)
|   |-- assets/
|   |-- ...
|   |-- plugins/
|   |   |-- axios.js
|   |   |-- restricted_pages/
|   |       |-- index.js (see other notes 3)
|   |-- ...
|   |-- nuxt.config.js
|   |-- Dockerfile
|-- .env
|-- docker-compose.yml
Docker
docker-compose.yml
version: '3'
services:
  nuxt: # frontend
    image: frontend
    container_name: my_nuxt
    build:
      context: .
      dockerfile: ./frontend/Dockerfile
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    environment:
      - HOST
      - PUBLIC_API_URL
  flask: # backend
    image: backend
    container_name: my_flask
    build:
      context: .
      dockerfile: ./backend/Dockerfile
    command: bash deploy.sh
    environment:
      - REDIS_URL
      - PYTHONPATH
    ports:
      - "9061:9061"
    expose:
      - '9061'
    depends_on:
      - redis
  worker:
    image: backend
    container_name: my_worker
    command: python3 manage.py runworker
    depends_on:
      - redis
    environment:
      - REDIS_URL
      - PYTHONPATH
  redis: # for workers
    container_name: my_redis
    image: redis:5.0.3-alpine
    expose:
      - '6379'
Dockerfile
FROM node:10.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
COPY ./frontend ${APP_ROOT}
RUN npm install
RUN npm run build
Other notes:
The reason the site fails to load is that the new axios plugin (@/plugins/axios.js) makes a weird XHR call when the page is loaded, triggered by commons.app.js line 464. I do not know why; this call is nowhere explicit in my code.
I see this warning:
WARN Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.
I do not know what caused it or how to correct it
I have a "restricted" page:
// Create express router
const router = express.Router()

// Transform req & res to have the same API as express
// So we can use res.status() & res.json()
const app = express()
router.use((req, res, next) => {
  Object.setPrototypeOf(req, app.request)
  Object.setPrototypeOf(res, app.response)
  req.res = res
  res.req = req
  next()
})

// Add POST - /api/login
router.post('/login', (req, res) => {
  if (req.body.username === username && req.body.password === password) {
    req.session.authUser = { username }
    return res.json({ username })
  }
  res.status(401).json({ message: 'Bad credentials' })
})

// Add POST - /api/logout
router.post('/logout', (req, res) => {
  delete req.session.authUser
  res.json({ ok: true })
})

// Export the server middleware
export default {
  path: '/restricted_pages',
  handler: router
}
which is configured in nuxt.config.js as
serverMiddleware: [
  // body-parser middleware
  bodyParser.json(),
  // session middleware
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: { maxAge: 60000 }
  }),
  // Api middleware
  // We add /restricted_pages/login & /restricted_pages/logout routes
  '@/restricted_pages'
],
which uses the default axios module:
//store/index.js
import axios from 'axios'
import api from '@/plugins/axios.js'
//...
const actions = {
  async login(...) {
    // ....
    await axios.post('/restricted_pages/login', { username, password })
    // ....
  }
}
// ...
As you are working in SPA mode, your environment variables need to be available at build time.
The $ docker run command is therefore already too late to define these variables, and that is what you are doing with your docker-compose's environment key.
So, to make these variables available at build time, define them in your Dockerfile with ENV PUBLIC_API_URL http://localhost:9061. However, if you want them to be defined by your docker-compose file, you need to pass them as build args, i.e. in your docker-compose:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: http://localhost:9061
and in your Dockerfile, you catch that arg and pass it to your build environment like so :
ARG PUBLIC_API_URL
ENV PUBLIC_API_URL ${PUBLIC_API_URL}
If you don't want to define the variable's value directly in your docker-compose file, but rather use environment variables defined locally (i.e. on the machine you're launching the docker-compose command from, for instance with shell $ export PUBLIC_API_URL=http://localhost:9061), you can reference it as you would in a subsequent shell command, so your docker-compose ends up like this:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: ${PUBLIC_API_URL}
The Nuxt RuntimeConfig properties can be used instead of the env configuration:
https://nuxtjs.org/docs/2.x/directory-structure/nuxt-config#publicruntimeconfig
publicRuntimeConfig is available as $config for client and server side and can contain runtime environment variables by configuring it in nuxt.config.js:
export default {
  ...
  publicRuntimeConfig: {
    myEnv: process.env.MYENV || 'my-default-value',
  },
  ...
}
Use it in your components like so:
// within <template>
{{ $config.myEnv }}
// within <script>
this.$config.myEnv
See also this blog post for further information.
