Hi, I'm working on a project with Next.js on the frontend and Express on the backend. I started connecting the applications and hit a weird problem: when axios tries to send a POST request to the API, I receive the following error:
I say weird because GET requests work, my API has CORS configured, I'm using Docker for both projects, and I ran some tests.
server.ts (backend)
import express from 'express'
import { adminJs, adminJsRouter } from './adminjs'
import { sequelize } from './database'
import dotenv from 'dotenv'
import { router } from './routes'
import cors from 'cors'
dotenv.config()
const app = express()
app.use(cors())
app.use(express.static('public'))
app.use(express.json())
app.use(adminJs.options.rootPath, adminJsRouter)
app.use(router)
const PORT = process.env.SERVER_PORT || 3000
app.listen(PORT, () => {
  sequelize.authenticate().then(() => console.log('DB connection successful.'))
  console.log(`Server started successfully at port ${PORT}`)
})
api.ts (frontend)
import axios from "axios";
const baseURL = process.env.NEXT_PUBLIC_BASEURL!
const api = axios.create({baseURL})
export type ErrorType = {
  message: string
}
export default api;
authService.ts (frontend), where the problem happens:
import axios, { AxiosError, AxiosResponse } from 'axios'
import api, { ErrorType } from './api'
// Register is the request payload type, defined elsewhere in the project

const authService = {
  register: async (params: Register) => {
    try {
      const res = await api.post<AxiosResponse<Register>>('/auth/register', params)
      console.log(res)
      return res
    } catch (err) {
      if (!axios.isAxiosError<AxiosError<ErrorType>>(err)) throw err
      console.error(JSON.stringify(err))
      return err
    }
  }
}
export default authService
In Docker I tested requests using the container alias and using localhost, with the following results:

Using the container alias (http://api:3000):
- GET request from the frontend: works
- POST request from the frontend: problem
- POST request using curl inside the container: works

Using http://localhost:
- GET request from the frontend: problem
- POST request from the frontend: works
- POST request using curl inside the container: works
- POST request using Postman: works
docker-compose.yml (frontend)
version: '3.9'
services:
  front:
    build:
      context: .
    ports:
      - '3001:3001'
    volumes:
      - .:/onebitflix-front
    command: bash start.sh
    stdin_open: true
    environment:
      - NEXT_PUBLIC_BASEURL=http://api:3000
      - STATIC_FILES_BASEURL=http://localhost:3000
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
docker-compose.yml (backend)
version: '3.8'
services:
  api: # I use this alias in the frontend
    build: .
    command: bash start.sh
    ports:
      - "3000:3000"
    volumes:
      - .:/onebitflix
    environment:
      NODE_ENV: development
      SERVER_PORT: 3000
      HOST: db
      PORT: 5432
      DATABASE: onebitflix_development
      USERNAME: onebitflix
      PASSWORD: onebitflix
      JWT_SECRET: chave-do-jwt
    depends_on:
      - db
    networks:
      - onebitflix-net
  db:
    image: postgres:15.1
    environment:
      POSTGRES_DB: onebitflix_development
      POSTGRES_USER: onebitflix
      POSTGRES_PASSWORD: onebitflix
    ports:
      - "5432:5432"
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
volumes:
  db:
When you connect from the browser to the API, you need to use a URL that's reachable from the browser.
The docker-compose service names are only usable on the docker network, so you can't use api as a hostname from outside the network.
So you need to change
NEXT_PUBLIC_BASEURL=http://api:3000
to
NEXT_PUBLIC_BASEURL=http://localhost:3000
in your frontend docker-compose.yml file.
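Note that if you also render pages on the Next.js server, requests made during SSR run inside the container, where the api alias does resolve; only browser-initiated requests need localhost. A minimal sketch of api.ts handling both cases (SERVER_API_BASEURL is a hypothetical extra variable I'm introducing, not something from the question):

import axios from "axios";

// Browser code must use a URL reachable from the host (localhost), while
// server-side (SSR) code can use the internal service alias.
// SERVER_API_BASEURL (e.g. http://api:3000) is an assumed variable.
const baseURL =
  typeof window === "undefined"
    ? process.env.SERVER_API_BASEURL ?? process.env.NEXT_PUBLIC_BASEURL!
    : process.env.NEXT_PUBLIC_BASEURL!;

const api = axios.create({ baseURL });

export default api;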
Related
Hello, I am new to Docker and I am trying to dockerize my application, which uses React as the frontend, Node.js as the backend, and MySQL as the database. However, when I try to fetch data from the server in my React app, it gives me this error:
Access to fetch at 'http://localhost:3001/api' from origin 'http://localhost:3000' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header has a value 'http://127.0.0.1:3000' that is not equal to the supplied origin. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My React app renders, and when I go to http://localhost:3001/api I receive the data I want to get. It's just the communication between React and Node.js that is somehow broken.
Here are my Docker files and env files:
.env:
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=123456
DB_NAME=testdb
DB_PORT=3306
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=123456
MYSQLDB_DATABASE=testdb
MYSQLDB_LOCAL_PORT=3306
MYSQLDB_DOCKER_PORT=3306
NODE_LOCAL_PORT=3001
NODE_DOCKER_PORT=3001
CLIENT_ORIGIN=http://127.0.0.1:3000
CLIENT_API_BASE_URL=http://127.0.0.1:3001/api
REACT_LOCAL_PORT=3000
REACT_DOCKER_PORT=80
Dockerfile for React:
FROM node:14.17.0 as build-stage
WORKDIR /frontend
COPY package.json .
RUN npm install
COPY . .
ARG REACT_APP_API_BASE_URL
ENV REACT_APP_API_BASE_URL=$REACT_APP_API_BASE_URL
RUN npm run build
FROM nginx:1.17.0-alpine
COPY --from=build-stage /frontend/build /usr/share/nginx/html
EXPOSE 80
CMD nginx -g 'daemon off;'
Dockerfile for Node.js:
FROM node:14.17.0
WORKDIR /
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "server.js" ]
docker-compose.yml :
version: '3.8'
services:
  mysqldb:
    image: mysql
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  server-api:
    depends_on:
      - mysqldb
    build: ./
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
      - CLIENT_ORIGIN=$CLIENT_ORIGIN
    networks:
      - backend
      - frontend
  frontend-ui:
    depends_on:
      - server-api
    build:
      context: ./frontend
      args:
        - REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
    ports:
      - $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
My project folder structure is a bit unusual: the server and its files (node_modules, package.json, ...) are in the root, which is also where docker-compose.yml, .env, and the server's Dockerfile are located.
The React app is in the /frontend folder, along with the Dockerfile for React.
In React I call fetch("http://localhost:3001/api").
The server is created with Express:
const express = require('express');
const cors = require('cors');
const server = express();
var mysql = require('mysql2');
require("dotenv").config();

const port = 3001;

server.use(express.static('public'));

var corsOptions = {
  origin: "http://127.0.0.1:3000"
};
server.use(cors(corsOptions));

var con = mysql.createConnection({
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD
});

server.get('/api', async (req, res) => {
  console.log("START");
  con.connect(function (err) {
    if (err) throw err;
    console.log("connected !");
    con.query("use testdb;", function (err, result, fields) {
      if (err) throw err;
      console.log(result);
    });
    con.query("select * from records;", function (err, result, fields) {
      if (err) throw err;
      res.send(result);
    });
  });
});

server.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
I created this by following this tutorial.
Thanks for any help.
Change this: origin: "http://127.0.0.1:3000"
to this: origin: "http://localhost:3000"
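If you want both spellings to keep working (browsers treat http://localhost:3000 and http://127.0.0.1:3000 as different origins), note that the cors package also accepts an array of origins, and the value could come from the CLIENT_ORIGIN variable your compose file already passes. A minimal sketch, assuming the same server object as above:

const cors = require('cors');

// allow both spellings of the local origin; browsers treat them as distinct
var corsOptions = {
  origin: ["http://localhost:3000", "http://127.0.0.1:3000"],
};

// alternatively, drive it from the env var set in docker-compose.yml:
// var corsOptions = { origin: process.env.CLIENT_ORIGIN };

server.use(cors(corsOptions));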
I am connecting to a Redis container with Docker, and Redis is working inside Docker: if I enter the container with docker exec -it 96e199a8badf sh, I can connect to the Redis server.
My Node.js application is below; I use redis version 4.1.0.
I don't know what's going on. How can I fix this?
docker-compose.yml
version: '3'
services:
  app:
    container_name: delivery-app
    build:
      dockerfile: 'Dockerfile'
      context: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - '/app/node_modules'
    networks:
      - redis
  redis:
    image: "redis"
    ports:
      - "6379:6379"
    networks:
      - redis
networks:
  redis:
    driver: bridge
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["node","server.js"]
code:
const redis = require('redis'); // missing from the original snippet; redis v4

const redisClient = redis.createClient({
  socket: {
    port: 6379,
    host: "redis"
  }
});

await redisClient.connect();

// note: this runs inside a route handler, where `res` is in scope
redisClient.on('connect', function () {
  return res.status(200).json({
    success: true
  });
}).on('error', function (error) {
  return res.status(400).json({
    success: false
  });
});
package.json
"redis": "^4.1.0"
I had the same issue and solved it by passing the host parameter inside a socket object in createClient, like this:
{
  socket: {
    host: "redis"
  }
}
This seems to be new; previously you didn't have to use the socket object.
Here's the documentation for node-redis createClient:
https://github.com/redis/node-redis/blob/HEAD/docs/client-configuration.md
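For completeness, a minimal sketch of a working v4 setup against the compose file above (attaching the error listener before connect() is my own precaution, not part of the original answer):

const { createClient } = require('redis');

async function connectRedis() {
  // "redis" is the compose service name, resolvable on the compose network
  const client = createClient({
    socket: {
      host: 'redis',
      port: 6379,
    },
  });

  // attach the listener before connecting so early failures are caught
  client.on('error', (err) => console.error('Redis client error', err));

  // v4 clients no longer connect implicitly
  await client.connect();
  return client;
}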
This question already has answers here:
MongoDB with Docker "failed to connect to server [localhost:27017] on first connect" (4 answers)
Closed 1 year ago.
Both of these services work individually, yet when I go to the /docker endpoint through the container, the app crashes as it fails to connect to my database. Can anyone see why?
After running docker-compose I inspected each container individually and they work fine, just not together. I'm unsure why, as it seems to work for others.
Docker compose
version: '3.3'
services:
  db:
    container_name: db
    image: mongo
    # volumes:
    #   - ./db:/data/db
    ports:
      - '27017:27017'
  # frontend:
  #   build:
  #     context: './frontend/'
  #     dockerfile: Dockerfile
  #   container_name: reactfront
  #   # depends_on: [server]
  #   ports:
  #     - '3000:3000'
  #   restart: always
  server:
    build:
      context: ./backend/
      dockerfile: Dockerfile
    container_name: nodeserver
    ports:
      - '4000:4000'
    restart: always
    depends_on: [db]
    links:
      - db
volumes:
  db_data: {}
Node.js app
const express = require('express')
const app = express()
const port = 4000
var cors = require('cors')
const { MongoClient } = require("mongodb");

const uri = 'mongodb://localhost:27017'
const client = new MongoClient(uri);

app.use(cors())

app.get('/', (req, res) => {
  res.send('Hello Worldd!')
})

app.get('/docker', async (req, res) => {
  let myres;
  client.connect().then(async (db, b) => {
    myres = await db.db().admin().listDatabases()
    console.log(await db.db().admin().listDatabases());
    res.json(myres)
  })
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Docker Compose will create a bridge network, and the containers will be able to resolve each other's names.
Replace
const uri = 'mongodb://localhost:27017'
with
const uri = 'mongodb://db:27017'
(db is the mongo service's name).
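A slightly more flexible sketch reads the host from an environment variable, so the same code runs both inside and outside the compose network (the MONGO_URL variable name is my assumption; it is not in the compose file above):

const { MongoClient } = require('mongodb');

// inside the compose network the service name "db" resolves;
// set MONGO_URL to override when running outside of it
const uri = process.env.MONGO_URL || 'mongodb://db:27017';
const client = new MongoClient(uri);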
You can't use localhost, apparently, as they are in different containers (seems obvious now). I checked the IP of the container and put that into the Mongo URL (though the service name is more stable, since container IPs can change between runs).
I am re-re-re-reading the docs on environment variables and am a bit confused.
MWE repo: https://gitlab.com/SumNeuron/docker-nf
I made a plugin /plugins/axios.js which creates a custom axios instance:
import axios from 'axios'
const apiVersion = 'v0'
const api = axios.create({
  baseURL: `${process.env.PUBLIC_API_URL}/api/${apiVersion}/`
})
export default api
and accordingly added it to nuxt.config.js
import colors from 'vuetify/es5/util/colors'
import bodyParser from 'body-parser'
import session from 'express-session'
console.log(process.env.PUBLIC_API_URL)
export default {
  mode: 'spa',
  env: {
    PUBLIC_API_URL: process.env.PUBLIC_API_URL || 'http://localhost:6091'
  },
  // ...
  plugins: [
    //...
    '#/plugins/axios.js'
  ]
}
I set PUBLIC_API_URL to http://localhost:9061 in the .env file. Oddly, the log statement is correct (port 9061), but when trying to reach the site there is an API call to port 6091 (the fallback).
System setup
project/
|-- backend (flask api)
|-- frontend (npx create-nuxt-app frontend)
|   |-- assets/
|   |-- ...
|   |-- plugins/
|   |   |-- axios.js
|   |   |-- restriced_pages
|   |       |-- index.js (see other notes 3)
|   |-- ...
|   |-- nuxt.config.js
|   |-- Dockerfile
|-- .env
|-- docker-compose.yml
Docker
docker-compose.yml
version: '3'
services:
  nuxt: # frontend
    image: frontend
    container_name: my_nuxt
    build:
      context: .
      dockerfile: ./frontend/Dockerfile
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    environment:
      - HOST
      - PUBLIC_API_URL
  flask: # backend
    image: backend
    container_name: my_flask
    build:
      context: .
      dockerfile: ./backend/Dockerfile
    command: bash deploy.sh
    environment:
      - REDIS_URL
      - PYTHONPATH
    ports:
      - "9061:9061"
    expose:
      - '9061'
    depends_on:
      - redis
  worker:
    image: backend
    container_name: my_worker
    command: python3 manage.py runworker
    depends_on:
      - redis
    environment:
      - REDIS_URL
      - PYTHONPATH
  redis: # for workers
    container_name: my_redis
    image: redis:5.0.3-alpine
    expose:
      - '6379'
Dockerfile
FROM node:10.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
COPY ./frontend ${APP_ROOT}
RUN npm install
RUN npm run build
Other notes:
The reason the site fails to load is that the new axios plugin (#/plugins/axios.js) makes a weird XHR call when the page is loaded, triggered by commons.app.js line 464. I do not know why; this call is nowhere explicit in my code.
I see this warning:
WARN Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.
I do not know what caused it or how to correct it
I have a "restricted" page:
// Create express router
const router = express.Router()

// Transform req & res to have the same API as express
// So we can use res.status() & res.json()
const app = express()
router.use((req, res, next) => {
  Object.setPrototypeOf(req, app.request)
  Object.setPrototypeOf(res, app.response)
  req.res = res
  res.req = req
  next()
})

// Add POST - /api/login
router.post('/login', (req, res) => {
  if (req.body.username === username && req.body.password === password) {
    req.session.authUser = { username }
    return res.json({ username })
  }
  res.status(401).json({ message: 'Bad credentials' })
})

// Add POST - /api/logout
router.post('/logout', (req, res) => {
  delete req.session.authUser
  res.json({ ok: true })
})

// Export the server middleware
export default {
  path: '/restricted_pages',
  handler: router
}
which is configured in nuxt.config.js as
serverMiddleware: [
  // body-parser middleware
  bodyParser.json(),
  // session middleware
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: { maxAge: 60000 }
  }),
  // Api middleware
  // We add /restricted_pages/login & /restricted_pages/logout routes
  '#/restricted_pages'
],
which uses the default axios module:
//store/index.js
import axios from 'axios'
import api from '#/plugins/axios.js'

//...
const actions = {
  async login(...) {
    // ....
    await axios.post('/restricted_pages/login', { username, password })
    // ....
  }
}
// ...
As you are working in SPA mode, you need your environment variables to be available at build time.
The docker run command is therefore already too late to define these variables, and that is what you are doing with your docker-compose file's environment key.
So to make these variables available at build time, you need to define them in your Dockerfile with ENV PUBLIC_API_URL http://localhost:9061. However, if you want them to be defined by your docker-compose file, you need to pass them as build args, i.e. in your docker-compose:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: http://localhost:9061
and in your Dockerfile, you catch that arg and pass it to your build environment like so:
ARG PUBLIC_API_URL
ENV PUBLIC_API_URL ${PUBLIC_API_URL}
If you don't want to define the variable's value directly in your docker-compose file, but rather use environment variables defined locally (i.e. on the machine from which you launch the docker-compose command, for instance with $ export PUBLIC_API_URL=http://localhost:9061), you can reference them as you would in a shell command, so your docker-compose ends up like this:
nuxt:
  build:
    # ...
    args:
      PUBLIC_API_URL: ${PUBLIC_API_URL}
The Nuxt RuntimeConfig properties can be used instead of the env configuration:
https://nuxtjs.org/docs/2.x/directory-structure/nuxt-config#publicruntimeconfig
publicRuntimeConfig is available as $config for client and server side and can contain runtime environment variables by configuring it in nuxt.config.js:
export default {
  // ...
  publicRuntimeConfig: {
    myEnv: process.env.MYENV || 'my-default-value',
  },
  // ...
}
Use it in your components like so:
// within <template>
{{ $config.myEnv }}
// within <script>
this.$config.myEnv
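Applied to the axios plugin from the question, a minimal sketch (assuming Nuxt ≥ 2.13 for $config support; the publicApiUrl key and the injected $api name are my choices, not from the question):

// nuxt.config.js
export default {
  publicRuntimeConfig: {
    publicApiUrl: process.env.PUBLIC_API_URL || 'http://localhost:6091',
  },
}

// plugins/axios.js
import axios from 'axios'

export default function ({ $config }, inject) {
  const api = axios.create({
    baseURL: `${$config.publicApiUrl}/api/v0/`,
  })
  // available as this.$api in components and as $api in the context
  inject('api', api)
}

Because the value is resolved at runtime rather than baked in at build time, the docker-compose environment key is then sufficient and no build args are needed.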
See also this blog post for further information.
I'm building a Node API boilerplate with Docker, Babel, Istanbul, PM2, ESLint, and other features. The project works fine in dev mode with nodemon and in test mode with mocha. However, when I run the project in prod mode with PM2, the Docker ports don't bind.
The full project can be found here: https://github.com/apandrade/node-api-boilerplate
docker ps output after running in production mode:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d5362284957 node:latest "npm start" 15 seconds ago Up 15 seconds nodeapiboilerplate_provision_run_1
a2c79e3e47cc mongo "docker-entrypoint.s…" 52 seconds ago Up 51 seconds 0.0.0.0:27017->27017/tcp mongo
base.yml file
version: "2"
services:
db_credentials:
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api_db
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
common: &common
image: "node:latest"
working_dir: /usr/src/app
restart: always
volumes:
- ./:/usr/src/app
- ./scripts/waitforit:/usr/bin/waitforit
ports:
- "3000:3000"
base:
<<: *common
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api_db
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
- APP_NAME=node-api-boilerplate
- PORT=3000
- DB_HOST=mongo
- DB_PORT=27017
base_test:
<<: *common
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
- PORT=3000
- DB_HOST=mongo
- DB_PORT=27017
docker-compose.yml file
version: "2"
services:
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
- ./scripts/mongo-entrypoint.sh:/docker-entrypoint-initdb.d/mongo-entrypoint.sh
ports:
- "27017:27017"
extends:
file: base.yml
service: db_credentials
command: "mongod --auth"
develop:
extends:
file: base.yml
service: base
environment:
- NODE_ENV=development
- LOG_LEVEL=debug
container_name: dev_node_api
command: "npm run dev"
depends_on:
- mongo
provision:
extends:
file: base.yml
service: base
environment:
- NODE_ENV=production
- LOG_LEVEL=info
container_name: prod_node_api
command: "npm start"
depends_on:
- mongo
test:
extends:
file: base.yml
service: base_test
environment:
- NODE_ENV=test
- LOG_LEVEL=debug
container_name: test_node_api
command: "npm run test"
depends_on:
- mongo
process.json file
{
  "apps": [{
    "name": "node-api-boilerplate",
    "script": "./src/server.js",
    "exec_mode": "cluster",
    "exec_interpreter": "babel-node",
    "instances": "max",
    "merge_logs": true
  }]
}
server.js file
require('pretty-error').start();
require('babel-register'); // eslint-disable-line import/no-extraneous-dependencies

const express = require('express');
const morgan = require('morgan');
const methodOverride = require('method-override');
const bodyParser = require('body-parser');
const createError = require('http-errors');

require('./config/database');
const router = require('./config/router');
const logger = require('./config/logger');
const allowCors = require('./config/cors');

const PORT = process.env.PORT;
const app = express();

app.disable('x-powered-by');
app.use(methodOverride());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(allowCors);
app.use(morgan('dev', {
  skip: (req, res) => res.statusCode < 400,
  stream: process.stderr,
}));
app.use(morgan('dev', {
  skip: (req, res) => res.statusCode >= 400,
  stream: process.stdout,
}));

/**
 * Add and remove headers for all requests
 */
app.use((req, res, next) => {
  res.setHeader('Content-Type', 'application/json');
  res.setHeader('Accept', 'application/json');
  next();
});

app.use('/api/v1', router);

/**
 * Error Handler
 */
app.use((err, req, res, next) => {
  logger.error(err.stack);
  const error = createError(err);
  res.status(error.status).json(error);
  next();
});

app.listen(PORT, () => {
  logger.info(`Listening on port ${PORT}`);
});
After a few days searching for a solution, I discovered that there is no problem at all. What happens is that to run my project I was using docker-compose run --rm <service_name>, and the Docker Compose reference is clear:
the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service's ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports <service_name>
In the end I chose to run docker-compose up <service_name> instead; it is enough for me because I have no specific need to override a command or run only one container on different ports.
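For the compose file above, that means either of the following (provision is the production service defined there):

docker-compose run --rm --service-ports provision
# or, since no command override is needed:
docker-compose up provision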