Docker - React app cannot fetch from the server - node.js

Hello, I am new to Docker and I am trying to dockerize my application, which uses React as the frontend, Node.js as the backend, and MySQL as the database. However, when I try to fetch data from the server in my React app, it gives me this error:
Access to fetch at 'http://localhost:3001/api' from origin 'http://localhost:3000' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header has a value 'http://127.0.0.1:3000' that is not equal to the supplied origin. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My React app renders, and when I go to http://localhost:3001/api directly I receive the data I expect. Only the communication between React and Node.js is somehow broken.
Here are my Docker files and env files:
.env:
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=123456
DB_NAME=testdb
DB_PORT=3306
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=123456
MYSQLDB_DATABASE=testdb
MYSQLDB_LOCAL_PORT=3306
MYSQLDB_DOCKER_PORT=3306
NODE_LOCAL_PORT=3001
NODE_DOCKER_PORT=3001
CLIENT_ORIGIN=http://127.0.0.1:3000
CLIENT_API_BASE_URL=http://127.0.0.1:3001/api
REACT_LOCAL_PORT=3000
REACT_DOCKER_PORT=80
Dockerfile for React:
FROM node:14.17.0 as build-stage
WORKDIR /frontend
COPY package.json .
RUN npm install
COPY . .
ARG REACT_APP_API_BASE_URL
ENV REACT_APP_API_BASE_URL=$REACT_APP_API_BASE_URL
RUN npm run build
FROM nginx:1.17.0-alpine
COPY --from=build-stage /frontend/build /usr/share/nginx/html
EXPOSE 80
CMD nginx -g 'daemon off;'
Dockerfile for Node.js:
FROM node:14.17.0
WORKDIR /
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "server.js" ]
docker-compose.yml:
version: '3.8'
services:
  mysqldb:
    image: mysql
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  server-api:
    depends_on:
      - mysqldb
    build: ./
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
      - CLIENT_ORIGIN=$CLIENT_ORIGIN
    networks:
      - backend
      - frontend
  frontend-ui:
    depends_on:
      - server-api
    build:
      context: ./frontend
      args:
        - REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
    ports:
      - $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
My project folder structure is a bit unusual: the server and its files (node_modules, package.json, ...) are in the root, which is also where docker-compose.yml, .env, and the server's Dockerfile are located.
The React app and the rest of the frontend live in the /frontend folder, where the Dockerfile for React is located.
In React I call fetch("http://localhost:3001/api").
The server is created with Express:
const express = require('express');
const cors = require('cors');
const server = express();
var mysql = require('mysql2');
require("dotenv").config();
const port = 3001;

server.use(express.static('public'));

var corsOptions = {
  origin: "http://127.0.0.1:3000"
};
server.use(cors(corsOptions));

var con = mysql.createConnection({
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD
});

server.get('/api', async (req, res) => {
  console.log("START");
  con.connect(function (err) {
    if (err) throw err;
    console.log("connected !");
    con.query("use testdb;", function (err, result, fields) {
      if (err) throw err;
      console.log(result);
    });
    con.query("select * from records;", function (err, result, fields) {
      if (err) throw err;
      res.send(result);
    });
  });
});

server.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
I created this thanks to this tutorial.
Thanks for any help.

Change this:
origin: "http://127.0.0.1:3000"
to this:
origin: "http://localhost:3000"
The browser compares the Access-Control-Allow-Origin header against the page's origin as an exact string, so localhost and 127.0.0.1 do not match even though they point to the same machine.
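Alternatively, since your docker-compose.yml already passes CLIENT_ORIGIN into the server-api container, you could read the allowed origin from the environment instead of hardcoding it. A minimal sketch, assuming you also update CLIENT_ORIGIN in .env to http://localhost:3000:
var corsOptions = {
  // CLIENT_ORIGIN comes from .env via docker-compose; the fallback is for runs outside Docker
  origin: process.env.CLIENT_ORIGIN || "http://localhost:3000"
};
server.use(cors(corsOptions));
That way the allowed origin only has to be changed in one place.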

Related

POST request with weird problem in Docker [Solved]

Hi, I am working on a project using Next.js on the frontend and Express on the backend. When I started connecting the applications I ran into a weird problem: when axios tries to send a POST request to the API, I receive an error.
I say weird because GET requests work, my API has a CORS config, and I am using Docker in all the projects; I ran some tests, with the results listed below.
server.ts (backend)
import express from 'express'
import { adminJs, adminJsRouter } from './adminjs'
import { sequelize } from './database'
import dotenv from 'dotenv'
import { router } from './routes'
import cors from 'cors'
dotenv.config()
const app = express()
app.use(cors())
app.use(express.static('public'))
app.use(express.json())
app.use(adminJs.options.rootPath, adminJsRouter)
app.use(router)
const PORT = process.env.SERVER_PORT || 3000
app.listen(PORT, () => {
  sequelize.authenticate().then(() => console.log('DB connection sucessfull.'))
  console.log(`Server started successfuly at port ${PORT}`)
})
api.ts (frontend)
import axios from "axios";
const baseURL = process.env.NEXT_PUBLIC_BASEURL!
const api = axios.create({baseURL})
export type ErrorType = {
  message: string
}
export default api;
authService.ts (frontend), where the problem happens
const authService = {
  register: async (params: Register) => {
    try {
      const res = await api.post<AxiosResponse<Register>>('/auth/register', params)
      console.log(res)
      return res
    } catch (err) {
      if (!axios.isAxiosError<AxiosError<ErrorType>>(err)) throw err
      console.error(JSON.stringify(err))
      return err
    }
  }
}
export default authService
In Docker I tested requests using the container alias and using localhost, and got the following results:
using container alias
get request in frontend: works
post request in frontend: problem
post request using curl inside container: works
using http://localhost
get request in frontend: problem
post request in frontend: works
post request using curl inside container: works
post request using postman: works
docker-compose.yml (frontend)
version: '3.9'
services:
  front:
    build:
      context: .
    ports:
      - '3001:3001'
    volumes:
      - .:/onebitflix-front
    command: bash start.sh
    stdin_open: true
    environment:
      - NEXT_PUBLIC_BASEURL=http://api:3000
      - STATIC_FILES_BASEURL=http://localhost:3000
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
docker-compose.yml (backend)
version: '3.8'
services:
  api: # I use this alias in the frontend
    build: .
    command: bash start.sh
    ports:
      - "3000:3000"
    volumes:
      - .:/onebitflix
    environment:
      NODE_ENV: development
      SERVER_PORT: 3000
      HOST: db
      PORT: 5432
      DATABASE: onebitflix_development
      USERNAME: onebitflix
      PASSWORD: onebitflix
      JWT_SECRET: chave-do-jwt
    depends_on:
      - db
    networks:
      - onebitflix-net
  db:
    image: postgres:15.1
    environment:
      POSTGRES_DB: onebitflix_development
      POSTGRES_USER: onebitflix
      POSTGRES_PASSWORD: onebitflix
    ports:
      - "5432:5432"
    networks:
      - onebitflix-net
networks:
  onebitflix-net:
    name: onebitflix-net
    external: true
volumes:
  db:
When you connect from the browser to the API, you need to use a URL that's reachable from the browser.
The docker-compose service names are only usable on the docker network, so you can't use api as a hostname from outside the network.
So you need to change
NEXT_PUBLIC_BASEURL=http://api:3000
to
NEXT_PUBLIC_BASEURL=http://localhost:3000
in your docker-compose.yml file
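If you want the client code to keep working when the variable is unset, here is a small sketch of api.ts with a localhost fallback (the fallback URL is an assumption, matching the port mapping above):
import axios from "axios";
// NEXT_PUBLIC_* variables are inlined at build time; the fallback only helps local runs
const baseURL = process.env.NEXT_PUBLIC_BASEURL || "http://localhost:3000";
const api = axios.create({ baseURL });
export default api;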

PG Pool is not working inside Docker container

I am running a PERN application and currently trying to dockerize it. When I run the database as a container and the server and client locally I have no issues. However, when I containerize the server, client, and database respectively, I am unable to make requests. It results in 404 errors. This is the same behavior that occurs when I pass pool the wrong port or host. So I'm wondering if somehow I am giving the wrong host and/or port to pool or if I should change it when I containerize it.
This is the Pool instance:
const Pool = require('pg').Pool
const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "localhost",
  port: 4000,
  database: "docker"
})
This is part of the REST API in the server:
const express = require("express")
const router = express.Router()
const pool = require('../database/database.js')
router.get("/:login", async (req, res) => {
  try {
    let loginReq = JSON.parse(decodeURIComponent(req.params.login))
    const user = await pool.query(
      "SELECT user_id,first_name,last_name,email FROM \"user\" where email = $1 and password = $2",
      [loginReq.email, loginReq.password]
    )
    if (user.rows.length) {
      res.json(user.rows[0])
    } else {
      throw new Error('User Not Found')
    }
  } catch (err) {
    res.status(404).json(err.message)
  }
})
These are my Dockerfiles for the client, server, and database.
Client:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["npm","start"]
EXPOSE 3000
Server:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["node","index.js"]
EXPOSE 5000
Database:
FROM postgres:15.1-alpine
COPY init.sql /docker-entrypoint-initdb.d/
This is my docker-compose.yml
version: "3.8"
services:
client:
build: ./client
ports:
- "3000:3000"
restart: unless-stopped
server:
build: ./server
ports:
- "5000:5000"
restart: unless-stopped
database:
build: ./database
ports:
- "4000:4000"
environment:
- POSTGRES_USER=docker
- POSTGRES_PASSWORD=docker
- POSTGRES_DB=docker
- PGPORT=4000
volumes:
- kurva:/var/lib/postgresql/data
restart: unless-stopped
volumes:
kurva:
I don't understand why the behavior would be different between containerizing the server and running it locally when they all use the same ports. I have tried messing with the host and changing it to 0.0.0.0 but that did not help. Any help would be appreciated!
I found that it was a host issue and I needed to change the host accessed in the pool to the service name. Additionally, I needed to add that the server depends on the database service.
This is the updated pg pool:
const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "database",
  port: 4000,
  database: "docker"
})
This is the updated docker-compose.yml
version: "3.8"
services:
client:
build: ./client
ports:
- "3000:3000"
restart: unless-stopped
server:
build: ./server
ports:
- "5000:5000"
depends_on:
- database
restart: unless-stopped
database:
build: ./database
ports:
- "4000:4000"
environment:
- POSTGRES_USER=docker
- POSTGRES_PASSWORD=docker
- POSTGRES_DB=docker
- PGPORT=4000
volumes:
- kurva:/var/lib/postgresql/data
restart: unless-stopped
volumes:
kurva:
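To avoid hardcoding the connection details a second time, one option is to read them from the environment. This is only a sketch; the defaults mirror the compose file above, and the PG* variable names are the conventional ones that node-postgres also reads on its own when a field is omitted:
const { Pool } = require('pg');
const pool = new Pool({
  host: process.env.PGHOST || 'database', // the compose service name
  port: Number(process.env.PGPORT || 4000),
  user: process.env.PGUSER || 'docker',
  password: process.env.PGPASSWORD || 'docker',
  database: process.env.PGDATABASE || 'docker'
});
module.exports = pool;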

Dockerfile not working on backend with React

I'm trying to run a React app with two Node servers: one for the frontend and one for the backend, connected to a MySQL database.
I'm trying to use Docker for the containers, and I managed to get the database and the frontend server up. However, when the backend server is fired up, it seems like it doesn't acknowledge the Dockerfile:
node_server | npm WARN exec The following package was not found and will be installed: nodemon
node_server | Usage: nodemon [nodemon options] [script.js[args]
node_server |
node_server | See "nodemon --help" for more.
node_server |
node_server exited with code 0
Dockerfile - client:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/scr/app
EXPOSE 3000
COPY package.json .
RUN npm install express body-parser nano nodemon cors
COPY . .
Dockerfile - server:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm init -y
RUN npm install express body-parser nano nodemon cors
EXPOSE 5000
CMD ["npx", "nodemon", "src/server.js"]
docker-compose.yml:
version: '3'
services:
  backend:
    build:
      context: ./server
      dockerfile: ./Dockerfile
    depends_on:
      - mysql
    container_name: node_server
    image: raff/node_server
    ports:
      - "5000:5000"
    volumes:
      - "./server:/usr/src/app"
  frontend:
    build:
      context: ./client
      dockerfile: ./Dockerfile
    container_name: node_client
    image: raff/node_client
    ports:
      - "3000:3000"
    volumes:
      - "./client:/usr/src/app"
  mysql:
    image: mysql:5.7.31
    container_name: db
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: assignment
The server side is not done yet, but I don't believe that is what's causing this error.
Server.js
"use strict";
const path = require("path");
const express = require("express");
const app = express();
const bodyParser = require("body-parser");
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.json());
const mysql = require("mysql");
let con = mysql.createConnection({
host: "mysql",
port: "3306",
user: "root",
password: "admin",
});
const PORT = 5000;
const HOST = "0.0.0.0";
app.post("/posting", (req, res) => {
var topic = req.body.param1;
var data = req.body.param2;
sql_insertion(topic, data);
});
// Helper
const panic = (err) => console.error(err);
// Connect to database
con.connect((err) => {
if (err) {
panic(err);
}
console.log("Connected!");
con.query("CREATE DATABASE IF NOT EXISTS assignment", (err, result) => {
if (err) {
panic(err);
} else {
console.log("Database created!");
}
});
});
//select database
con.query("use assignment", (err, result) => {
if (err) {
panic(err);
}
});
// Create Table
let table =
"CREATE TABLE IF NOT EXISTS posts (ID int NOT NULL AUTO_INCREMENT, Topic varchar(255), Data varchar(255), Timestamp varchar(255), PRIMARY KEY(ID));";
con.query(table, (err) => {
if (err) {
panic(err);
} else {
console.log("Table created!");
}
});
app.get("*", (req, res) => {
res.sendFile(path.join(__dirname, "client/build" , "index.html"));
});
app.listen(PORT, HOST);
console.log("up!");
Modify this line:
CMD ["npx", "nodemon", "src/server.js"]
to:
CMD ["npx", "nodemon", "--exec", "node src/server.js"]
Putting the command in package.json under the scripts section would be better, though.
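A sketch of the package.json variant, assuming the entry point stays at src/server.js:
{
  "scripts": {
    "start": "nodemon --exec node src/server.js"
  }
}
The Dockerfile's last line then becomes CMD ["npm", "start"].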
Your volumes: declarations are hiding everything that's in the image, including its node_modules directory. That's not normally required, and you should be able to trim the backend: container definition down to
backend:
  build: ./server # default `dockerfile:` location
  depends_on:
    - mysql
  image: raff/node_server # only if you plan to `docker-compose push`
  ports:
    - "5000:5000"
The image then contains a fixed copy of the application, so there's no particular need to use nodemon; just run the application directly.
FROM node:latest
WORKDIR /usr/src/app # also creates the directory
COPY package.json package-lock.json ./
RUN npm ci # do not `npm install` unmanaged packages
COPY . . # CHECK: `.dockerignore` must include `node_modules`
EXPOSE 5000
CMD ["node", "src/server.js"]
This apparently isn't a problem for your frontend application, because there's a typo in WORKDIR -- the image installs and runs its code in /usr/scr/app but the bind mount is over /usr/src/app, so the actual application's /usr/scr/app/node_modules directory isn't hidden.

Cannot Access Docker Container's Exposed Port

Hello, I cannot access the exposed port. It's a Node server (no framework). Chrome returns ERR_EMPTY_RESPONSE. Each time I change the file and test it, I run docker-compose build. How can I get this up and running so my browser can reach port 3000?
EDIT: I included my server.js file in case I'm binding the port incorrectly in Node.
Dockerfile
FROM node:8.11.1-alpine
WORKDIR /usr/src/app
VOLUME [ "/usr/src/app" ]
RUN npm install -g nodemon
EXPOSE 3000
CMD [ "nodemon", "-L", "src/index.js" ]
Docker-compose.yml
version: '3'
services:
  node:
    build:
      context: ./node
      dockerfile: Dockerfile
    working_dir: /usr/src/app
    volumes:
      - ./node:/usr/src/app
    networks:
      - app-network
    env_file: ./.env
    environment:
      - MESSAGE_QUEUE=amqp://rabbitmq
    ports:
      - "3000:3000"
    links:
      - rabbitmq
  python:
    build:
      context: ./python
      dockerfile: Dockerfile
    working_dir: /usr/src/app
    volumes:
      - ./python:/usr/src/app
    networks:
      - app-network
    env_file: ./.env
    links:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.7.4
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Server.js
const mongoose = require('mongoose')
const hostname = '127.0.0.1';
const port = 3000;
const server = require('./controllers/index');
server.listen(port, hostname, () => {
  // Connect To Mongo
  mongoose.connect(process.env.MONGO_URI, { keepAlive: true, keepAliveInitialDelay: 300000, useNewUrlParser: true });
  mongoose.connection.on('disconnected', () => {
    console.error('MongoDB Disconnected')
  })
  mongoose.connection.on('error', (err) => {
    console.error(err)
    console.error('MongoDB Error')
  })
  mongoose.connection.on('reconnected', () => {
    console.error('MongoDB Reconnected')
  })
  mongoose.connection.on('connected', () => {
    console.error('MongoDB Connected')
  })
  console.log(`Server running at http://${hostname}:${port}/`);
});
Try binding your app to 0.0.0.0 like this:
const hostname = '0.0.0.0';
It will then listen on all network interfaces. Inside the container, 127.0.0.1 only accepts connections that originate from the container itself, so Docker's port mapping never reaches your server and the browser gets ERR_EMPTY_RESPONSE.
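Applied to the Server.js above, only the hostname constant changes; the rest of the listen callback stays as it was:
const hostname = '0.0.0.0'; // accept connections from outside the container, not just loopback
const port = 3000;
server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});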

PM2 doesn't bind Docker ports

I'm building a Node API boilerplate with Docker, Babel, Istanbul, PM2, ESLint, and other features. The project works fine in dev mode with nodemon and in test mode with Mocha. However, when I run the project in prod mode with PM2, the Docker ports don't bind.
The full project can be find here https://github.com/apandrade/node-api-boilerplate
docker ps output after running in production mode:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d5362284957 node:latest "npm start" 15 seconds ago Up 15 seconds nodeapiboilerplate_provision_run_1
a2c79e3e47cc mongo "docker-entrypoint.s…" 52 seconds ago Up 51 seconds 0.0.0.0:27017->27017/tcp mongo
base.yml file
version: "2"
services:
db_credentials:
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api_db
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
common: &common
image: "node:latest"
working_dir: /usr/src/app
restart: always
volumes:
- ./:/usr/src/app
- ./scripts/waitforit:/usr/bin/waitforit
ports:
- "3000:3000"
base:
<<: *common
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api_db
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
- APP_NAME=node-api-boilerplate
- PORT=3000
- DB_HOST=mongo
- DB_PORT=27017
base_test:
<<: *common
environment:
- MONGODB_ADMIN_USER=*********
- MONGODB_ADMIN_PASS=*********
- MONGODB_APPLICATION_DATABASE=node_api
- MONGODB_APPLICATION_USER=*********
- MONGODB_APPLICATION_PASS=*********
- PORT=3000
- DB_HOST=mongo
- DB_PORT=27017
docker-compose.yml file
version: "2"
services:
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
- ./scripts/mongo-entrypoint.sh:/docker-entrypoint-initdb.d/mongo-entrypoint.sh
ports:
- "27017:27017"
extends:
file: base.yml
service: db_credentials
command: "mongod --auth"
develop:
extends:
file: base.yml
service: base
environment:
- NODE_ENV=development
- LOG_LEVEL=debug
container_name: dev_node_api
command: "npm run dev"
depends_on:
- mongo
provision:
extends:
file: base.yml
service: base
environment:
- NODE_ENV=production
- LOG_LEVEL=info
container_name: prod_node_api
command: "npm start"
depends_on:
- mongo
test:
extends:
file: base.yml
service: base_test
environment:
- NODE_ENV=test
- LOG_LEVEL=debug
container_name: test_node_api
command: "npm run test"
depends_on:
- mongo
process.json file
{
  "apps": [{
    "name": "node-api-boilerplate",
    "script": "./src/server.js",
    "exec_mode": "cluster",
    "exec_interpreter": "babel-node",
    "instances": "max",
    "merge_logs": true
  }]
}
server.js file
require('pretty-error').start();
require('babel-register');// eslint-disable-line import/no-extraneous-dependencies
const express = require('express');
const morgan = require('morgan');
const methodOverride = require('method-override');
const bodyParser = require('body-parser');
const createError = require('http-errors');
require('./config/database');
const router = require('./config/router');
const logger = require('./config/logger');
const allowCors = require('./config/cors');
const PORT = process.env.PORT;
const app = express();
app.disable('x-powered-by');
app.use(methodOverride());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(allowCors);
app.use(morgan('dev', {
  skip: (req, res) => res.statusCode < 400,
  stream: process.stderr,
}));
app.use(morgan('dev', {
  skip: (req, res) => res.statusCode >= 400,
  stream: process.stdout,
}));
/**
 * Add and remove headers for all requests
 */
app.use((req, res, next) => {
  res.setHeader('Content-Type', 'application/json');
  res.setHeader('Accept', 'application/json');
  next();
});
app.use('/api/v1', router);
/**
 * Error Handler
 */
app.use((err, req, res, next) => {
  logger.error(err.stack);
  const error = createError(err);
  res.status(error.status).json(error);
  next();
});
app.listen(PORT, () => {
  logger.info(`Listening on port ${PORT}`);
});
After a few days searching for a solution, I discovered that there was no problem at all. What happened is that to run my project I was running docker-compose run --rm <service_name>, and the Docker Compose reference is clear:
the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service's ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports <service_name>
However, I chose to run docker-compose up <service_name> instead; it is enough for me, since I have no specific need to override the command or to run a single container on different ports.
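For the compose file above, that means either of the following, using the provision service defined earlier:
docker-compose up provision
# or, keeping `run` semantics but publishing the service's ports:
docker-compose run --rm --service-ports provision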
