I'm trying to run a React app with two Node servers: one for the front end and one for the back end, connected to a MySQL database.
I'm using Docker for the containers, and I managed to get the database and the front-end server up. However, when the back-end server is fired it seems like it doesn't acknowledge the Dockerfile.
node_server | npm WARN exec The following package was not found and will be installed: nodemon
node_server | Usage: nodemon [nodemon options] [script.js[args]
node_server |
node_server | See "nodemon --help" for more.
node_server |
node_server exited with code 0
Dockerfile - client:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/scr/app
EXPOSE 3000
COPY package.json .
RUN npm install express body-parser nano nodemon cors
COPY . .
Dockerfile - server
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm init -y
RUN npm install express body-parser nano nodemon cors
EXPOSE 5000
CMD ["npx", "nodemon", "src/server.js"]
docker-compose
version: '3'
services:
  backend:
    build:
      context: ./server
      dockerfile: ./Dockerfile
    depends_on:
      - mysql
    container_name: node_server
    image: raff/node_server
    ports:
      - "5000:5000"
    volumes:
      - "./server:/usr/src/app"
  frontend:
    build:
      context: ./client
      dockerfile: ./Dockerfile
    container_name: node_client
    image: raff/node_client
    ports:
      - "3000:3000"
    volumes:
      - "./client:/usr/src/app"
  mysql:
    image: mysql:5.7.31
    container_name: db
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: assignment
The server side is not done yet, but I don't believe it's causing this error.
Server.js
"use strict";
const path = require("path");
const express = require("express");
const app = express();
const bodyParser = require("body-parser");
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.json());
const mysql = require("mysql");
let con = mysql.createConnection({
host: "mysql",
port: "3306",
user: "root",
password: "admin",
});
const PORT = 5000;
const HOST = "0.0.0.0";
app.post("/posting", (req, res) => {
var topic = req.body.param1;
var data = req.body.param2;
sql_insertion(topic, data);
});
// Helper
const panic = (err) => console.error(err);
// Connect to database
con.connect((err) => {
if (err) {
panic(err);
}
console.log("Connected!");
con.query("CREATE DATABASE IF NOT EXISTS assignment", (err, result) => {
if (err) {
panic(err);
} else {
console.log("Database created!");
}
});
});
//select database
con.query("use assignment", (err, result) => {
if (err) {
panic(err);
}
});
// Create Table
let table =
"CREATE TABLE IF NOT EXISTS posts (ID int NOT NULL AUTO_INCREMENT, Topic varchar(255), Data varchar(255), Timestamp varchar(255), PRIMARY KEY(ID));";
con.query(table, (err) => {
if (err) {
panic(err);
} else {
console.log("Table created!");
}
});
app.get("*", (req, res) => {
res.sendFile(path.join(__dirname, "client/build" , "index.html"));
});
app.listen(PORT, HOST);
console.log("up!");
Replace this line:
CMD ["npx", "nodemon", "src/server.js"]
with:
CMD ["npx", "nodemon", "--exec", "node src/server.js"]
That said, putting the command in the scripts section of package.json is the better approach.
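For reference, a minimal sketch of that package.json approach (the script name `dev` is my choice, not something from the question):

```json
{
  "scripts": {
    "dev": "nodemon --exec node src/server.js"
  }
}
```

The Dockerfile command then becomes CMD ["npm", "run", "dev"].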
Your volumes: declarations are hiding everything that's in the image, including its node_modules directory. That's not normally required, and you should be able to trim the backend: container definition down to
backend:
  build: ./server           # default `dockerfile:` location
  depends_on:
    - mysql
  image: raff/node_server   # only if you plan to `docker-compose push`
  ports:
    - "5000:5000"
The image then contains a fixed copy of the application, so there's no particular need to use nodemon; just run the application directly.
FROM node:latest
WORKDIR /usr/src/app # also creates the directory
COPY package.json package-lock.json ./
RUN npm ci # do not `npm install` unmanaged packages
COPY . . # CHECK: `.dockerignore` must include `node_modules`
EXPOSE 5000
CMD ["node", "src/server.js"]
This apparently isn't a problem for your frontend application, because there's a typo in WORKDIR -- the image installs and runs its code in /usr/scr/app but the bind mount is over /usr/src/app, so the actual application's /usr/scr/app/node_modules directory isn't hidden.
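As an aside: if you do want to keep a live-reload bind mount during development, one common pattern (the same trick the compose file in the next question uses) is to layer an anonymous volume over node_modules so the image's copy is not hidden. A sketch, assuming the layout from the question:

```yaml
backend:
  build: ./server
  volumes:
    - ./server:/usr/src/app
    - /usr/src/app/node_modules   # anonymous volume shadows the bind mount here
```

Docker populates the anonymous volume from the image the first time the container starts, so the dependencies installed at build time remain visible.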
Related
I'm trying to start up PostgreSQL with TypeORM on a Node.js server and I'm getting this error: caught error # main DriverPackageNotInstalledError: Postgres package has not been found installed. Try to install it: npm install pg --save
I've checked that pg is installed in my Docker container.
deps versions:
"pg": "^8.8.0",
"typeorm": "^0.2.34",
My docker-compose:
version: '3.7'
services:
  api:
    profiles: ['api', 'web']
    container_name: slots-api
    stdin_open: true
    build:
      context: ./
      dockerfile: api/Dockerfile
      target: dev
    environment:
      NODE_ENV: development
    ports:
      - 8000:8000
      - 9229:9229 # for debugging
    volumes:
      - ./api:/app/api
      - /app/node_modules/ # do not mount node_modules
      - /app/api/node_modules/ # do not mount node_modules
    depends_on:
      - database
    command: yarn dev:api
  database:
    container_name: slots-database
    image: postgres:alpine
    restart: unless-stopped
    ports:
      - 5432:5432
    depends_on:
      - adminer
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: dev
      POSTGRES_HOST: 127.0.0.1
my dockerfile:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
COPY yarn.lock ./
RUN yarn install
FROM node:16-alpine AS dev
WORKDIR /app
COPY --from=builder /app/ /app/
COPY . .
RUN yarn build:api
EXPOSE 8000
CMD ["yarn", "dev"]
My typeorm config:
import { Connection, createConnection, DatabaseType } from 'typeorm';
import * as entities from '#/entities';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';
export const shouldCache = (): boolean => {
return !['test', 'development'].includes(process.env.NODE_ENV ?? '');
};
export default async function postgresConnection(): Promise<Connection> {
const config = {
database: 'dev',
entities: Object.values(entities),
host: '127.0.0.1',
password: 'admin',
port: 5432,
type: 'postgres' as DatabaseType,
username: 'admin',
synchronize: false,
dropSchema:
process.env.NODE_ENV !== 'production' &&
process.env.POSTGRES_DROP_SCHEMA === 'true',
migrations: ['dist/migrations/*.js'],
migrationsRun: true,
cache: shouldCache(),
} as PostgresConnectionOptions;
return await createConnection(config);
}
my server.ts file:
...
await postgresConnection().then(async () => {
console.info('Database connected!');
});
...
It works without issues when running the server locally, with the database and Adminer running in Docker.
Hello, I am new to Docker and I am trying to dockerize my application, which uses React for the frontend, Node.js for the backend, and MySQL as the database. However, when I try to fetch data from the server in my React app, it gives me this error:
Access to fetch at 'http://localhost:3001/api' from origin 'http://localhost:3000' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header has a value 'http://127.0.0.1:3000' that is not equal to the supplied origin. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My React app renders, and when I go to http://localhost:3001/api I receive the data I expect. It's just the communication between React and Node.js that is somehow broken.
Here are my Docker files and env files:
.env:
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=123456
DB_NAME=testdb
DB_PORT=3306
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=123456
MYSQLDB_DATABASE=testdb
MYSQLDB_LOCAL_PORT=3306
MYSQLDB_DOCKER_PORT=3306
NODE_LOCAL_PORT=3001
NODE_DOCKER_PORT=3001
CLIENT_ORIGIN=http://127.0.0.1:3000
CLIENT_API_BASE_URL=http://127.0.0.1:3001/api
REACT_LOCAL_PORT=3000
REACT_DOCKER_PORT=80
dockerfile for react:
FROM node:14.17.0 as build-stage
WORKDIR /frontend
COPY package.json .
RUN npm install
COPY . .
ARG REACT_APP_API_BASE_URL
ENV REACT_APP_API_BASE_URL=$REACT_APP_API_BASE_URL
RUN npm run build
FROM nginx:1.17.0-alpine
COPY --from=build-stage /frontend/build /usr/share/nginx/html
EXPOSE 80
CMD nginx -g 'daemon off;'
dockerfile for nodejs:
FROM node:14.17.0
WORKDIR /
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "server.js" ]
docker-compose.yml :
version: '3.8'
services:
  mysqldb:
    image: mysql
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  server-api:
    depends_on:
      - mysqldb
    build: ./
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
      - CLIENT_ORIGIN=$CLIENT_ORIGIN
    networks:
      - backend
      - frontend
  frontend-ui:
    depends_on:
      - server-api
    build:
      context: ./frontend
      args:
        - REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
    ports:
      - $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
My project folder structure is a bit unusual: the server and its files (node_modules, package.json, ...) live in the root, alongside docker-compose.yml, .env, and the server's Dockerfile.
The React app is in the /frontend folder, along with its own Dockerfile.
In React I call fetch("http://localhost:3001/api").
Server is created with express :
const express = require('express');
const cors = require('cors');
const server = express();
var mysql = require('mysql2');
require("dotenv").config();
const port = 3001
server.use(express.static('public'));
var corsOptions = {
origin: "http://127.0.0.1:3000"
}
server.use(cors(corsOptions));
var con = mysql.createConnection({
host: process.env.DB_HOST,
port: process.env.DB_PORT,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD
});
server.get('/api', async (req, res) => {
console.log("START");
con.connect(function (err) {
if (err) throw err;
console.log("connected !");
con.query("use testdb;", function (err, result, fields) {
if (err) throw err;
console.log(result);
});
con.query("select * from records;", function (err, result, fields) {
if (err) throw err;
res.send(result);
});
});
});
server.listen(port, () => {
console.log(`Server listening on port ${port}`)
})
I created this by following a tutorial.
Thanks for any help.
Change this: origin: "http://127.0.0.1:3000"
to this: origin: "http://localhost:3000"
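A sketch of a more forgiving option (my own suggestion, not part of the answer above): accept both spellings of the dev origin, so the app works whether the page was opened via localhost or 127.0.0.1. The `cors` middleware accepts an array or a function for `origin`; the function form is shown here.

```javascript
// List both ways a browser might name the dev machine.
const allowedOrigins = ['http://localhost:3000', 'http://127.0.0.1:3000'];

const corsOptions = {
  origin: (origin, callback) => {
    // Allow requests with no Origin header (curl, server-to-server) as well.
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error(`Origin ${origin} not allowed by CORS`));
    }
  },
};

// Wired up exactly as in the question: server.use(cors(corsOptions));
// Quick check of the matcher itself:
corsOptions.origin('http://localhost:3000', (err, ok) => console.log(ok));
```

CORS compares the allowed origin against the page's origin as an exact string, so "http://127.0.0.1:3000" never matches "http://localhost:3000" even though both reach the same machine.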
I wrote this script to count users every time they visit.
The build process successfully downloads and installs the dependencies, but when I run docker-compose up, the line redis.createClient({}) throws an error saying it is not a function.
Dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  redis-server:
    restart: always
    image: redis
  node-app:
    restart: on-failure
    build: .
    ports:
      - "4001:8081"
Application code
const express = require('express');
const redis = require('redis');
const process = require('process');
const app = express();
const client = redis.createClient({
host: 'redis-server',
port: 6379
});
client.set('visits', 0);
app.get('/', (req, res) => {
client.get('visits', (err, visits) => {
res.send('Number of visits ' + visits);
client.set('visits', parseInt(visits) + 1);
});
});
app.listen(8081, () => {
console.log('Listening on port 8081');
});
Hello, I cannot access the exposed port. It's a Node server (no framework). Chrome shows ERR_EMPTY_RESPONSE. Each time I change a file and test it, I run docker-compose build. How can I get this up and running so my browser can reach port 3000?
EDIT: I included my server.js file in case I'm binding the port wrong in Node.
Dockerfile
FROM node:8.11.1-alpine
WORKDIR /usr/src/app
VOLUME [ "/usr/src/app" ]
RUN npm install -g nodemon
EXPOSE 3000
CMD [ "nodemon", "-L", "src/index.js" ]
Docker-compose.yml
version: '3'
services:
  node:
    build:
      context: ./node
      dockerfile: Dockerfile
    working_dir: /usr/src/app
    volumes:
      - ./node:/usr/src/app
    networks:
      - app-network
    env_file: ./.env
    environment:
      - MESSAGE_QUEUE=amqp://rabbitmq
    ports:
      - "3000:3000"
    links:
      - rabbitmq
  python:
    build:
      context: ./python
      dockerfile: Dockerfile
    working_dir: /usr/src/app
    volumes:
      - ./python:/usr/src/app
    networks:
      - app-network
    env_file: ./.env
    links:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.7.4
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Server.js
const mongoose = require('mongoose')
const hostname = '127.0.0.1';
const port = 3000;
const server = require('./controllers/index');
server.listen(port, hostname, () => {
// Connect To Mongo
mongoose.connect(process.env.MONGO_URI, { keepAlive: true, keepAliveInitialDelay: 300000, useNewUrlParser: true });
mongoose.connection.on('disconnected', () => {
console.error('MongoDB Disconnected')
})
mongoose.connection.on('error', (err) => {
console.error(err)
console.error('MongoDB Error')
})
mongoose.connection.on('reconnected', () => {
console.error('MongoDB Reconnected')
})
mongoose.connection.on('connected', () => {
console.error('MongoDB Connected')
})
console.log(`Server running at http://${hostname}:${port}/`);
});
Try binding your app to 0.0.0.0, like this:
const hostname = '0.0.0.0';
It will then listen on all network interfaces.
I'm building a Node API boilerplate with Docker, Babel, Istanbul, PM2, ESLint, and other features. The project works fine in dev mode with nodemon, and in test mode with Mocha. However, when I run the project in prod mode with PM2, the Docker ports don't bind.
The full project can be found here: https://github.com/apandrade/node-api-boilerplate
docker ps output after running in production mode
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d5362284957 node:latest "npm start" 15 seconds ago Up 15 seconds nodeapiboilerplate_provision_run_1
a2c79e3e47cc mongo "docker-entrypoint.s…" 52 seconds ago Up 51 seconds 0.0.0.0:27017->27017/tcp mongo
Base.yml file
version: "2"
services:
  db_credentials:
    environment:
      - MONGODB_ADMIN_USER=*********
      - MONGODB_ADMIN_PASS=*********
      - MONGODB_APPLICATION_DATABASE=node_api_db
      - MONGODB_APPLICATION_USER=*********
      - MONGODB_APPLICATION_PASS=*********
  common: &common
    image: "node:latest"
    working_dir: /usr/src/app
    restart: always
    volumes:
      - ./:/usr/src/app
      - ./scripts/waitforit:/usr/bin/waitforit
    ports:
      - "3000:3000"
  base:
    <<: *common
    environment:
      - MONGODB_ADMIN_USER=*********
      - MONGODB_ADMIN_PASS=*********
      - MONGODB_APPLICATION_DATABASE=node_api_db
      - MONGODB_APPLICATION_USER=*********
      - MONGODB_APPLICATION_PASS=*********
      - APP_NAME=node-api-boilerplate
      - PORT=3000
      - DB_HOST=mongo
      - DB_PORT=27017
  base_test:
    <<: *common
    environment:
      - MONGODB_ADMIN_USER=*********
      - MONGODB_ADMIN_PASS=*********
      - MONGODB_APPLICATION_DATABASE=node_api
      - MONGODB_APPLICATION_USER=*********
      - MONGODB_APPLICATION_PASS=*********
      - PORT=3000
      - DB_HOST=mongo
      - DB_PORT=27017
docker-compose.yml file
version: "2"
services:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
      - ./scripts/mongo-entrypoint.sh:/docker-entrypoint-initdb.d/mongo-entrypoint.sh
    ports:
      - "27017:27017"
    extends:
      file: base.yml
      service: db_credentials
    command: "mongod --auth"
  develop:
    extends:
      file: base.yml
      service: base
    environment:
      - NODE_ENV=development
      - LOG_LEVEL=debug
    container_name: dev_node_api
    command: "npm run dev"
    depends_on:
      - mongo
  provision:
    extends:
      file: base.yml
      service: base
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=info
    container_name: prod_node_api
    command: "npm start"
    depends_on:
      - mongo
  test:
    extends:
      file: base.yml
      service: base_test
    environment:
      - NODE_ENV=test
      - LOG_LEVEL=debug
    container_name: test_node_api
    command: "npm run test"
    depends_on:
      - mongo
process.json file
{
  "apps": [{
    "name": "node-api-boilerplate",
    "script": "./src/server.js",
    "exec_mode": "cluster",
    "exec_interpreter": "babel-node",
    "instances": "max",
    "merge_logs": true
  }]
}
server.js file
require('pretty-error').start();
require('babel-register');// eslint-disable-line import/no-extraneous-dependencies
const express = require('express');
const morgan = require('morgan');
const methodOverride = require('method-override');
const bodyParser = require('body-parser');
const createError = require('http-errors');
require('./config/database');
const router = require('./config/router');
const logger = require('./config/logger');
const allowCors = require('./config/cors');
const PORT = process.env.PORT;
const app = express();
app.disable('x-powered-by');
app.use(methodOverride());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(allowCors);
app.use(morgan('dev', {
skip: (req, res) => res.statusCode < 400,
stream: process.stderr,
}));
app.use(morgan('dev', {
skip: (req, res) => res.statusCode >= 400,
stream: process.stdout,
}));
/**
* Add and remove headers for all requests
*/
app.use((req, res, next) => {
res.setHeader('Content-Type', 'application/json');
res.setHeader('Accept', 'application/json');
next();
});
app.use('/api/v1', router);
/**
* Error Handler
*/
app.use((err, req, res, next) => {
logger.error(err.stack);
const error = createError(err);
res.status(error.status).json(error);
next();
});
app.listen(PORT, () => {
logger.info(`Listening on port ${PORT}`);
});
After a few days searching for a solution, I discovered that there is no problem at all. What happened is that to run my project I was running docker-compose run --rm <service_name>, and the Docker Compose reference is clear:
the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service's ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports <service_name>
In the end I chose to run docker-compose up <service_name> instead; it is enough for me, since I have no specific need to override the command or run a single container on different ports.