I have directories for node and react like this:
Project
+ client
  + admin
    + build
      + index.html
+ server_api
  + server.js
  + Dockerfile
+ docker-compose.yml
My docker-compose file:
server_api:
  container_name: speed_react_server_api
  build:
    context: ./server_api
    dockerfile: Dockerfile
  image: speed_react/server_api
  ports:
    - "5000:5000"
  volumes:
    - ./server_api:/usr/src/app
My server.js serves the static files from the React app like this:
app.get('*', (req, res) => {
  res.sendFile(path.resolve(__dirname, '../client/admin/', 'build', 'index.html'));
});
But it seems that it is not working.
I do not want to move the client folder into the server folder.
Can you suggest a solution?
You can rearrange files as needed in your Dockerfile.
Note that this will require removing the volumes: line that (probably) causes your entire Dockerfile to get mostly ignored. Your Docker container won't be usable as a development environment. A host Node is very easy to install and features like live reloading and interactive debugging will work much better with much less configuration and tuning required.
If you move the Dockerfile up to the root directory, it can access both directory trees. Then you can use a multi-stage build to build the client and then copy it into the server. That would roughly look like this:
# Stage 1: build the React client
FROM node:14 AS client
WORKDIR /app
COPY client/package.json client/yarn.lock ./
RUN yarn install
COPY client .
RUN yarn build

# Stage 2: the server image; copy the built client in
FROM node:14
WORKDIR /app
COPY server_api/package.json server_api/yarn.lock ./
RUN yarn install
COPY server_api .
# The client stage wrote its build output to /app/admin/build
COPY --from=client /app/admin/build admin
EXPOSE 5000
CMD ["node", "server.js"]
In your code, change the path to path.resolve(__dirname, 'admin', 'index.html'); it should match the destination inside the container filesystem of the COPY --from=client line.
In your host development environment, you can use a symbolic link to point at the files (on macOS or Linux):
ln -s ../client/admin/build admin
Finally, you need to change your docker-compose.yml to point at the root directory as the build context, and remove the bind mount (which will have a dangling symlink and not the file content).
server_api:
  build: .
  image: speed_react/server_api
  ports:
    - "5000:5000"
Update:
One answer is given by @David: it is best to build the Docker image as a multi-stage build.
The other way is to build only the API image; since the client has no runtime dependency, you can simply mount the client alongside the API:
server_api:
  container_name: speed_react_server_api
  image: abc
  ports:
    - "4001:4001"
  entrypoint: sh -c "cd /usr/src/app/server_api/; node server.js"
  volumes:
    - ./server_api/:/usr/src/app/server_api/
    - ./client/:/usr/src/app/client/
You did not mention the error, but I assume the script is not serving the built files.
It seems you are missing the Express static-files configuration.
To serve static files such as images, CSS files, and JavaScript files, use the express.static built-in middleware function in Express.
The function signature is:
express.static(root, [options])
Here is the complete script:
const express = require('express');
const path = require('path');
const compression = require('compression');

const app = express();
const PORT = process.env.PORT || 4001; // note: logical ||, not the bitwise |

app.use(compression());

// Serve the built client with long-lived caching
app.use(express.static('../client/admin/build/dist', {
  maxAge: '31557600',
  redirect: false
}));

// Fall back to index.html for client-side routing
app.get('*', (req, res) => {
  res.sendFile(path.resolve('../client/admin/build/dist/index.html'));
});

app.listen(PORT, () => {
  console.log(`Incentive frontend is listening on ${PORT}`);
});
Related
I have a simple application that grabs data from Express and displays it in React. It works as intended without Docker, but not when launching them as containers. Both React and Express launch and can be viewed in the browser at localhost:3000 and localhost:5000 after running Docker.
How they are communicating
In the react-app package.json, I have
"proxy": "http://localhost:5000"
and a fetch to the express route.
React Dockerfile
FROM node:17 as build
WORKDIR /code
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.12-alpine
COPY --from=build /code/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Express Dockerfile
FROM node:17
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm", "start"]
docker-compose.yml
version: "3"
services:
  react-app:
    image: react
    stdin_open: true
    ports:
      - "3000:80"
    networks:
      - react-express
  api-server:
    image: express
    ports:
      - "5000:5000"
    networks:
      - react-express
networks:
  react-express:
    driver: bridge
From your example I figured out that you are using react-scripts.
If so, the proxy parameter only works in development, with npm start:
Keep in mind that proxy only has effect in development (with npm start), and it is up to you to ensure that URLs like /api/todos point to the right thing in production.
See: https://create-react-app.dev/docs/proxying-api-requests-in-development/
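For the production side, where nginx serves the build, one way to make API URLs "point to the right thing" is a proxy rule in the nginx image (a sketch; the api-server service name comes from the compose file above, but the /api/ prefix is an assumption not present in the original setup):

```nginx
# default.conf for the react-app image (replaces the stock nginx config)
server {
    listen 80;

    # Serve the built React app
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Forward API calls to the Express service over the compose network;
    # the trailing slash strips the /api/ prefix, so /api/logs reaches
    # Express as /logs
    location /api/ {
        proxy_pass http://api-server:5000/;
    }
}
```

With something like this, the client can call relative URLs such as /api/logs in both development (via the CRA proxy) and production.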
Using a proxy in package.json does not work here, so instead you can put this in your React app. The same Dockerfile and docker-compose setup is used.
const api = axios.create({
  baseURL: "http://localhost:5000"
})
and make request to express like this
api.post("/logs", { data: value })
  .then(res => {
    console.log(res)
  })
This may raise an error with CORS, so you can put this in your Express API in the same file that you set the port and have it listening.
import cors from 'cors'
const app = express();
app.use(cors({
  origin: 'http://localhost:3000'
}))
I have a react-express app that connects to MongoDB Atlas and is deployed to Google App Engine. Currently, the client folder is stored inside the backend folder. I manually build the React client app with npm run build, then use gcloud app deploy to push the entire backend app and the built React files to GAE, where it is connected to a custom domain. I am looking to dockerize the app, but am running into problems with getting my backend up and running. Before Docker, I was using express static middleware to serve the React files, and had a setupProxy.js file in the client directory that I used to redirect api requests.
My express app middleware:
if (process.env.NODE_ENV === 'production') {
  app.use(express.static(path.join(__dirname, '/../../client/build')));
}

if (process.env.NODE_ENV === 'production') {
  app.get('/*', (req, res) => {
    res.sendFile(path.join(__dirname, '/../../client/build/index.html'));
  });
}
My client Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "npm", "start"]
My backend Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start"]
docker-compose.yml
version: '3.4'
services:
  backend:
    image: backend
    build:
      context: backend
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    ports:
      - 5000:5000
    networks:
      - mern-app
  client:
    image: client
    build:
      context: client
      dockerfile: ./Dockerfile
    stdin_open: true
    environment:
      NODE_ENV: production
    ports:
      - 3000:3000
    networks:
      - mern-app
    depends_on:
      - backend
networks:
  mern-app:
    driver: bridge
I'm hoping that someone can provide some insight/assistance regarding how to effectively connect my React and Express apps inside a container using Docker Compose so that I can make calls to my API server. Thanks in advance!
I have an app that's contains a NodeJS server and a ReactJS client. The project's structure is as follows:
client
Dockerfile
package.json
.
.
.
server
Dockerfile
package.json
.
.
.
docker-compose.yml
.gitignore
To run both of these, I am using a docker-compose.yml:
version: "3"
services:
  server:
    build: ./server
    expose:
      - 8000
    environment:
      API_HOST: "http://localhost:3000/"
      APP_SERVER_PORT: 8000
      MYSQL_HOST_IP: mysql
    ports:
      - 8000:8000
    volumes:
      - ./server:/app
    command: yarn start
  client:
    build: ./client
    environment:
      REACT_APP_PORT: 3000
      NODE_PATH: src
    expose:
      - 3000
    ports:
      - 3000:3000
    volumes:
      - ./client/src:/app/src
      - ./client/public:/app/public
    links:
      - server
    command: yarn start
And inside of each of both the client and server folders I have a Dockerfile (same for each):
FROM node:10-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
COPY yarn.lock /app
COPY . /app
RUN yarn install
CMD ["yarn", "start"]
EXPOSE 80
Where the client's start script is simply react-scripts start, and the server's is nodemon index.js. The client's package.json has a proxy that is supposed to allow it communication with the server:
"proxy": "http://server:8000",
The react app would call a component that looks like this:
import React from 'react';
import axios from 'axios';

function callServer() {
  axios.get('http://localhost:8000/test', {
    params: {
      table: 'sample',
    },
  }).then((response) => {
    console.log(response.data);
  });
}

export function SampleComponent() {
  return (
    <div>
      This is a sample component
      {callServer()}
    </div>
  );
}
Which would call the /test path in the node server, as defined in the index.js file:
const cors = require('cors');
const express = require('express');
const app = express();

app.use(cors());

app.listen(process.env.APP_SERVER_PORT, () => {
  console.log(`App server now listening on port ${process.env.APP_SERVER_PORT}`);
});

app.get('/test', (req, res) => {
  res.send('hi');
});
Now, this code works like a charm when I run it on my Linux Mint machine, but when I run it on Windows 10 I get the following error (I run both on Chrome):
GET http://localhost:8000/test?table=sample net::ERR_CONNECTION_REFUSED
Would I need to run these on a different port or IP address? I read somewhere that the connections may be going over a Windows network instead of the network created by docker-compose, but I'm not sure how to start diagnosing this. Please let me know if you have any ideas that could help.
EDIT: Here are the results of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
and here is docker ps -a. For some reason, the server container seems to stop by itself as soon as it starts:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
5c26276e37d1 react-node-mysql-docker-boilerplate_server "docker-entrypoint.s…" 3 minutes ago Exited (127) 3 minutes ago react-node-mysql-docker-boilerplate_server_1
I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke SpringBoot REST API from my VueJS application using http://backend:8080/hello and it is failing with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application in docker-compose and from that application I am able to invoke http://backend:8080/hello using RestTemplate!!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is it not able to resolve hostname from the application but can resolve from the terminal? Am I missing any docker or NodeJS configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a docker container and access it from the browser, the HTML and JS files are downloaded to the browser machine, which is my host, so the REST API calls go out from the host machine. From my host, the docker container hostname (backend) is not resolved.
The solution is: instead of using the actual docker hostname and port number (backend:8080), I need to use my hostname and the mapped port (localhost:28080) when making REST calls.
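In code, that means the browser-side base URL should use the published host port, not the compose service name (a sketch; VUE_APP_API_BASE is a hypothetical build-time override, not something from the original setup):

```javascript
// The browser runs on the host, so it must hit the published port 28080,
// not the container-internal address backend:8080.
// VUE_APP_API_BASE is a hypothetical env override for other environments.
const API_BASE = process.env.VUE_APP_API_BASE || 'http://localhost:28080';

// e.g. axios.get(`${API_BASE}/hello`)
```
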
I would suggest:
docker ps to get the names/Ids of running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get backend container's IP address from the host.
Now put this IP in the frontend app and it should be able to connect to your backend.
Wanted Behavior
When my dockerized Node.js server is launched, I can access http://localhost:3030 from my local machine.
The browser should then display "Hello World".
Problem Description
I have a Node.js server contained in a Docker container. I can't access http://localhost:3030/ from my browser.
server.js File
const port = require('./configuration/serverConfiguration').port
const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('Hello World')
})

app.listen(port)
The Dockerfile exposes port 3000, which is the port used by server.js.
DockerFile
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
I use a docker-compose.yml file because I am linking my container with a MongoDB service.
docker-compose.yml File
version: '3'
services:
  node_server:
    build: .
    volumes:
      - "./app:/src/app"
    ports:
      - "3030:3000"
    links:
      - "mongo:mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
The file publishes the container's port 3000 on host port 3030.
New Info
I tried to execute it on OSX and it worked, so it seems to be a problem with Windows.
I changed localhost to the machine IP since I was using Docker Toolbox. My bad for not reading the documentation in depth.
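For anyone else on Docker Toolbox: the containers run inside a VirtualBox VM, so published ports are reachable on the VM's IP rather than on localhost. A sketch (assuming the default machine name):

```shell
# Print the Docker Toolbox VM address (commonly 192.168.99.100)
docker-machine ip default
# Then browse to http://<that-ip>:3030 instead of http://localhost:3030
```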