I created a simple app with MongoDB + Express + React + Node.
Here is the structure of files/folders:
- client/
- - package.json
- - index.js <— frontend
- - Dockerfile
- server.js <— backend
- Dockerfile
- docker-compose.yml
- package.json
The package.json in the core (root) folder has "dev": "concurrently \"npm run server\" \"npm run client\"".
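For context, the server and client scripts that dev refers to are not shown; a root scripts block for this layout might look roughly like the sketch below (the server and client command bodies are assumptions, not taken from the question):

"scripts": {
  "server": "nodemon server.js",
  "client": "npm start --prefix client",
  "dev": "concurrently \"npm run server\" \"npm run client\""
}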
The Dockerfile in the client folder looks like this:
FROM node:10.15.3
WORKDIR /usr/app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm","start"]
The Dockerfile in the core folder:
FROM node:10.15.3
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 8080
CMD ["npm","start"]
I put docker-compose.yml here: https://gist.github.com/2u4u/2b13910c220f5a647f15d198a50ebe2a
When I start Docker I see this:
[HPM]Proxy created: /api -> http://[::1]:8080/
ℹ️ 「wds」: Project is running at http://172.17.0.3/
ℹ️ 「wds」: webpack output is served from
ℹ️ 「wds」: Content not from webpack is served from /usr/app/public
ℹ️ 「wds」: 404s will fallback to /
Starting the development server...
Is it an error on the client side?
It was working well without Docker. How can I fix it?
Your client and/or server is most likely listening on localhost, which changes when you containerize your application. Containers have their own localhost, which is different from the localhost on your host machine, so your app only sees requests that originate inside the container, and requests coming from outside it (e.g. from your host machine or from other containers) never reach it.
I see you're using http-proxy-middleware and webpack-dev-server. I'm not very familiar with the first, but with WDS you can add the --host 0.0.0.0 flag so webpack-dev-server listens on all interfaces and not just localhost. You'll have to do something similar with http-proxy-middleware.
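A minimal sketch of both changes, assuming the backend is an Express app, the compose service for it is named server, and the client uses create-react-app's setupProxy.js convention with http-proxy-middleware v1+ (all assumptions, since the compose file is only linked above):

// server.js - bind the backend to 0.0.0.0 so it accepts traffic from outside its own container
const express = require('express');
const app = express();
app.get('/api/ping', (req, res) => res.json({ ok: true })); // hypothetical test route
app.listen(8080, '0.0.0.0', () => console.log('API listening on 0.0.0.0:8080'));

// client/src/setupProxy.js - point the proxy at the compose service name instead of localhost
const { createProxyMiddleware } = require('http-proxy-middleware');
module.exports = function (app) {
  app.use('/api', createProxyMiddleware({ target: 'http://server:8080', changeOrigin: true }));
};

Combined with --host 0.0.0.0 on webpack-dev-server, the client container then accepts outside connections and forwards /api calls to the backend container rather than to its own loopback address.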
Related
I am working on a chat application, using create-react-app for the frontend and Node.js for the backend; I'm also using socket.io for the connection. When testing locally, everything worked fine. I built a Docker image to deploy on Heroku; this is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
COPY client/package*.json client/
RUN npm run install-client --only=production
COPY socket/package*.json socket/
RUN npm run install-socket --only=production
COPY server/package*.json server/
RUN npm run install-server --only=production
COPY client/ client/
RUN npm run build --prefix client
COPY server/ server/
COPY socket/ socket/
USER node
CMD [ "npm", "start", "--prefix", "server", "npm", "start", "--prefix", "socket"]
EXPOSE 5000
EXPOSE 8080
I don't know if I'm doing anything wrong, but when I deploy it the build is successful; however, when I open the Heroku link it gives this error:
This chat-app-docker-demo.herokuapp.com page can't be found. No web page was found for the web address: https://chat-app-docker-demo.herokuapp.com/
HTTP ERROR 404
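For reference, a Docker CMD runs a single command, so the CMD above only ever launches npm start --prefix server; the socket service never starts. A hedged sketch (not from the question) of one way to start both processes from a single CMD is a small launcher script, with the directory names assumed from the question:

// start-all.js - spawn both services and exit the container if either one dies
const { spawn } = require('child_process');

for (const dir of ['server', 'socket']) {
  const child = spawn('npm', ['start', '--prefix', dir], { stdio: 'inherit' });
  child.on('exit', (code) => process.exit(code === null ? 1 : code));
}

The Dockerfile's CMD could then be reduced to a single command such as CMD [ "node", "start-all.js" ].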
I'm trying to build a Docker image of my Node backend for deployment, but when I run it in a container and open it in the browser I get a "This site can't be reached" error and the following log in dev tools:
crbug/1173575, non-JS module files deprecated
My backend is based on GraphQL Apollo server. The Dockerfile is as follows:
FROM node:16
WORKDIR /app
COPY ./package*.json ./
RUN npm ci --only=production
# RUN npm install
COPY . .
# RUN npm run build
EXPOSE 4000
CMD [ "node", "dist/main.js" ]
I've also tried using the commented-out lines, with no result.
The image builds without a problem and after running the container I get 🚀 Server ready at localhost:4000 in the docker logs, so I'd expect it to work properly.
"scripts": {
"build": "tsc",
"start": "node dist/main.js",
"dev": "concurrently \"tsc -w\" \"nodemon dist/main.js\""
},
That's the scripts part of my package.json. I've also tried CMD ["npm", "start"] in the Dockerfile, but that doesn't work either. When I run the backend from the terminal using npm start I can access the GraphQL playground at localhost:4000 - I assume that should be the same with Docker?
I'm still new to Docker, so I'd be grateful for any hints. Thanks!
EDIT:
I run the container with the following command:
docker run --rm -d -p 4000:80 image-name:latest
Seemingly it's running on 0.0.0.0:4000, as that's what it says under PORTS when I execute docker ps.
Run the docker inspect command on the container to get its IP address, and then open that IP in the browser.
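For reference, one way to read a container's IP address out of docker inspect (the container name is a placeholder):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER-NAME>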
I have a Node.js/TypeScript application (GitHub repo) which works fine when I run the script defined in package.json, i.e. npm run start starts my local host and I can hit the endpoint via Postman.
I have created a Docker image (I am new to Docker and this is my first image). Here, I am getting an Error: connect ECONNREFUSED 127.0.0.1:7001 error in Postman.
I noticed that I do not see the Listening on port 7001 message in the terminal when I run the container. This tells me that I am making some mistake in the Dockerfile.
Steps:
I created the Docker image using docker build -t <IMAGE-NAME> . and I can see the image was created successfully.
I launched the container using docker run --name <CONTAINER-NAME> <IMAGE-NAME>.
I've also disabled the Use the system proxy setting in Postman, but no luck.
Details:
Package.json file
"scripts": {
"dev": "ts-node-dev --respawn --pretty --transpile-only src/server.ts",
"compile": "tsc -p .",
"start": "npm run compile && npm run dev"
}
Response from terminal when I run npm run start (This is successful)
Dockerfile
# FROM specifies the base image on which we will run our application
FROM node:12.0.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
RUN npm install -g typescript
# Expose API port to the outside
EXPOSE 7001
# Launch application
CMD ["npm", "start"]
Response after running docker command
GitHub repo structure
By any chance did you forget to map your container port to the host one?
docker run --name <CONTAINER-NAME> -p 7001:7001 <IMAGE-NAME>
The -p flag does the trick of publishing the container port to your host network. The number on the left is the host port and the number on the right is the container port (7001, as exposed in the Dockerfile). You can map it to another available host port as well, e.g. -p 3000:7001 to expose it on http://localhost:3000.
Check out the Docker documentation about networking.
Finally, I was able to make this work with two things:
Using @Dante's suggestion (mentioned above).
Updating my Dockerfile with the following:
FROM node:12.0.0
# Change working directory
WORKDIR /user/app
# Copy package.json into the container's working directory (/user/app)
COPY package*.json ./
# Install dependencies
RUN npm install
RUN npm install -g typescript
# Copy the current directory contents into the container's working directory (/user/app)
COPY . ./
# Expose API port to the outside
EXPOSE 7001
# Launch application
CMD ["npm", "run", "start"]
I need advice on how to dockerize and run a Node.js static-content app on a K8s cluster.
I have static web content; I run "npm run build" in the terminal, which generates /build, and I point my IIS web server to /build/Index.html.
Now I have started creating a Dockerfile: how do I point my Node.js image at the /build/Index.html file?
FROM node:carbon
WORKDIR /app
COPY /Core/* ./app
npm run build
EXPOSE 8080
CMD [ "node", ".app/build/index.html" ]
Also, how can I run this app only on Node v8.9.3 and npm 5.6.0?
Any input, please?
You can specify the Node version explicitly in the base image:
FROM node:8.9.3
Assumptions:
package.json is in the Code directory.
npm run build is run outside of the container, and a build directory is created inside the Code directory.
We will copy the whole Code/build directory into the /app directory of the container.
We will copy package.json into the /app folder and run the website through the scripts available in package.json.
Solution:
I would say add a script named start to package.json and call it from the Dockerfile's CMD instruction. The script would look like:
"scripts": {
"start": "node ./index.html",
},
And the Dockerfile would look like:
FROM node:8.9.3
# Make app directory in the container.
RUN mkdir -p /app
# Copy whole code to app directory.
COPY Code/build/ /app
# Copy package.json to the app directory.
COPY package.json /app
# make app directory as the working directory.
WORKDIR /app
# Install dependencies.
RUN npm install --only=production
# Expose the port
EXPOSE 8080
# Start the process
CMD ["npm", "start"]
I'm having trouble setting up the Chrome debugger for a dockerized Node application.
I've tried following https://github.com/nodejs/node/issues/11591 with no success.
My application does run on PORT, but the Chrome debugger always displays WebSockets request was expected when I open localhost:9229. I have a suspicion that it has something to do with my index.js listening on PORT, but I'm unsure.
Can someone please help? Thanks!
(I have a .env file with DOCKER_WORKING_DIR and PORT defined.)
Dockerfile
FROM node:8
ENV DOCKER_WORKING_DIR="/usr/local/app"
WORKDIR ${DOCKER_WORKING_DIR}
COPY package.json .
COPY package-lock.json .
RUN npm install --quiet
COPY . .
CMD ["npm", "run", "start"]
docker-compose.yml
version: '2.2'
services:
  api:
    build:
      context: ../../.
      dockerfile: docker/images/app/Dockerfile
    command: npm run start-dev
    environment:
      PORT: ${PORT}
      DOCKER_WORKING_DIR: ${DOCKER_WORKING_DIR}
    volumes:
      - ../../.:${DOCKER_WORKING_DIR}/
      - ${DOCKER_WORKING_DIR}/node_modules
    ports:
      - "${PORT}:${PORT}"
      - 9229:9229
package.json
"scripts": {
"start": "node index.js",
"start-dev": "nodemon --watch ./src -x \"npm run start-debug\"",
"start-debug": "node --inspect=0.0.0.0:9229 index.js",
},
index.js
const server = require('./src/server');
server.listen(process.env.PORT);
Answering my own question: I used the NodeJS V8 Inspector Manager Chrome extension and then everything worked: https://chrome.google.com/webstore/detail/nodejs-v8-inspector-manag/gnhhdgbaldcilmgcpfddgdbkhjohddkj?hl=en