Docker container with Angular 2 app and Node.js does not respond

I created a new Angular 2 app with angular-cli and ran it in Docker, but I cannot connect to it from localhost.
First I initialized the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui .
I start it with the command:
docker run -p 4200:4200 my-ui
Then I try from my localhost:
curl localhost:4200
and receive
curl: (56) Recv failure: Connection reset by peer
Then I tried switching into the running container (docker exec -ti container-id bash), ran curl localhost:4200 there, and it works.
I also tried running the container with the --net=host parameter:
docker run --net=host -p 4200:4200 my-ui
And it works. What is the problem? I also tried running the container in detached mode and it did not help. Thanks.
My Dockerfile:
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]

It seems that you use ng serve to run the development server, and by default it binds to the loopback interface (so it is reachable only from inside the container). You should pass an explicit parameter:
ng serve --host 0.0.0.0
to make it listen on all interfaces.

You need to change angular-cli to serve the app externally, i.e. update your npm start script to ng serve --host 0.0.0.0.
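For illustration, a minimal sketch of what the scripts section of package.json could look like with the host flag added (the explicit --port 4200 is an assumption based on the 4200:4200 mapping above, not something stated in the original post):
"scripts": {
  "start": "ng serve --host 0.0.0.0 --port 4200"
}
With this change, docker run -p 4200:4200 my-ui should answer curl localhost:4200 from the host.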

Related

NestJS with Docker and Portainer

I'm trying to bring up my project on a Virtual Private Server. I've installed Docker and Portainer and I can start the project, but it is not running on any port. I set it to run on port 3000, but when I put IP_Of_My_VPS:3000 in the browser nothing happens. I'm new to Docker and every configuration I did was based on my own research.
This screenshot shows that the image is running on no port at all.
This other screenshot shows that my application is running (but I don't know how to access it).
My Docker config:
FROM node:12-alpine
RUN apk --no-cache add curl
RUN apk --no-cache add git
RUN git --version
WORKDIR /app
COPY package*.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
RUN npm ci
COPY . .
RUN npm run build && rm -rf src
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
CMD curl -f http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "./dist/main.js"]
When you bring the container up, you need to publish the port. For example:
docker run -p <your_forwarding_port>:3000 <image>
or in docker-compose.yaml:
ports:
  - "<your_forwarding_port>:3000"
For reference, see the Docker documentation on container port publishing and the docker-compose ports section.
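For context, a minimal docker-compose.yaml sketch under these assumptions (the service name app and image name my-nest-api are placeholders; the container port 3000 matches the EXPOSE and healthcheck above):
version: "3.8"
services:
  app:
    image: my-nest-api            # placeholder: replace with your built image
    ports:
      - "3000:3000"               # <host_port>:<container_port>; the app listens on 3000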

Can not connect to node app running in Docker container from browser

I am running a Node.js application in a Docker container. The application is hosted on a Bluehost CentOS VPS, which I connect to over SSH. I use the following command to run the app in the container: sudo docker run -p 80:8080 -d skepticalbonobo/dandakou-nodeapp. Then I check that the container is running with sudo docker ps, and sure enough it is. But when I try to access the app from Chrome using the domain name or IP address I get "This site can't be reached". I have noticed, however, that in the output of sudo docker ps, under COMMAND I see docker-entrypoint... as opposed to node app.js, and I do not know how to fix it. You can pull the image using docker pull skepticalbonobo/dandakou-nodeapp. Here is the content of my Dockerfile:
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY . .
USER root
RUN chown -R node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Thank you!
The default port for a Node.js app is 3000.
Run the following command and check which port the node app is actually listening on:
sudo docker run -ti skepticalbonobo/dandakou-nodeapp /bin/sh
EXPOSE in a Dockerfile is for documentation purposes only; it does not publish the port.
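As a sketch of the suggested check, assuming typical tools are available in the image and that the app turns out to listen on 3000 (both are assumptions, not facts from the original answer):
sudo docker run -ti skepticalbonobo/dandakou-nodeapp /bin/sh
# inside the container, look for the listening port, e.g.:
netstat -tln                     # or inspect the code: grep -n "listen" app.js
exit
# if the app listens on 3000, publish that port instead of 8080:
sudo docker run -p 80:3000 -d skepticalbonobo/dandakou-nodeapp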

docker port mapping ignored when adding volumes to the run command

When I start my docker container with:
docker run -it -d -p 8081:8080 --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
I can access my web application just fine in my browser on: localhost:8081
But if I instead run it with the two volumes below:
docker run -it -d -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
The port mapping is ignored: I cannot access it at localhost:8081, but I can access it at localhost:8080.
My Dockerfile has:
FROM node:8-alpine
RUN apk update && apk add bash
RUN npm install -g http-server
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Why does adding the volumes to the second docker run command ignore the port mapping from 8081 to 8080?
As was suggested, running without -d (but with volumes):
docker run -it -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
gives:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
But I cannot access it on localhost:8080 or localhost:8081 even though the container is indeed running:
$ docker ps
CONTAINER ID   IMAGE       COMMAND              CREATED          STATUS         PORTS                    NAMES
603b1bf02d58   app-image   "http-server dist"   11 seconds ago   Up 5 seconds   0.0.0.0:8081->8080/tcp   app-container
When I instead run it without volumes but still map to 8081 it works:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
and I can access it on localhost:8081. So something in the application must be messed up when the volumes are added, I'm just not sure what. I have also tried running:
docker volume prune
before starting the container but it has no effect. Any ideas why creating the volumes prevents the application from being accessed?
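One diagnostic sketch (not from the original post) that could narrow this down, using the container name from the docker ps output above, is to check what the bind mount actually placed in /app and what the daemon reports as published ports:
# confirm the mapping Docker actually set up
docker port app-container
# see whether the bind-mounted /app still contains the built dist/ directory
docker exec -it app-container ls /app
docker exec -it app-container ls /app/dist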

running docker container is not reachable by browser

I started working with Docker and dockerized a simple Node.js app. I'm not able to access the container from the outside world (i.e., from a browser).
Stack:
Node.js app with 4 endpoints (I used the hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran the following from the command line in my working directory:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked whether the container is running via docker container ps.
The result: the container is running.
I then open the browser and type 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000).
The result: the running Docker container is not reachable from the browser.
I also went into the container (docker exec -it 81b3d9b17db9 sh) and tried to reach my node app from inside via wget/curl, and that works: I get responses from all of the endpoints.
Where could the problem be? Could my Mac be blocking the connection?
Thanks for the help.
Please check the order of the parameters in the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue: I was using -p port:port at the end of the command, after the image name. Simply moving it so it comes right after docker run solved it for me.
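To illustrate the ordering point: docker run options such as -p must appear before the image name, because everything after the image name is passed to the container as its command rather than to Docker. A minimal before/after sketch using the names from the question:
# wrong: -p comes after the image name, so it is treated as a container argument
docker container run -d --name rest-api ares-maros -p 8000:8000
# right: all Docker options come before the image name
docker container run -d --name rest-api -p 8000:8000 ares-maros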

Docker container is not accessible

I have Docker installed on an Ubuntu 16.04 VM and I'm working on a personal project using Node.js; the Docker image is built from the Dockerfile below.
The container runs, but when I try to access it with the VPS's public IP it's not accessible.
I tried curl and, after a very long time, I get curl: (52) Empty reply from server.
The port is mapped correctly and there are no firewall issues either.
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN apk update && apk upgrade \
&& apk add --no-cache git \
&& apk --no-cache add --virtual builds-deps build-base python \
&& npm install -g nodemon cross-env eslint npm-run-all node-gyp node-pre-gyp \
&& npm install \
&& npm rebuild bcrypt --build-from-source
RUN npm install --production --silent && mv node_modules ../
COPY . .
RUN pwd
EXPOSE 3001
CMD npm start
docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED      STATUS      PORTS                    NAMES
8588419b40c4   xxx:v1   "/bin/sh -c 'npm sta…"   2 days ago   Up 2 days   0.0.0.0:3000->3001/tcp   youthful_roentgen
Let xxx:v1 be the image name built by the Dockerfile you provided.
If you want to access your app via your host (curl localhost:3001), then you should run:
docker run -p 3001:3000 xxx:v1
This command binds port 3000 in your container to your port 3001 on your host (IIRC, 3000 is the default port used by npm start).
You should then be able to access localhost:3001 from your host with curl.
Note that the EXPOSE directive in the Dockerfile does not automatically publish a port when you run docker run. It is just an indication that your container listens on the port you EXPOSEd. Here, your EXPOSE directive is wrong; you should have written:
EXPOSE 3000
because the app actually listens on port 3000 inside the container (3000 being the default port used by npm start). Which port you bind on the host, if any, is specified only at runtime.
If you don't want to access your app via localhost, but only via the container's IP, there is no need to bind the port (no -p). You only need to do curl <container_ip>:3000 from your host.
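As a quick sanity check after rebuilding with EXPOSE 3000, one could verify the mapping like this (the container name my-app is a placeholder):
docker run -d --name my-app -p 3001:3000 xxx:v1
docker port my-app               # should print: 3000/tcp -> 0.0.0.0:3001
curl localhost:3001              # should now get a reply from the app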
