I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
EXPOSE 8000
CMD ["npm", "start"]
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000 flag? I just want to be able to run
docker run imageName
and access the app in my browser at localhost:8000.
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
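For example, publishing the port explicitly (using the placeholder image name from the question):

docker run -p 8000:8000 imageName

After that, the app is reachable at localhost:8000 on the host.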
Alternatively, you can use docker-compose to configure and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create a docker-compose.yml next to the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up, and use docker-compose ps to see what is currently running.
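For example:

docker-compose up -d    # start services in the background
docker-compose ps       # list the running services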
Docker Compose Tutorial
Old question, but someone might find it useful:
First, get the IP of the Docker container by running:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, using that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.
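For example, if the inspected IP were 172.17.0.2 and the app listened on port 8000 (both values here are just examples):

curl http://172.17.0.2:8000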
I have a Dockerfile I am building. It uses LocalStack to spin up a mock AWS environment; at the minute I do this locally with my docker-compose file. So I was thinking I could just copy my docker-compose.yml over when building my Docker image, then run docker-compose up from the Dockerfile, and I would be able to run my application from the container created from the Dockerfile.
Here is the docker-compose file:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the container, Docker tries to reach the Docker socket but cannot. So when you want to run your container, use
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
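For example (the image name here is a placeholder):

docker run -v /var/run/docker.sock:/var/run/docker.sock my-localstack-app

Note that this only helps at run time; it will not make docker-compose work inside a RUN instruction during the build, as the next answer explains.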
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and the Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful (you can almost trivially use it to root the host), and I wouldn't require it just to avoid needing a fairly standard tool on the host.
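As a sketch, the compose file from the question could simply gain a service for the application itself (the app service name and its build context are assumptions):

version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    # ...same configuration as in the question...
  app:
    build: .
    depends_on:
      - localstack
    ports:
      - '3000:3000'

Then docker-compose up on the host starts both LocalStack and the application, and no Docker socket needs to be mounted into the application container.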
I'm a newbie to Docker so please correct me if anything I'm stating is wrong.
I created a React app and wrote the following Dockerfile in the repository root:
# pull official base image
FROM node:latest
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 8080
# Finally runs the application
CMD [ "npm", "start" ]
My goal is to run the docker image in a way, that I can open the React app in my browser (with localhost).
Since in the Dockerfile I'm exposing the app on port 8080, I thought I could run:
docker run -p 8080:8080 -t <name of the docker image>
But apparently the application is accessible on port 3000 inside the container, because when I run:
docker run -p 8080:3000 -t <name of the docker image>
I can access it with localhost:8080.
What's the point of the EXPOSE port in the Dockerfile, when the service running in its container is accessible through a different port?
When containerizing a NodeJS app, do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
EXPOSE tells Docker which ports the application inside the container is expected to listen on. It doesn't do anything by itself if the application does not actually listen on those ports inside the container.
EXPOSE is very handy when using docker run -P -t <name of the docker image> (-P, capital P) to let Docker automatically publish all the exposed ports to random ports on the host (try it out, then run docker ps or docker inspect <containerId> and check the output).
So if your web server (React app) is running on port 3000 (inside the container), you should EXPOSE 3000 (instead of 8080) to properly integrate with Docker's port publishing.
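For example (the image name is a placeholder, and the host port Docker picks will vary):

docker run -d -P my-react-app
docker port <containerId>    # prints something like 3000/tcp -> 0.0.0.0:32768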
It's kind of weird; it's just documentation, in a sense.
https://docs.docker.com/engine/reference/builder/#:~:text=The%20EXPOSE%20instruction%20informs%20Docker,not%20actually%20publish%20the%20port.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.

do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
Yes. You should.
And then you also need to make sure that the port actually gets published, when you use the docker run command or in your docker-compose.yml file, or however you plan on running Docker.
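As a rough sketch, assuming the app reads process.env.PORT (the image name and port are illustrative):

docker run -e PORT=8080 -p 8080:8080 my-node-app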
Actually, a React app runs on the default port 3000, so you must mention ports and expose in docker-compose.yml. Here I'm mapping the container's port 3000 to host port 8081:
frontend:
  container_name: frontend
  build:
    context: ./frontend/app
    dockerfile: ../Dockerfile
  volumes:
    - ./frontend/app:/home/devops/frontend/app
    - /home/devops/frontend/app/node_modules
  ports:
    - "8081:3000"
  expose:
    - 8081
  command: ["npm", "start"]
  restart: always
  stdin_open: true
And run docker-compose:
$ sudo docker-compose up -d
Then check the running containers to find the running port:
$ sudo docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                              NAMES
83b970baf16d   devops_frontend   "docker-entrypoint..."   31 seconds ago   Up 30 seconds   8081/tcp, 0.0.0.0:8081->3000/tcp   frontend
It's resolved. Check your public port:
$ curl 'http://0.0.0.0:8081'
When I am not exposing any ports when writing my Dockerfile, nor am I binding any ports when running docker run, I am still able to interact with applications running inside the container. Why?
I am writing my Dockerfile for my Node application. It's pretty simple and looks like this:
FROM node:8
COPY . .
RUN yarn
RUN yarn run build
ARG PORT=80
EXPOSE $PORT
CMD yarn run serve
Using this Dockerfile, I was able to build the image using docker build
$ cd ~/project/dir/
$ docker build . --build-arg PORT=8080
And run it using docker run
$ docker run -p 8080 <image-id>
I then accessed the application, running inside the Docker container, on an IP address like http://172.17.0.12:8080/ and it works.
However, when I removed the EXPOSE instruction from the Dockerfile and removed the -p option from docker run, the application still worked! It's like Docker is automatically binding my ports.
Additional Notes:
It appears that another user has experienced the same issue
I have tried rebuilding my image using --no-cache after I removed the EXPOSE instruction, but the problem still exists.
Using docker inspect, I see no entries for Config.ExposedPorts
The EXPOSE command in a Dockerfile really doesn't do much, and I think it is more for people reading the Dockerfile to know what ports/services run inside the container. However, EXPOSE is useful when you start the container with the capital -P argument (-P, --publish-all: publish all exposed ports to random ports)
docker run -P my_image
but if you are using the lowercase -p you have to specify the source:destination port... See this thread.
If you don't write EXPOSE in the Dockerfile, it doesn't have any influence on the app inside the container; it only matters for the capital -P argument. As for why the app was reachable without publishing anything: on Linux, a container's bridge-network IP (such as the 172.17.0.12 address you used) is directly routable from the host, so port publishing is only needed to reach the app via localhost or from other machines.
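For example, you can look up a container's bridge IP with (the container ID is a placeholder):

docker inspect -f '{{.NetworkSettings.IPAddress}}' <containerId>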
I've created a simple site using Docker and Aurelia. The site runs in Docker, but is not accessible from my localhost. What I did:
create container
docker build -t randy/node-web-app .
docker run -p 9000:9000 -d randy/node-web-app
97f57c3d0da5d03f53b4ba893fdb866ca528e10e6c4a1b310726e514d8957650
see if the scripts ran:
docker logs 97f57c3d0da5
Application Available At: http://localhost:9000
Go into the Docker container's terminal to see if the site is up:
docker exec -it 97f57c3d0da5 /bin/bash
See if it runs:
curl -i localhost:9000
summary:
HTTP/1.1 200 OK
<!DOCTYPE html>
(I actually see the HTML that it should return, but that's too big to post here.)
return to host terminal:
exit
curl -i localhost:9000
curl: (7) Failed to connect to localhost port 9000: Connection refused
How can I make sure I can access that site from my PC? In the run command, I've mapped ports with 9000:9000, so that shouldn't be a problem.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install -g aurelia-cli
RUN npm install
EXPOSE 9000
CMD ["npm", "start"]
I used Kitematic instead of the normal version of Docker on the machine where I deployed this image. Kitematic maps the docker image to an (internal) IP address instead of the localhost of the host machine.
My solution was to use 192.168.1.67 as the IP instead of 127.0.0.1.
If you install Docker without Kitematic, this should not be an issue.
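If you are in that situation, you can look up the VM's IP with docker-machine (assuming the default machine name):

docker-machine ip default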
I'm finding the docs sorely lacking for this (or, I'm dumb), but here's my setup:
The webapp is running on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install;
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?
Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile will simply expose the ports, but not open access from your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) will enable access from outside the container (see ports in the docs).
Something like this may help:
ports:
  - "8080:8080"