I'm delving into Docker and racking my brains over why this isn't working for me. I've read many articles and tutorials on how to set this up and I seem to have everything in place, but my actual app just isn't showing up in the browser (localhost:3001). I'm using the latest version of Docker on my Mac, running Mavericks, with boot2docker. I definitely have boot2docker running, as the docker commands run fine and I get no errors that seem related.
The super simple project looks like this:
src/
..index.js
..package.json
Dockerfile
The src/index.js file looks like this:
var express = require('express'),
    app = express();

app.get('/', function(req, res){
    res.send('Hello world!');
});

app.listen(3001);
The src/package.json file looks like this:
{
  "name": "node-docker-example",
  "version": "0.0.1",
  "description": "A NodeJS webserver to run inside a docker container",
  "main": "index.js",
  "dependencies": {
    "express": "*"
  }
}
The Dockerfile file looks like this:
FROM ubuntu:14.04
# make sure apt is up to date
RUN apt-get update
# install nodejs and npm
RUN apt-get install -y nodejs npm git git-core
# add source files
ADD /src /srv
# set the working directory to run commands from
WORKDIR /srv
# install the dependencies
RUN npm install
# expose the port so host can have access
EXPOSE 3001
# run this after the container has been instantiated
CMD ["nodejs", "index.js"]
With all of this in place, I then build it just locally:
$ docker build -t me/foo .
No problems... Then I've tried some alternative ways to run it, but none of them work and I can't see any response when viewing localhost:3001 in my browser:
$ docker run -i -t me/foo
$ docker run -i -t -p 3001:3001 me/foo
$ docker run -i -t -p 127.0.0.1:3001:3001 me/foo
Nothing seems to work and no errors come up... apart from the fact that localhost:3001 in the browser does absolutely nothing.
Please help me! I love the idea of docker, but I can't get the simplest thing running. Thanks!
boot2docker has an extra network
There's one extra layer of networking getting in the way. Remember that boot2docker runs its own OS with its own IP address, so from a terminal on your Mac try url=http://$(boot2docker ip):3001; curl -v "${url}" and see if that returns HTML from your Express app. If so, you can browse to your app with open "${url}".
I was able to take your files (thank you for posting full files!) and build and run your image locally.
Build, run, and test it like this
docker build -t foo .
docker run -i -t -p 3001:3001 foo
I think the key thing to note is that for docker build the -t argument means "tag" but for docker run it means "allocate a tty".
Test it like this (in a separate terminal from where it's running interactively)
curl -s "$(boot2docker ip):3001"
Here's where you went wrong
Or at least my guesses:
$ docker run -i -t me/foo
doesn't map any ports
$ docker run -i -t -p 3001:3001 me/foo
I think in theory this variant should work. If not, I'm pretty sure it's a boot2docker-specific networking issue at the IP layer.
$ docker run -i -t -p 127.0.0.1:3001:3001 me/foo
This is telling docker to bind to loopback on the docker server, not your mac, so you'll never be able to connect to this from your mac.
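If you specifically want localhost:3001 to work on your Mac, one common boot2docker-era workaround is to add a VirtualBox NAT port-forwarding rule from the Mac into the VM. A sketch, assuming the default VM name boot2docker-vm (adjust if yours differs):
# Forward 127.0.0.1:3001 on the Mac to port 3001 on the boot2docker VM;
# "express" is just the rule name.
VBoxManage controlvm boot2docker-vm natpf1 "express,tcp,127.0.0.1,3001,,3001"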
Related
I've cloned the following dockerized MEVN app and would like to access it from another PC on the local network.
The box that Docker is running on has an IP of 192.168.0.111, but going to http://192.168.0.111:8080/ from another PC just says it can't be reached. I run other services like Plex and a Minecraft server that can be reached at this IP, so I assume it is a Docker config issue. I am pretty new to Docker.
Here is the Dockerfile for the portal. I made a slight change from the repo, adding -p 8080:8080, because I read elsewhere that it would open it up to LAN access.
FROM node:16.15.0
RUN mkdir -p /usr/src/www && \
    apt-get -y update && \
    npm install -g http-server
COPY . /usr/src/vue
WORKDIR /usr/src/vue
RUN npm install
RUN npm run build
RUN cp -r /usr/src/vue/dist/* /usr/src/www
WORKDIR /usr/src/www
EXPOSE 8080
CMD http-server -p 8080:8080 --log-ip
Don't put -p 8080:8080 in the Dockerfile!
You should first build your Docker image using the docker build command.
docker build -t myapp .
Once you've built the image, and confirmed it with docker images, you can run it using the docker run command:
docker run -p 8080:8080 myapp
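It's also worth fixing the CMD itself: http-server's -p flag takes a plain port number, not a Docker-style host:container mapping. A sketch of the corrected line:
# http-server wants just a port; the host:container mapping belongs to docker run -p
CMD ["http-server", "-p", "8080", "--log-ip"]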
Docker publishes ports on 0.0.0.0 by default, so other machines on the same network can reach your website at your host's IP address on whichever port you published. For example, if you publish port 8080 you are actually listening on 0.0.0.0:8080, so other machines can reach the website at http://192.168.0.111:8080/. Even without Docker, you can listen on 0.0.0.0 to share your app on the network.
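A quick way to verify the mapping from the Docker host itself:
# Expect something like 0.0.0.0:8080->8080/tcp in the output
docker ps --format '{{.Names}}: {{.Ports}}'
# Then, from any other PC on the LAN:
curl http://192.168.0.111:8080/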
The box that docker is running on
What do you mean by "box"? Is it some kind of virtual machine, or an actual computer running Linux, Windows, or macOS?
Have you checked that box's firewall? (You may need to NAT through the firewall to the service running inside the box for requests coming from outside the box.)
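For example, if the box is a Linux host using ufw, you could check and open the port like this (an illustrative sketch; note that Docker's own iptables rules often bypass ufw, so this mainly matters for stricter or custom firewall setups):
# See whether a firewall is active and filtering the port, then open it
sudo ufw status
sudo ufw allow 8080/tcp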
I'll be happy to help you out if you provide more detailed information about your environment...
I'm trying to build a Dockerfile in which I first download and install the Cloud SQL Proxy, before running Node.js.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the image, I don't get any errors. Also, the file serviceaccount.json is included and is found.
When running the container and checking the logs, I see that the connection in my Node.js app is refused. So there must be a problem with the Cloud SQL Proxy. Also, I don't see any output from the Cloud SQL Proxy in the logs, only from the Node.js app. When I create a VM and install both packages separately, it works; I get output like "ready for connections".
So somehow my Dockerfile isn't correct, because the Cloud SQL Proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the "sleep 5", it does not work, because the server starts before the cloud_sql_proxy connection is established. With "sleep 5", it works.
Is there any other/better way to wait until the first command is completely done?
RUN commands are used to do things that change the file system of the image, like installing packages. They are not meant to start a process for when you run a container from the resulting image, which is what you are trying to do. A Dockerfile only builds a static container image; when you run that image, only the command given in the CMD instruction (node index.js) is executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of the CMD instruction, as sketched below.
See Run multiple services in a container
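To answer the follow-up about the "sleep 5": one option is to poll the proxy's port until it actually accepts connections instead of sleeping a fixed time. A minimal sketch of wrapper.sh using bash's /dev/tcp, assuming the proxy listens on 127.0.0.1:5432 as configured above:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
# Loop until something is listening on 5432; bash's /dev/tcp pseudo-device
# only succeeds once the proxy accepts TCP connections.
until (exec 3<>/dev/tcp/127.0.0.1/5432) 2>/dev/null; do
  sleep 0.5
done
node index.js
fg %1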
You should ideally have a separate container per process. I'm not sure what cloud_sql_proxy does, but you can probably run it in its own container and run your node process in another, linking them over a Docker network if required.
You can use docker-compose to manage, start, and stop these multiple containers with a single command. docker-compose also takes care of setting up the network between the containers automatically, and you can declare that your node app depends on the cloud_sql_proxy container so that docker-compose starts the proxy first and then the node app.
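A minimal sketch of that layout (the image tag is illustrative, and the app would then connect to the hostname cloud-sql-proxy instead of localhost):
# docker-compose.yml — a sketch, not a drop-in config
version: "3"
services:
  cloud-sql-proxy:
    # Google's public proxy image; pin whatever tag is current for you
    image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
    command: /cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:0.0.0.0:5432 -credential_file=/config/serviceaccount.json
    volumes:
      - ./serviceaccount.json:/config/serviceaccount.json:ro
  app:
    build: .
    ports:
      - "8000:8000"
    # Start the proxy container before the app container
    depends_on:
      - cloud-sql-proxy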
I'm posting for a friend. He asked my help and we couldn't find out what's going on.
My situation is: my application works perfectly on Ubuntu 18.04 when it’s not inside a container, but the customer required the use of containers so I created a Dockerfile so it could be started by a Docker container.
Here's the content of my Dockerfile:
FROM node:8.9.4
ENV HOME=/home/backend
RUN apt-get update
RUN apt-get install -y build-essential libssl-dev
RUN apt-get install -y npm
COPY . $HOME/
WORKDIR $HOME/
RUN npm rebuild node-sass
RUN npm install --global babel-cli
USER root
EXPOSE 6543
CMD ["babel-node", "index.js"]
After building the image, I execute the following Docker run command:
sudo docker run --name backend-api -p 6543:6543 -d backend/backendapi1.0
Taking a look at the log output, I can conclude that the application works properly:
I've created a rule in my nginx config to redirect from port 90 to 6543 (this used to work before using containers):
server {
    listen 90;
    listen [::]:90;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://localhost:6543;
    }
}
P.S.: I've tried changing localhost to the container's IP and it doesn't work either.
The fun fact is that when I try an internal telnet on 6543, it accepts the connection and closes it immediately.
P.S.: all ports are open on the firewall.
The application works normally outside the container (using port 6543 and redirecting in nginx).
I'd appreciate it if someone could help us find out why this is happening. We don't have much experience creating containers.
Thanks a lot!
Edit: it's an AWS VM, but this is the output when we run the curl command:
We found the solution!!
It was an internal container routing problem...
The following Docker run command solved the problem:
sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip -p 6543:6543 my_dockerhub_image_name
Thanks a lot!!
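For anyone hitting the same symptom, another thing worth checking before reaching for --network="host" is whether the app binds only to loopback inside the container. A sketch of the relevant line (the actual index.js wasn't posted):
// Bind to all interfaces, not just localhost, so Docker's
// -p 6543:6543 mapping can reach the server from outside the container
app.listen(6543, '0.0.0.0');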
I have a feeling the question I'm about to ask is silly but I can't find the solution to my issue and I've been on this problem for a while now.
I am trying to run a docker container for a node application with a command that looks similar to the following:
$ docker run --rm -d -p 3000:3000 <username>/<project>
The above command is working fine. However, when I attempt to map my ports to something different like so:
$ docker run --rm -d -p 3000:8080 <username>/<project>
...The program doesn't work anymore
EDIT: To answer the questions in the comments: I've also tried ports 5000 and 7000 and I'm sure they're not in use.
I think you're attempting to change the wrong port in the mapping:
docker run --publish=${HOST_PORT}:${CONTAINER_PORT} <username>/<project>
Maps the host's ${HOST_PORT} to the container's ${CONTAINER_PORT}.
Unless you change the container image's configuration, the container port is fixed by whatever the app actually listens on (3000 here); the port you're free to choose is the host port.
What happens if you:
docker run --rm -d -p 8080:3000 <username>/<project>
And then try (from the host), e.g. curl localhost:8080?
I am completely stuck on the following.
Trying to setup a express app in docker on an Azure VM.
1) VM is all good after using docker-machine create -driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (Then adding end points in azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would also help if someone could explain Azure's setup with the VIP vs. the internal IP vs. Docker's EXPOSE.
So at the end of all this, every port that I try to hit with Azure's:
AzureVirtualIP:ALL_THE_PORT
I just always get back an ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I get the console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f4d98b9c75 laslo:latest node src/server.js 5 seconds ago Up 5 seconds 0.0.0.0:80->8080/tcp something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you run other containers, don't have the right privileges or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
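If it turns out to be the endpoint, the classic (ASM-era) Azure CLI had a command along these lines (a sketch; my-vm is a placeholder and the exact syntax depends on your CLI version):
# Map public port 80 on the cloud service to port 80 on the VM
azure vm endpoint create my-vm 80 80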
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080, and you need to check the setup.
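For completeness, src/server.js would need to listen on the port the Dockerfile EXPOSEs. A minimal sketch, since the actual file wasn't posted:
// Minimal server matching EXPOSE 8080 and the -p 80:8080 mapping
var express = require('express');
var app = express();
app.get('/', function(req, res){
  res.send('Hello from the container!');
});
app.listen(8080);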