Docker published ports are not accessible except for 3000 - node.js

I have a feeling the question I'm about to ask is silly, but I can't find a solution and I've been stuck on this problem for a while now.
I am trying to run a docker container for a node application with a command that looks similar to the following:
$ docker run --rm -d -p 3000:3000 <username>/<project>
The above command is working fine. However, when I attempt to map my ports to something different like so:
$ docker run --rm -d -p 3000:8080 <username>/<project>
...the application is no longer accessible.
EDIT: To answer the questions in the comments: I've also tried ports 5000 and 7000, and I'm sure they're not in use.

I think you're attempting to change the wrong port in the mapping:
docker run --publish=${HOST_PORT}:${CONTAINER_PORT} <username>/<project>
Maps the host's ${HOST_PORT} to the container's ${CONTAINER_PORT}.
Unless you change the container image's configuration, the container-side port is fixed by the image; the host-side port is the one you're free to change.
What happens if you:
docker run --rm -d -p 8080:3000 <username>/<project>
And then try (from the host), e.g. curl localhost:8080?
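As a sketch, assuming the Node app inside the image listens on 3000 (which the question suggests, since the 3000:3000 mapping works), keep 3000 on the container side and vary only the host side:
docker run --rm -d -p 8080:3000 <username>/<project>   # app reachable at localhost:8080
docker run --rm -d -p 5000:3000 <username>/<project>   # app reachable at localhost:5000
docker port <container-id>                             # lists the active host->container mappings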

Related

Apache Pulsar installation in Windows Docker

Could you please help me install a local standalone Pulsar cluster using Docker on Windows? I have followed the options below, but I couldn't access the Pulsar UI.
Port 8080 is already allocated to some other process, so here I'm using port 8081.
Option 1:
docker run -it -p 6650:6650 -p 8081:8081 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:2.5.2 bin/pulsar standalone
Option 2:
docker run -it -p 6650:6650 -p 8081:8081 -v "$PWD/data:/pulsar/data" apachepulsar/pulsar:2.5.2 bin/pulsar standalone
Using either of the two options above, I couldn't see INFO - [main:WebService] - Web Service started at http://127.0.0.1:8081 in the output. Also, I'm not able to access the following URLs on the system:
pulsar://localhost:6650
http://localhost:8081
Thanks
The problem is the mapping between the ports. You cannot use 8080 on the host side, but 8080 must still be used inside the container, because that is the port the service listens on. The correct command is:
docker run -it -p 6650:6650 -p 8081:8080 apachepulsar/pulsar:2.5.2 bin/pulsar standalone
It makes sense to try it out without the volumes first and add them later.
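Once the corrected mapping works, the volumes can be added back; a sketch reusing the pulsardata and pulsarconf volume names from the question's option 1:
docker run -it -p 6650:6650 -p 8081:8080 \
  --mount source=pulsardata,target=/pulsar/data \
  --mount source=pulsarconf,target=/pulsar/conf \
  apachepulsar/pulsar:2.5.2 bin/pulsar standalone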

Ports On Docker

Is there a way to bind ports to containers without passing an argument via the run command? I don't like starting my containers with the 'docker run' command, so using the -p argument is not an option for me; I prefer to start my containers with 'docker start containername'. I would like to specify the hostname of the docker-server with the port number (http://dockerserver:8081) and have this forwarded to my container's app, which is listening on port 8081. My setup is on Azure but is pretty basic, so the Azure Docker plugin looks like overkill. I read up on the EXPOSE instruction, but it seems you still need to use 'docker run -p' to get access to the container from the outside. Any suggestions would be very much appreciated.
docker run is just a shortcut for docker create + docker start. Ports need to be published when the container is created, so the -p option is also available on docker create:
docker create -p 80:80 --name web nginx:alpine
docker start web
Port publishing only covers the ports, though.
If you want the hostname passed to the container, you'll need to do it with a command option or (more likely) an environment variable: defined with ENV in the Dockerfile and passed with -e on docker create.
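A sketch of how that could look, assuming a hypothetical APP_HOST variable that the image actually reads (declared with ENV APP_HOST in its Dockerfile); <image> stands in for your image name:
docker create -p 8081:8081 -e APP_HOST=dockerserver --name myapp <image>   # publish the port and set the env var at create time
docker start myapp                                                         # later starts reuse the same settings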

docker -P not exposing ports of application started as argument

I'd like to start a container with an argument to start a server inside the container. The problem is the -P switch is not exposing the ports of this server to my host.
docker run -it -e "JAVA_HOME=/opt/jdk1.8.0_45" -e "CARBON_HOME=/opt/IOT/wso2iots-1.0.0-SNAPSHOT" -P ubuntupreped:2.0 /bin/sh /opt/IOT/wso2iots-1.0.0-SNAPSHOT/bin/wso2server.sh
When I build an image exposing the env variables and a script to start the server, the -P switch works as expected.
Any idea what's happening here?
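One likely explanation (a sketch, not confirmed in the thread): -P only publishes ports that the image declares with EXPOSE, and a generic base image started with an arbitrary command usually declares none, whereas a purpose-built image presumably has an EXPOSE line. Ports can also be declared at run time, e.g. (9443 below is only a placeholder; substitute whatever port wso2server.sh actually listens on):
docker run -it -e "JAVA_HOME=/opt/jdk1.8.0_45" -e "CARBON_HOME=/opt/IOT/wso2iots-1.0.0-SNAPSHOT" --expose 9443 -P ubuntupreped:2.0 /bin/sh /opt/IOT/wso2iots-1.0.0-SNAPSHOT/bin/wso2server.sh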

Can't access from outside process running in a Docker container

I'm trying to run a gameserver inside a Docker container on my server, but I'm having trouble connecting to it.
I created my container and started my gameserver (which is using port 7777) inside it.
I'm running the container with this command:
docker run -p 7777:7777 -v /home/gameserver/:/home -c=1024 -m=1024m -d --name my_gameserver game
I published port 7777 with the -p parameter, but I can't connect to my gameserver, even though the logs show that it has started.
I think I should bind my IP in some way but I have no idea what to do.
What I found so far is that docker inspect my_gameserver | grep IPAddress returns 172.17.0.24.
The problem came from the fact that I hadn't published the UDP port.
The correct command was:
docker run -p 7777:7777 -p 7777:7777/udp -v /home/gameserver/:/home -d --name my_gameserver game
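To double-check that both mappings took effect on the running container, a quick sketch:
docker port my_gameserver   # lists the published TCP and UDP mappings for the container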

How to map ports with Express + Docker + Azure

I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) VM is all good after using docker-machine create -driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (Then adding end points in azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would also help if someone could explain Azure's setup with the VIP vs. internal ports vs. Docker EXPOSE.
So at the end of all this, every port that I try to hit via Azure's
AzureVirtualIP:ALL_THE_PORT
just returns ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I can see its console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f4d98b9c75 laslo:latest node src/server.js 5 seconds ago up 5 seconds 0.0.0.0:80->8080 something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you run other containers, don't have the right privileges or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080, and you need to check its setup.
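A last sketch for that case: check what the app is actually bound to from inside the container (the src/server.js path comes from the Dockerfile above and is resolved relative to the image's working directory):
docker exec test grep -n "listen" src/server.js   # expect it to listen on 8080, and on 0.0.0.0 rather than 127.0.0.1 only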
