I'm attempting to run a node.js application in debug mode in one Docker container, and attach a debugger from another container onto the application running in the first container.
As such, I'm trying to open up port 5858 to the outside world. However, when I --link another container to the first container (with alias firstContainer) and run nmap -p 5858 firstContainer, I find that port 5858 is closed. The first container reports that the node.js application is listening on port 5858, I've exposed the port in the Dockerfile, and I've also bound the port to the corresponding port on my machine (although I'm not certain that's necessary). When I run nmap on port 8080, all is successful.
How can I open up port 5858 on a Docker container such that I can attach a debugger to this port?
The Dockerfile is:
FROM openshift/base-centos7
# This image provides a Node.JS environment you can use to run your Node.JS
# applications.
MAINTAINER SoftwareCollections.org <sclorg@redhat.com>
EXPOSE 8080 5858
ENV NODEJS_VERSION 0.10
LABEL io.k8s.description="Platform for building and running Node.js 0.10 applications" \
io.k8s.display-name="Node.js 0.10" \
io.openshift.expose-services="8080:http" \
io.openshift.tags="builder,nodejs,nodejs010"
RUN yum install -y \
https://www.softwarecollections.org/en/scls/rhscl/v8314/epel-7-x86_64/download/rhscl-v8314-epel-7-x86_64.noarch.rpm \
https://www.softwarecollections.org/en/scls/rhscl/nodejs010/epel-7-x86_64/download/rhscl-nodejs010-epel-7-x86_64.noarch.rpm && \
yum install -y --setopt=tsflags=nodocs nodejs010 && \
yum clean all -y
# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH
COPY ./s2i/bin/ $STI_SCRIPTS_PATH
# Each language image can have 'contrib' a directory with extra files needed to
# run and build the applications.
COPY ./contrib/ /opt/app-root
# Drop the root user and make the content of /opt/app-root owned by user 1001
RUN chown -R 1001:0 /opt/app-root
USER 1001
# Set the default CMD to print the usage of the language image
CMD $STI_SCRIPTS_PATH/usage
Run with:
docker run -P -p 5858:5858 -p 8080:8080 --name=firstContainer nodejs-sample-app
Taken from/built with instructions from here.
Thanks.
-P automagically maps any exposed ports within a container to a random high port on the host machine, while -p allows explicit mapping of ports. Using the --link flag allows two Docker containers to communicate with each other, but does nothing to expose the ports to the outside world (outside the Docker private network).
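If you want to double-check what actually got published on the host, docker port lists the live mappings; a quick sketch (container name taken from the question):
docker port firstContainer
# expected output, roughly:
# 5858/tcp -> 0.0.0.0:5858
# 8080/tcp -> 0.0.0.0:8080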
Related
I've cloned the following dockerized MEVN app and would like to access it from another PC on the local network.
The box that Docker is running on has an IP of 192.168.0.111, but going to http://192.168.0.111:8080/ from another PC just says it can't be reached. I run other services like Plex and a Minecraft server that can be reached with this IP, so I assume it is a Docker config issue. I am pretty new to Docker.
Here is the Dockerfile for the portal. I made a slight change from the repo, adding -p 8080:8080, because I read elsewhere that it would open it up to LAN access.
FROM node:16.15.0
RUN mkdir -p /usr/src/www && \
    apt-get -y update && \
    npm install -g http-server
COPY . /usr/src/vue
WORKDIR /usr/src/vue
RUN npm install
RUN npm run build
RUN cp -r /usr/src/vue/dist/* /usr/src/www
WORKDIR /usr/src/www
EXPOSE 8080
CMD http-server -p 8080:8080 --log-ip
Don't put -p 8080:8080 in the Dockerfile!
You should first build your Docker image using the docker build command.
docker build -t myapp .
Once you've built the image and confirmed it with docker images, you can run it using the docker run command:
docker run -p 8080:8080 myapp
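Inside the image, the corrected CMD then only needs the port number; a sketch, keeping the question's --log-ip flag:
CMD http-server -p 8080 --log-ip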
Docker binds published ports on 0.0.0.0, so other machines on the same network can reach your website using your machine's IP address and whichever port you published. For example, if you publish port 8080, Docker actually listens on 0.0.0.0:8080, and the other machines can reach the website at http://192.168.0.111:8080/. Even without Docker, listening on 0.0.0.0 is how you share an app on the network.
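If in doubt, you can check which address the service is bound to inside the container; a sketch, assuming netstat is available in the image:
docker exec -it <container> netstat -tln
# 0.0.0.0:8080 is reachable through the published port;
# 127.0.0.1:8080 would only be reachable from inside the container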
The box that Docker is running on
What do you mean by "box"? Is it some kind of virtual machine, or an actual computer running Linux, Windows, or macOS?
Have you checked that box's firewall? (You may need to NAT/forward incoming requests from outside the box through the firewall to the service running inside it.)
I'll be happy to help out if you provide more detailed information about your environment.
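For example, if the box is a Linux machine running ufw, opening the mapped port would look roughly like this (a sketch; adapt it to whatever firewall the box actually uses):
sudo ufw allow 8080/tcp
sudo ufw status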
Could you please help me install a local standalone Pulsar cluster using Docker on Windows? I have followed the options below, but I couldn't access the Pulsar UI.
Port 8080 is already allocated to another process, so I'm using port 8081 instead.
Option 1:
docker run -it -p 6650:6650 -p 8081:8081 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:2.5.2 bin/pulsar standalone
Option 2:
docker run -it -p 6650:6650 -p 8081:8081 -v "$PWD/data:/pulsar/data".ToLower() apachepulsar/pulsar:2.5.2 bin/pulsar standalone
Using the above two options, I couldn't see INFO - [main:WebService] - Web Service started at http://127.0.0.1:8081 in the logs. I'm also not able to access the following URLs on the system:
pulsar://localhost:6650
http://localhost:8081
Thanks
The problem is the mapping between the ports. You cannot use 8080 on the host side, but port 8080 should still be used within the container, because that is the port the service listens on. The correct command is:
docker run -it -p 6650:6650 -p 8081:8080 apachepulsar/pulsar:2.5.2 bin/pulsar standalone
It makes sense to try it without the volumes first and add them back later.
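Once the web service comes up on 8081, the volumes from the question can be added back, e.g.:
docker run -it -p 6650:6650 -p 8081:8080 --mount source=pulsardata,target=/pulsar/data --mount source=pulsarconf,target=/pulsar/conf apachepulsar/pulsar:2.5.2 bin/pulsar standalone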
I have this Dockerfile ...
FROM keymetrics/pm2:latest-alpine
RUN apk update && \
apk upgrade && \
apk add \
bash
COPY . ./
EXPOSE 1886 80 443
CMD pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js
How can I execute the CMD command using sudo?
I need to do this because binding to port 443 requires root privileges.
su-exec can be used in Alpine.
To add the package, if it is not already available, add the following to your Dockerfile:
RUN apk add --no-cache su-exec
Inside the scripts you run in Docker, you can use the following to execute a command as another user:
exec su-exec <my-user> <my command>
Alternatively, you could add the more familiar sudo package while building your Dockerfile.
Add the following to a Dockerfile that's FROM alpine:
RUN set -ex && apk --no-cache add sudo
After that you can use sudo:
sudo -u <my-user> <my command>
sudo isn't normally shipped with Alpine images, and it rarely makes sense to include it in any container. What you need isn't sudo to bind to a low-numbered port, but the root user itself; sudo is just a common way to get root access in multi-user environments. If a container included sudo, you would need to either set up the user with a password or allow commands to run without a password. Whichever you chose, you would now have a privilege escalation inside the container, defeating the purpose of running the container as a normal user, so you may as well run the container as root at that point.
If the upstream image is configured to run as a non-root user (unlikely since you run apk commands during the build), you can specify USER root in your Dockerfile, and all following steps will run as root by default, including the container entrypoint/cmd.
If you start your container as a different user, e.g. docker run -u 1000 your_image, then to run your command as root, you'd remove the -u 1000 option. This may be an issue if you run your container in higher security environments that restrict containers to run as non-root users.
If your application itself is dropping root privileges, then including sudo is unlikely to help, unless the application itself has calls to sudo internally. If that's the case, update the application to drop root privileges after binding to the ports.
Most importantly, if the only reason for root inside your container is to bind to low numbered ports, then configure your application inside the container to bind to a high numbered port, e.g. 8080 and 8443. You can map this container port to any port on the host, including 80 and 443, so the outside world does not see any impact. E.g. docker run -p 80:8080 -p 443:8443 your_image. This simplifies your image (removing tools like sudo) and increases your security at the same time.
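As a sketch, only the EXPOSE line in the question's Dockerfile (plus the ports in the app's own config) has to change; the run command then maps the privileged host ports onto the high container ports (8080 and 8443 are assumptions here):
# in the Dockerfile
EXPOSE 1886 8080 8443
# at run time
docker run -p 80:8080 -p 443:8443 your_image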
I'm trying to run a game server inside a Docker container on my server, but I'm having trouble connecting to it.
I created my container and started my gameserver (which is using port 7777) inside it.
I'm running the container with this command:
docker run -p 7777:7777 -v /home/gameserver/:/home -c=1024 -m=1024m -d --name my_gameserver game
I published the ports 7777 with the -p parameter but I can't connect to my gameserver, even though logs show that it is started.
I think I should bind my IP in some way but I have no idea what to do.
What I found so far is that docker inspect my_gameserver | grep IPAddress returns 172.17.0.24.
The problem was coming from the fact that I didn't expose the UDP port.
Correct command was:
docker run -p 7777:7777 -p 7777:7777/udp -v /home/gameserver/:/home -d --name my_gameserver game
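You can confirm that both mappings took effect with docker port:
docker port my_gameserver
# 7777/tcp -> 0.0.0.0:7777
# 7777/udp -> 0.0.0.0:7777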
I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) VM is all good after using docker-machine create --driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (Then adding end points in azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
Could someone explain Azure's setup with the VIP vs. the internal IP vs. Docker's EXPOSE?
So at the end of all this, every port that I try to hit with Azure's:
AzureVirtualIP:ALL_THE_PORT
I just always get back an ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I see its console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID     IMAGE          COMMAND              CREATED         STATUS         PORTS              NAMES
64f4d98b9c75     laslo:latest   node src/server.js   5 seconds ago   Up 5 seconds   0.0.0.0:80->8080   something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that the port forwarding is in effect. If you're running other containers, don't have the right privileges, or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
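With the classic Azure CLI of that era, adding an endpoint looked roughly like this (a sketch; the VM name is a placeholder and the exact syntax depends on your CLI version):
$ azure vm endpoint create <vm-name> 80 80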
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080 inside the container, and you need to check the setup.
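As a final sanity check from inside the container (a sketch; netstat may need to be installed first, like curl above):
# inside the container
netstat -tln | grep 8080
# no output means nothing is listening on 8080, so src/server.js needs fixing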