For the docker-compose file below, which builds its Dockerfile dynamically:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
a bridge network (project1_default) is created:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
....
f1b7ca7c6dfe project1_default bridge local
....
after running the command below:
$ docker-compose -p project1 up -d
After launching the master Jenkins and connecting to it, we configure it to create slave Jenkins nodes on separate EC2 hosts using the master's EC2 plugin.
But the above network (project1_default) is a single-host network, which only allows packets to travel within a single host. Below is my visualisation of the bridge network (project1_default)...
So, we need to launch and configure the master Jenkins so that it can launch slave Jenkins nodes on separate EC2 hosts using the EC2 plugin.
Do I need to create a multi-host (swarm) network instead of the bridge network (project1_default)?
If yes, how do I create a multi-host network?
Should all three run in containers? And the containers are running on separate EC2 instances, correct?
You could publish the necessary ports to the underlying host IP. This exposes the Jenkins containers to your network, and you will be able to interact with them just as if Jenkins were installed directly on the EC2 instance (and not in a container).
Here is an example:
docker run -p 80:8080 ubuntu bash
This maps port 8080 in the container to port 80 on your host machine. Start with your master, then the slaves, and add the slaves just as you would in a non-container setup, using each EC2 instance's IP and the port you exposed for the slaves.
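The same port publishing can be written into the compose file. A minimal sketch, assuming port 50000 for inbound agents (50000 is the Jenkins default inbound-agent port; adjust it to your configuration):

```yaml
version: '2'
services:
  jenkins:
    build:
      context: .
    ports:
      - "8080:8080"    # web UI, reachable via the EC2 instance's IP
      - "50000:50000"  # inbound (JNLP) agent port, assumed Jenkins default
```

With the ports published, agents on other EC2 hosts reach the master through the instance's IP, so no multi-host overlay network is strictly required.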
You can find more information regarding port publishing here
https://docs.docker.com/engine/reference/commandline/run/
Related
I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml, but it did not work.
expose:
  - "8888"
ports:
  - "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
  myservice:
    expose:
      - "8888"
    ports:
      - "8888:8888"
    build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After inputting TEST in the console, the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.
Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that is adding technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port, Linux doesn't let you connect.
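The effect of binding to loopback can be illustrated without Docker. A minimal Python sketch: a listener bound to 127.0.0.1 exists only on the loopback interface of its own network namespace, so it answers on that address and nothing else.

```python
import socket

# Bind a listener to 127.0.0.1 only, like the nc example in the question.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
addr, port = srv.getsockname()
print(addr)  # 127.0.0.1

# From inside the same namespace the port is reachable...
socket.create_connection(("127.0.0.1", port), timeout=1).close()

# ...but the socket has no presence on any other interface, so a peer in a
# different network namespace (another container, or the host) cannot connect.
```

The same mechanics apply inside a container: its 127.0.0.1 is private to the container's namespace.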
There are a few workarounds:
You can hack up iptables to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules, so you risk conflicting with that automation or getting broken the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work.
You can get things in the same network namespace.
That last one can be implemented in two ways. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and it is also how pods work in Kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the id of the last started container in my lab). This makes the two separate containers run in the same network namespace.
If you need to access this from outside of Docker, you can remove the network namespacing and attach the container directly to the host network. For a one-off container, this can be done with:
docker run --net host ...
In compose, this would look like:
version: '3'
services:
  myservice:
    network_mode: "host"
    build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode, since you would be publishing the port within the same network namespace.
Side note, expose is not needed for any of this. It is only there for documentation, and some automated tooling, but otherwise does not impact container-to-container networking, nor does it impact the ability to publish a specific port.
According to BMitch's upvoted answer, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container for port redirection before running the service. However, this seems to require the iptables modules to be loaded explicitly on the host (according to this ). This somewhat breaks portability.
My way (using socat to forward *:8889 to 127.0.0.1:8888):
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
  laax-pdfjs:
    ports:
      # Switch back to 8888 on host
      - "8888:8889"
    build: .
Check your docker compose version and configure it based on the version.
Compose files that do not declare a version are considered “version 1”. In those files, all the services are declared at the root of the document.
Reference
Here is how I set up my ports:
version: "3"
services:
  myservice:
    image: myimage:latest
    ports:
      - "80:80"
We can help you further if you can share the rest of your docker-compose.yml.
When using Docker Compose, how can we determine and set the IP address of the Docker host in the /etc/hosts file of the Docker container when the host is running Linux? I would like a hostname like DOCKER_HOST in the container to refer to the Docker host.
Docker for Linux does not yet support host.docker.internal, unlike Docker for Windows and Docker for Mac, so we need an alternative to this.
--network host cannot be used here because for my purpose localhost in the container must still refer to the container itself.
Docker Compose supports using extra_hosts to add a hostname-ip mapping in /etc/hosts, but I am unable to figure out how to automatically determine the host IP address to be used.
version: '3'
services:
  api:
    build: .
    ports:
      - "8080:8080"
    extra_hosts:
      - "DOCKER_HOST:X.X.X.X" # <=== How do we automatically set this IP address?
Is it possible to do something like the following, where we do not have to manually define the Docker host IP address when starting the containers?
DOCKER_HOST=`command_to_get_host_ip` docker-compose up -d
Or can we set a static IP for the Docker host in docker-compose.yml?
You can try:
DOCKER_HOST=$(python -c 'import socket;print(socket.gethostbyname(socket.gethostname()))') docker-compose up -d
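Note that gethostbyname(gethostname()) can return a loopback address such as 127.0.1.1 on some Linux distributions. A more robust sketch uses the UDP connect trick (the 8.8.8.8 target is an arbitrary routable address; nothing is actually sent):

```python
import socket

def docker_host_ip():
    """Best-effort guess of the host's outbound IP address.

    connect() on a UDP socket sends no packets; it only asks the kernel
    which source address would be used to reach the target.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    except OSError:
        # No route available: fall back to name resolution, then loopback.
        try:
            return socket.gethostbyname(socket.gethostname())
        except OSError:
            return "127.0.0.1"
    finally:
        s.close()

print(docker_host_ip())
```

You could use this in place of the one-liner: DOCKER_HOST=$(python -c '...') docker-compose up -d.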
I have Ansible running on one of my Ubuntu virtual machines on Azure. I am trying to host a website in an Nginx docker container on a remote machine (the host machine).
I've done everything provided in this link:
http://www.inanzzz.com/index.php/post/6138/setting-up-a-nginx-docker-container-on-remote-server-with-ansible
When I run the curl command it displays all the content of index.html on the terminal as output, but when I try to access the website (the Welcome to Nginx page) in the browser it doesn't show anything.
I'm not sure what IP address to assign for the NGINX_IP variable in the docker/.env file shown in this tutorial.
Is there any other tutorial that can help me achieve what I want?
Thanks in advance.
For your issue, the problem is that you do not map the container port to a host port, so the container can only be accessed from inside the host.
The solution is that you need to map the port in the docker-compose file like this:
version: '3'
services:
  nginx_img:
    container_name: ${COMPOSE_PROJECT_NAME}_nginx_con
    build:
      context: ./nginx
    ports:
      - "80:80"
    networks:
      public_net:
        ipv4_address: ${NGINX_IP}
networks:
  public_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${NETWORK_SUBNET}
The last step: you need to allow port 80 in the NSG associated with the VM that runs nginx. Then you can access nginx from outside the VM in the browser.
I’m using Docker for Mac for my development environment.
The problem is that anybody within our local network can access the server and the MySQL database running on my machine. Of course, they need to know the credentials, which they can possibly brute force.
For example, if my local IP is 10.10.100.22, somebody can access my local site by typing https://10.10.100.22:8300, or database mysql -h 10.10.100.22 -P 8301 -u root -p (port 8300 maps to docker 443, port 8301 maps to docker 3306).
Currently, I use Mac firewall and block incoming connections for vpnkit, which is used by Docker. It works, but I'm not sure if this is the best approach.
UPDATE
The problem with the firewall is that you have to coordinate it with all developers on your team. I was hoping to achieve my goal using just Docker configuration, similar to private networks in Vagrant https://www.vagrantup.com/docs/networking/private_network.html.
What is the best way to restrict access to my docker dev environment within the local network?
I found a very simple solution to my problem. In docker-compose.yml, instead of
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "8301:3306"
which leaves port 8301 wide open to the local network, I did the following,
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "127.0.0.1:8301:3306"
which binds port 8301 to the docker host's 127.0.0.1 only, so the port is not accessible from outside the host.
I have problems connecting a Node.js application, which is running as a docker container, to MongoDB. Let me explain what I have done so far:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a3732cc1d90 mongo:3.4 "docker-entrypoint..." 3 weeks ago Up 3 weeks 27017/tcp mongo_live
As you can see, there is already a mongo docker container running.
Now I'm running my Node.js application docker container (which is built from a MeteorJS app):
$ docker run -it 0b422defbd59 /bin/bash
In this docker container I want to run the application by running:
$ node main.js
Now I'm getting the error
Error: MONGO_URL must be set in environment
I already tried to set MONGO_URL by setting:
ENV MONGO_URL mongodb://mongo_live:27017/
But this doesn't work:
MongoError: failed to connect to server [mongo_live:27017] on first connect
So my question is how to connect to a DB which is - as far as I understand - 'outside' the running container. Alternatively, how do I set up a new DB for this container?
There are a couple of ways to do it.
Run your app in the same network namespace as your mongodb container:
docker run --net container:mongo_live your_app_docker_image
# then you can use mongodb in your localhost
$ ENV MONGO_URL mongodb://localhost:27017/
You can also link the two containers:
docker run --link mongo_live:mongo_live you_app_image ..
# Now mongodb is accessible via mongo_live
Use the mongodb container's IP address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' mongo_live
# you will get you container ip here
$ docker run -it 0b422defbd59 /bin/bash
# ENV MONGO_URL mongodb://[ip from previous command]:27017/
You can bind your mongodb port to your host and use host's hostname in your app
You can use docker network and run both apps in the same network
You could pass --add-host mongo_live:<ip of mongo container> to docker run for your application and then use mongo_live for mongodb url
You can also use docker compose to make your life easier ;)
...
When you run containers, each container works in its own independent network, so one container cannot connect to another point to point by default.
There are 3 ways to connect containers:
Have a little fuss with low-level docker network magic.
Connect containers through localhost. Each container must expose its ports on localhost (as your mongo_live does), but you need to add 127.0.0.1 mongo_live to the hosts file on localhost. (This is the simplest way.)
Use docker-compose. It is a convenient tool for running many containers together. (This is the right way.)
Adding mongodb to the application container is not the Docker way.
Please use the below snippet for your docker-compose.yml file, replacing the comments with your actual values. It should solve your problem.
version: '2'
services:
  db:
    build: <image for mongoDB>
    ports:
      - "27017:27017" # whatever port u r using
    environment:
      # you can specify mongo db username and stuff here
    volumes:
      - # load default config for mongodb from here
      - "db-data-store:/data/db" # path depends on which image you use
    networks:
      - network
  nodejs:
    build: # image for node js
    expose:
      - # mention port for nodejs
    volumes:
      - # mount project code on container
    networks:
      - network
    depends_on:
      - db
networks:
  network:
    driver: bridge
Please use the below links for reference:
1) NodeJs Docker
2) MongoDb docker
3) docker-compose tutorial
Best of Luck
I had a problem connecting my server.js to mongodb too and solved it along the same lines as the answers above. Hope you find them useful.