Description of the problem:
Hello,
We have a pod YAML file with specifications of our containers, their setup, etc. If we run this pod locally with the command 'podman play kube pod.yaml', it starts successfully, and our application runs. But if we run this command inside GitLab-runner (podman as executor), some networking problems occur.
Gitlab runner version: 14.9.2 (CI runner), 15.0.0 (Local GitLab runner)
Description of the problem and differences between running containers in GitLab-runner vs. localhost.
We can't access the external network from the containers inside GitLab Runner (no packages can be installed, etc.). The GitLab Runner container itself can access the external network.
If I try to access POD_NAME:8080/ inside GitLab Runner, I get 'connection refused'. If I try to access POD_NAME:8080/ from the host, with the application containers running directly on the host, I get 'connection refused' too.
If I try to access localhost:8080 from the host, it loads the page. If I try to access localhost:8080 from the runner, it says 'No route to host'.
We run the gitlab-runner container with the '--privileged' and '--network=host' flags. Also, in our pod, we use 'hostNetwork: true' and 'privileged: true'.
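For reference, this is roughly how we start the runner container (the image tag and config path are illustrative, not our exact values):

podman run -d --name gitlab-runner \
  --privileged \
  --network=host \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner:Z \
  docker.io/gitlab/gitlab-runner:latest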
Containers from GitLab-runner and localhost have different /etc/hosts files:
Containers started from localhost:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.113 cqe dd88a3440675-infra
127.0.1.1 cqe cqe-dispatcher
127.0.1.1 cqe cqe-umbsender
127.0.1.1 cqe cqe-frontend
127.0.1.1 cqe cqe-db
127.0.1.1 cqe cqe-umbreader
Containers started from runner:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.117 runner--project-0-concurrent-0 runner--project-0-concurrent-0-d244dbca3614d5aa-build-2
10.88.0.2 cqe 4a46f7216e30-infra
10.88.0.1 host.containers.internal
10.88.0.117 host.containers.internal
Our pod file (I kept only network information and deleted env variables etc.):
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: cqe
spec:
  hostNetwork: true
  privileged: true
  restartPolicy: Always
  containers:
    - name: db
    - name: frontend
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
      ports:
        - containerPort: 8080
          hostPort: 8080
          protocol: TCP
    - name: dispatcher
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
    - name: umbreader
      workingDir: /clusterqe-django
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
    - name: umbsender
      workingDir: /clusterqe-django
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
Containers have access to each other via all ports. For example, I can access db from the frontend by 'curl POD_NAME:3306'. This also works in containers from GitLab-runner.
I think the problem is probably related to the fact that we are running containers within a container. But despite trying all the different flags and settings, we have not been able to solve this problem for a long time. I will be happy to add more information and reproduction steps.
If I were to describe the main problems that I don't understand:
Containers inside GitLab Runner don't have access to the external network.
I can't access the pod name and its exposed port, regardless of whether it's running in the runner or on localhost.
Differences in /etc/hosts and other network settings depending on whether I'm inside the runner or on localhost.
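For completeness, these are the kinds of checks we can run both on the host and inside a runner job to compare the two environments (the container name follows 'podman play kube' naming; assumes curl and iproute2 are available in the images):

podman exec -it cqe-frontend cat /etc/resolv.conf      # compare DNS configuration
podman exec -it cqe-frontend ip route                  # compare default route
podman exec -it cqe-frontend curl -sv http://localhost:8080/
ss -tlnp | grep 8080                                    # on the host: what is actually listening on 8080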
Check if GitLab 15.1 (June 2022) helps, considering Podman is officially supported there.
GitLab Runner 15.1
We’re also releasing GitLab Runner 15.1 today!
GitLab Runner is the lightweight, highly-scalable agent that runs your CI/CD jobs and sends the results back to a GitLab instance.
GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
What's new:
Use Podman as the container runtime in the Runner Docker executor <===
Add Software Attestations (metadata) to enable SLSA 2 in GitLab CI
Bug Fixes:
Kubernetes executor ignores Docker ENTRYPOINT for the runner helper image
Docker Hub image for GitLab Runner 15.0.0 has only amd64, not arm64 or ppc64le
See Documentation.
And with GitLab 15.3 (August 2022):
GitLab Runner 15.3
We’re also releasing GitLab Runner 15.3 today! GitLab Runner is the lightweight, highly-scalable agent that runs your CI/CD jobs and sends the results back to a GitLab instance. GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
Use Podman for the Docker executor on Linux <====
The list of all changes is in the GitLab Runner CHANGELOG.
See Documentation.
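As a rough sketch of what switching the Docker executor to Podman involves (the UID in the socket path is an example for a rootless setup; adjust paths to your runner user):

# enable the Podman API socket for the user that runs gitlab-runner
systemctl --user enable --now podman.socket

# then point the Docker executor at that socket in config.toml, e.g.:
#   [runners.docker]
#     host = "unix:///run/user/1000/podman/podman.sock"
#     privileged = true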
Related
I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml, but it did not work.
expose:
  - "8888"
ports:
  - "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
  myservice:
    expose:
      - "8888"
    ports:
      - "8888:8888"
    build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After inputting TEST in the console, the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.
Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that is adding technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port; Linux doesn't let you connect.
There are a few workarounds:
You can hack up the iptables rules to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules, so you risk conflicting with that automation or getting broken the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work.
You can get things in the same network namespace.
That last one has two ways to implement. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and is also how pods work in kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace.
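In Compose terms, the same namespace sharing can be expressed with network_mode: "service:..."; here is a minimal sketch (the debug sidecar and its command are illustrative):

version: '3'
services:
  myservice:
    build: .
  debug:
    image: nicolaka/netshoot
    network_mode: "service:myservice"   # share myservice's network namespace
    command: nc -v localhost 8888       # reaches the loopback-bound service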
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with
docker run --net host ...
In compose, this would look like:
version: '3'
services:
  myservice:
    network_mode: "host"
    build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode since you would be trying to publish the port between the same network namespaces.
Side note, expose is not needed for any of this. It is only there for documentation, and some automated tooling, but otherwise does not impact container-to-container networking, nor does it impact the ability to publish a specific port.
According to BMitch's answer above, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container, for port redirection, before running the service. However, this seems to require iptables modules to be loaded explicitly on the host (according to this ). This in some way breaks portability.
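For illustration, such an in-container redirect might look like this (a sketch only; it needs the NET_ADMIN capability and the nat modules on the host, which is exactly the portability concern above):

# allow traffic arriving on eth0 to be routed to the loopback address
sysctl -w net.ipv4.conf.eth0.route_localnet=1
# rewrite inbound connections on 8889 to the loopback-bound service on 8888
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8889 -j DNAT --to-destination 127.0.0.1:8888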
My way (using socat to forward *:8889 to 127.0.0.1:8888):
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
  laax-pdfjs:
    ports:
      # Switch back to 8888 on host
      - "8888:8889"
    build: .
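With this in place, connections to port 8888 on the host hit socat on 8889 inside the container, which relays them to the loopback-bound service. The check from the question should now hold the connection open:

$> nc -v -v localhost 8888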
Check your Docker Compose file version and configure your ports based on that version.
Compose files that do not declare a version are considered “version 1”. In those files, all the services are declared at the root of the document.
Reference
Here is how I set up my ports:
version: "3"
services:
  myservice:
    image: myimage:latest
    ports:
      - "80:80"
We can help you further if you can share the rest of your docker-compose.yaml.
The docker-compose file below, which builds the Docker image dynamically:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
creates a bridge-type network (project1_default):
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
....
f1b7ca7c6dfe project1_default bridge local
....
after running the command below:
$ docker-compose -p project1 up -d
After launching and connecting to the master Jenkins, we configure and create slave Jenkins instances on a separate EC2 host using the master's EC2 plugin.
But the above network (project1_default) is a single-host network: it only allows packets to travel within a single host.
So, since the master Jenkins needs to launch slave Jenkins instances on separate EC2 hosts using the EC2 plugin:
Do I need to create a multi-host (swarm) network instead of the bridge-type one (project1_default)?
If yes, how do I create a multi-host network?
All three should run in a container? Plus the containers are running on separate EC2 instances, correct?
You could expose the necessary ports to the underlying host IP. This will expose the Jenkins containers to your network, and you will be able to interact with them just as if Jenkins were installed directly on the EC2 instance (and not in a container).
Here is an example:
docker run -p 80:8080 ubuntu bash
This will expose port 8080 in the container on port 80 of your host machine. Start with your master, then the slaves, and add your slaves just as you would in a non-container setup, using the EC2 instance's IP and the port that you exposed for the slaves.
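Applied to the compose file from the question, that publishing could look roughly like this (the 50000 JNLP agent port is an assumption about how inbound agents connect back; adjust to your setup):

version: '2'
services:
  jenkins:
    build:
      context: .
    ports:
      - "8080:8080"    # Jenkins web UI, reachable on the EC2 host's IP
      - "50000:50000"  # inbound (JNLP) agent port, if your agents connect back this way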
You can find more information regarding port publishing here
https://docs.docker.com/engine/reference/commandline/run/
I have Ansible running on one of my Ubuntu virtual machines on Azure. I am trying to host a website in an Nginx Docker container on a remote machine (the host machine).
I've done everything described at this link:
http://www.inanzzz.com/index.php/post/6138/setting-up-a-nginx-docker-container-on-remote-server-with-ansible
When I run the curl command, it displays all the content of index.html in the terminal as output, but when I try to access the website (the 'Welcome to nginx' page) in the browser, it doesn't show anything.
I'm not sure what IP address to assign to the NGINX_IP variable in the docker/.env file shown in this tutorial.
Is there any other tutorial that can help me achieve what I want?
Thanks in advance.
For your issue, the problem is that you do not map the container port to a host port, so you can only access the container from inside the host.
The solution is that you need to map the port in the docker-compose file like this:
version: '3'
services:
  nginx_img:
    container_name: ${COMPOSE_PROJECT_NAME}_nginx_con
    build:
      context: ./nginx
    ports:
      - "80:80"
    networks:
      public_net:
        ipv4_address: ${NGINX_IP}
networks:
  public_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${NETWORK_SUBNET}
As the last step, you need to allow port 80 in the NSG associated with the VM on which you run nginx. Then you can access nginx from outside the VM in the browser.
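For example, opening the port with the Azure CLI might look like this (resource group, NSG name, and priority are placeholders for your environment):

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name allow-http \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80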
I wrote a docker-compose.yml file which downloads the image from the Docker Store. I have already subscribed to that image in the Docker Store and I am able to pull it. The following are the services I am using in my compose file.
store/datastax/dse-server:5.1.6
datastax/dse-studio
The link which I followed to write the compose file is datastax/docker-images
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'
services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I go to http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following error:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of your host that's running the containers, not the internal IP addresses where the containers can communicate with each other on that default internal network. Port 9091 where you're running Studio is available on that external IP address because you exposed it in the studio service of your yaml:
ports:
  - 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.
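As a quick sanity check (assuming nodetool is on the image's PATH, as in the official DSE images), you can confirm the node service is actually up before creating the Studio connection:

docker-compose exec node nodetool status   # the DSE node should report UN (Up/Normal)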
I’m using Docker for Mac for my development environment.
The problem is that anybody within our local network can access the server and the MySQL database running on my machine. Of course, they need to know the credentials, which they can possibly brute force.
For example, if my local IP is 10.10.100.22, somebody can access my local site by typing https://10.10.100.22:8300, or database mysql -h 10.10.100.22 -P 8301 -u root -p (port 8300 maps to docker 443, port 8301 maps to docker 3306).
Currently, I use Mac firewall and block incoming connections for vpnkit, which is used by Docker. It works, but I'm not sure if this is the best approach.
UPDATE
The problem with the firewall is that you have to coordinate it with all developers on your team. I was hoping to achieve my goal using just Docker configuration, the same as private networks in Vagrant: https://www.vagrantup.com/docs/networking/private_network.html
What is the best way to restrict access to my docker dev environment within the local network?
I found a very simple solution to my problem. In docker-compose.yml, instead of
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "8301:3306"
which leaves port 8301 wide open to the local network, I did the following:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "127.0.0.1:8301:3306"
which binds port 8301 to 127.0.0.1 on the Docker host only, so the port is not accessible from outside the host.
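To confirm the binding, you can check what Docker actually published and what is listening on the host (the service name follows the example above):

docker-compose port mysql 3306        # should print 127.0.0.1:8301
# on the Mac host, only a loopback listener should appear:
lsof -nP -iTCP:8301 -sTCP:LISTEN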