CentOS Docker: Connection reset by peer - Azure

I run an nginx container on my Azure VM (CentOS 7) with this command:
docker run --name nginx2 --detach -p 0.0.0.0:90:80 nginx
Now if I run curl localhost:90 on the host machine, I get this error:
curl: (56) Recv failure: Connection reset by peer
What I tried:
On my local machine
I followed the same steps on my local machine and everything works like a charm, so the nginx image is not the problem.
Inside the container
On my VM, I connected to the nginx container and ran curl localhost:80, which returned the expected response. So nginx is listening on port 80.
From another container
If I create a second container and run curl <nginx-ip>:80, I get the result. So nginx is not listening only on localhost, but on all IPs.
host network
When I run the container with the host network (--network host), everything works fine, but I don't want to do that.
Try with another container
I tried another container: docker run -p 8000:8000 -it python:3.7-slim python3 -m http.server --bind 0.0.0.0. Same error.
My question is: what is the problem with the bridge network on CentOS?

I had a similar problem with CentOS 7.1 and the latest Docker 19.03.
The only way I was able to resolve it was to revert to Docker 18.03 (other versions might also work). After uninstalling the current Docker, I installed the earlier version:
yum install docker-ce-18.03.1.ce-1.el7.centos
After this, curl localhost:<port> started working with nginx.
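If you are unsure which older builds your repository offers, you can list them before picking one; a short sketch, assuming the standard docker-ce yum repository is configured:

```shell
# list every docker-ce build available in the configured repos, newest first
yum list docker-ce --showduplicates | sort -r

# then install a specific build by its full package string, e.g.:
# yum install docker-ce-18.03.1.ce-1.el7.centos
```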

I faced the same problem with ports 3000 and 80. Add your ports to the firewall rules like this:
firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --permanent --zone=trusted --add-port=80/tcp
firewall-cmd --permanent --zone=trusted --add-port=90/tcp
firewall-cmd --reload
Also check whether the standalone iptables service is enabled; if so, disable it:
systemctl disable iptables.service
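To confirm the firewalld changes actually took effect, you can query the daemon afterwards; a sketch, assuming firewalld is running and the container publishes port 90 as in the question:

```shell
# confirm docker0 is now assigned to the trusted zone
firewall-cmd --get-zone-of-interface=docker0    # expect: trusted

# confirm the ports were added to that zone
firewall-cmd --zone=trusted --list-ports        # expect: 80/tcp 90/tcp

# the published port should now answer on the host
curl -sS localhost:90 | head -n 5
```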

Getting an error when using SSH to connect to a Docker container

I have a container with OpenSSH installed that can be connected to via:
ssh 172.17.0.2
Now I want to take a port (say 32769) on the host side and map the container's port 22 to it, so that ssh 127.0.0.1 -p 32769 works on localhost. Instead I get this error: ssh_exchange_identification: read: Connection reset by peer. The port mapping shows up normally in the Docker engine: 0.0.0.0:32769->22/tcp.
Can somebody help me with that? Much appreciated!
Check that the SSH daemon is running in your container first (through a docker exec or docker attach session):
service ssh status
# or
service sshd status
Make sure you have the right IP address:
sudo docker inspect -f "{{ .NetworkSettings.IPAddress }}" Container_Name
Use the right SSH URL:
ssh root@172.17.0.2
See more in "How to SSH into a Running Docker Container and Run Commands" from Sofija Simic.
Using docker run -p 32769:22 is a good idea in your case.
The OP mentions in the discussion an issue with docker proxy:
The docker-proxy was not getting the container's eth0 address as -container-ip. Here is what I've got:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32785 \
-container-ip 172.21.1.2 -container-port 22
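Independently of the proxy issue, it is worth confirming what Docker actually mapped and what is listening on the host; a sketch, using the container and port from the question:

```shell
# show the host address mapped to the container's port 22
docker port <container-name> 22        # expect something like: 0.0.0.0:32769

# check that something is actually listening on the host port
ss -ltnp | grep 32769

# run the SSH client verbosely to see where the handshake fails
ssh -v 127.0.0.1 -p 32769
```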

Docker Node.js Installation

I am brand new to Docker and am attempting to follow the Node.js tutorial they have listed: https://docs.docker.com/examples/nodejs_web_app/
I follow the tutorial and everything seems to work great until the test portion, where I can't curl the specified port.
$ curl -i localhost:49160
curl: (7) Failed to connect to localhost port 49160: Connection refused
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21e727bc5a7d username/docker-test "node /src/index.js" 13 minutes ago Up 13 minutes 0.0.0.0:49160->8080/tcp gloomy_heisenberg
$ docker logs 21e727bc5a7d
Running on localhost:8080
$ docker exec -it 21e727bc5a7d bash
[root@21e727bc5a7d /]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1/node
Not sure if I am confused about something or how to troubleshoot this, any ideas?
This is the solution. Run your application like this:
docker run --rm -i -t -p 49160:8080 <your username>/centos-node-hello
You should see output saying it is listening on port 8080, which means the application is running.
Open another Docker CLI terminal and type:
docker-machine ls
You will see an IP address and a port, for example 192.168.99.100:<some-number>.
Go to a browser and enter the IP address and the exported port number.
In this case, 192.168.99.100:49160, and your website should load.
Hope it helps.
I experienced the same problem and discovered it was due to boot2docker not forwarding my localhost to the virtual machine in which Docker is running.
Find out the IP of the VM running Docker like so:
$ boot2docker ip
You'll see output like:
192.168.99.102
Then, using that IP, curl your specified port. For example:
$ curl -i 192.168.99.102:49160
If you are using Mac OS X and have installed docker using Docker Toolbox, you can use:
curl $(docker-machine ip default):49160
I can at least tell you how to troubleshoot this issue.
First check the logs; you'll need the container ID to do this, which you can get with docker ps.
Run docker logs <id> to view the logs. It's possible your command returned an error.
If you'd like a closer look, you can start a Bash shell in the container: run docker exec -it <id> bash and that will give you a shell inside the container. The shell runs in the same instance of the container, so you can troubleshoot the running service.
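The PORTS column of docker ps is also worth reading carefully: 0.0.0.0:49160->8080/tcp means host port 49160 forwards to container port 8080, so curl must target 49160, not 8080. A small sketch of pulling the host port out of such a mapping string with plain shell parameter expansion:

```shell
#!/bin/sh
# extract the host port from a Docker port-mapping string
# of the form "0.0.0.0:49160->8080/tcp"
host_port() {
  mapping=$1
  mapping=${mapping%%->*}        # drop "->8080/tcp", keeping "0.0.0.0:49160"
  printf '%s\n' "${mapping##*:}" # drop everything up to the last ":", keeping "49160"
}

host_port "0.0.0.0:49160->8080/tcp"   # prints 49160
```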
Run the following command to find the IP address of the docker-machine VM:
$ docker-machine ls
The output will look like:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
Now query your application from the host machine:
$ curl -i 192.168.99.100:49160

Difficulty accessing Docker's API

I was struggling to connect to the Docker API running on a RedHat 7.1 VM; the API listens on both a TCP port and a UNIX socket.
To configure this I set the -H options as follows:
OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock'
in the file:
/etc/sysconfig/docker
Running the docker client on the same box, it connected OK to the docker daemon via either communication path:
docker images
or
docker -H=tcp://127.0.0.1:2375 images
both work equally well.
I was unable to get any response from another box, so I figured the first thing to do was to prove I could connect to port 2375 from elsewhere. I had no joy when I tried:
telnet 10.30.144.66 2375
I figured it must be a firewall problem, but it took a while longer before I realised it was the firewall built into Linux.
To make 2375 accessible, use one of the following, depending on your distro:
sudo firewall-cmd --zone=public --add-port=2375/tcp --permanent
sudo firewall-cmd --reload
OR
sudo iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 2375 -j ACCEPT
sudo /sbin/service iptables save
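Once the port is open, a quick way to prove the daemon itself answers (and not just the TCP socket) is to hit the Engine API directly; a sketch, using the IP from the question and the standard /_ping and /version endpoints:

```shell
# /_ping returns the literal string "OK" when the daemon is reachable
curl http://10.30.144.66:2375/_ping

# /version returns a JSON document describing the engine
curl http://10.30.144.66:2375/version
```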
I was facing a similar problem: my IntelliJ IDE was not able to connect to the Docker Engine API on RHEL.
It was resolved with the following:
firewall-cmd --add-port=2376/tcp --permanent
firewall-cmd --reload
systemctl restart docker

MongoDB Management Service error: failure dialing host

I've downloaded MMS using this command:
curl -OL https://mms.mongodb.com/download/agent/monitoring/mongodb-mms-monitoring-agent_latest_amd64.deb
Installed it using sudo:
sudo dpkg -i mongodb-mms-monitoring-agent_latest_amd64.deb
And edited the monitoring-agent.config file, which is located in:
/etc/mongodb-mms
It was working just fine until I started my mongod replica set with the --fork flag, using this command:
sudo mongod --fork --port 27017 --dbpath /mydbpath --logpath /mylogpath/mongodb.log --replSet Rplname
After starting the service with the above command, MMS started showing that the host is unreachable, with the following message on all of the members:
Task failure `hostIpAddr`. Err: `Failure determining IPv4 address for `myDnsAdd.cloudapp.net`. Err: `myDnsAdd.cloudapp.net: no such host` at monitoring-agent/components/task.go:221 at monitoring-agent/components/worker.go:153 at monitoring-agent/components/worker.go:224 at monitoring-agent/components/worker.go:236 at pkg/runtime/proc.c:1445`
I've edited the hosts file and added the hosts' IPs and hostnames.
I opened port 443 and tried to start MMS with the --system flag like this:
sudo start mongodb-mms-monitoring-agent --system
But still, the host is unreachable. There is an access list on the mongo port (:27017); is it because of this? If so, what IP should I add to that access list?
Best,
If your OS is Ubuntu 14.04, then this link should be interesting for you:
https://jira.mongodb.org/browse/MMS-2202
Basically, it says this is a bug in glibc; the solution for that user was to update Ubuntu to version 14.10.

Forward host port to docker container

Is it possible to have a Docker container access ports opened by the host? Concretely I have MongoDB and RabbitMQ running on the host and I'd like to run a process in a Docker container to listen to the queue and (optionally) write to the database.
I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. internet) from within the Docker container but I'd like to not expose the RabbitMQ and MongoDB ports from the host to the outside world.
EDIT: some clarification:
Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT STATE SERVICE
6311/tcp open unknown
joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway
Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT STATE SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds
I had to do this trick to get any internet connection within the container: My firewall is blocking network connections from the docker container to outside
EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IPs. I went with this approach instead of having MongoDB and RabbitMQ listen on the Docker bridge because it gives more flexibility.
A simple but relatively insecure way would be to use the --net=host option to docker run.
This option makes it so that the container uses the networking stack of the host. Then you can connect to services running on the host simply by using "localhost" as the hostname.
This is easier to configure because you won't have to configure the service to accept connections from the IP address of your docker container, and you won't have to tell the docker container a specific IP address or host name to connect to, just a port.
For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:
docker run --rm -i -t --net=host my_image telnet localhost 25
If you consider doing it this way, please see the caution about security on this page:
https://docs.docker.com/articles/networking/
It says:
--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
Your Docker host exposes an adapter to all the containers. Assuming you are on a recent Ubuntu, you can run
ip addr
This will give you a list of network adapters, one of which will look something like
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::a402:65ff:fe86:bba6/64 scope link
valid_lft forever preferred_lft forever
You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
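Rather than reading the bridge address out of the ip addr listing by hand, you can extract it with a little shell; a sketch that parses the inet line of sample output like the one above (on a live system you would pipe in `ip -4 addr show docker0` instead of the sample string):

```shell
#!/bin/sh
# pull the IPv4 address out of an `ip addr` inet line;
# the sample mimics the iproute2 output shown above
sample='    inet 172.17.42.1/16 scope global docker0'
bridge_ip=$(printf '%s\n' "$sample" | awk '/inet /{ sub("/.*", "", $2); print $2 }')
echo "$bridge_ip"    # prints 172.17.42.1
```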
As stated in one of the comments, this works on Mac (and likely Docker Desktop on Windows too):
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
You can also reach the gateway using gateway.docker.internal.
Quoted from https://docs.docker.com/docker-for-mac/networking/
This worked for me without using --net=host.
You could also create an ssh tunnel.
docker-compose.yml:
---
version: '2'
services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro
  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"
docker/Dockerfile.tunnel:
FROM buildpack-deps:jessie
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts
RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/config && \
    chown $USER:$USER -R /root/.ssh
config/ssh/config:
# Elasticsearch Server
Host elasticsearch
  HostName jump.host.czerasz.com
  User czerasz
  ForwardAgent yes
  IdentityFile ~/.ssh/id_rsa
This way the elasticsearch service has a tunnel to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL) and exposes port 9200 for that service.
TLDR;
For local development only, do the following:
Start the service or SSH tunnel on your laptop/computer/PC/Mac.
Build/run your Docker image/container to connect to hostname host.docker.internal:<hostPort>
Note: There is also gateway.docker.internal, which I have not tried.
END_TLDR;
For example, if you were using this in your container:
PGPASSWORD=password psql -h localhost -p 5432 -d mydb -U myuser
change it to this:
PGPASSWORD=password psql -h host.docker.internal -p 5432 -d mydb -U myuser
This magically connects to the service running on my host machine. You do not need to use --net=host, -p "hostPort:containerPort", or -P.
Background
For details see: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
I used this with an SSH tunnel to an AWS RDS Postgres Instance on Windows 10. I only had to change from using localhost:containerPort in the container to host.docker.internal:hostPort.
I had a similar problem accessing an LDAP server from a Docker container.
I set a fixed IP for the container and added a firewall rule.
docker-compose.yml:
version: '2'
services:
  containerName:
    image: dockerImageName:latest
    extra_hosts:
      - "dockerhost:192.168.50.1"
    networks:
      my_net:
        ipv4_address: 192.168.50.2
networks:
  my_net:
    ipam:
      config:
        - subnet: 192.168.50.0/24
iptables rule:
iptables -A INPUT -j ACCEPT -p tcp -s 192.168.50.2 -d 192.168.50.1 --dport portnumberOnHost
Inside the container, access dockerhost:portnumberOnHost.
If MongoDB and RabbitMQ are running on the host, then the ports should already be exposed, as they are not within Docker.
You do not need the -p option in order to expose ports from the container to the host; by default, all ports are exposed. The -p option allows you to expose a port from the container to the outside of the host.
So, my guess is that you do not need -p at all and it should be working fine :)
For newer versions of Docker, this worked for me. Create the tunnel like this (notice the 0.0.0.0 at the start):
-L 0.0.0.0:8080:localhost:8081
This will allow anyone with access to your computer to connect to port 8080 and thus access port 8081 on the connected server.
Then, inside the container just use "host.docker.internal", for example:
curl host.docker.internal:8081
Why not use a slightly different solution, like this one?
services:
  kubefwd:
    image: txn2/kubefwd
    command: ...
  app:
    image: bash
    command:
      - sleep
      - inf
    init: true
    network_mode: service:kubefwd
REF: txn2/kubefwd: Bulk port forwarding Kubernetes services for local development.
The easier way on all platforms nowadays is to use host.docker.internal. Let's start with the docker run command:
docker run --add-host=host.docker.internal:host-gateway [....]
Or add the following to your service, when using Docker Compose:
extra_hosts:
  - "host.docker.internal:host-gateway"
A full example of such a Docker Compose file would then look like this:
version: "3"
services:
  your_service:
    image: username/docker_image_name
    restart: always
    networks:
      - your_bridge_network
    volumes:
      - /home/user/test.json:/app/test.json
    ports:
      - "8080:80"
    extra_hosts:
      - "host.docker.internal:host-gateway"
networks:
  your_bridge_network:
Again, it's just an example, but if this Docker image starts a service on port 80, it will be available on the host on port 8080.
More importantly for your use case: if the Docker container wants to use a service from your host system, that is now possible via the special host.docker.internal name. That name automatically resolves to the internal Docker IP address (of the docker0 interface).
Anyway, let's say you are also running a web service on your host machine on port 80. You should now be able to reach that service from within your Docker container. Try it out: nc -vz host.docker.internal 80.
All WITHOUT using network_mode: "host".
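You can check the host-gateway mapping without building anything; a sketch that starts a throwaway container and resolves the special name (the busybox image and host port 80 are just example choices):

```shell
# the extra host entry makes host.docker.internal resolve inside the container
docker run --rm --add-host=host.docker.internal:host-gateway \
  busybox nslookup host.docker.internal

# or probe a host service directly (here: a web server on host port 80)
docker run --rm --add-host=host.docker.internal:host-gateway \
  busybox nc -vz host.docker.internal 80
```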
