Unable to communicate from one Docker container to an exposed container on the same machine - linux

Currently I'm trying to make one container communicate with another one that is exposed and running on the same machine.
Let's say the external IP address is 123.123.123.123 and I have exposed a basic NGINX Docker container on port 8080 via the ports property inside my docker-compose.yaml. When I execute curl http://123.123.123.123:8080 from an external machine, I successfully get a response back; the same goes for executing the command from the host machine. However, when I execute this curl from another container on the same machine, it times out.
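For reference, the relevant part of my docker-compose.yaml looks roughly like this (a minimal sketch; the service name is illustrative, and NGINX's container-side port is assumed to be the default 80):
services:
  nginx:
    image: nginx
    ports:
      - 8080:80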
I'm unsure of the cause. I tried temporarily opening all ports via https://serverfault.com/a/129087, and this did allow communication from one container to the exposed container (of course, I restored the previous configuration afterwards).
It is important for me to be able to use the external routing, especially since in production jwilder/nginx-proxy is used with HTTPS certificates.
The machine is running Ubuntu 20.04; I haven't altered any of the firewall settings provided by iptables.
ufw status output:
Status: active
To Action From
-- ------ ----
22/tcp LIMIT Anywhere
2375/tcp ALLOW Anywhere
2376/tcp ALLOW Anywhere
22/tcp (v6) LIMIT Anywhere (v6)
2375/tcp (v6) ALLOW Anywhere (v6)
2376/tcp (v6) ALLOW Anywhere (v6)
Probably the most relevant part of iptables -L:
Chain DOCKER (6 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:9000
ACCEPT tcp -- anywhere 172.20.0.2 tcp dpt:http
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:https
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:http
ACCEPT tcp -- anywhere 172.19.0.6 tcp dpt:mysql
I'm curious how this issue could be fixed. Of course, adding both containers to the same internal network works around it, but since port 8080 is already exposed to the world I would like that to cover internal traffic as well. I'm using Docker Compose, and the two containers are not part of the same docker-compose.yaml.

The solution was staring me in the face all this time. With this image provided by DigitalOcean, both iptables and UFW are enabled at the same time. However, UFW is not updated when you publish a port in your Docker configuration.
Apparently that mostly isn't a problem for external traffic, since in that scenario UFW does not handle the traffic at all; that is all up to iptables. Adding ufw allow http and ufw allow https fixes it!
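That is, on the host:
sudo ufw allow http
sudo ufw allow https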

Related

How to restrict access from internet to containers ports on remote linux server?

I use docker-compose on Ubuntu 18 on a remote server.
How, with iptables, can I block access to the docker port from the internet and only allow access to it from the localhost of this server?
For instance, I want to block port 4150 from the internet. Trying this:
iptables -A DOCKER-USER -p tcp --dport 4150 -j DROP does not block the port; I can still access it from the internet (testing from outside, not from the server machine).
How can I block access from the internet to all ports on the server, but allow only 22 and 80? And keep those ports available from the localhost of the server (e.g. from the server itself)?
Not the iptables-based solution you're looking for, but a much simpler approach is to publish the port only on a specific interface, instead of all interfaces. When that interface is the loopback interface, e.g. 127.0.0.1, you'll only be able to access the port locally. To do this, add the interface to the beginning of the publish spec:
docker run -p 127.0.0.1:4150:4150 ...
Or a similar syntax in the compose file:
...
ports:
- 127.0.0.1:4150:4150
...
As for why the command you tried didn't work: it needs conntrack to match on the original port rather than the Docker-mapped port:
iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 4150 -j DROP
This also changed from -A (append) to -I (insert), because there's a default rule to accept everything in that chain.
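To address the broader question (allowing only 22 and 80 and blocking the rest), a hedged sketch along the same lines, assuming eth0 is the external interface. Note that DOCKER-USER only filters traffic forwarded to containers; the host's own sshd on port 22 is governed by the INPUT chain instead:
# flush the chain first (Docker creates it with a single RETURN rule)
iptables -F DOCKER-USER
# keep replies to already-established connections flowing
iptables -A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# allow 22 and 80, matched on the original (pre-DNAT) destination port
iptables -A DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 22 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 80 -j ACCEPT
# drop everything else arriving on the external interface
iptables -A DOCKER-USER -i eth0 -j DROP
# hand any remaining traffic (e.g. from internal bridges) back to Docker's rules
iptables -A DOCKER-USER -j RETURN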

How to Allow New Relic IP with UFW?

This article, https://docs.newrelic.com/docs/apm/new-relic-apm/getting-started/networks, suggests that I allow outgoing traffic through the firewall to the following IPs and ports:
Networks
50.31.164.0/24
162.247.240.0/22
Ports
TCP 80
TCP 443
I'm using ufw; how can I do this?
I've tried this:
sudo ufw allow proto tcp from 50.31.164.0/24 port 80
sudo ufw allow proto tcp from 50.31.164.0/24 port 443
sudo ufw allow proto tcp from 162.247.240.0/22 port 80
sudo ufw allow proto tcp from 162.247.240.0/22 port 443
When I check my rules, they look like this (screenshot of the resulting ufw status omitted):
Am I doing this right?
You already allow connections from anywhere to 80 & 443, so you don't need the extra allow statements for their specific IP ranges.
The article's request concerns outgoing connections: it applies if you are filtering outgoing traffic from your LAN (e.g. a kiosk in a school hallway or lab that is only allowed to reach the school's website) and you want machines on your local network to be able to reach New Relic.
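If you do filter outgoing traffic, the outbound equivalents of the attempted rules would look something like this (a sketch; note the out keyword and to rather than from, since these ranges are destinations):
sudo ufw allow out proto tcp to 50.31.164.0/24 port 80,443
sudo ufw allow out proto tcp to 162.247.240.0/22 port 80,443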

How to access Weave DNS-Server from external?

I use the Weave network plugin on a Docker Swarm.
I created a docker network with a specific IP range, different from the default Weave network, to which I route from my internal network.
To make the containers even more easily accessible, I use weave to attach DNS names like containername.auto.mycompany.de. Now I want to access those from my company network. The problem is that weave only allows access to the weave DNS from the local host.
For example, on one of my swarm nodes I can do:
host foobar.auto.mycompany.de 172.17.0.1
Using domain server:
Name: 172.17.0.1
Address: 172.17.0.1#53
Aliases:
foobar.auto.mycompany.de has address 10.40.13.3
Host foobar.auto.mycompany.de not found: 3(NXDOMAIN)
Host foobar.auto.mycompany.de not found: 3(NXDOMAIN)
But I can't find a way to make the weave container accessible on one of the IPs from this (10.40.13.0/24) docker network, or to expose the port to the swarm node.
The only way I can think of, but don't like, is doing something like this:
iptables -t nat -A DOCKER -p tcp --dport 53 -j DNAT --to-destination 172.17.0.1:53
(this does not work, it's just the idea)
Or tamper with the weave script to make it expose the port on start of the weave container.
Does anybody know of a better solution?
In fact, setting the rules
iptables -t nat -A DOCKER -p tcp -m tcp --dport 53 -j DNAT --to-destination 172.17.0.1:53
iptables -t nat -A DOCKER -p udp -m udp --dport 53 -j DNAT --to-destination 172.17.0.1:53
does the trick (note the -t nat; the DNAT target is only valid in the nat table). When I first tried this, I simply failed to see that my request would have to come from "outside" the server for it to work, not from inside to the loopback device.
Still not a pretty solution, but it does the job. I'm looking forward to seeing better solutions from you guys.
(Bounty stands!)
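A quick way to verify the DNAT rules once they are in place: repeat the lookup from a machine on the company network, pointing at the swarm node itself rather than at 172.17.0.1 (the hostname below is a placeholder):
host foobar.auto.mycompany.de swarm-node.mycompany.de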

Connection Refused: curl <ec2 public dns>:8080

NOTE: there are a lot of questions with similar titles, but they either (1) do not have any good answer or (2) are not related to the following issue.
I am trying to run a simple chat/messaging demo via node.js on my Ubuntu ec2 instance running apache2. The application works great on my local machine but I am having some trouble with the setup on my server.
I am trying to listen to port 8080 but upon executing:
$ curl <ec2 public dns>:8080
I get the error:
curl: (7) Failed to connect to <ec2 public dns> port 8080: Connection refused
I went through the default troubleshooting steps:
(1) Quadruple-checked to make sure I am allowing incoming connections on port 8080 in my ec2 security group.
(2) Checked my ufw firewall to make sure port 8080 is enabled:
ubuntu@ip-000-00-00-000:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22 ALLOW IN Anywhere
80 ALLOW IN Anywhere
443 ALLOW IN Anywhere
21/tcp ALLOW IN Anywhere
8080 ALLOW IN Anywhere
22 (v6) ALLOW IN Anywhere (v6)
80 (v6) ALLOW IN Anywhere (v6)
443 (v6) ALLOW IN Anywhere (v6)
21/tcp (v6) ALLOW IN Anywhere (v6)
8080 (v6) ALLOW IN Anywhere (v6)
I am newer to all of the concepts mentioned in this post so I may be missing something simple.
After consulting this post I realized that nothing was listening on port 8080.
So I ran
node chat.js
which somewhere in the code calls .listen(8080). Then, upon calling curl, it worked.
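For anyone debugging the same thing, a quick way to confirm whether anything is listening on the port (using ss, or netstat on older systems):
sudo ss -tlnp | grep ':8080'
# or:
sudo netstat -tlnp | grep ':8080'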

apache not accepting incoming connections from outside of localhost

I've booted up a CentOS server on Rackspace and executed yum install httpd. Then service httpd start. So, just the barebones.
I can access its IP address remotely over ssh (22) no problem, so there's no problem with the DNS or anything (I think...), but when I try to connect on port 80 (via a browser or something) I get connection refused.
From localhost, however, I can use telnet (80), or even lynx on itself and get served with no problem. From outside (my house, my school, a local coffee shop, etc...), telnet connects on 22, but not 80.
I use netstat -tulpn (<- I'm not going to lie, I don't understand the -tulpn part, but that's what the internet told me to do...) and see
tcp 0 0 :::80 :::* LISTEN -
as I believe I should. The httpd.conf says Listen 80.
I have service httpd restart'd many a time.
Honestly, I have no idea what to do. There is NO way that Rackspace has a firewall on incoming port 80 requests. I feel like I'm missing something stupid; thinking I had mucked things up with my tinkering, I've booted up a barebones server twice now and done the absolute minimum to get this functioning, but neither attempt worked.
Any help is greatly appreciated! (And sorry for the long-winded post...)
Edit
I was asked to post the output of iptables -L. So here it is:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
In case it's not solved yet: your iptables rules say:
state RELATED,ESTABLISHED
Which means that it lets pass only connections already established... that's established by you, not by remote machines. Then you can see exceptions to this in the next rules:
state NEW tcp dpt:ssh
Which counts only for ssh, so you should add a similar rule/line for http:
state NEW tcp dpt:80
Which you can do like this:
sudo iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
(In this case I am choosing to add the new rule in the fourth line)
Remember that after adding the rule you should save your iptables like this:
sudo /etc/init.d/iptables save
CentOS 7 now uses firewalld by default, but all the answers focus on iptables, so I wanted to add an answer related to firewalld.
Since firewalld is a "wrapper" for iptables, using antonio-fornie's answer still seems to work, but I was unable to "save" that new rule, so I could no longer connect to my apache server as soon as the firewall was restarted. Luckily, it is actually much more straightforward to make the equivalent change with firewalld commands. First check if firewalld is running:
firewall-cmd --state
If it is running the response will simply be one line that says "running".
To allow http (port 80) connections temporarily on the public zone:
sudo firewall-cmd --zone=public --add-service=http
The above will not be "saved"; the next time the firewalld service is restarted, it will go back to the default rules. You should use this temporary rule to test and make sure it solves your connection issue before moving on.
To permanently allow http connections on the public zone:
sudo firewall-cmd --zone=public --permanent --add-service=http
If you run the "permanent" command without also running the "temporary" command, you'll need to restart firewalld to pick up your new default rules (this might be different for non-CentOS systems):
sudo systemctl restart firewalld.service
If this hasn't solved your connection issues, it may be because your interface isn't in the "public" zone. The following link is a great resource for learning about firewalld; it goes over in detail how to check, assign, and configure zones: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-firewalld-on-centos-7
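For a quick check of which zone your interface is in and what that zone currently allows, the standard firewall-cmd queries are:
firewall-cmd --get-active-zones
firewall-cmd --zone=public --list-all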
SELinux prevents Apache (and therefore all Apache modules) from making remote connections by default.
# setsebool -P httpd_can_network_connect=1
Try the below rule in your iptables configuration:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
Run the below command to restart the iptables service:
service iptables restart
Change the httpd.conf file to
Listen 192.170.2.1:80
then restart apache and try again.
If you are using RHEL/CentOS 7 (the OP was not, but I thought I'd share the solution for my case), then you will need to use firewalld instead of the iptables service mentioned in other answers.
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
Then check that the rule is in place with:
firewall-cmd --permanent --zone=public --list-all
It should list 80/tcp under ports.
Search for the Listen directive in the apache config files (httpd.conf, apache2.conf, listen.conf, ...) and if you see localhost or 127.0.0.1, replace it with your public IP.
Try disabling iptables: service iptables stop
If this works, allow TCP port 80 in your firewall rules:
run system-config-firewall as root, and enable TCP port 80 (HTTP) on your firewall.
This would work for Red Hat: the saved rules live in /etc/sysconfig/iptables (view them with cat /etc/sysconfig/iptables). Insert a rule such as
iptables -I RH-Firewall-1-INPUT -s 192.168.1.3 -p tcp -m tcp --dport 80 -j ACCEPT
followed by
sudo /etc/init.d/iptables save
(Note that -s 192.168.1.3 limits access to that single source address; drop it to allow any source.)
This is what worked for us to get apache accessible from outside:
sudo iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
sudo service iptables restart
Set apache to listen on a specific interface and port, something like below:
Listen 192.170.2.1:80
Also check for iptables and TCP Wrappers entries on the host that might be interfering with outside hosts accessing that port.
Binding Docs For Apache
Disable SELinux
$ sudo setenforce 0
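A caveat: setenforce 0 only lasts until the next reboot. To make the change persistent, the usual approach is to set SELINUX=permissive in /etc/selinux/config, e.g.:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config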
