Allowing some IPs in DOCKER-USER chain for inbound traffic, rejecting all the rest - linux

I'm trying to allow only certain IP addresses to access ports exposed by Docker containers on the host. All other external IPs should not be able to access them (even if I expose a port on 0.0.0.0). There are plenty of workarounds across the internet for achieving this - from disabling iptables management for Docker and managing all the iptables rules manually (which is not so cool, especially if you have to deal with Docker Swarm ingress routing etc.), to dirty solutions that reset iptables rules with cronjobs and custom scripts. But hey, we have a DOCKER-USER chain, which seems to be suitable for exactly this kind of thing?
The Docker-vs-Firewall Problem
As we know, on startup Docker adds some iptables chains and rules in order to do its networking magic. The thing is that it adds these chains at the very top of the FORWARD chain on every Docker service restart, which means that any predefined reject rules of yours on that chain become completely useless. The Docker documentation suggests using its DOCKER-USER chain for this kind of thing (rules in this chain stay persistent from Docker's perspective and are evaluated before any other Docker rules).
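For reference, on a host running a recent Docker version the FORWARD chain looks roughly like this (the exact layout varies between Docker versions; older releases use a single DOCKER-ISOLATION chain), with DOCKER-USER evaluated first:
iptables -nL FORWARD
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0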
From the example in the Docker documentation, we see that we can allow access for one IP (and deny it for all others) using:
-I DOCKER-USER -i eth0 ! -s 1.2.3.4 -j DROP
This command kind of works, and only the IP 1.2.3.4 can access Docker's exposed ports. The problem, however, is that in this case the containers themselves are no longer able to connect to the Internet.
So the situation is...
With
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
everything works and all the external traffic is allowed, and with:
Chain DOCKER-USER (1 references)
target prot opt source destination
DROP all -- !1.2.3.4 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
only IP 1.2.3.4 is allowed to access Docker's exposed services (which is nice!), but there's no Internet connection from inside the containers. That's the problem. No other iptables rules are added - only Docker's defaults plus this one custom DROP in the DOCKER-USER chain. I also tried RETURNing the internal IP ranges 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 in this chain (some forum threads suggested trying that), so my chain looked like:
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 192.168.0.0/16 0.0.0.0/0
RETURN all -- 172.16.0.0/12 0.0.0.0/0
RETURN all -- 10.0.0.0/8 0.0.0.0/0
ACCEPT all -- 1.2.3.4 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
but still no success - Docker's exposed ports are accessible only from 1.2.3.4 (which is nice again), but there is still no Internet connection from inside the containers.
Any ideas on why this is happening, and how I could allow containers to communicate with the outside world while restricting inbound traffic?
Thanks in advance!

The DROP rule from the documentation example also matches the reply packets of connections that the containers themselves initiate, which is why outbound connectivity breaks. Accepting RELATED,ESTABLISHED traffic before dropping, and dropping only on the WAN interface, solves it:
*filter
:DOCKER-USER - [0:0]
# let replies to container-initiated connections through
-A DOCKER-USER -m state --state RELATED,ESTABLISHED -j ACCEPT
# allow the trusted source address
-A DOCKER-USER -s A.B.C.D -j ACCEPT
# drop everything else arriving on the WAN interface
-A DOCKER-USER -i MY_WAN_INTERFACE_NAME -j DROP
COMMIT
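If you prefer issuing the rules directly instead of via iptables-restore, the equivalent commands would look like this (inserted in reverse order, since -I prepends to the chain; replace eth0 with your actual WAN interface):
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -s A.B.C.D -j ACCEPT
iptables -I DOCKER-USER -m state --state RELATED,ESTABLISHED -j ACCEPT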

Related

Are Services in Kubernetes simply reverse proxies that allow a constant point of contact?

I'm researching Kubernetes Services (one of the kinds of k8s components, much like Pods and ReplicaSets). They seem to function like reverse proxies, but I know k8s internally uses DNS, so perhaps they are some kind of load-balancing DNS? That would also somehow mean that, since a Pod can be relocated or exist on many nodes, a Service couldn't simply be a reverse proxy, since it too would need to be addressable yet share a single IP across many machines... (obviously struggling to imagine how they were built without looking directly at the source code -- yet).
What makes up a K8s Service? DNS + Reverse Proxy, or something more/less? Some kind of networking trick?
Regular ClusterIP services
Ensuring network connectivity for type: ClusterIP Services is the responsibility of the kube-proxy -- a component that typically runs on each and every node of your cluster. The kube-proxy does this by intercepting outgoing traffic from Pods on each node and filtering traffic targeted at service IPs. Since it is connected to the Kubernetes API, the kube-proxy can determine which Pod IPs are associated with each service IP and can forward the traffic accordingly.
Conceptually, the kube-proxy might be considered similar to a reverse proxy (hence the name), but it typically uses iptables rules (or, starting with Kubernetes 1.9, optionally IPVS). Each created service results in a set of iptables rules on every node that intercepts traffic targeted at the service IP and forwards it to the respective Pod IPs (service IPs are purely virtual and exist only in these iptables rules; nowhere in the entire cluster will you find an actual network interface holding that IP).
Load balancing is also implemented via iptables rules (or IPVS), and it always occurs on the node that the traffic originates from.
Here's an example from the Debug Services section of the documentation:
u@node$ iptables-save | grep hostnames
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
For more information, have a look at the Virtual IPs and Service Proxies section in the manual.
Headless services
Besides regular ClusterIP services, there are also Headless Services (which are declared by specifying the property clusterIP: None when creating the service). These do not use the kube-proxy; instead, their DNS hostname resolves directly to all Pod IPs associated with the service, and load balancing is achieved via regular DNS round-robin.
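As a minimal sketch of how such a service is declared (the names my-headless-svc and my-app are made up for illustration):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None   # this is what makes the service headless
  selector:
    app: my-app     # Pods carrying this label back the service
  ports:
    - port: 80
EOF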
For services and routing within the cluster it is mostly iptables, except when you explicitly use IPVS (which still partially relies on iptables) or something like Cilium (which allows you to plug in BPF). The kube-proxy on each node manages the iptables rules. More information on the proxy modes here.
On the DNS side, Kubernetes runs either kube-dns (1.10 or older) or CoreDNS (1.11 or newer), and essentially all services and pods get registered in that DNS service so that other pods/containers can find them. For example, a service will have an FQDN like this: my-service.my-namespace.svc.cluster.local. More information on that here.
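You can watch this resolution happen from inside the cluster, for example with a throwaway busybox pod (my-service.my-namespace is a placeholder FQDN; busybox:1.28 is pinned because nslookup is known to misbehave in newer busybox images):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-service.my-namespace.svc.cluster.local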

Azure: How to set a default route without losing connectivity

Currently I have a Linux VM on Azure. I want to remove the default route (which points out to the internet). However, if I do so, I lose connectivity to the VM itself. How do I do this? I've looked into adding a load balancer to use inbound source NAT, but it doesn't seem to work for me.
Thank you
So after looking heavily into load balancers and such, it seems I can't do this with any Azure resources. Here's my solution: I create a lightweight VM to act as a router, attach a public IP to it, and put it on the same private network as my main VM. The lightweight VM runs Linux with iptables rules that forward RDP packets to my main VM (copied from another Stack Overflow thread):
# rewrite the destination of incoming RDP traffic to the main VM
iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination <WINDOWS SERVER IP>
# allow the forwarded RDP traffic through
iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT
# masquerade so replies return via this router VM
iptables -t nat -A POSTROUTING -j MASQUERADE
So users connect to the IP of the routing VM, which then forwards the traffic to the main VM. Now you can delete the default routes on the main VM without getting disconnected.
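One caveat (assuming a stock Linux image for the routing VM): kernel IP forwarding must also be enabled, otherwise the FORWARD rule above never sees any packets:
# enable forwarding for the running system; persist it via /etc/sysctl.conf
sysctl -w net.ipv4.ip_forward=1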

Blocking IP is not working

This question has been asked several times, but none of the answers work for me. It is very simple: I want to block a certain IP's access to a server.
I tried this:
.htaccess
Order Deny,Allow
Deny from 151.101.52.84
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 151.101.52.0/24 anywhere
REJECT tcp -- 151.101.52.84 anywhere tcp reject-with icmp-port-unreachable
DROP all -- 151.101.52.84 anywhere
DROP all -- 151.101.52.84 anywhere
DROP tcp -- 151.101.52.84 anywhere
ACCEPT tcp -- anywhere anywhere tcp dpt:http limit: avg 100/min burst 200
/etc/hosts.deny
ALL : 151.101.52.84
netstat -te | grep 151.101
tcp 0 1 ip-*-*-*-*.us-we:51181 151.101.52.84:http SYN_SENT apache 800352623
I already restarted httpd.
I even blocked the IP via the Amazon EC2 VPC.
Do I need to restart the entire server? Do I need something else with iptables?
Have you tried Fail2Ban?
sudo apt-get install fail2ban
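As a side note on the diagnostics in the question: the netstat output shows the connection in SYN_SENT state, opened by apache towards 151.101.52.84:http - that is outbound traffic, which rules in the INPUT chain never match. If the goal is also to stop the server itself from connecting to that address, an OUTPUT rule would be needed, along these lines:
# block connections initiated by this host towards the address
iptables -A OUTPUT -d 151.101.52.84 -j DROP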

How to delete/unblock my server IP in iptables on Ubuntu?

My first server blocked my second server's IP, and now I don't have access.
The command iptables -L -n | grep xx.xxx.xxx.xx
gives me a result like:
ACCEPT all -- xx.xxx.xxx.xx 0.0.0.0/0
REJECT all -- xx.xxx.xxx.xx 0.0.0.0/0 reject-with icmp-port-unreachable
The xx.xxx.xxx.xx is the same IP in both rules, and it is my server's IP.
So I have two rules, ACCEPT and REJECT, for this IP.
How can I give my server's IP access via iptables and prevent it from being blocked?
Many thanks for the help.
To recreate your issue
iptables -A INPUT -s 10.64.7.109 -j ACCEPT
iptables -A INPUT -s 10.64.7.109 -j REJECT
iptables -P INPUT DROP
but that will not block 10.64.7.109 - the first rule hit will be the ACCEPT.
You are only sharing the output that specifically contains the IP you are interested in, so I cannot see which rules above these two would be blocking it.
You can allow this IP by inserting a rule at position 1, which will resolve your issue, but without seeing all your rules I cannot say whether it is the most appropriate fix:
iptables -I INPUT 1 -s 10.64.7.109 -j ACCEPT
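If you would rather remove the offending REJECT rule than shadow it, you can delete it by position or by specification (10.64.7.109 again standing in for your server's IP; double-check the rule number in the listing first):
# list the INPUT rules together with their positions
iptables -L INPUT -n --line-numbers
# delete the REJECT rule by its number from the listing
iptables -D INPUT 2
# or delete it by specification
iptables -D INPUT -s 10.64.7.109 -j REJECT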

Why don't my iptables entries block pinging a Xen virtual machine? [closed]

I'm writing a bash script to add simple firewalling for Xen.
Here's the actual firewall configuration:
Chain INPUT (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain RH-Firewall-1-INPUT (2 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT icmp -- anywhere anywhere icmp any
ACCEPT esp -- anywhere anywhere
ACCEPT ah -- anywhere anywhere
ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns
ACCEPT udp -- anywhere anywhere udp dpt:ipp
ACCEPT tcp -- anywhere anywhere tcp dpt:ipp
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT udp -- anywhere anywhere state NEW udp dpt:ha-cluster
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
I'd like to add a new chain for each of my virtual machines (each of them has a virtual interface called vif1.0, vif2.0, etc.). The output interface (bridge) is xenbr0.
Here's what I do (for example, to block pings into domU1 via vif1.0):
iptables -N domUFirewall
iptables -I FORWARD -j domUFirewall
iptables -I INPUT -j domUFirewall
iptables -A domUFirewall -i vif1.0 -p icmp -j DROP
But... it doesn't work; I'm still able to ping in/out of the domU.
It must be something really 'dumb', but I can't find out what's wrong.
Any clues?
Thx
Since you're using Xen with bridged networking, packets are being intercepted at a level before ordinary iptables commands can influence them. Thus, you'll probably need to use the ebtables command to influence packet handling in the way that you want:
ebtables/iptables interaction on a Linux-based bridge
ebtables(8) - Linux man page
Xen Wiki * XenNetworking
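As a rough, untested sketch of what that could look like with ebtables (vif1.0 being the domU's virtual interface, as in the question):
# drop ICMP frames entering the bridge from vif1.0
ebtables -A FORWARD -p IPv4 -i vif1.0 --ip-protocol icmp -j DROP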
My original answer is left below; it will work for other configurations, but not for Xen with bridged networking.
I am going to pretend for the sake of example that the IP address of vif1.0 is 192.168.1.100.
I would redo the logic to check not by input device but by IP address. At the INPUT chain, the packet is coming from (say) device eth0, not from vif1.0. Thus, this rule:
iptables -I INPUT -i vif1.0 -j domUFirewall
that I previously proposed will never match any packets. However, if you do the following, it should do what you want:
iptables -I INPUT -d 192.168.1.100 -j domUFirewall
where in this case the chain domUFirewall is set up by:
iptables -N domUFirewall
iptables -F domUFirewall
iptables -A domUFirewall -p icmp -j DROP
If a given chain is for a single device, then you want to make this check before jumping into the chain, on the rule with the "-j chainName" action. Then, in the chain itself, you never have to check for the device or IP address.
Second, I would always flush (empty) the chain in your script, just in case you're re-running it. Note that when you rerun the script you may get complaints on the -N line; that's OK.
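Putting the pieces together as a small re-runnable script (same assumptions as above; the 2>/dev/null simply silences the complaints about an already-existing chain or a not-yet-existing rule):
#!/bin/bash
# create the chain if missing, then empty it so reruns start clean
iptables -N domUFirewall 2>/dev/null
iptables -F domUFirewall
# match on the domU's IP before jumping into its dedicated chain
iptables -D INPUT -d 192.168.1.100 -j domUFirewall 2>/dev/null
iptables -I INPUT -d 192.168.1.100 -j domUFirewall
# inside the chain, no device/IP checks are needed anymore
iptables -A domUFirewall -p icmp -j DROP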
There are other ways you could do this, but to give a different example, I would need to know specifically how your VM is set up -- bridged networking? NAT? Etc. But the example I gave here should work in any of these modes.
Here are some useful links for the future:
Quick HOWTO, Ch14: Linux Firewalls Using iptables
Sandbox a VMware Virtual Machine With iptables
