How to block outgoing domain requests - DNS

I have a server. I want requests from the internet to the server to keep working (they do), while blocking requests from the server out to domains.
I want to type
ping google.com
on the server and see nothing.
Now I see
ping google.com
PING google.com (216.58.205.46) 56(84) bytes of data.
64 bytes from mil04s24-in-f46.1e100.net (216.58.205.46): icmp_seq=1 ttl=54 time=36.3 ms
64 bytes from mil04s24-in-f46.1e100.net (216.58.205.46): icmp_seq=2 ttl=54 time=63.7 ms

You can block any outgoing TCP/UDP traffic using iptables firewall rules.
For example, you can block all outgoing TCP traffic to port 5050 with the firewall rule below:
iptables -A OUTPUT -p tcp --dport 5050 -j DROP
To keep the rule across reboots, persist it with iptables-save (or your distribution's wrapper, e.g. service iptables save on CentOS/RHEL).
Similarly, to block ping requests you can block the ICMP protocol for any outbound traffic.
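A minimal sketch of what that could look like for this question, dropping outbound ICMP echo requests and outbound DNS lookups on port 53 (the port-53 rules are my addition for the "block requests to domains" part, not something stated in the answer above):
iptables -A OUTPUT -p icmp --icmp-type echo-request -j DROP
iptables -A OUTPUT -p udp --dport 53 -j DROP
iptables -A OUTPUT -p tcp --dport 53 -j DROP
With the port-53 rules in place, ping google.com fails at the name-resolution stage, while inbound requests to the server keep working because only the OUTPUT chain is touched.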

Related

Packets not getting forwarded on Centos7 between GRE tunnels [closed]

I have configured GRE tunnels between CentOS machines, with the corresponding routing tables on the individual machines, as shown in the image:
I'm able to:
Ping from Router-1 to the other end of the gre1 tunnel:
worker]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.472 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.291 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.319 ms
The traffic reaches the Transit Router over the GRE tunnel (verified with tcpdump proto gre).
Ping from Router-2 to the other end of the gre2 tunnel:
worker]# ping 11.0.0.2
PING 11.0.0.2 (11.0.0.2) 56(84) bytes of data.
64 bytes from 11.0.0.2: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 11.0.0.2: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 11.0.0.2: icmp_seq=3 ttl=64 time=0.369 ms
64 bytes from 11.0.0.2: icmp_seq=4 ttl=64 time=0.258 ms
This traffic also flows over the tunnel,
and on the Transit Router I'm able to ping the private addresses of both Router-1 and Router-2 after adding the routing entries:
Transit Router:
[root@vmc-centos conf]# ping 10.2.32.1
PING 10.2.32.1 (10.2.32.1) 56(84) bytes of data.
64 bytes from 10.2.32.1: icmp_seq=1 ttl=64 time=0.589 ms
64 bytes from 10.2.32.1: icmp_seq=2 ttl=64 time=0.380 ms
64 bytes from 10.2.32.1: icmp_seq=3 ttl=64 time=0.383 ms
Router-1:
worker]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:54:36.684864 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 34, length 64
04:54:36.684951 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.2.32.1 > 10.0.0.2: ICMP echo reply, id 20445, seq 34, length 64
04:54:37.684776 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 35, length 64
Transit Router:
[root@vmc-centos conf]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.
64 bytes from 10.4.32.1: icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from 10.4.32.1: icmp_seq=2 ttl=64 time=0.325 ms
64 bytes from 10.4.32.1: icmp_seq=3 ttl=64 time=0.354 ms
Router-2:
worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:56:57.549823 IP 10.206.83.3 > 10.206.86.199: GREv0, length 88: IP 11.0.0.2 > 10.4.32.1: ICMP echo request, id 20690, seq 24, length 64
04:56:57.549896 IP 10.206.86.199 > 10.206.83.3: GREv0, length 88: IP 10.4.32.1 > 11.0.0.2: ICMP echo reply, id 20690, seq 24, length 64
But now, when I try to reach the private network of Router-2 (10.4.32.1) from Router-1, the packets reach the Transit Router but are not forwarded from there to Router-2:
Router-1:
worker]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.
Transit Router:
[root@vmc-centos conf]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
04:59:06.382024 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 40, length 64
04:59:07.382007 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 41, length 64
Router-2:
[root@wdc-10-206-86-199 worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
IP forwarding is enabled on all the machines:
[root@vmc-centos conf]# sudo sysctl -p
net.ipv4.ip_forward = 1
iptables on transit router:
[root@vmc-centos ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT gre -- anywhere anywhere
ACCEPT gre -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT gre -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Note: I have tried this before and the packets were reaching the other private network. Now I'm trying on another setup, so there must be some config I'm missing.
Got the answer here:
https://serverfault.com/questions/1010565/packets-not-getting-forwarded-on-centos7-between-gre-tunnels
The Docker daemon seems to be running on the forwarding machine. By default, to isolate containers on different bridges from each other and from the host machine, Docker installs a default DROP policy on the FORWARD chain in iptables. There is a setting in the Docker daemon to not do this: set iptables to false in /etc/docker/daemon.json. See Docker and iptables.
If you change the default policy to ACCEPT, that will work:
iptables --policy FORWARD ACCEPT
BUT, when you (or a package upgrade of Docker, or a reboot) restart the Docker daemon, the default policy will change back to DROP if you didn't change the Docker daemon setting.
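If you go the daemon-setting route instead, a minimal sketch of /etc/docker/daemon.json (assuming no other daemon options are already configured in that file) would be:
{
  "iptables": false
}
followed by a restart of the Docker daemon (e.g. systemctl restart docker). Note that with iptables handling disabled, Docker no longer sets up the NAT rules it would otherwise create for published ports, so you take over that responsibility yourself.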

How can I determine which is my IP address on a different network?

I'm trying to launch an SNMP query from a pod deployed in an Azure cloud to an internal host on my company's network. The snmpget queries work fine from the pod to, say, a public SNMP server, but the query to my target host results in:
root@status-tanner-api-86557c6786-wpvdx:/home/status-tanner-api/poller# snmpget -c public -v 2c 192.168.118.23 1.3.6.1.2.1.1.1.0
Timeout: No Response from 192.168.118.23.
An Nmap scan shows that the SNMP port is open|filtered:
Nmap scan report for 192.168.118.23
Host is up (0.16s latency).
PORT STATE SERVICE
161/udp open|filtered snmp
I requested a new rule to allow 161/UDP from my pod, but I suspect that I requested the rule for the wrong IP address.
My theory is that I could determine the IP address my pod uses to reach this target host if I could get inside the target host, open a connection from the pod, and use netstat to see which IP address the pod is coming from. The problem is that I currently have no access to this host.
So, my question is: how can I see from which address my pod is reaching the target host? Some sort of public address is obviously being used, but I can't tell which one it is without getting into the target host.
I'm pretty sure I'm missing an important network tool that should help me in this situation. Any suggestion would be profoundly appreciated.
By default Kubernetes will use your node IP to reach the other servers, so you need to make a firewall rule using your node IP.
I've tested using a busybox pod to reach another server in my network.
Here is my lab-1 node, with IP 10.128.0.62:
rabello@lab-1:~$ ip ad | grep ens4 | grep inet
inet 10.128.0.62/32 scope global dynamic ens4
On this node I have a busybox pod with the IP 192.168.251.219:
$ kubectl exec -it busybox sh
/ # ip ad | grep eth0 | grep inet
inet 192.168.251.219/32 scope global eth0
When performing a ping test to another server in the network (server-1), we have:
/ # ping 10.128.0.61
PING 10.128.0.61 (10.128.0.61): 56 data bytes
64 bytes from 10.128.0.61: seq=0 ttl=63 time=1.478 ms
64 bytes from 10.128.0.61: seq=1 ttl=63 time=0.337 ms
^C
--- 10.128.0.61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.337/0.907/1.478 ms
Using tcpdump on server-1, we can see the ping requests from my pod arriving with the node IP from lab-1:
rabello#server-1:~$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:16:09.291714 IP 10.128.0.62 > 10.128.0.61: ICMP echo request, id 6230, seq 0, length 64
10:16:09.291775 IP 10.128.0.61 > 10.128.0.62: ICMP echo reply, id 6230, seq 0, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Make sure you have an appropriate firewall rule to allow your node (or your VPC range) to reach your destination, and check that your VPN is up (if you have one).
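If you just need the addresses to put in the firewall request, a quick sketch using standard kubectl commands (the pod and node names are placeholders):
kubectl get pod <your-pod> -o wide   # shows the pod IP and the node it is scheduled on
kubectl get node <that-node> -o wide # shows the node's internal/external IPs
With the default SNAT behaviour described above, the node's IP (or the VPC/NAT range it egresses through) is what the target host sees as the source address.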
I hope it helps! =)

iptables rule to block telnet packets on a particular active port but not other types of packets

On my Linux device, port 5080 is open and accepts all packets. I want to block only telnet packets.
I tried to add an iptables rule, but I am not sure how to match telnet packets specifically: a rule matching protocol TCP blocks all TCP packets, telnet included, whereas I want to block only telnet.
iptables -I INPUT 1 -p tcp --dport 5080 -j DROP
I don't think you can do that with iptables.
You'll need some application-layer-aware IPS (maybe Suricata?).

How do I make my Docker container communicate with another node through a 2nd interface?

I am struggling to perform a pathetic test that involves communication between a server on the sandbox01 network and a Docker container running on my "Docker Host" server (this machine is in the same subnet as the other nodes in the sandbox01 network, i.e. it has an interface called ens34 on the 10.* range; it also has an eth0 interface on the 9.* network, which allows it to reach the outside world: downloading packages, Docker images, etc.).
Anyway, here is a little diagram to illustrate what I have:
The problem:
Cannot communicate between a node in sandbox01 subnet (10.* network) and the container.
e.g., someserver.sandbox01 → mydocker2 : ens34 :: docker0 :: vethXXX → container
The communication only works when I stop iptables, which makes things really mysterious!!! Just wondering if you have faced any similar issues... any ideas would be extremely appreciated.
The mystery:
After many tests, it was confirmed that the container can't communicate with any other node in the 10.* network. It doesn't behave as expected: it was supposed to send its response through its gateway, docker0 (172.17.0.1), and find its way through the routing table on the Docker host to reach "someserver.sandbox01" (10.1.21.59).
It only works when we let it process the MASQUERADE in iptables. However, Docker automatically adds this rule: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
Note the "! -o docker0" there: so Docker doesn't want us to masquerade the IP addresses that are sending requests??? This is messing up the communication somehow...
The container responds OK to any communication coming in through the 9.* IP (eth0) -- i.e., I can send requests from my laptop -- but never through the 10.* (ens34). If I run a terminal within the container, the container can ping ALL the IP addresses leveraging all the mapped routes, EXCEPT, EXCEPT!!! the IP addresses in the 10.* range. Why??????
[root@mydocker2 my-nc-server]# docker run -it -p 8080:8080 --name nc-server nc-server /bin/sh
sh-4.2# ping 9.83.90.55
PING 9.83.92.20 (9.83.90.55) 56(84) bytes of data.
64 bytes from 9.83.90.55: icmp_seq=1 ttl=117 time=124 ms
64 bytes from 9.83.90.55: icmp_seq=2 ttl=117 time=170 ms
^C
--- 9.83.90.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 124.422/147.465/170.509/23.046 ms
sh-4.2# ping 9.32.145.98
PING 9.32.148.67 (9.32.145.98) 56(84) bytes of data.
64 bytes from 9.32.145.98: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 9.32.145.98: icmp_seq=2 ttl=63 time=0.837 ms
^C
--- 9.32.145.98 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.837/1.104/1.372/0.269 ms
sh-4.2# ping 10.1.21.5
PING 10.1.21.5 (10.1.21.5) 56(84) bytes of data.
^C
--- 10.1.21.5 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
sh-4.2# ping 10.1.21.60
PING 10.1.21.60 (10.1.21.60) 56(84) bytes of data.
^C
--- 10.1.21.60 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
For some reason, this interface here doesn't play well with Docker:
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.21.18 netmask 255.255.255.0 broadcast 10.1.21.255
Could this be related to the fact that eth0 is the primary NIC for this Docker host?
The workaround:
In mydocker2 we need to stop iptables and add a new sub-interface under ens34 →
service iptables stop
ifconfig ens34:0 10.171.171.171 netmask 255.255.255.0
And in someserver.sandbox01 we need to add a new route →
route add -net 10.171.171.0 netmask 255.255.255.0 gw 10.1.21.18
Then the communication between them works. I know... bizarre, right?
In case any of you wants to ask: no, I don't want to use the "--net=host" option to replicate the interfaces from the Docker host into my container.
So, thoughts? Suggestions? Ideas?
SOLVED!!!
Inside /etc/sysconfig/network-scripts, there were 2 files:
route-ens34 and rule-ens34.
If you remove those and restart the network, it should start working.
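For reference, a rough sketch of that fix on CentOS 7 (moving the files aside rather than deleting them, so they can be restored; the paths are the ones from this answer):
cd /etc/sysconfig/network-scripts
mkdir -p /root/network-scripts-backup
mv route-ens34 rule-ens34 /root/network-scripts-backup/
systemctl restart network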
Cheers!

ufw firewall DROP icmp from ALL to a specific IP address on my server

My server runs apache2 on a single interface with multiple IPs on sub-interfaces. I want to reject ICMP echo requests from all external IPs to a specific IP assigned to one of my sub-interfaces, but allow ICMP to the other IPs on the same interface:
eth2 - 10.128.20.252
eth2:1 - 10.128.20.11
eth2:2 - 10.128.20.12
eth2:3 - 10.128.20.13 <-want to block icmp to only this IP
eth2:4 - 10.128.20.14
Edit the /etc/ufw/before.rules file and change the IP to suit your needs. Then do:
# ufw reload
Put this before the "ok icmp codes" section; it works:
# drop icmp to a specific IP
-A ufw-before-input -p icmp --icmp-type echo-request -d 10.128.20.13 -j REJECT
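For context, this is roughly how that part of /etc/ufw/before.rules could end up looking (the ACCEPT lines are the stock ICMP rules ufw ships with, shown abbreviated; the exact stock contents vary by ufw version, and only the REJECT line is added):
# drop icmp to a specific IP
-A ufw-before-input -p icmp --icmp-type echo-request -d 10.128.20.13 -j REJECT
# ok icmp codes for INPUT
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
Because the per-IP REJECT comes first, 10.128.20.13 stops answering pings while the other sub-interface IPs still match the later echo-request ACCEPT. Reload with ufw reload afterwards.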
