How to find the source process of DNS traffic?

Every day at a specific time we see the following DNS traffic in tcpdump:
03:10:01.188267 IP hostname.21355 > h.root-servers.net.domain: 52619% [1au] NS? . (28)
03:10:01.188294 IP hostname.19992 > e.root-servers.net.domain: 33364% [1au] NS? . (28)
03:10:01.564808 IP hostname.27167 > e.root-servers.net.domain: 17614% [1au] NS? . (28)
03:10:01.564845 IP hostname.47993 > h.root-servers.net.domain: 39462% [1au] NS? . (28)
03:10:01.941076 IP hostname.33760 > j.root-servers.net.domain: 56169% [1au] NS? . (28)
03:10:01.941446 IP hostname.7920 > h.root-servers.net.domain: 54000% [1au] NS? . (28)
03:10:02.317699 IP hostname.4292 > j.root-servers.net.domain: 11824% [1au] NS? . (28)
03:10:02.694087 IP hostname.55797 > c.root-servers.net.domain: 20468% [1au] NS? . (28)
03:10:02.694383 IP hostname.29552 > h.root-servers.net.domain: 62991% [1au] NS? . (28)
03:10:03.070598 IP hostname.42961 > c.root-servers.net.domain: 47966% [1au] NS? . (28)
03:10:03.447014 IP hostname.23176 > d.root-servers.net.domain: 61501% [1au] NS? . (28)
03:10:03.447366 IP hostname.14098 > b-2016.b.root-servers.net.domain: 24736% [1au] NS? . (28)
But we could not find the source process, even by running netstat -tulpne continuously in a while loop without sleep. I suppose the connection terminates immediately and hence is never captured by netstat; even the PID is active for only a fraction of a second.
Is there any way of finding the source process that initiates this connection?

These queries are called priming queries. DNS resolvers use them to refresh their list of root name servers and the servers' IP addresses. You likely have a DNS server, such as BIND or Unbound, running which uses these queries to keep its cache up to date.
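To actually catch the process, polling tools like netstat race against such short-lived PIDs; the audit subsystem is the reliable way. A minimal sketch (assuming Linux; the audit rules need root and a running auditd, and the key name dns-trace is made up here). For sockets that are still alive, the owning PID can also be recovered from /proc:

```shell
#!/bin/sh
# Two ways to attribute short-lived UDP traffic to a process (Linux only).

# Option 1 (robust, needs root and auditd): log connect()/sendto() syscalls
# around 03:10, then search the audit log for the executable:
#   auditctl -a always,exit -F arch=b64 -S connect -S sendto -k dns-trace
#   ausearch -k dns-trace --start today
#   auditctl -D        # remove the audit rules again afterwards

# Option 2 (racy, works only while the socket still exists): map a local UDP
# source port, e.g. 21355 from the tcpdump output above, to the owning PID.
find_udp_pid() {
    hexport=$(printf '%04X' "$1")   # /proc stores ports as 4-digit uppercase hex
    # field 2 of /proc/net/udp is local_address:port, field 10 the socket inode
    inode=$(awk -v p=":$hexport" '$2 ~ p "$" { print $10; exit }' /proc/net/udp 2>/dev/null)
    [ -n "$inode" ] || return 1
    # find the process whose fd table holds that socket inode
    for fd in /proc/[0-9]*/fd/*; do
        if [ "$(readlink "$fd" 2>/dev/null)" = "socket:[$inode]" ]; then
            pid=${fd#/proc/}
            echo "${pid%%/*}"
            return 0
        fi
    done
    return 1
}
```

Option 2 will usually lose the race against a resolver that opens a socket only for one query, which is why the auditd rules are the approach worth leaving armed overnight.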

Related

ip_forward doesn't work for TCP

I have one linux box with 2 interfaces, wlan0 and wlan1. wlan0 connects to an external wlan AP of subnet 192.168.43.x. On wlan1 a softap is running on subnet 192.168.60.x.
I have run the following commands to enable NAT and ip_forward:
iptables -F
iptables -F INPUT
iptables -F OUTPUT
iptables -F FORWARD
iptables -t nat -F
iptables -t mangle -F
iptables -A INPUT -j ACCEPT
iptables -A OUTPUT -j ACCEPT
iptables -A FORWARD -j ACCEPT
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
Another PC, an e6220, is connected to the 192.168.60.x soft AP. I want to forward the PC's traffic to the external network 192.168.43.x.
network graph:
external AP (192.168.43.1) <~~~> {wlan0(192.168.43.2), <-- forward --> internal AP on wlan1(192.168.60.1)} <~~~> e6220 (192.168.60.46)
After adding the route on the PC, ICMP and UDP work fine; both can reach 192.168.43.1. But TCP doesn't work!
user@e6220:~ $ ping 192.168.43.1
PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data.
64 bytes from 192.168.43.1: icmp_seq=1 ttl=63 time=9.74 ms
64 bytes from 192.168.43.1: icmp_seq=2 ttl=63 time=9.57 ms
^C
user@e6220:~ $ telnet 192.168.43.1
Trying 192.168.43.1...
telnet: Unable to connect to remote host: Network is unreachable
Capture on linux box wlan1:
17:10:57.579994 IP 192.168.60.46.56412 > 192.168.43.1.telnet: Flags [S], seq 1764532258, win 29200, options [mss 1460,sackOK,TS val 2283575078 ecr 0,nop,wscale 7], length 0
17:10:57.580075 IP 192.168.60.1 > 192.168.60.46: ICMP net 192.168.43.1 unreachable, length 68
No packets seen on wlan0.
It's strange that the kernel says unreachable, yet UDP works.
// route -n output
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.43.1 0.0.0.0 UG 0 0 0 wlan0
192.168.43.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
192.168.60.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan1
I think this may be kernel-version specific: on Linux 3.4.x both TCP and UDP work, while the issue is seen on Linux 3.10.61. I want to know if there's any way to fix it, via kernel configuration or an iptables command. Thanks!
Finally I found the solution.
The routing box is built with CONFIG_IP_MULTIPLE_TABLES, i.e. policy routing. The routing rules are:
# ip rule list
0: from all lookup local
10000: from all fwmark 0xc0000/0xd0000 lookup legacy_system
13000: from all fwmark 0x10063/0x1ffff lookup local_network
15000: from all fwmark 0x0/0x10000 lookup legacy_system
16000: from all fwmark 0x0/0x10000 lookup legacy_network
17000: from all fwmark 0x0/0x10000 lookup local_network
23000: from all fwmark 0x0/0xffff uidrange 0-0 lookup main
32000: from all unreachable
So every forwarded packet falls through to "unreachable". TCP works after adding the following rules:
# ip rule add iif wlan0 table main
# ip rule add iif wlan1 table main
# ip rule list
0: from all lookup local
9998: from all iif wlan0 lookup main
9999: from all iif wlan1 lookup main
10000: from all fwmark 0xc0000/0xd0000 lookup legacy_system
13000: from all fwmark 0x10063/0x1ffff lookup local_network
15000: from all fwmark 0x0/0x10000 lookup legacy_system
16000: from all fwmark 0x0/0x10000 lookup legacy_network
17000: from all fwmark 0x0/0x10000 lookup local_network
23000: from all fwmark 0x0/0xffff uidrange 0-0 lookup main
32000: from all unreachable
But I still don't know why UDP worked before these two rules were added.
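With policy routing in play, the routing decision for a forwarded packet can be inspected directly. A diagnostic sketch, assuming the interfaces and addresses from the question above:

```shell
# Ask the kernel which route (and thus which table) a forwarded packet would
# use, simulating a packet from the e6220 (192.168.60.46) arriving on wlan1:
ip route get 192.168.43.1 from 192.168.60.46 iif wlan1
# Before the two extra rules this lookup should end at the "unreachable"
# rule; afterwards it should resolve via the main table and egress wlan0.
```

Running this before and after `ip rule add iif ... table main` makes the effect of each rule visible without having to generate real traffic from the client.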

ufw firewall DROP icmp from ALL to a specific IP address on my server

My server runs apache2 on a single interface with multiple IPs on sub-interfaces. I want to reject ICMP echo requests from all external IPs to one specific IP assigned to a sub-interface, but allow ICMP to the other IPs on the same interface:
eth2 - 10.128.20.252
eth2:1 - 10.128.20.11
eth2:2 - 10.128.20.12
eth2:3 - 10.128.20.13 <-want to block icmp to only this IP
eth2:4 - 10.128.20.14
Edit the /etc/ufw/before.rules file and change the IP to suit your needs. Then run:
# ufw reload
Put this before the "ok icmp codes" section:
# drop icmp echo requests to a specific IP
-A ufw-before-input -p icmp --icmp-type echo-request -d 10.128.20.13 -j REJECT

No traffic when listening with netcat

I have a server with multiple network interfaces.
I'm trying to run a network monitoring tool to check network traffic statistics, using the sFlow standard, on a router.
I receive sFlow datagrams on port 5600 of the eth1 interface. I can see the generated traffic with tcpdump:
user#lnssrv:~$ sudo tcpdump -i eth1
14:09:01.856499 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1456
14:09:02.047778 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1432
14:09:02.230895 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1300
14:09:02.340114 IP 198.51.100.253.5678 > 255.255.255.255.5678: UDP, length 111
14:09:02.385036 STP 802.1d, Config, Flags [none], bridge-id c01e.b4:a4:e3:0b:a6:00.8018, length 43
14:09:02.434658 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1392
14:09:02.634447 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
14:09:02.836015 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1364
14:09:03.059851 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1372
14:09:03.279067 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1356
14:09:03.518385 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
It all seems OK, but when I try to read the packets with netcat, it seems that no packets arrive:
nc -lu 5600
Indeed, neither sflowtool nor nprobe reads anything from port 5600.
What am I doing wrong?
nc -lu 5600 opens a socket on port 5600, meaning it will only dump packets that are delivered to that socket, i.e. packets addressed to that specific address and port.
tcpdump, on the other hand, captures all the traffic flowing on the interface, even traffic not addressed to this host.
Two possible causes of your problem here:
a) Your host's IP is not 198.51.100.232.
With a host filter you can see exactly the traffic addressed to your server, for example: tcpdump -i eth1 host 198.51.100.232 and port 5600
b) There is another process already listening on UDP port 5600 that is grabbing all the data, so nothing is left over for the nc socket.
Note: tcpdump only captures packets off the wire; it does not bind to or listen on UDP ports, so it cannot tell you whether anything is listening.
Not sure that it can help here, but in my case it did (I had a similar problem): I simply stopped iptables with
service iptables stop
tcpdump works at a lower level than iptables, and iptables can stop datagrams from being passed up to higher layers. There is a good article on this topic with a nice picture.

Shorewall: How do I enable ping to a specific IP address where the zone containing that IP has ping disabled

I'm using Shorewall.
How do I enable ping to a specific IP address where the zone containing that IP has ping disabled?
In /etc/shorewall/rules:
Suppose you have a zone dmz, and in that zone an external IP address of 67.89.164.199 and an internal IP of 10.0.99.10; that is, the external IP DNATs to 10.0.99.10.
The dmz zone doesn't allow pings from the outside.
Here's what to do:
1) DNAT net dmz:10.0.99.10 icmp 8 - 67.89.164.199
or in other words, send all pings ("icmp 8") that come to 67.89.164.199 on to 10.0.99.10
2) Ping/ACCEPT net dmz:10.0.99.10
i.e. accept Ping
Allow a single IP using:
iptables -A INPUT -s x.x.x.x -p ICMP --icmp-type 8 -j ACCEPT
Reference http://www.trickylinux.net/ping-single-ip-linux/

How do I route a packet to use localhost as a gateway?

I'm trying to test a gateway I wrote (see What's the easiest way to test a gateway? for context). Due to an issue I'd rather not get into, the gateway and "sender" have to be on the same machine. I have a receiver (let's say 9.9.9.9) which the gateway is able to reach.
So I'll run an application ./sendStuff 9.9.9.9 which will send some packets to that IP address.
The problem is: how do I get the packets destined for 9.9.9.9 to go to the gateway on localhost? I've tried:
sudo route add -host 9.9.9.9 gw 127.0.0.1 lo
sudo route add -host 9.9.9.9 gw <machine's external IP address> eth0
but neither of those passes packets through the gateway. I've verified that the correct IPs are present with route -n. What can I do?
Per request, here is the route table after running the second command (IP addresses changed to match the question; x.y.z.t is the IP of the machine I'm running this on):
Destination Gateway Genmask Flags Metric Ref Use Iface
9.9.9.9 x.y.z.t 255.255.255.255 UGH 0 0 0 eth0
x.y.z.0 0.0.0.0 255.255.254.0 U 0 0 0 eth0
0.0.0.0 <gateway addr> 0.0.0.0 UG 100 0 0 eth0
127.0.0.1 is probably picking up the packets, then forwarding them on their merry way if ipv4 packet forwarding is enabled. If it's not enabled, it will drop them.
If you are trying to forward packets destined to 9.9.9.9 to 127.0.0.1, look into iptables.
Edit: try this:
iptables -t nat -A OUTPUT -d 9.9.9.9 -j DNAT --to-destination 127.0.0.1
That should redirect all traffic to 9.9.9.9 to localhost instead.
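If DNAT proves too limiting, another commonly used way to make a locally running gateway genuinely forward a local sender's packets is a network namespace. A sketch, assuming root and iproute2; the names sender, veth0/veth1 and the 10.200.0.0/24 subnet are made up here:

```shell
# Put the sender in its own network namespace, connected to the host by a
# veth pair; its default route then really points at the gateway's side.
ip netns add sender
ip link add veth0 type veth peer name veth1
ip link set veth1 netns sender
ip addr add 10.200.0.1/24 dev veth0
ip link set veth0 up
ip netns exec sender ip addr add 10.200.0.2/24 dev veth1
ip netns exec sender ip link set veth1 up
ip netns exec sender ip route add default via 10.200.0.1
# The gateway under test runs on the host at 10.200.0.1 and now forwards
# the sender's traffic for real:
ip netns exec sender ./sendStuff 9.9.9.9
```

Unlike the routing-table tricks above, this avoids the kernel's special-casing of 127.0.0.1 and local addresses entirely, because from the sender's point of view the gateway is a separate next hop.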