PXEBOOT, TFTPD-HPA and Firewall

I have set up a PXE boot server which basically works fine; I can boot any configured Linux image.
Then I enabled the firewall and opened UDP port 69 for TFTP:
~# iptables -L |grep tftp
ACCEPT udp -- anywhere anywhere udp dpt:tftp
ACCEPT udp -- anywhere anywhere udp dpt:tftp
~# netstat -tulp|grep tftp
udp 0 0 0.0.0.0:tftp 0.0.0.0:* 15869/in.tftpd
udp6 0 0 [::]:tftp [::]:* 15869/in.tftpd
~# cat /etc/services|grep tftp
tftp 69/udp
Now I get a timeout when the PXE boot pulls tftp://192.168.0.220/images/pxelinux.0 (rc = 4c126035).
The "anywhere" match is fine here for now, as another firewall between the PXE server and the router blocks everything unwanted from/to the WAN.
The odd part is that tcpdump shows the request arriving on the PXE boot server:
~# tcpdump port 69
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:00:47.062723 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:47.415412 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:48.184506 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:49.722630 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
14:00:52.798136 IP 192.168.0.136.1024 > mittelerde.tftp: 47 RRQ "images/pxelinux.0" octet blksize 1432 tsize 0
Once I stop the firewall service, PXE boot works fine again. Of course the conntrack modules are loaded:
~# lsmod|grep conntrack
nf_conntrack_tftp 16384 0
nf_conntrack_ftp 20480 0
xt_conntrack 16384 4
nf_conntrack_ipv4 16384 20
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 131072 9 xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4,nf_nat,nf_conntrack_tftp,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_ftp
libcrc32c 16384 2 nf_conntrack,nf_nat
x_tables 40960 8 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_MASQUERADE,xt_nat,xt_comment,ip_tables
What am I missing here?

Problem solved. For tftpd-hpa the following UDP ports must be open as well:
1024
49152:49182
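A minimal sketch of the corresponding rules, in the same plain ACCEPT style as the tftp rule above (chain placement and ordering are my own illustration, not from the original post):
~# iptables -A INPUT -p udp --dport 1024 -j ACCEPT
~# iptables -A INPUT -p udp --dport 49152:49182 -j ACCEPT
A likely explanation, though not confirmed in the original post: on recent kernels conntrack helpers are no longer assigned automatically, so merely loading nf_conntrack_tftp is not enough. Instead of opening the data ports, the helper can be attached explicitly with iptables -t raw -A PREROUTING -p udp --dport 69 -j CT --helper tftp (or, on kernels that still support it, automatic assignment can be re-enabled via sysctl -w net.netfilter.nf_conntrack_helper=1).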

Related

Linux: who is listening on tcp port 22?

I have an AST2600 EVB board. After power-on (with RJ45 connected), it boots into an OpenBMC kernel. From the serial port, using the ip command, I can obtain its IP address. From my laptop, I can ssh into the board using the account root/0penBmc:
bruin@gen81:/$ ssh root@192.168.6.132
root@192.168.6.132's password:
Then I want to find out which TCP ports are open. As there are no ss/lsof/netstat utilities, I cat /proc/net/tcp:
root@AMIfa7ba648f62e:/proc/net# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:14EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9565 1 0c202562 100 0 0 10 0
1: 3500007F:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9571 1 963c8114 100 0 0 10 0
The strange thing that puzzles me is that TCP port 22 is not listed in /proc/net/tcp, which suggests that no process is listening on it. If that is true, how is the ssh connection established?
Btw, as tested using ps, it's the dropbear process that handles the ssh connection, and dropbear is spawned dynamically (i.e., if there is no ssh connection, no such process exists; if I make two ssh connections, two dropbear processes are spawned).
PS: as suggested by John in his reply, I added the ss utility to the image, and it shows what I expected:
root@AMI8287361b9c6f:~# ss -antp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 0.0.0.0:5355 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=12))
LISTEN 0 0 127.0.0.1:5900 0.0.0.0:* users:(("obmc-ikvm",pid=314,fd=5))
LISTEN 0 0 127.0.0.53:53 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=17))
LISTEN 0 0 *:443 *:* users:(("bmcweb",pid=325,fd=3),("systemd",pid=1,fd=41))
LISTEN 0 0 *:5355 *:* users:(("systemd-resolve",pid=239,fd=14))
LISTEN 0 0 *:5900 *:* users:(("obmc-ikvm",pid=314,fd=6))
LISTEN 0 0 *:22 *:* users:(("systemd",pid=1,fd=49))
LISTEN 0 0 *:2200 *:* users:(("systemd",pid=1,fd=50))
ESTAB 0 0 [::ffff:192.168.6.89]:22 [::ffff:192.168.6.98]:34906 users:(("dropbear",pid=485,fd=2),("dropbear",pid=485,fd=1),("dropbear",pid=485,fd=0),("systemd",pid=1,fd=20))
Good question.
First, it is pretty straightforward to add common tools/utilities to an image.
They can be added (for local testing only) by adding the line
OBMC_IMAGE_EXTRA_INSTALL:append = " iproute2 iproute2-ss"
to the https://github.com/openbmc/openbmc/blob/master/meta-aspeed/conf/machine/evb-ast2600.conf file (or to your own testing/development layer). Adding useful tools is often worth it.
Second, if you are using IPv6 you will need to check /proc/net/tcp6.
Third, you can also look for a port by looking up the pid of your application with ps | grep <application name>, then reading the ports used by that pid with cat /proc/<pid>/net/tcp.
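For example (an illustration of my own, not from the original answer): the port numbers in those /proc files are hexadecimal, so the two listeners shown in the question can be decoded with a plain shell printf:
$ printf '%d\n' 0x14EB 0x0035
5355
53
which matches the 5355 (LLMNR) and 53 (DNS) systemd-resolve listeners in the ss output above, and confirms that port 22 really is absent from the IPv4 table.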
Last, if you have any more questions or these steps don't work, please reach out to us on Discord https://discord.com/invite/69Km47zH98 or by email https://lists.ozlabs.org/listinfo/openbmc (those are the preferred places to ask questions).

UDP packets received on veth, caught by tcpdump, accepted by iptables, but not forwarded to netcat

I have two namespaces, srv1 and srv2, interconnected via a softswitch (P4 bmv2) with veth pairs. The softswitch just does simple forwarding. The veth interfaces inside the namespaces have IP addresses assigned to them (192.168.1.1 and 192.168.1.2 respectively). I can ping between the two namespaces using those IP addresses:
sudo ip netns exec srv1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=1.04 ms
but when I try netcat I don't receive messages on the server side:
client:
sudo ip netns exec srv1 netcat 192.168.1.2 80 -u
hello!
server:
sudo ip netns exec srv2 netcat -l 80 -u
The interface receives the packets in the proper format. I verified with tcpdump in both namespaces and saw the packets being sent and received properly:
client:
sudo ip netns exec srv1 tcpdump -XXvv -i srv1p
[sudo] password for simo:
tcpdump: listening on srv1p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.088601 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y#.#.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
server:
sudo ip netns exec srv2 tcpdump -XXvv -i srv2p
tcpdump: listening on srv2p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.089232 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y#.#.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
I added iptables rules on srv2 to ACCEPT UDP packets on port 80 and to LOG them:
sudo ip netns exec srv2 iptables -t filter -A INPUT -p udp --dport 80 -j ACCEPT
sudo ip netns exec srv2 iptables -I INPUT -p udp --dport 80 -j LOG --log-prefix " IPTABLES " --log-level=debug
I can see the counters increasing on those entries and the packets being logged in /var/log/kern.log, but the message never reaches netcat's listening socket.
sudo ip netns exec srv2 iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1 33 LOG udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 LOG flags 0 level 7 prefix " IPTABLES "
4 133 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
kernel logs:
kernel: [581970.306032] IPTABLES IN=srv2p OUT= MAC=00:aa:bb:cc:dd:02:00:aa:bb:cc:dd:01:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=33 TOS=0x00 PREC=0x00 TTL=64 ID=51034 DF PROTO=UDP SPT=48784 DPT=80 LEN=13
When I replace the softswitch with a bridge, netcat works. I thought maybe the softswitch corrupts the packets, but tcpdump shows the right format. The UDP checksum is not correct, but it is generated like that by the source server, and it is the same when using the Linux bridge anyway, yet it works in that case. Is there a way to know the reason those packets do not reach the netcat server?
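One way to check whether the kernel is discarding these datagrams at the UDP layer (a suggestion of my own, not from the original thread) is to watch the per-namespace UDP counters before and after sending:
sudo ip netns exec srv2 cat /proc/net/snmp | grep Udp:
If InErrors (or InCsumErrors, on kernels that expose it) increases with each send while InDatagrams does not, the packets are being dropped for the bad checksum after iptables has accepted them.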

Send raw IP packet with tun device

I'm trying to programmatically construct and send an IP packet through a TUN device.
I've set up the TUN device and proper routes:
# ip tuntap add mode tun tun0
# ip link set tun0 up
# ip addr add 10.0.0.2/24 dev tun0
which results in:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 600 0 0 wlp3s0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tun0
192.168.0.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp3s0
$ ifconfig tun0
tun0: flags=4241<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1500
inet 10.0.0.2 netmask 255.255.255.0 destination 10.0.0.2
inet6 fe80::f834:5267:3a1:5d1d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
IP forwarding is ON: # echo 1 > /proc/sys/net/ipv4/ip_forward
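An equivalent way to enable it (my own note, not part of the original question) is via sysctl, which can also be made persistent in /etc/sysctl.conf:
# sysctl -w net.ipv4.ip_forward=1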
I've setup NAT for tun0 packets:
# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wlp3s0 -j MASQUERADE
# iptables -t nat -L -v
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp3s0 10.0.0.0/24 anywhere
Then I have a Python script that produces ICMP packets:
import os
from fcntl import ioctl
import struct
import time
import random
# pip install pypacker==4.0
from pypacker.layer3.ip import IP
from pypacker.layer3.icmp import ICMP
TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000
ftun = os.open("/dev/net/tun", os.O_RDWR)
ioctl(ftun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))
req_nr = 1
req_id = random.randint(1, 65000)
while True:
    icmp_req = IP(src_s="10.0.0.2", dst_s="8.8.8.8", p=1) +\
        ICMP(type=8) +\
        ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
    os.write(ftun, icmp_req.bin())
    time.sleep(1)
    req_nr += 1
I can see packets originating from tun0 interface:
# tshark -i tun0
1 0.000000000 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=1/256, ttl=64
2 1.001695939 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=2/512, ttl=64
3 2.003375319 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=3/768, ttl=6
But the wlp3s0 interface is silent, so it seems the packets don't get NATed and routed out via wlp3s0, which is my WLAN card.
Any ideas what I am missing?
I'm running Debian 9.
It turns out that forwarding was being blocked: the default policy of the FORWARD chain is DROP:
# iptables -L -v
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
So I changed the policy:
# iptables -P FORWARD ACCEPT
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
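If you prefer not to open the FORWARD chain completely, a narrower pair of rules should also work (a sketch of my own, assuming the tun0/wlp3s0 names from above):
# iptables -A FORWARD -i tun0 -o wlp3s0 -j ACCEPT
# iptables -A FORWARD -i wlp3s0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT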
Also, I had to change the IP packet source address to something other than 10.0.0.2, which is the preferred source address for the tun0 interface:
$ ip route
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.2
So I changed the packet source address to 10.0.0.4:
icmp_req = IP(src_s="10.0.0.4", dst_s="8.8.8.8", p=1) +\
    ICMP(type=8) +\
    ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
Then the kernel started forwarding packets coming from the tun0 interface to the gateway interface:
# tshark -i wlp3s0
5 0.008428567 192.168.0.103 → 8.8.8.8 ICMP 62 Echo (ping) request id=0xb5c7, seq=9/2304, ttl=63
6 0.041114028 8.8.8.8 → 192.168.0.103 ICMP 62 Echo (ping) reply id=0xb5c7, seq=9/2304, ttl=48 (request in 5)
Also ping responses were sent back to tun0:
# tshark -i tun0
1 0.000000000 10.0.0.4 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb5c7, seq=113/28928, ttl=64
2 0.035470191 8.8.8.8 → 10.0.0.4 ICMP 48 Echo (ping) reply id=0xb5c7, seq=113/28928, ttl=47 (request in 1)

Network port open, but no process attached?

When checking my server, I found some strange ports:
[root@server ~]# netstat -tulnp |grep "-"
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:33181 0.0.0.0:* LISTEN -
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp 0 0 0.0.0.0:33252 0.0.0.0:* -
No program is shown in the output of netstat -tulnp (run with root privileges).
How can I find out what is using these ports? How can I judge whether this is safe or not?
OS: CentOS 5.6 x86_64
Kernel: 2.6.18-238.el5 #1 SMP Thu Jan 13 15:51:15 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
update:
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 824 rquotad
100011 2 udp 824 rquotad
100011 1 tcp 827 rquotad
100011 2 tcp 827 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100021 1 udp 33252 nlockmgr
100021 3 udp 33252 nlockmgr
100021 4 udp 33252 nlockmgr
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100021 1 tcp 33181 nlockmgr
100021 3 tcp 33181 nlockmgr
100021 4 tcp 33181 nlockmgr
100005 1 udp 839 mountd
100005 1 tcp 842 mountd
100005 2 udp 839 mountd
100005 2 tcp 842 mountd
100005 3 udp 839 mountd
100005 3 tcp 842 mountd
These are likely to be RPC ports reserved by the portmapper. 2049 is a well-known port used by NFS. Your other ports are probably other RPC services; in this case the NFS-related services (nfs, nlockmgr) run inside the kernel rather than as userspace daemons, which is why netstat shows no PID/program name for them. To query the portmapper for a full list of services and their ports, use rpcinfo -p.
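To tie one of the mystery ports back to its RPC service, you can filter the rpcinfo output on the port column (an illustration of my own, using the numbers from the question):
# rpcinfo -p | awk '$4 == 33181'
100021 1 tcp 33181 nlockmgr
100021 3 tcp 33181 nlockmgr
100021 4 tcp 33181 nlockmgr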

No traffic when listening with netcat

I have a server with multiple network interfaces.
I'm trying to run a network monitoring tool in order to verify network traffic statistics, using the sFlow standard, on a router.
I receive my sFlow datagrams on port 5600 of the eth1 interface. I can see the generated traffic thanks to tcpdump:
user#lnssrv:~$ sudo tcpdump -i eth1
14:09:01.856499 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1456
14:09:02.047778 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1432
14:09:02.230895 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1300
14:09:02.340114 IP 198.51.100.253.5678 > 255.255.255.255.5678: UDP, length 111
14:09:02.385036 STP 802.1d, Config, Flags [none], bridge-id c01e.b4:a4:e3:0b:a6:00.8018, length 43
14:09:02.434658 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1392
14:09:02.634447 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
14:09:02.836015 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1364
14:09:03.059851 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1372
14:09:03.279067 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1356
14:09:03.518385 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
It all seems OK, but when I try to read the packets with netcat, it seems that no packets arrive:
nc -lu 5600
Indeed, neither sflowtool nor nprobe reads anything from port 5600.
Where am I going wrong?
nc -lu 5600 opens a socket on port 5600, meaning it will only dump packets that are received on that socket, i.e., packets addressed to that specific address and port.
tcpdump, on the other hand, captures all the traffic flowing on the interface, even traffic not sent to this specific server.
There are two possible causes of your problem here:
a) Your host IP is not 198.51.100.232.
With a host filter in tcpdump you will be able to see exactly the traffic addressed to your server,
for example: tcpdump -i eth1 host 198.51.100.232 and port 80
b) There is another server process already listening on UDP port 5600 that is grabbing all the data, so nothing is left over for the nc socket.
Note: tcpdump only shows the traffic on the wire; it cannot tell you whether anything is actually listening on a UDP port.
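Two quick checks for causes (a) and (b) above (my own suggestions, not from the original answer): confirm that 198.51.100.232 is actually configured on eth1, and look for another process already bound to UDP port 5600:
$ ip addr show eth1
$ sudo netstat -ulnp | grep 5600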
Not sure it can help here, but in my case it did (I had a similar problem): I simply stopped iptables with
service iptables stop.
tcpdump hooks in at a lower level than iptables, and iptables can stop datagrams from being passed up to the higher layers. There is a good article on this topic with a nice diagram.
