I know I can use tc and netem to do
tc qdisc add dev eth0 root netem loss 50%
This would drop 50% of all packets on eth0. However, I would like to specify a protocol (UDP, TCP, etc.) so that only packets of that protocol are dropped.
This is an annoying feature of iptables. Even though the docs say that DROP silently drops the packet on the floor, it still tells the calling program, causing sendmsg() (or whatever) to return with errno set to ENETUNREACH or EPERM. There doesn't seem to be a feature for "no, really silently drop the packet and don't tell anyone about it".
I have found the following workaround, however: if the packets are going to be leaving your local machine, you can set the TTL to 0 in the mangle table:
iptables -t mangle -A SomeChain -m ttl -j TTL --ttl-gt 0 --ttl-set 0
I have successfully used this to deal with a DoS reflection attack.
Use iptables instead - it has a probability option that should allow you to do this, for example:
iptables -A INPUT -m statistic -p tcp --mode random --probability 0.5 -j DROP
Adjust the various values to match the desired traffic/direction/probability.
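For example, to restrict the random drop to a single protocol as the question asks, you can combine the statistic match with a protocol match (a sketch; the chain, protocol and probability are just placeholders):
# Drop roughly 50% of incoming UDP packets only
iptables -A INPUT -p udp -m statistic --mode random --probability 0.5 -j DROP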
I've been using /sys/class/net/eno1/statistics/rx_bytes and tx_bytes to gather stats on my network interface. The trouble is, that network has a device (a Silicon Dust HDHOMERUN HDTV tuner) which constantly streams UDP packets at a very high rate that I don't want to monitor. I'd like to remove that traffic from the monitor - perhaps by only looking at TCP packets.
Is there any way to separate out the TCP and UDP stats?
netstat -st gives some info but it's somewhat cryptic - just how big is a 'segment'? The MTU? The man page is silent on that.
$ netstat -st | grep 'segments received'
25449056 segments received
1683 bad segments received
$ netstat -st | grep 'segments sent out'
37860139 segments sent out
Based on this answer from serverfault. If you are using iptables you can add a rule to each of the INPUT and OUTPUT chains that counts every packet carrying a TCP payload. You may need to invoke every iptables command with sudo.
Create the rules:
# Match all TCP-carrying packets incoming on 'eno1' iface
iptables -I INPUT -i eno1 -p tcp
# Match all TCP-carrying packets outgoing through 'eno1' iface
iptables -I OUTPUT -o eno1 -p tcp
Afterwards, you can use iptables -nvxL INPUT or OUTPUT to be presented with the number of bytes processed by the rule:
Chain INPUT (policy ACCEPT 9387 packets, 7868103 bytes)
pkts bytes target prot opt in out source destination
10582 9874623 tcp -- eno1 * 0.0.0.0/0 0.0.0.0/0
In case you already have other rules defined, it might be handy to create a separate chain entirely. This is also described in the answer I referenced, though you also want the -i and -o options in the input/output chains respectively; these allow you to filter on a single interface (use -i for INPUT and -o for OUTPUT).
iptables -N count_in # create custom chain named 'count_in'
iptables -A count_in -j RETURN # append RETURN action to chain 'count_in'
iptables -I INPUT -j count_in # insert chain at the top of chain INPUT
iptables -I count_in 1 -i eno1 -p tcp # insert rule that matches all tcp packets on eno1
# and has no action (does nothing)
iptables -nvxL count_in # list chain 'count_in' rules
The "bytes" counter counts whole IP packets, so it includes the IP header as well as the TCP segment bytes, but it is still probably the closest metric to what you want to measure (TCP-only rx/tx bytes).
Additionally, keep in mind that rules defined with iptables are not saved automatically and will be lost on a system reboot. To restore them on every reboot you can use the iptables-save and iptables-restore commands. For details of their usage, see your Linux distro's documentation as well as the iptables manual.
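A minimal sketch, assuming a Debian/Ubuntu-style layout where the ruleset is kept under /etc/iptables (the exact path and boot integration vary by distro):
# Dump the current ruleset to a file
sudo iptables-save > /etc/iptables/rules.v4
# Load it back later, e.g. from a boot script or via the iptables-persistent package
sudo iptables-restore < /etc/iptables/rules.v4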
Finally, AFAIK iptables is considered legacy by now and is slowly being replaced by nftables. I myself still have iptables installed on my system by default. If you want to switch, or are already using nftables, you need to translate the above commands to the syntax supported by the nft command. There is a utility called iptables-translate which may help with this. Its purpose is to translate old iptables commands into equivalent nft commands. I mention this mostly for the sake of completeness; you should be just fine using iptables for your particular task if you have it installed.
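As a rough illustration, iptables-translate takes an ordinary iptables command line and prints the equivalent nft command (the exact output depends on your version), e.g. for the INPUT counting rule above:
$ iptables-translate -I INPUT -i eno1 -p tcp
nft insert rule ip filter INPUT iifname "eno1" ip protocol tcp counter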
You can use iptraf-ng.
Install with:
sudo apt install iptraf-ng
This will give you statistics per protocol (IPv4/IPv6/TCP/UDP/ICMP/...) on a specific interface:
sudo iptraf-ng -d eth0
You can also use this to have details per ports:
sudo iptraf-ng -s eth0
In Cisco routers it seems to be possible to change the NAT translation timeout for DNS separately from other UDP traffic.
When port translation is configured, there is finer control over translation entry timeouts, because each entry contains more context about the traffic using it. Non-DNS UDP translations time out after 5 minutes; DNS times out in 1 minute. TCP translations time out after 24 hours, unless a RST or FIN is seen on the stream, in which case it times out in 1 minute.
from: https://community.cisco.com/t5/networking-documents/quot-ip-nat-translation-timeout-quot-command/ta-p/3137012
How can I do this on linux?
When I do sysctl net.netfilter I can find conntrack timeout variables for each protocol, such as
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 120
but I can't find any settings for DNS.
Is there a way to change the DNS conntrack timeout variable independently from other UDP traffic?
If this is not possible by changing the conntrack settings, how can I change the DNS conntrack timeout? (I just want to try this for fun, no big reason.) Should I write some C code that checks each UDP packet going through the NAT using netfilter and then uses conntrack to add it to the table with a different timeout value?
I'm using Ubuntu 20.04
I was able to change the DNS timeout by using nfct
First, create a special conntrack timeout policy:
sudo nfct add timeout dns-timeout-test inet udp unreplied 20 replied 20
Then refer to this policy for DNS packets:
iptables -I PREROUTING -t raw -p udp --dport 53 -j CT --timeout dns-timeout-test
Then check the DNS conntrack entry with conntrack -E
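For example (a sketch; filtering the event stream to UDP destination port 53 so that only the DNS entries show up):
sudo conntrack -E -p udp --dport 53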
Packets with dport 53 will have a timeout of 20 seconds instead of the default 30:
[NEW] udp 17 20 src=someaddress dst=someaddress sport=someport dport=53 [UNREPLIED] src=someaddress dst=someaddress sport=53 dport=someport
I want to drop incoming traffic on my Linux host based on a TCP option field,
like TCP option 30, Multipath TCP.
If a packet contains the Multipath TCP option (option kind 30), then my Linux host needs to drop the connection or packet.
My setup is host 1 <-> host 2 <-> host 3.
Host 1 sends packets via host 2 to host 3.
Host 2 has two interfaces, eth0 and eth1.
eth0 connects to host 1 and eth1 connects to host 3.
When incoming packets on eth0 contain option field 30, I just want to cancel the connection or drop the packets.
I tried an iptables string compare, but it didn't work.
The command is,
sudo iptables -I INPUT -j DROP -p tcp -s 0.0.0.0/0 -m string --string "Multipath TCP" --algo bm
But the above rule does not stop Multipath TCP traffic from being sent and received via host 2's eth0 and eth1;
host 2 is not able to drop the Multipath TCP (option field 30) traffic.
Is it possible to drop a specific TCP packet based on its option field?
First, you need to add the rule to the FORWARD chain on host 2 (the reason is that the packets are not addressed to host 2 and will not hit the INPUT chain).
There is an option available in iptables to match the TCP options. Please try the following iptables command:
iptables -I FORWARD -p tcp --tcp-option 30 -j DROP
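If you only want to drop these packets when they arrive on eth0 (as in your setup), you can additionally restrict the rule to that interface, for example:
iptables -I FORWARD -i eth0 -p tcp --tcp-option 30 -j DROP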
Is there any way to construct a firewall rule using "iptables" which filters packets on both input and output? I've only been able to find rules like the following which allow you to designate it as applying to packet source (INPUT) or destination (OUTPUT).
iptables -A INPUT -i eth1 -p tcp -s 192.168.1.0/24 --dport 80 -j DROP
It would make sense though that I should be able to filter packets coming from and going to specific places so that I could end up with a fw table like the following:
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 172.152.4.0/24 92.3.0.0/16
Any suggestions?
Thanks!
Yes and no.
No because: iptables works by defining how to treat packets based on their categorization into chains (INPUT, OUTPUT, FORWARD, ...) first, and only then on specific characteristics (source or destination address, protocol type, source or destination port, etc.). You can never define an iptables rule that does not apply to a specific chain.
INPUT, OUTPUT, and FORWARD are the default chains of the iptables system. INPUT covers everything destined for the local host (i.e. addressed to your machine); OUTPUT applies to everything originating from the local host (i.e. generated by your computer).
Yes because: You can define custom chains. You can do that like so
sudo iptables -N MYCHAIN
then you can send packets from both the INPUT and the OUTPUT (and, if you like, the FORWARD) chain to MYCHAIN, for instance all the TCP packets from INPUT:
sudo iptables -A INPUT -p tcp -j MYCHAIN
or all the packets from OUTPUT:
sudo iptables -A OUTPUT -j MYCHAIN
and then you can define any rule you want for MYCHAIN, including
sudo iptables -A MYCHAIN -s 172.152.4.0/24 -d 92.3.0.0/16 -j ACCEPT
which should be more or less the rule you wanted.
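You can then list the chain with counters, which produces output in the same format as the table in your question:
sudo iptables -nvxL MYCHAIN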
However, one might argue that it does indeed make sense to keep the INPUT and OUTPUT chains separate. Most users will want to apply much stricter rules on INPUT and FORWARD than on OUTPUT. Also, iptables can be used for routing, in which case it makes a fundamental difference whether a packet is incoming or outgoing.
I am making a packet filtering program running on Ubuntu 12.04 which uses libipq as the library for copying packets into userspace. The logic of libipq works fine for me; my issue is that I have noticed a significant performance hit when using libipq compared to not using it. If I remove the iptables rules that I added for my program and let the kernel handle the packets, the speed is 50 MB/s. However, when using libipq with my iptables rules restored, the speed goes down to 1 MB/s (if I'm lucky); it's usually half of that.
I wonder, could something be wrong with my iptables rules? Could there be a more efficient use of rules, or is libipq simply that inefficient (or my code, even though I don't do that much)? Here is the script I use to set up my iptables rules:
#!/bin/sh
modprobe iptable_filter
modprobe ip_queue
iptables -A FORWARD -p icmp -j QUEUE
iptables -A FORWARD -p tcp -j QUEUE
iptables -A FORWARD -p udp -j QUEUE
iptables -A INPUT -p icmp -j QUEUE
iptables -A INPUT -p tcp -j QUEUE
iptables -A INPUT -p udp -j QUEUE
Other than that, my iptables rules are the default set that came with Ubuntu.
NOTE: My setup is a client and a server VM on two different subnets, with my Ubuntu VM bridging them using NAT and IP masquerading.
Libipq has been deprecated in favour of the newer libnetfilter_queue.
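With libnetfilter_queue the iptables side changes from the QUEUE target to NFQUEUE, roughly like this (queue number 0 is just an example and must match whatever your program binds to):
iptables -A FORWARD -p tcp -j NFQUEUE --queue-num 0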