What is the correct tshark capture filter option for DHCP frames? - linux

I am trying to capture DHCP frames for analysis using the following command on my MacBook.
sudo tshark -i en0 -f "port 67 or port 68" -a duration:300 -w /tmp/dump.pcap
I use the following command to print all the fields of all protocols in the captured packets, but it prints nothing. Is the capture filter option for DHCP frames correct? Any help is appreciated.
sudo tshark -T text -r /tmp/dump.pcap -V

Answer
Yes, your commands are OK. Most likely no DHCP packets arrived during the capture, so nothing was written to the file. Try to force DHCP activity with the following commands in a second terminal window on the same device:
sudo dhclient -r
sudo dhclient
Warning: Do not run these commands if you are connected remotely. The first command releases the IP address, so your connection will be interrupted and you will have no way to run the second command and get the address back.
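Once the capture finishes, you can quickly check whether any DHCP packets were actually recorded by reading the file back with a display filter (Wireshark/tshark 3.0 and newer name the protocol dhcp; older versions call it bootp):
tshark -r /tmp/dump.pcap -Y "dhcp"
tshark -r /tmp/dump.pcap -Y "bootp"   # for tshark older than 3.0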
Some details concerning data capture
The tshark filters have the same syntax as in Wireshark.
There are two main filter types:
capture filter, the -f tshark option: selects which packets are captured at all. This is useful e.g. for keeping the capture file small.
display filter, the -Y tshark option: selects which of the captured packets are displayed.
You can combine both types.
Examples:
tshark -i eth0 -n -Y "ip.addr==8.8.8.8"
All packets are captured, but only packets with the IP address 8.8.8.8 are displayed.
tshark -i eth0 -n -Y "ip.addr==8.8.8.8" -f "udp port 53"
Only DNS packets are captured, and of those, only packets with the IP address 8.8.8.8 are displayed.
tshark -i eth0 -n -Y "ip.addr==8.8.8.8 and udp.port==53"
All packets are captured, but only packets with the IP address 8.8.8.8 and UDP port 53 (i.e. DNS) are displayed. Note the different port-filter syntax between this display filter and the capture filter in the previous example.
All other options like -a, -b, -w, -s can be applied too.
The tcpdump application is useful too. It is available on most Linux systems, even very small or specialized ones. It has no display filter option; only capture filters can be applied. Other options such as -a and -b are missing as well.
sudo tcpdump -i eth0 -w /tmp/dhcp.pcap "udp port 67 or udp port 68"
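You can inspect the resulting file later with tcpdump itself (or open it in Wireshark):
tcpdump -vn -r /tmp/dhcp.pcap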

Related

Linux: how to get network stats on tcp (ie excluding udp)?

I've been using /sys/class/net/eno1/statistics/rx_bytes and tx_bytes to gather stats on my network interface. The trouble is, that network has a device (a Silicon Dust HDHOMERUN HDTV tuner) which constantly streams UDP packets at a very high rate that I don't want to monitor. I'd like to remove that traffic from the monitor - perhaps by only looking at TCP packets.
Is there any way to separate out the TCP and UDP stats?
netstat -st gives some info but it's somewhat cryptic - just how big is a 'segment'? The MTU? The man page is silent on that.
$ netstat -st | grep 'segments received'
25449056 segments received
1683 bad segments received
$ netstat -st | grep 'segments sent out'
37860139 segments sent out
This is based on this answer from Server Fault. If you are using iptables, you can add a rule to each of the INPUT and OUTPUT chains that counts every packet carrying TCP in its payload. You may need to invoke every iptables command with sudo.
Create the rules:
# Match all TCP-carrying packets incoming on 'eno1' iface
iptables -I INPUT -i eno1 -p tcp
# Match all TCP-carrying packets outgoing through 'eno1' iface
iptables -I OUTPUT -o eno1 -p tcp
Afterwards, you can use iptables -nvxL INPUT or OUTPUT to be presented with the number of bytes processed by the rule:
Chain INPUT (policy ACCEPT 9387 packets, 7868103 bytes)
pkts bytes target prot opt in out source destination
10582 9874623 tcp -- eno1 * 0.0.0.0/0 0.0.0.0/0
In case you already have other rules defined, it might be handy to create a separate chain entirely. This is also described in the answer I referenced, though you also want the -i and -o options in the input/output chains respectively; these restrict the match to a single interface (use -i for INPUT and -o for OUTPUT).
iptables -N count_in # create custom chain named 'count_in'
iptables -A count_in -j RETURN # append RETURN action to chain 'count_in'
iptables -I INPUT -j count_in # insert chain at the top of chain INPUT
iptables -I count_in 1 -i eno1 -p tcp # insert rule that matches all tcp packets on eno1
# and has no action (does nothing)
iptables -nvxL count_in # list chain 'count_in' rules
I am not sure whether the "bytes" counter includes the IP header or just the TCP segment bytes, but it is still probably the closest metric to what you want to measure (TCP-only rx/tx bytes).
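If you want to read the counter from a script, you can list a single rule by number and pick out the bytes column. A minimal sketch, assuming the counting rule from above is rule 1 of the custom count_in chain:
# print cumulative TCP byte count matched by rule 1 of chain 'count_in'
iptables -nvxL count_in 1 | awk '{print $2}'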
Additionally, keep in mind that rules defined with iptables are often not saved automatically and will disappear on a system reboot. To make them persistent across reboots you can use the iptables-save and iptables-restore commands; for their usage, see your Linux distro's documentation as well as the iptables manual.
Finally, AFAIK iptables is considered legacy by now and is slowly being replaced by nftables. I myself still have iptables installed on my system by default. If you want to switch, or are already using nftables, you need to translate the above commands to the syntax supported by the nft command. There is a utility called iptables-translate whose purpose is to translate old iptables commands to equivalent nft commands. I mention this mostly for the sake of completeness; you should be just fine using iptables for this particular task if you have it installed.
You can use iptraf-ng.
Install with:
sudo apt install iptraf-ng
This will give you statistics per protocol (IPv4/IPv6/TCP/UDP/ICMP/...) on a specific interface:
sudo iptraf-ng -d eth0
You can also use this to have details per ports:
sudo iptraf-ng -s eth0

Find out how much data is sent and received via a terminal command

I'm working on a project where my client is billed exorbitant rates for data transfer on a boat. When they are in port, they use 3G, and when they are out at sea they use satellite.
Every 30 minutes I need to check which network I am attached to (it is a moving vessel), but I also need to give them specific information on how much data is actually used to make these calls.
I was wondering if anyone knows a way to get the exact number of bytes sent and received via a terminal command.
Right now I am running this command to get the IP address that my ISP has assigned me.
dig +short myip.opendns.com @resolver1.opendns.com
To identify which network is in use right now, you can check the routing table:
netstat -r | grep default
You will see the default interface used for the connection.
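On modern systems the iproute2 equivalent is:
ip route show default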
There are multiple commands that will show you statistics for an interface, e.g.
ip -s link show dev eth0
where eth0 interface identified from command above.
or
ethtool -S eth0
If you want to get data independent of any particular interface (all data stats since boot), you can use the IpExt section of
netstat -s
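The IpExt section contains system-wide byte totals since boot; for example, you can pull out the received/sent byte counters like this:
netstat -s | grep -E 'InOctets|OutOctets'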
All those metrics provide system-wide counters. For inspecting a specific app you can use iptables stats. The owner module in iptables-extensions may help with this. Here are example commands:
# sudo su
# iptables -A OUTPUT -m owner --uid-owner 1000 -j CONNMARK --set-mark 1
# iptables -A INPUT -m connmark --mark 1
# iptables -A OUTPUT -m connmark --mark 1
# iptables -nvL | grep -e Chain -e "connmark match 0x1"
iptables also lets you clear the counters whenever needed. In addition, the owner module can match packets associated with a user group, process id, or socket.
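For instance, to read the counters at each 30-minute check and then reset them for the next interval (a sketch; -Z zeroes the packet and byte counters of a chain):
# iptables -nvL OUTPUT | grep "connmark match 0x1"   # read current counters
# iptables -Z OUTPUT                                 # reset counters for the next interval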

How to display all data using tcpdump?

I am capturing network traffic using tcpdump. The problem is that I can't see all the captured data when the packet is too long. For example, when the TCP frame length is more than 500 bytes, I only see 100-200 bytes or less. How can I display all the frame data (500+ bytes)?
I have tried adding the -vv and -vvv parameters. This is my current command:
tcpdump -i eth1 tcp and host 10.27.13.14 and port 6973 -vv -X -c 1000
Add the -s0 parameter, which sets the snapshot length to unlimited so whole packets are captured instead of just their first bytes:
tcpdump -i eth1 tcp and host 10.27.13.14 and port 6973 -s0 -vv -X -c 1000

How to Capture Remote System network traffic?

I have been using Wireshark to analyse the packets of socket programs. Now I want to see the traffic of other hosts. I found that I need to use monitor mode, which is only supported on the Linux platform, so I tried it, but I couldn't capture any packets transferred in my network; it lists 0 packets captured.
Scenario:
I have a network consisting of 50+ hosts (all running Windows except mine). My IP address is 192.168.1.10, and when I initiate communication with any 192.168.1.xx host, the captured traffic is shown.
But my requirement is to monitor the traffic between 192.168.1.21 and 192.168.1.22 from my host, i.e. from 192.168.1.10.
1: Is it possible to capture the traffic as I described?
2: If it is possible, is Wireshark the right tool for it (or do I have to use a different one)?
3: If it is not possible, then why?
Just adapt this a bit with your own filters and IPs (run on the local host):
ssh -l root <REMOTE HOST> tshark -w - not tcp port 22 | wireshark -k -i -
or, using bash:
wireshark -k -i <(ssh -l root <REMOTE HOST> tshark -w - not tcp port 22)
You can use tcpdump instead of tshark if needed:
ssh -l root <REMOTE HOST> tcpdump -U -s0 -w - -i eth0 'not port 22' |
wireshark -k -i -
You are connected to a switch which is "switching" traffic: it decides which traffic you see based on your MAC address and will NOT send you traffic that is not destined to your MAC address. If you want to monitor all the traffic, you need to configure your switch with a "port mirror" and plug your sniffer into that port. There is no software you can install on your machine that will circumvent the way network switching works.
http://en.wikipedia.org/wiki/Port_mirroring

Isolated test network on a Linux server running a web server (lighttpd) and (curl) clients

I'm writing a testing tool that requires known traffic to be captured from a NIC (using libpcap), then fed into the application we are trying to test.
What I'm attempting to set-up is a web server (in this case, lighttpd) and a client (curl) running on the same machine, on an isolated test network. A script will drive the entire setup, and the goal is to be able to specify a number of clients as well as a set of files for each client to download from the web server.
My initial approach was to simply use the loopback (lo) interface... run the web server on 127.0.0.1, have the clients fetch their files from http://127.0.0.1, and run my libpcap-based tool on the lo interface. This works ok, apart from the fact that the loopback interface doesn't emulate a real Ethernet interface. The main problem with that is that packets are all inconsistent sizes... 32kbytes and bigger, and somewhat random... it's also not possible to lower the MTU on lo (well, you can, but it has no effect!).
I also tried running it on my real interface (eth0), but since it's an internal web client talking to an internal web server, traffic never leaves the interface, so libpcap never sees it.
So then I turned to tun/tap. I used socat to bind two tun interfaces together with a TCP connection, so in effect, I had:
10.0.1.1/24 <-> tun0 <-socat-> tcp connection <-socat-> tun1 <-> 10.0.2.1/24
This seems like a really neat solution... tun/tap devices emulate real Ethernet devices, so I can run my web server and my capture tool on tun0 (10.0.1.1), and bind my clients to tun1 (10.0.2.1)... I can even use tc to apply shaping rules to this traffic and create a virtual WAN inside my Linux box... but it just doesn't work...
Here are the socat commands I used:
$ socat -d TCP-LISTEN:11443,reuseaddr TUN:10.0.1.1/24,up &
$ socat TCP:127.0.0.1:11443 TUN:10.0.2.1/24,up &
Which produces 2 tun interfaces (tun0 and tun1), with their respective IP addresses.
If I run ping -I tun1 10.0.1.1, there is no response, but when I run tcpdump -n -i tun0, I see the ICMP echo requests making it to the other side, just no sign of the response coming back.
# tcpdump -i tun0 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
16:49:16.772718 IP 10.0.2.1 > 10.0.1.1: ICMP echo request, id 4062, seq 5, length 64
<--- insert sound of crickets here (chirp, chirp)
So am I missing something obvious, or is this the wrong approach? Is there something else I can try (e.g. two physical interfaces, eth0 and eth1)?
The easiest way is just to use two machines, but I want all of this self-contained, so it can all be scripted and automated on a single machine, without any other dependencies...
UPDATE:
There is no need for the two socats to be connected with a TCP connection; it's possible (and preferable for me) to do this:
socat TUN:10.0.1.1/24,up TUN:10.0.2.1/24,up &
The same problem still exists though...
OK, so I found a solution using Linux network namespaces (netns). Unlike the tun/tap approach, each namespace has its own isolated network stack and routing table, so the kernel no longer treats both endpoints as local addresses of the same host. There is a helpful article about how to use them here: http://code.google.com/p/coreemu/wiki/Namespaces
This is what I did for my setup....
First, download and install CORE: http://cs.itd.nrl.navy.mil/work/core/index.php
Next, run this script:
#!/bin/sh
core-cleanup.sh > /dev/null 2>&1
ip link set vbridge down > /dev/null 2>&1
brctl delbr vbridge > /dev/null 2>&1
# create a server node namespace container - node 0
vnoded -c /tmp/n0.ctl -l /tmp/n0.log -p /tmp/n0.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 0
ip link add name veth0 type veth peer name n0.0
ip link set n0.0 netns `cat /tmp/n0.pid`
vcmd -c /tmp/n0.ctl -- ip link set n0.0 name eth0
vcmd -c /tmp/n0.ctl -- ifconfig eth0 10.0.0.1/24 up
# start web server on node 0
vcmd -I -c /tmp/n0.ctl -- lighttpd -f /etc/lighttpd/lighttpd.conf
# create client node namespace container - node 1
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name veth1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.2/24 up
# bridge together nodes using the other end of each veth pair
brctl addbr vbridge
brctl setfd vbridge 0
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
This basically sets up 2 virtual/isolated/name-spaced networks on your Linux host, in this case, node 0 and node 1. A web server is started on node 0.
All you need to do now is run curl on node 1:
vcmd -c /tmp/n1.ctl -- curl --output /dev/null http://10.0.0.1
And monitor the traffic with tcpdump:
tcpdump -s 1514 -i veth0 -n
This seems to work quite well... still experimenting, but looks like it will solve my problem.
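For completeness: on current systems you can build much the same topology with plain iproute2 network namespaces, without installing CORE. A minimal sketch of an equivalent two-node setup (the names n0 and n1 are arbitrary):
# create two isolated namespaces
ip netns add n0
ip netns add n1
# create a veth pair and move one end into each namespace
ip link add veth0 type veth peer name veth1
ip link set veth0 netns n0
ip link set veth1 netns n1
# assign addresses and bring the links up
ip netns exec n0 ip addr add 10.0.0.1/24 dev veth0
ip netns exec n0 ip link set veth0 up
ip netns exec n1 ip addr add 10.0.0.2/24 dev veth1
ip netns exec n1 ip link set veth1 up
# run the client in n1 against a server listening in n0
ip netns exec n1 curl --output /dev/null http://10.0.0.1
Capturing works the same way, e.g. ip netns exec n0 tcpdump -i veth0 -n.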
