Difference in protocol filter output between Wireshark and tshark

I am working with Wireshark and tshark (slightly advanced beginner with both) to dissect and filter a lot of pcap files. One of my interests is to filter certain protocols out into text files that I can ingest into a database for aggregate analysis. I am doing this using tshark in a bash script:
for pcap_file in "${pcap_file_folder}"/*.pcap;
do
tshark -r "${pcap_file}" -2 -R smtp -T fields -E separator=/t -E header=y -E quote=d ...<etc>
done
The "etc" part of this code is just a list of field names that I want to retain (e.g., -e ip.src -e smtp.message).
As I was looking at the "smtp" filtered files this morning, I decided to check to see if Wireshark might provide more information on some of the packets that looked interesting. So, I opened the relevant .pcap file in Wireshark and filtered for smtp. I got back 30 results, while in tshark the same file produced over 150k results. What is the difference between the -R filter in tshark versus the filter in Wireshark? Is there one that is preferable?
I also noticed in the file resulting from the tshark filter that the frame.protocols field often did not contain "smtp". Am I using -R incorrectly?
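As a quick sanity check (a sketch, not an answer; capture.pcap stands in for one of the pcap files), you can count what the two-pass read filter and the single-pass display filter each match on the same file and compare those numbers with the Wireshark count:
# Count packets matched by the two-pass read filter (-2 -R) vs. the display filter (-Y)
tshark -r capture.pcap -2 -R smtp -T fields -e frame.number | wc -l
tshark -r capture.pcap -Y smtp -T fields -e frame.number | wc -l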

Related

How to read text file from Snort?

I have a worm whose signature is in a .txt file. Now I want to check for it with the Snort IDS. I read the Snort manual page, but I couldn't find anything. How can I do this? (Is there a command for detecting a worm signature using Snort, something like snort -r worm.txt -c /etc/snort/snort.conf?)
Try to send this file with "nc" on your local machine (just an idea).
You will need two terminals, and Snort must be listening on your network interface:
The first terminal:
nc -l 1234 > filename.out
The second terminal:
nc host.example.com 1234 < Worm.txt
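For completeness, Snort also has to be running in IDS mode on the interface that carries the nc traffic while the file is sent; a minimal sketch (the interface name and config path are assumptions, adjust for your setup):
# Run Snort in IDS mode on the interface the nc transfer crosses, printing alerts to the console
sudo snort -i eth0 -c /etc/snort/snort.conf -A console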

Best method to output log content to listening port

I am outputting the content of a log via netcat to an application over the network. I don't know if what I'm doing is the most efficient approach, especially since I notice the netcat session becomes non-responsive. I have to stop netcat and start it again for the application to work again.
The command I run is:
/bin/tail -n1 -f /var/log/custom_output.log | /bin/nc -l -p 5020 --keep-open
This needs to run like this 24/7. Is this the most efficient way of doing it? How can I improve on it so I don't have to restart the process daily?
EDIT
So I realised that when the log is being rotated, netcat is locked onto a file that's no longer being written to. I can deal with this easily enough.
The question still stands. Is this the best way to do something like this?
It's been 6 years, but maybe this will come in handy for someone.
To account for log rotation, use tail with the -F flag.
nc (aka netcat) variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
tail -n0 -F "$LOG_FILE" | nc -k -l -p $PORT
Notes:
The -k flag in nc is analogous to --keep-open in "the OpenBSD rewrite of netcat";
Multiple clients can connect to nc at the same time, but only the first one will receive appended log lines;
tail will run immediately, so it will collect appended log lines even if no client is connected. Thus, the first client can receive some buffered data - all log lines that have been appended since tail was run.
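To test the nc variant from another machine, a client just connects to the port (hostname and port are placeholders):
# Connect to the log stream; lines appended after tail started will appear here
nc host.example.com 5020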
socat variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
socat TCP-LISTEN:$PORT,fork,reuseaddr SYSTEM:"tail -n0 -F \"$LOG_FILE\" </dev/null"
Note: here socat will fork (clone itself) on each client connection and start a separate tail process. Thus:
Each connected client will receive appended log lines at the same time;
Clients will not receive any lines previously buffered by tail.
Additional
You can redirect stderr to stdout in the tail process by adding 2>&1 (in both variants; see the sketch after this list). In this case, clients will also receive auxiliary message lines, e.g.:
tail: /var/log/custom_output.log: file truncated;
tail: '/var/log/custom_output.log' has become inaccessible: No such file or directory - printed when the log file has been removed or renamed, only if -F is used;
tail: '/var/log/custom_output.log' has appeared; following new file - printed when a new log file is created, only if -F is used.
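In the nc variant, for example, the redirect goes on the tail side of the pipe (a sketch of the note above, reusing the same LOG_FILE and PORT variables):
# Forward tail's own status messages (truncated/appeared/inaccessible) to clients as well
tail -n0 -F "$LOG_FILE" 2>&1 | nc -k -l -p $PORT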

How can I have tcpdump write to file and standard output the appropriate data?

I want to have tcpdump write raw packet data into a file and also display packet analysis into standard output as the packets are captured (by analysis I mean the lines it displays normally when -w is missing).
Can anybody please tell me how to do that?
Here's a neat way to do what you want:
tcpdump -w - -U | tee somefile | tcpdump -r -
What it does:
-w - tells tcpdump to write binary data to stdout
-U tells tcpdump to write each packet to stdout as it is received, rather than buffering them and outputting in chunks
tee writes that binary data to a file AND to its own stdout
-r - tells the second tcpdump to get its data from its stdin
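As a usage note (somefile is the capture written by tee above; the port filter is just an example), the saved raw capture can be re-read later in the usual way:
# Re-read the raw capture saved by tee, with an optional filter expression
tcpdump -r somefile 'port 80'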
Since tcpdump 4.99.0, the --print option can be used:
tcpdump -w somefile --print
Wednesday, December 30, 2020, by mcr@sandelman.ca, denis and fxl.
Summary for 4.99.0 tcpdump release
[...]
User interface:
[...]
Add --print, to cause packet printing even with -w.
# Alternative: run tcpdump twice with the same arguments, one instance printing
# decoded packets to stdout and one writing the raw capture to a file.
tcpdump ${ARGS} &                  # background instance prints to stdout
PID=$!
tcpdump ${ARGS} -w ${filename}     # foreground instance writes raw packets; interrupt it when done
kill $PID                          # then stop the background instance
If you want a way to do it without running tcpdump twice, consider:
sudo tcpdump port 80 -w $(tty) | tee /tmp/output.txt
From the interactive command prompt you could use $TTY instead of $(tty) but in a script the former wouldn't be set (though I'm not sure how common it is to run tcpdump in a script).
Side-note: it's not very Unix-y the way tcpdump by default makes you write to a file. Programs should by default write to stdout. Redirection to a file is already provided by the shell constructs. Maybe there's a good reason tcpdump is designed this way but I don't know what that is.

Capture nethogs output in log file

I want to check the network bandwidth used by my process.
For this I found that the nethogs tool is useful. Using this tool I can see which process is eating up network bandwidth and how the process behaves.
But how do I capture data from nethogs for my process and store it in a log file?
You can run nethogs in the background in trace mode and write the output to a file like this:
sudo nethogs -t eth1 &> /var/tmp/nethogs.log &
Download and build the nethogs-parser as described here.
Then after you have accumulated enough data you can run the parser to see the results:
./hogs -type=pretty /var/tmp/nethogs.log
Make sure to kill the running nethogs process when you are done collecting data.
More info here on automating the task.
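One way to stop the backgrounded capture when you are done (a sketch; the pattern assumes the exact command line used above):
# Stop the backgrounded nethogs trace started earlier
sudo pkill -f "nethogs -t eth1"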
I don't know when these options were implemented, but you can use nethogs -t or nethogs -b. The PID and user are strangely placed at the end of the command string, but they are easy enough to parse.
I think you need to use the latest CVS version, 0.8.1-SNAPSHOT.
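As a rough parsing sketch (assumptions: eth1 is the interface, "my_process" is the process of interest, and the last two columns of the trace output are sent and received KB/s):
# Print the sent/received columns for trace lines mentioning the process of interest
sudo nethogs -t eth1 | awk '/my_process/ { print $(NF-1), $NF }'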
You can use this command to capture output:
nethogs -d 5 | sed 's/[^[:print:][:cntrl:]]//g' > output.txt
The correct nethogs command is
nethogs -d 1 eth0 > output.txt
You need to specify the network interface; otherwise, the default interface eth0 will be used. Sometimes nethogs might not show the proper output because of the network interface, so it is always better to provide the network interface and to generate some traffic during the experiment. You can print the output to a file by adding > output.txt.
The -d argument specifies how frequently the output is refreshed. Here I gave 1, which means the output is shown once per second.
I hope this is useful.

How to grep specific lines from nmap -O output?

I got many files named like 192.168.203.txt as the output of
sudo nmap -O --top-ports 192.168.203.* >>192.168.203.txt
The output looks like this:
Nmap scan report for 192.168.203.29
Host is up (0.00067s latency).
PORT STATE SERVICE
21/tcp closed ftp
22/tcp closed ssh
23/tcp closed telnet
25/tcp closed smtp
80/tcp open http
110/tcp closed pop3
139/tcp filtered netbios-ssn
443/tcp closed https
445/tcp filtered microsoft-ds
3389/tcp filtered ms-wbt-server
Device type: general purpose
Running: Microsoft Windows 2008|7
OS CPE: cpe:/o:microsoft:windows_server_2008::sp2 cpe:/o:microsoft:windows_7
OS details: Microsoft Windows Server 2008 SP2, Microsoft Windows 7 or Windows Server 2008 SP1
Network Distance: 6 hops
I just want to grep out the IPs (like 192.168.203.29) that have http, ssh, or other ports open, with each service handled separately. Maybe I will pipe all the resulting IPs into a file named http_open_ip.txt.
I have tried grep ftp with commands:
cat *.txt|grep -B 3 "ftp"|grep -B3 "open"|grep "192.168."|awk '{print $5}'|sort -t . -k 3,3n -k 4,4n> ftp_open_ip.txt
Thus, I got a file ftp_open_ip.txt. But I found this command does not work with other keywords like ssh or smtp. What should I do?
I am not exactly clear about what you want. Perhaps you want to look through all the files and collect the IPs that have an http port open in one file, those with an ssh port open in another file, and so on, so the same IP may be present in multiple files.
Assuming that, below is an awk solution
awk 'BEGIN{http_open="http_open";ssh_open="ssh_open";ftp_open="ftp_open"}
/Nmap scan report for/{ip=$5}
/ftp/ && /open/{print "ftp open for " ip >> ftp_open}
/ssh/ && /open/{print "ssh open for " ip >> ssh_open }
/http/ && /open/{print "http open for " ip >> http_open}
' <filename>
It assumes that the file contains data in the same order shown in your example, so the "Nmap scan report for" line has the IP in the 5th field. After that, the rest is really simple: if ftp, ssh, http, etc. are found along with the open state, the output is redirected to the corresponding file.
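As a usage sketch, you could save the awk program above to a file (the name open_ports.awk is hypothetical) and run it over all of the per-network output files at once; it will append to the ftp_open, ssh_open, and http_open files named in the BEGIN block:
# Run the awk program across every nmap text output file in the current directory
awk -f open_ports.awk *.txt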
Instead of all this challenging text processing, you should use Nmap's own features that make this kind of output processing easier. First, Nmap offers many output formats, and options to output to files. Start with this command:
sudo nmap -oA my-scan-%y%m%d -O --top-ports 192.168.203.0/24
The -oA my-scan-%y%m%d option will save the output of your scan in 3 different files:
$ ls
my-scan-20130520.gnmap my-scan-20130520.nmap my-scan-20130520.xml
For quick queries, the .gnmap file will be the easiest to use with awk or grep, but it doesn't have all the useful information that is included in the normal (.nmap) or XML formats. If you plan to make this script automated, or extend it in any way, your best bet will be to use the XML output.
Here are a couple commands that will do what you want with these output formats:
awk '/\/open\/tcp\/\/http\//{print $2}' my-scan-*.gnmap > http_open_ip.txt
xmlstarlet sel -t -m "//port[@protocol='tcp' and service/@name='ssh' and state/@state='open']/ancestor::host/address[@addrtype='ipv4']" -v '@addr' -n my-scan-*.xml > ssh_open_ip.txt
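If you want one file per service from the greppable output, a small loop over the same .gnmap pattern works too (a sketch; the service list is just an example):
# Produce <service>_open_ip.txt for each service of interest from the .gnmap output
for svc in http ssh ftp smtp; do
    awk -v s="$svc" '$0 ~ ("/open/tcp//" s "/") {print $2}' my-scan-*.gnmap > "${svc}_open_ip.txt"
done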
