2 tcpdumps from a remote host - linux

I need to collect tcpdump captures from different interfaces on a remote host.
Currently I do it in the following way:
ssh remotehost "tcpdump -i iface1 -w - " > iface1_dump.pcap &
ssh remotehost "tcpdump -i iface2 -w - " > iface2_dump.pcap
I wonder if there is a way to do this over a single ssh connection (maybe some complex shell redirection is the cure) to minimize packet loss while the second ssh command is on its way.
Two conditions:
the remote host has very limited disk space, so I can't capture to a local file on that host
tcpdump -i any doesn't preserve the Ethernet headers (it uses the Linux "cooked" capture), so I can't use it

You create a file SEND_DATA on the target machine containing these two lines (the first one backgrounded so both captures run at the same time):
tcpdump -i iface1 -w - | nc <IP> <PORT1> &
tcpdump -i iface2 -w - | nc <IP> <PORT2>
Almost every machine has netcat, so this works. Mark SEND_DATA as executable.
Next, open two listening ports on your PC and run the script on the remote machine:
> OUT1 nc -l -p PORT1 &
> OUT2 nc -l -p PORT2 &
ssh remotehost ./SEND_DATA
At this point the files OUT1 and OUT2 will start receiving data.
Also consult the manuals of the two nc versions involved, because the parameters sometimes differ between implementations.
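If you prefer not to keep a script on the remote host, the same idea works inline over a single ssh connection. An untested sketch; 192.0.2.10 (your PC's address as seen from remotehost) and ports 9001/9002 are placeholder assumptions, and the -l/-p syntax depends on your nc version:
# start the local listeners first, writing straight into the pcap files
> iface1_dump.pcap nc -l -p 9001 &
> iface2_dump.pcap nc -l -p 9002 &
# then start both remote captures over one ssh connection
ssh remotehost 'tcpdump -i iface1 -w - | nc 192.0.2.10 9001 &
tcpdump -i iface2 -w - | nc 192.0.2.10 9002'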

The two ssh sessions are probably the nicest method, but you could also run sed 's/^/one /' on the first capture in the background and sed 's/^/two /' on the second, and then pull them apart on the local host with, for example, egrep '^one ' | sed 's/one //'.
You could also save the output in a pair of files on the remote host and scp them back when done.

ssh remotehost "tcpdump -iiface1 -w- &
tcpdump -iiface2 -w- >&2 2>/dev/null" >iface1_dump.pcap 2>iface2_dump.pcap
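As a quick check afterwards (using the file names from above), each file should read back as a valid capture:
tcpdump -r iface1_dump.pcap -c 5
tcpdump -r iface2_dump.pcap -c 5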


Multi Hop, Self Closing, Background SSH Tunnel

I'm trying to create a multi hop ssh tunnel, that is in the background, and self closing. What I have is...
ssh -L 9999:localhost:9999 user@host1 -t ssh -L 9999:localhost:27017 -N
-i ./mypem.pem user@host2
This successfully creates a multi-hop ssh tunnel that closes the port on host1 after I close the tunnel. However, if I try to fork it to the background, like this...
ssh -f -L 9999:localhost:9999 user@host1 -t ssh -L 9999:localhost:27017
-N -i ./mypem.pem user@host2
It runs in the background, but killing the process locally leaves the port open on host1.
Any ideas for a self closing, multi hop, background ssh tunnel? Thanks!
For self-closing, one can use a simple sleep X command.
$ ssh -f -L 9999:localhost:9999 user@host1 \
-t 'ssh -L 9999:localhost:27017 -i ./mypem.pem user@host2 "sleep 3600"'
Note that this doesn't use the -N option (which means "do not run a command"); instead we run a command, sleep 3600. This keeps the tunnel up for one hour, and it closes itself after that.
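In practice a much shorter sleep often works too, because OpenSSH normally keeps the tunnel up while a connection made through the forward is still open and only exits once it is closed. An untested sketch; the mongo client is an assumption based on port 27017 in the question:
# open the self-closing tunnel, then use it right away
ssh -f -L 9999:localhost:9999 user@host1 \
-t 'ssh -L 9999:localhost:27017 -i ./mypem.pem user@host2 "sleep 10"'
mongo --port 9999   # the tunnel stays up while this client is connected, then closes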

Bash script to block MAC addresses

I'm currently working on designing a Linux server that runs a bash script every time someone new accesses the website that the server is hosting. What this bash script should do is extract the new client's MAC address and add that MAC address, together with some surrounding text, to the before.rules file located in the ufw directory.
To extract the MAC addresses from arp I'm using:
arp -a | grep eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}'
The line that needs to be added to before.rules is:
-A ufw-before-input -m mac --mac-source "the extracted mac address" -j DROP
Does anyone have an idea of how I could write this script?
Define a variable to hold the output of your first command (which gives you the MAC address), then append the second line with that value filled in.
#!/bin/bash
ufwdir='SOME/DIRECTORY'
mac=$(arp -a | grep eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}')
echo "-A ufw-before-input -m mac --mac-source $mac -j DROP" >> "$ufwdir/before.rules"
Important / disclaimer:
use >> to append to the file; if you use > it will truncate the file before adding that one single line!
I have not tested this code, I simply reused the commands from your question.
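If arp may report more than one MAC address on eth1, a loop with a duplicate check is safer. An untested sketch; /etc/ufw is an assumed path, adjust to yours:
#!/bin/bash
# append one DROP rule per MAC address seen on eth1, skipping rules
# that are already present in before.rules
ufwdir='/etc/ufw'
arp -a | grep eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}' |
while read -r mac; do
    rule="-A ufw-before-input -m mac --mac-source $mac -j DROP"
    grep -qF -- "$rule" "$ufwdir/before.rules" || echo "$rule" >> "$ufwdir/before.rules"
done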

How to capture tcpdump output to a compressed file in Linux

I have a DNS server and I want to capture the DNS traffic to get all the IPs that use my DNS server.
For this I started with the following tcpdump command, capturing to a file:
tcpdump -n -i eth0 dst port 53 >> dns_data.log
But the file grows large when I run this for a long time. How can I capture to a compressed file? I tried the command below but it's not working.
tcpdump -n -i eth0 dst port 53 | bzip2 -c >> dns_data.bz2
Try something like tcpdump -G 3600 -w 'trace_%Y-%m-%d_%H:%M:%S.pcap' -z gzip
-G N means rotate the dump file every N seconds (here 3600).
-z command means run command (here gzip) on each file after rotation.
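If you would rather have one continuously compressed file than rotated captures, the streaming variant below may help. An untested sketch; -U makes tcpdump flush each packet and -w - writes the pcap to stdout:
# stream the capture through bzip2 instead of rotating files
tcpdump -n -i eth0 -U -w - dst port 53 | bzip2 > dns_data.pcap.bz2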

unix netcat utility on linux, checking if connection was made

I am using the netcat utility on Linux to receive output from a program on a Windows machine. My problem is that the program on the Windows machine does not always produce output.
How can I check whether a connection has been made to netcat?
What I am doing so far is nc -l -v 9103 > output, then I check the size of output. The problem this poses is that netcat only writes to the file after a certain buffer size has been reached or a newline character is encountered, so in some cases, even though a connection has been made, the file size is reported as zero.
How can I check if someone has made a connection with netcat?
I tried using
nc -l -v -e someprog.exe 9103 > output
but my netcat doesn't seem to support this.
Below are the options I have:
$ nc -h
usage: nc [-46DdhklnrStUuvzC] [-i interval] [-p source_port]
[-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_version]
[-x proxy_address[:port]] [hostname] [port[s]]
Command Summary:
-4 Use IPv4
-6 Use IPv6
-D Enable the debug socket option
-d Detach from stdin
-h This help text
-i secs Delay interval for lines sent, ports scanned
-k Keep inbound sockets open for multiple connects
-l Listen mode, for inbound connects
-n Suppress name/port resolutions
-p port Specify local port for remote connects
-r Randomize remote ports
-s addr Local source address
-T ToS Set IP Type of Service
-C Send CRLF as line-ending
-t Answer TELNET negotiation
-U Use UNIX domain socket
-u UDP mode
-v Verbose
-w secs Timeout for connects and final net reads
-X proto Proxy protocol: "4", "5" (SOCKS) or "connect"
-x addr[:port] Specify proxy address and port
-z Zero-I/O mode [used for scanning]
Port numbers can be individual or ranges: lo-hi [inclusive]
Verbose mode writes connection information to stderr, and you can redirect stderr to a file; the verbose log contains something like
connect to [xxx] from [xxxx]
Try
nc -l -v -p 9103 -k 1> output 2>connect.log
and monitor connect.log for connection messages.
If you don't use -k, netcat quits after the first connection.
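A possible way to wait for that, sketched and untested; the exact wording of the verbose message varies between nc builds:
# start the listener, then block until the verbose log mentions a connection
nc -l -v -p 9103 -k 1>output 2>connect.log &
tail -f connect.log | grep -q -i 'connect' && echo "client connected"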
If you can upgrade your copy of netcat: modern versions (1.10, for one) have an option to execute a program (or a shell command) upon connect. Otherwise, you can make netcat think it is running in a terminal (to disable buffering of stdout), for example by using script (it simply saves everything on stdin/stdout/stderr to the given file). Or use the logging features of screen and tmux.
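For the script route, something like this might work (untested; assumes the util-linux script, which supports -c):
# run nc under script so its output is flushed as if written to a terminal,
# and everything, including the verbose messages, is logged to output.log
script -c 'nc -l -v 9103' output.log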

How to purge connections left open by SSH ProxyCommand?

I have a webserver WWW1 and a front-facing proxy PRX. I use SSH's ProxyCommand to connect to WWW1's internal (private) IP via PRX (which has both a private and a public IP). For some connections (not all) a network connection is left open after I'm finished. These add up!
~/.ssh/config
Host *
ServerAliveInterval 5
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
Host WWW1 WWW2 WWW3
User foo
ProxyCommand ssh -q -a -x PRX nc %h 22
IdentityFile ~/.ssh/id_foo_WWWx
On PRX, lsof | grep WWW1:ssh shows 124 open connections at the moment. On WWW1, the same command shows 243 open connections. There are similar open connections for WWW2, WWW3 etc.
WWW1 and PRX are Debian. Client connections come from a mix of Debian, Ubuntu and OS X 10.6. I use Emacs Tramp, but it has no special configuration (AFAIK) outside of my ~/.ssh/config.
I'm concerned about running out of ephemeral ports, and ideally I want these connections to clean themselves up without intervention, preferably by configuring them to kill themselves off; failing that, a command I can use to kill old processes is fine!
A better way would be to use the -W option of SSH, so you could put
ProxyCommand ssh -q -a -x PRX -W %h:22
instead of
ProxyCommand ssh -q -a -x PRX nc %h 22
This way you also get rid of the dependency on nc.
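Applied to the config from the question, the stanza would then look roughly like this (untested; -W needs OpenSSH 5.4 or newer):
Host WWW1 WWW2 WWW3
User foo
ProxyCommand ssh -q -a -x PRX -W %h:22
IdentityFile ~/.ssh/id_foo_WWWx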
Don't know whether it matters but I use nc -w 1 %h %p
