Search for and kill a process, then start a new process, in a bash script - Linux

I need a script for running a new process every hour.
I created a bash script that is scheduled to run every hour through cron. It only works the first time but fails otherwise.
If run from shell, it works perfectly.
Here is the script:
#!/bin/sh
ps -ef | grep tcpdump | grep -v grep | awk '{print $2}' | xargs kill
sleep 2
echo "Launching tcpdump"
tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap
The crontab entry:
@hourly /root/analisis.sh > /dev/null 2>&1
Why is the cron job failing?

This is the answer the OP added to the question itself.
Correction of the script after the comments (it works fine)
#!/bin/bash
pkill -f tcpdump
/usr/sbin/tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap
That is, I just needed to use the full path to tcpdump.

The failure may be related to the cron job never finishing - you are starting a new tcpdump in the foreground, which will run forever.
Try this simplified script:
#!/bin/bash
killall tcpdump
tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap &
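The reason pkill -f (or killall) works where a plain name match might not is that -f matches against the full command line, so it also catches a tcpdump started with a full path. A minimal sketch of the kill-and-verify cycle, using a harmless sleep process as a stand-in for tcpdump:

```shell
#!/bin/bash
# Start a stand-in long-running process instead of tcpdump.
sleep 300 &
bgpid=$!

# pkill -f matches against the full command line ("sleep 300"),
# which is why it also catches binaries invoked with a full path.
# -n restricts the kill to the newest match, to be precise.
pkill -n -f 'sleep 300'

wait "$bgpid" 2>/dev/null || true   # reap the child so kill -0 reports honestly
if kill -0 "$bgpid" 2>/dev/null; then
    echo "still running"
else
    echo "killed"
fi
```

The same check (kill, wait, then kill -0) is a handy way to confirm in a script that the old capture really is gone before launching the new one.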

Why is the shell output txt file empty?

Kindly assist. I am working on a script that performs a telnet-style test against a specific IP address on a specific TCP port; below is my script.
#! /bin/sh
nc -z -v -w5 192.168.88.55 3389 | tee results.txt
During execution, a "results.txt" file is created but it is empty. I want it to have the output of the script after execution.
I managed to resolve it by making the modifications below:
#! /bin/sh
nc -z -v -w5 192.168.88.55 3389 2>&1 | tee results.txt
sleep 5
exit
It now writes the output to results.txt. The key change is 2>&1: nc's -v diagnostics go to standard error, so without that redirection the pipe feeding tee receives nothing.
Thank you.
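The effect of the 2>&1 fix can be reproduced without nc at all, using a tiny stand-in command that writes its diagnostic to stderr the way nc -v does (the emit function below is invented for the demonstration):

```shell
#!/bin/sh
# Stand-in for `nc -v`: prints its diagnostic on stderr, like nc does.
emit() { echo "Connection to 192.168.88.55 3389 port [tcp/*] succeeded!" >&2; }

without=$(mktemp)
with=$(mktemp)

emit | tee "$without" >/dev/null        # stderr bypasses the pipe: file stays empty
emit 2>&1 | tee "$with" >/dev/null      # stderr merged into stdout: tee captures it
```

Inspecting the two files afterwards shows the first is empty and the second contains the message, which is exactly the behavior observed with results.txt.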

how to find network connections from a PID

I have a PID and I am trying to find the network connections attached to it.
I have placed the PID in a variable, $PID.
So far I have tried using netstat. I have tried the following:
netstat -p | grep $PID
netstat | grep $PID
but these do not seem to find what I'm looking for. What would be the best way to find these connections?
strace will do the job:
strace -p $PID -f -e trace=network -s 10000
-s 10000 increases the maximum string size to print, which is 32 by default.
Check out this question on unix.stackexchange.com for alternatives.
You can also use netstat. Just replace -p with -nap and grep for the command name of the process:
netstat -nap | grep {CMD-of-PID}
Note that the grep pattern here is the command name of the PID, not the PID number itself.
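One pitfall with a plain grep on the PID is substring matches: PID 1234 also matches 12345. Anchoring the match to the "PID/program" field of netstat -nap output avoids that. The sample lines below are canned for illustration, not live netstat output:

```shell
#!/bin/sh
# Canned lines in the style of `netstat -nap` (hypothetical), so the
# filtering can be demonstrated without a live socket.
sample='tcp 0 0 10.0.0.5:22  10.0.0.9:51515 ESTABLISHED 1234/sshd
tcp 0 0 10.0.0.5:80  10.0.0.9:51616 ESTABLISHED 12345/nginx'

PID=1234
# Match "<PID>/" at the start of the last field only, so 1234
# does not accidentally match 12345.
printf '%s\n' "$sample" | awk -v pid="$PID" '$NF ~ ("^" pid "/")'
```

With real output, the same awk filter replaces the grep: netstat -nap | awk -v pid="$PID" '$NF ~ ("^" pid "/")'.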

Bash script to update Deluged Interface IP fails to run as cron job, but works when run manually

I'm running a Debian Linux variant, OpenVPN, and Deluge.
My VPN provider has short IP leases, so every few days I need to update Deluge's interface IP so that it downloads only on tun0. I put together a script to automate this - it basically puts the current tun0 IP into $tun0ip, then does a grep check against the deluge daemon config file to see if that string is present (this is a dirty way to do it, but I guess it works).
My problem is this: When I call the script manually, it works as intended - it kills deluged and then relaunches it, specifying the new IP with deluged -i $tun0ip . However, when I run the script as a cron job, it fails - it passes a null or zero value to $tun0ip, and then deluged -i $tun0ip doesn't work without a valid IP specified, so the application fails to launch. Script below. Am I doing something wrong here? I really appreciate any help!
#!/bin/bash
tun0ip=$( ifconfig tun0 | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
if grep -q $tun0ip "/root/.config/deluge/core.conf"; then
#Great, the IP is correct - don't have to do anything.
echo "IP Looks good - no changes were made."
else
echo "tun0 IP not found in config file. Killing deluged and rebooting with $tun0ip as interface."
killall deluged
sleep 5
deluged -i $tun0ip
fi
I had to specify the full path, /sbin/ifconfig, like this:
tun0ip=$( /sbin/ifconfig tun0 | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
cron runs with a minimal PATH that does not include /sbin, so the bare ifconfig call produced no output, leaving $tun0ip empty.
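The PATH problem can be reproduced locally without waiting for cron, by running the lookup under a deliberately minimal environment. The fakecfg binary below is invented for the demonstration, standing in for a tool that lives only in an sbin directory:

```shell
#!/bin/sh
# Create a fake binary that lives only in an sbin-style directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/sbin"
printf '#!/bin/sh\necho 10.8.0.2\n' > "$tmp/sbin/fakecfg"
chmod +x "$tmp/sbin/fakecfg"

# Under cron's typical minimal PATH the command is not found:
env PATH=/usr/bin:/bin sh -c 'command -v fakecfg' || echo "not found under cron PATH"

# Adding the sbin directory (or calling the binary by full path) fixes it:
env PATH="/usr/bin:/bin:$tmp/sbin" sh -c 'command -v fakecfg'
```

An alternative to hard-coding full paths is to set PATH explicitly at the top of the cron script, which fixes every lookup in one place.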

How to get Netcat to return unused ports

I tried this command to try to find unused ports. What I want it to do is run netcat on a range of ports, find the ones that are not running services, grep the first one of those lines of output, and then use sed to output the port number only.
nc -z <my url> 5000-5010 | grep -m 1 succeeded | sed 's/[^0-9]//g'
But when I try to launch a service using the port that is returned, I get a message saying the port is currently in use.
I found out netcat success means a service is running on the port, but when I try this instead
nc -zv <my url> 5000-5010 | grep -m 1 failed | sed 's/[^0-9]//g'
I get nothing, even though most lines of output contain the word failed.
Going through the man pages revealed that netcat -z only returns output for successful results, though why line after line of failed connection appears on my terminal window is still beyond me.
How can I use netcat to view the first port a connection failed on?
To get a list of closed (unused) ports on a Linux system, you can use:
Realtime Output:
#!/bin/bash
remoteHost=stackoverflow.com
for i in {80..100}
do
nc -v -z -w 1 $remoteHost $i &> /dev/null && echo "Port $i Open" || echo "Port $i Closed"
done
You can change the timeout, currently set to 1 sec (-w 1), to a higher value if needed.
Output:
Port 80 Open
Port 81 Closed
Port 82 Closed
Port 83 Closed
Port 84 Closed
Port 85 Closed
etc..
nc arguments:
-v Have nc give more verbose output.
-z Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.
-w timeout
If a connection and stdin are idle for more than timeout seconds, then the connection is silently closed. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.
Resources
nc man
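The loop above can be wrapped into a helper that answers the original question directly: print the first port whose probe fails, then stop. The names first_closed and nc_probe are hypothetical; the probe is passed in as a command so the logic can be exercised without touching the network:

```shell
#!/bin/bash
# first_closed PROBE HOST LO HI
# Prints the first port in [LO, HI] for which "PROBE HOST PORT" fails
# (non-zero exit status), then stops.
first_closed() {
    local probe=$1 host=$2 lo=$3 hi=$4 p
    for ((p = lo; p <= hi; p++)); do
        if ! "$probe" "$host" "$p" &>/dev/null; then
            echo "$p"
            return 0
        fi
    done
    return 1    # no closed port in the range
}

# A real probe wraps nc (1 second timeout, no data sent):
nc_probe() { nc -z -w 1 "$1" "$2"; }
# Usage: first_closed nc_probe stackoverflow.com 5000 5010
```

Separating the probe from the loop also makes the scan easy to retarget, e.g. swapping in a /dev/tcp-based check where nc is unavailable.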
The nc -v command writes the failed/succeeded messages on standard error, not the standard output. You can redirect stderr to stdout using 2>&1.
nc -zv <my url> 5000-5010 2>&1 | grep -m 1 failed
to get the failed line.
See http://www.cyberciti.biz/faq/redirecting-stderr-to-stdout/
By the way, I suggest you use awk to get the port number from the output line:
nc -zv <my url> 5000-5010 2>&1 | grep -m 1 failed | awk '{print $6}'
which prints the value in the 6th column of the output line.
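One caveat on the fixed awk column: the position of the port in the message differs between nc variants and between the succeeded and failed forms, so extracting the digits by pattern is sturdier. The sample line below is canned for illustration; the exact wording varies by nc build:

```shell
#!/bin/sh
# A canned failure line in the style nc emits on stderr.
line='nc: connect to example.com port 5003 (tcp) failed: Connection refused'

# Pull out just the digits that follow the word "port",
# regardless of which field they land in:
printf '%s\n' "$line" | grep -oE 'port [0-9]+' | grep -oE '[0-9]+'
```

Against real output the same filter slots in after the stderr redirect: nc -zv <my url> 5000-5010 2>&1 | grep -m 1 failed | grep -oE 'port [0-9]+' | grep -oE '[0-9]+'.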

BASH - how can I make the log file accessible via a TCP port whenever required?

How can I make logs available on a TCP port, so that they can be checked remotely by someone else? For example:
MAINSERVER> tail -f /etc/httpd/logs/access_log | grep -e fruit_Python -e fruit_BASH -e fruit_C | .... TCP 9999 ... make this available ....??
Now, from my laptop, I want to do this temporarily:
MYLAPTOP> tail -f http://MAINSERVER:9999 | grep -e grab_BASH
Any idea please?
You can use netcat (nc) to do this:
Server side (listen for connection):
tail -f /foo/bar |nc -l -k -p 9999
-l listen
-k listen for another connection after current completed
Client side (connecting):
nc MAINSERVER 9999 | grep whatever_you_like
You can use bash as well to connect via /dev/tcp/host/port, but sometimes it's not supported (compiled into Bash) for security reasons.
Client:
grep whatever_you_like < /dev/tcp/MAINSERVER/9999
