Why is the shell output txt file empty? - linux

Kindly assist. I am working on a script that performs a telnet-style connectivity test against a specific IP address on a specific TCP port. Below is my script:
#! /bin/sh
nc -z -v -w5 192.168.88.55 3389 | tee results.txt
During execution, a "results.txt" file is created, but it is empty. I want it to contain the output of the command after execution.

I have managed to resolve it by modifying the script as below:
#! /bin/sh
nc -z -v -w5 192.168.88.55 3389 2>&1 | tee results.txt
sleep 5
exit
It now writes the output to the results.txt file.
Thank you.
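For context: nc -v writes its connection status ("succeeded", "refused", and so on) to stderr, not stdout, so the original pipe gave tee nothing to capture; 2>&1 merges stderr into stdout before the pipe (the sleep and exit lines are not needed for the fix). A minimal sketch that also records the exit status, assuming the same address and port as above:
#! /bin/sh
# nc -v reports its status on stderr, so merge it into stdout first
nc -z -v -w5 192.168.88.55 3389 > results.txt 2>&1
status=$?
echo "nc exit status: $status" >> results.txt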

Related

Is it possible to run bash commands over a shell running on telnet?

We have an embedded board running Linux.
We can connect to that board using telnet, which spawns a shell and gives access to it.
Now I am writing a bash script in which I want to run commands on that shell and capture their output.
For example, I want to run something like the command below over telnet and see whether it succeeded:
test -c /dev/null
When I run it like below, I always get 1 as the exit status:
{ test -c /dev/null; sleep 1;} | telnet <board ip addr>
If possible, I don't want to use expect.
Any suggestions/pointers?
With SSH you could trivially and robustly have done:
ssh yourhost 'test -c /dev/null'
With a simple shell on a TCP port, you could somewhat robustly but annoyingly have used:
echo 'test -c /dev/null; echo $?' | nc -q 1 yourhost 1234
telnet, by contrast, is notoriously timing-sensitive and tricky to script, so since you don't want to do it robustly with expect, you can try to kludge it:
{ sleep 1; echo 'test -c /dev/null; echo $?'; sleep 1; } | telnet somehost
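To check success from the calling script, you can capture telnet's output and search it for a marker printed by the remote shell. A hedged sketch (the board address and the RESULT: marker are illustrative, and the sleeps may need tuning for your board):
#! /bin/sh
# $? is single-quoted locally, so it expands on the board, not here
out=$({ sleep 1; echo 'test -c /dev/null; echo RESULT:$?'; sleep 1; } | telnet 192.168.1.50)
case "$out" in
    *RESULT:0*) echo "remote command succeeded" ;;
    *)          echo "remote command failed (or the timing was off)" ;;
esac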

Search for and kill a process, then start a new process in a bash script

I need a script for running a new process every hour.
I created a bash script that is scheduled to run every hour through cron. It only works the first time but fails otherwise.
If run from shell, it works perfectly.
Here is the script:
#!/bin/sh
ps -ef | grep tcpdump | grep -v grep | awk '{print $2}' | xargs kill
sleep 2
echo "Lanzando tcpdump"
tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap
The cron entry:
@hourly /root/analisis.sh > /dev/null 2>&1
Why is the cron job failing?
This is the answer the OP added to the question itself.
Correction of the script after the comments (it works fine):
#!/bin/bash
pkill -f tcpdump
/usr/sbin/tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap
That is, I just needed to use the full path to tcpdump.
The failure may be related to the cron job never finishing: you are starting a new tcpdump in the foreground, which will run forever.
Try this simplified script:
#!/bin/bash
killall tcpdump
tcpdump -ni eth0 -s0 proto TCP and port 25 -w /root/srv108-$(date +%Y%m%d%H%M%S).smtp.pcap &
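The full path matters because cron runs jobs with a minimal environment (typically PATH=/usr/bin:/bin), so /usr/sbin is not searched. An alternative, hedged sketch of the crontab (paths assumed):
# crontab -e
PATH=/usr/sbin:/usr/bin:/sbin:/bin
@hourly /root/analisis.sh > /dev/null 2>&1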

Bash - Grepping nc output in a for loop

I'm currently trying to grep the output of an nc command in a bash one-line loop, to show only the lines containing the string "open". I've already tried --line-buffered with no success. Can anybody shed some light on what I'm doing wrong? Here is the command:
root@kali:~# for host in $(seq 200 254); do nc -nvv -w 1 -z 192.168.15.$host 80 | grep --line-buffered "open"; done
Redirect stderr to stdout so you can grep it.
You can do that by putting 2>&1 anywhere between do and |.
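Applied to the loop from the question (same hosts and port):
for host in $(seq 200 254); do
    nc -nvv -w 1 -z 192.168.15.$host 80 2>&1 | grep "open"
done
--line-buffered is not needed here: each nc invocation probes one port and exits, so grep sees its whole output at once.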

Pseudo-terminal will not be allocated because stdin is not a terminal - sudo

There are other threads on this same topic, but my issue is unique. I am running a bash script that has a function which SSHes to a remote server and runs a sudo command there. I'm using the ssh -t option to avoid the requiretty issue. The offending line of code works fine as long as it is NOT called from within the while loop. The while loop basically reads from a CSV file on the local server and calls the checkAuthType function:
while read inputline
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done < configfile.csv
This is the function that sits at the top of the script (outside of any while loops):
function checkAuthType()
{
if [ $2 == linux ]; then
LINE=`ssh -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
fi
if [ $2 == unix ]; then
LINE=`ssh -n $1 'grep "PasswordAuthentication" /usr/local/etc/sshd_config | grep -v "yes\|Yes\|#"'`
fi
<more irrelevant code>
}
So, the offending line is the line that has the sudo command within the function. I can change the command to something simple like "sudo ls -l" and I will still get the "stdin is not a terminal" error. I've also tried "ssh -t -t" but to no avail. But if I call the checkAuthType function from outside of the while loop, it works fine. What is it about the while loop that changes the terminal and how do I fix it? Thank you one thousand times in advance.
Another option to try to get around the problem would be to redirect the file to a different file descriptor and force read to read from it instead.
while read inputline <&3
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done 3< configfile.csv
I am guessing you are testing with Linux. You should try adding the -n flag to your (Linux) ssh command to avoid having ssh read from stdin; since it normally reads from stdin, the while loop is feeding it your CSV.
UPDATE
You should (usually) use the -n flag when scripting with SSH, and the flag is typically needed for 'expected behavior' when using a while read-loop. It does not seem to be the main issue here, though.
There are probably other solutions to this, but you could try adding another -t flag to force pseudo-tty allocation when stdin is not a terminal:
ssh -n -t -t
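Applied to the linux branch of the function from the question, that would look something like this (untested sketch):
LINE=`ssh -n -t -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
Here -n redirects ssh's stdin from /dev/null so the loop's CSV is no longer consumed, while the doubled -t forces pseudo-tty allocation even though stdin is not a terminal.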
BroSlow's approach with a different file descriptor seems to work! Since the read command reads from fd 3 and not stdin,
ssh and hence sudo still have or get a tty/pty as stdin.
# simple test case
while read line <&3; do
sudo -k
echo "$line"
ssh -t localhost 'sudo ls -ld /'
done 3<&- 3< <(echo 1; sleep 3; echo 2; sleep 3)

I am trying to send mail containing the content of the log files, collected into logfile.txt in the same directory, but it is failing

Please find my script below:
#!/bin/bash
date=`date +%Y%m%d`
ssh root@server-ip "ls -lrth /opt/log_$date/"
ssh root@server-ip "cd /opt/log_$date/; for i in `cat *.log`;do echo $i >> /opt/log_$date/logfile.txt; done;cat /opt/log_$date/logfile.txt| mail -s \"Apache backup testing\" saranjeet.singh@*****.com"
Any help will be appreciated. Thanks
Because you use double quotes, your backticks are getting evaluated on the local host before the SSH command executes.
A much better fix in this case is to avoid them altogether, though:
ssh root#server-ip "cat /opt/log_$date/*.log |
tee /opt/log_$date/logfile.txt" |
mail -s ...
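Putting it together, a hedged sketch of the corrected script (same paths, with the server address and recipient kept masked as in the question):
#!/bin/bash
date=$(date +%Y%m%d)
ssh root@server-ip "ls -lrth /opt/log_$date/"
# $date expands locally before ssh runs, which is fine here; remotely,
# cat concatenates the logs and tee keeps a copy in logfile.txt
ssh root@server-ip "cat /opt/log_$date/*.log | tee /opt/log_$date/logfile.txt" |
    mail -s "Apache backup testing" saranjeet.singh@*****.com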
