Why does my named pipe input command line just hang when it is called? - linux

Why does my named pipe input command line just hang when it is called?
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $# vs $*
Send command to a background process
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server, and they worked the first time I set them up. Since then they do not work anymore. Every time I run ./send.sh commands, the command line hangs until I hit Ctrl+C.
It also hangs and does nothing when I directly run echo command > /tmp/srv-input
The scripts
The first script starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
pkill -f hlds
# Set up a pipe named `/tmp/srv-input`
rm /tmp/srv-input
mkfifo /tmp/srv-input
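# Keep a writer attached to the pipe so readers do not see EOF when senders exit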
cat > /tmp/srv-input &
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
cat /tmp/srv-input | ./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
echo "$#" > /tmp/srv-input
# Successful execution
exit 0
Now every time I want to send a command to my server, I just run this in the terminal:
./send.sh mp_timelimit 30
I always keep another terminal open just to watch my server console. To do that, I just use the tail command with the -f flag to follow my server's console output:
tail -f /home/user/Half-Life/my_logs.txt

You would be better off just having hlds_run read directly from the pipe instead of having cat pipe it in.
Try
./hlds_run … > my_logs.txt 2>&1 < /tmp/srv-input &
Instead of
cat /tmp/srv-input | ./hlds_run …
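Applied to start_czero_server.sh above, the last part of the script would read roughly like this (a minimal sketch; the background cat writer is kept so the FIFO never delivers EOF to the server when send.sh closes it):
# Keep one writer attached so the pipe stays open between commands
cat > /tmp/srv-input &
echo $! > /tmp/srv-input-cat-pid
# Let the server read the pipe directly; no `cat |` needed
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &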

Related

Stop exec and telnet from showing login outputs

I have a script that accesses a module locally using the code
exec 3<> /dev/tcp/127.0.0.1/5037 ; echo -e "my command here" >&3 ; cat <&3
In the telnet session, I got the lines
Remote connection from 127.0.0.1:51698 to 127.0.0.1:5037
Closing connection to 127.0.0.1:51698
These outputs also appear in plain telnet sessions (without the script).
How can I stop them? The script runs multiple times per minute and is spamming the console.
You can redirect it to a file.
This might help you:
your_command > log.txt 2>&1
This will leave your console clean while all the logs are saved in log.txt
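For example, if the snippet above lives in a script run from cron (myscript.sh here is a hypothetical name), you can redirect the whole invocation, or point it at /dev/null to discard the output entirely:
./myscript.sh > /tmp/myscript.log 2>&1
# or, to silence it completely:
./myscript.sh > /dev/null 2>&1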

bash: what to do when stdout does not exist

In a very simplified scenario, I have a script that looks like this:
mv test _test
sleep 10
echo $1
mv _test test
and if I execute it with:
ssh localhost "test.sh foo"
the test file will have an underscore in the name as long as the script is running, and when the script is finished, it will send foo back. The script SHOULD keep running even if you terminate the ssh command by pressing Ctrl+C or if you lose the connection to the server, but it doesn't (the file is not renamed back to "test"). So, I tried the following:
nohup ssh localhost "test.sh foo"
and it makes ssh immune to Ctrl+C, but a flaky connection to the server still causes trouble. After some debugging, it turns out that the script WILL actually reach the end IF THERE IS NO ECHO IN IT. And when you think about it, it makes sense: when the connection is dropped, there is no more stdout (the ssh socket) to echo to, so the echo fails, silently.
I can, of course, echo to a file and then get the file, but I would prefer something smarter, along the lines of test tty && echo $1 (but tty invoked like this always returns false). Any suggestions are greatly appreciated.
The following command does what you want:
ssh -t user@host 'nohup ~/test.sh foo > nohup.out 2>&1 & p1=$!; tail -f ~/nohup.out & wait $p1'
... test.sh is located in the user's home directory
Explanation:
1.) "ssh -t user#host " ... pretty clear ... starts remote session
2.) "nohup ~/test.sh foo > nohup.out 2>&1" ... starts the test.sh script with nohup in background
3.) "p1=$!;" ... stores the child pid of the previous command in p1
4.) "tail -f ~/nohup.out &" ... tail nohup.out in background to see the output of test.sh
5.) "wait $p1" ... waits for proccess test.sh (which pid is stored in p1) to finish
The above command works even if you interrupt it with ctrl+c.
You can use ...
ssh -t localhost "test.sh foo"
... to force a tty allocation.
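With a tty forced this way, the guard the asker was reaching for also starts working; a sketch of test.sh under that assumption ([ -t 1 ] tests whether stdout is a terminal):
mv test _test
sleep 10
# only echo while stdout is still a terminal
[ -t 1 ] && echo "$1"
mv _test test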
As st0ne suggested, tail fails but does not cause the script to terminate, as opposed to cat and echo. So there is no need for nohup, redirecting stdout to a temporary file, etc. Just plain and simple:
mv test _test
sleep 10
echo $1 | tail
mv _test test
and execute it with:
ssh localhost "test.sh foo"

Why won't this Debian Linux autostart netcat script autostart?

I placed a link to my script in rc.local to autostart it when Debian Linux boots. It starts and then stops at the while loop. It's a netcat script that listens permanently on port 4001.
echo "Start"
while read -r line
do
#some stuff to do
done < <(nc -l -p 4001)
When I start this script as root with the command ./myscript it works 100% correctly. Does nc (netcat) need root-level access, or is it something else?
EDIT:
rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/etc/samba/SQLScripts
exit 0
rc.local starts my script "SQLScripts"
SQLScripts
#! /bin/sh
# The following part always gets executed.
echo "Starting SQL Scripts" >> /var/log/SQLScriptsStart
/etc/samba/PLCCheck >> /var/log/PLCCheck &
"SQLScripts" starts "PLCCheck" (for example only one)
PLCCheck
#!/bin/bash
echo "before SLEEP" >> /var/log/PLCCheck
sleep 5
echo "after SLEEP" >> /var/log/PLCCheck
echo "vor While" >> /var/log/PLCCheck
while read -r line
do
echo "in While" >> /var/log/PLCCheck
done < <(netcat -u -l -p 6001)
In an rc script you have root-level access by default. What does "it stops at the while loop" mean? Does it quit after a while? I guess you need to run your loop in the background in order to achieve the behavior usual in autostart scripts:
echo "Starting"
( while read -r line
do
#some stuff to do
done < <(nc -l -p 4001) ) &
echo "Started with pid $( jobs -p )"
I tested approximately the same thing yesterday, and I discovered that you can bypass the system and execute your netcat script with the following cron task:
(every minute, but you can adjust that as you want.)
* * * * * /home/kali/script-netcat.sh  # working for me
# @reboot /home/kali/script-netcat.sh  # this one is blocked by the system
It seems to me that by default Debian (and maybe other Linux distributions) blocks every script that tries to execute a netcat command at boot.

Read command in bash script not waiting for user input when piped to bash?

Here is what I'm entering in Terminal:
curl --silent https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh | bash
The WordPress-installing bash script contains a "read password" command that is supposed to wait for the user to input their MySQL password. But for some reason, that doesn't happen when I run it with the "curl githubURL | bash" command. When I download the script via wget and run it via "sh installer.sh", it works fine.
What could be the cause of this? Any help is appreciated!
If you want to run a script from a remote server without saving it locally, you can try this.
#!/bin/bash
RunThis=$(lynx -dump http://127.0.0.1/example.sh)
if [ $? = 0 ] ; then
bash -c "$RunThis"
else
echo "There was a problem downloading the script"
exit 1
fi
In order to test it, I wrote an example.sh:
#!/bin/bash
# File /var/www/example.sh
echo "Example read:"
read line
echo "You typed: $line"
When I run Script.sh, the output looks like this.
$ ./Script.sh
Example read:
Hello World!
You typed: Hello World!
Unless you absolutely trust the remote scripts, I would avoid doing this without examining it before executing.
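The same idea works with curl in place of lynx, using the URL from the question. The reason this helps: the script text is passed to bash -c as an argument, so stdin stays attached to your terminal and read behaves normally:
RunThis=$(curl -fsS https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh)
bash -c "$RunThis"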
It won't stop for read:
When you pipe the script into bash, the script text itself is bash's stdin. The read built-in reads from that same stdin, so instead of waiting for the keyboard it consumes the next line of the script (or hits end-of-file) and continues immediately.
Running the downloaded file with "sh installer.sh" works because in that case stdin is still your terminal.
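If you control installer.sh, a common workaround is to make read take its input from the controlling terminal explicitly, so it waits for the keyboard even when the script itself arrives on stdin (a hypothetical excerpt from the installer):
printf "MySQL password: "
read -r password < /dev/tty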

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user#host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
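As an aside, if any of the child scripts are Python, that buffering can be disabled per invocation (the script name here is a placeholder):
python -u child_script.py
# or equivalently, via the environment:
PYTHONUNBUFFERED=1 python child_script.py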
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a std binary in old-style Unix platforms and may require a search and installation for a package to support it.
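If unbuffer isn't available, GNU coreutils' stdbuf can approximate it by forcing line-buffered output (a sketch):
stdbuf -oL -eL topscript.sh >/var/log/topscript.log 2>&1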
I hope this helps.
