How dangerous is this bash script? - linux

WARNING: Dangerous script. Do not run from your command line!
Saw this in a company joke email. Can someone explain to me why this bash script is more dangerous than a normal 'rm -rf' command?:
nohup cd /; rm -rf * > /dev/null 2>&1 &
Particularly, why is nohup used and what are the elements at the end for?

You can try something less "dangerous":
nohup cd /; find * >/dev/null 2>&1 &
I'm getting this:
nohup: ignoring input and appending output to `nohup.out'
nohup: cannot run command `cd': No such file or directory
[2] 16668
So, the nohup part does nothing; it only triggers an error. The second part (of the original script) tries to remove everything in your current directory, and it cannot be stopped with Ctrl-C because it runs in the background. All of its output is redirected to /dev/null, so you do not see any 'Permission denied' messages as it progresses.

2>&1 takes stderr (file handle 2) and redirects to stdout (file handle 1). & by itself places the rm command in the background. nohup allows a job to keep running even after the person who started it logs out.
In other words, this command does its best to wipe out the entire file system, even if the user ragequits their terminal/shell.
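A harmless way to see why the order of the redirections matters, using ls on a path that should not exist (the path is just a placeholder):
ls /no/such/path > /dev/null 2>&1   # stdout goes to /dev/null first, then stderr follows it: silence
ls /no/such/path 2>&1 > /dev/null   # stderr is duplicated onto the terminal's stdout before stdout is redirected, so the error still prints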

The joke is kind of broken; it obviously has not been tested. The author presumably meant
nohup sh -c "cd / ; rm -rf *" > /dev/null 2>&1 &
or
nohup rm -rf / > /dev/null 2>&1 &
otherwise the nohup cd /; part is treated as a separate command by the shell, and the second command just spawns rm -rf *, which recursively removes your current directory (minus files whose names start with a dot).

nohup means the command will ignore the hangup signal (SIGHUP), so it keeps running even if the user is no longer signed in.
cd / moves the user to the root directory.
rm -rf * removes all files recursively (-r, traversing all directories) and forcefully (-f, without prompting).
The piece on the end redirects all output to nowhere. The net effect is to silently wipe out as much of the filesystem as your permissions allow.

nohup [..] & makes it run in the background even after the user has logged out (making it harder to stop, I suppose)
2>&1 redirects stderr to stdout
> /dev/null discards anything coming from stdout
The command would basically appear to do nothing, as your filesystem slowly gets destroyed in the background.
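To see the whole structure safely, here is a harmless variant that only lists the root directory instead of deleting it (the output file name is just an example):
nohup sh -c 'cd / && ls' > /tmp/listing.txt 2>&1 &
# nohup       : ignore SIGHUP, so the job survives the terminal being closed
# sh -c '...' : runs cd and ls as one command, so the cd actually takes effect
# > /tmp/...  : stdout goes to a file instead of the screen
# 2>&1        : stderr follows stdout into the same file
# &           : the whole thing runs in the background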

Related

Running a process with the TTY detached

I'd like to run a linux console command from a terminal, preventing it from accessing the TTY by itself (which will, for example, happen often when the console command tries to request a password from the user - this should just fail). The closest I get to a solution is using this wrapper:
temp=`mktemp -d`
echo "$@" > "$temp/run.sh"                  # write the wrapped command line into a throwaway script
mkfifo "$temp/out" "$temp/err"
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &   # setsid detaches it from the controlling TTY
cat "$temp/err" 1>&2 &                      # forward the command's stderr to our stderr
cat "$temp/out"                             # and its stdout to our stdout
rm -f "$temp/out" "$temp/err" "$temp/run.sh"
rmdir "$temp"
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
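A minimal usage sketch, assuming the wrapper above is saved as notty.sh (the filename and the ssh example are assumptions):
sh notty.sh ssh example.com true
# without a TTY, ssh cannot prompt interactively for a password,
# so it fails straight away instead of blocking on user input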

Start lots of background jobs but keep their logs separated

I have little experience with shell commands in Unix.
So far, I have checked Stack Overflow and know how to run simple shell scripts in order by:
using echo
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&
sh dosomthing1.sh && sh dosomthing2.sh
But none of these approaches solves my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
#get the input env parameter
env=$1
#goto application root directory
cd /applicationDir
#to compile
mvn install -Dmaven.test.skip=true
#to start with parameter env
nohup java -jar -Dspring.profiles.active=$env myApplication.jar &
#to tail the log
tail -20f myApplication.log
I have too many different applications with the same startup scripts and it is hard to start them one by one. I need to start them with one command.
All the shell scripts are expected to be processed one by one in order. If there are any exceptions, skip and run the next one.
And when I tried to write a script like this:
sh start1.sh
wait
echo "application 1 was start up"
sh start2.sh
wait
echo "application 2 was start up"
...
sh startxxx.sh
wait
echo "application xxx was start up"
Though all the child shell scripts ran in order as I expected, and the output information made it look like everything was working, in fact only the last application stays started; all the previous "nohup xxxx &" commands get shut down.
Also I have tried to write like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
Although the result was what I wanted, and all the applications started well, the console output is unreadable because the scripts run in parallel. It gives a good result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!
When you have a script with commands, you can do chmod +x start.sh. Now the script can be started with ./start.sh. You avoid an additional sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. This is very confusing for a background process. Start the scripts in the background and view the logging from the console. I do hope that each script uses a different myApplication.jar and myApplication.log.
When the logging in the logfile is duplicated on stdout (your command-line window), you can throw that logging away.
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed when you log out before the scripts have terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
Edit:
The OP wants to start the programs in a fixed order.
Starting the scripts exactly one after another, in order, should be possible by calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished its init work, you need to check for that. Use one script that calls all the scripts and add some control statements, like:
nohup java something &
while ! grep -q "Started" myApplication.log; do
    sleep 1
done
When the Java program has an error, the while loop will wait forever, so replace it with some maximum retry count:
for ((retry=0; retry<100; retry++)); do
    grep -q "Started" myApplication.log && break
    sleep 1
done
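Putting it together, a minimal sketch of a single driver script, assuming each startN.sh writes everything it prints (including the tailed application log) to its own log file and that the application logs "Started" when it is up; the script and log names here are placeholders:
#!/bin/sh
# start_all.sh (hypothetical): start each application in order, one log per script
for app in start1 start2 startxxx; do
    nohup "./$app.sh" > "$app.log" 2>&1 &       # each script gets its own log file
    for retry in $(seq 1 100); do               # wait up to ~100 seconds for this app
        grep -q "Started" "$app.log" && break
        sleep 1
    done
    echo "application $app was started (or timed out)"
done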
https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler; you can use it to start your programs at fixed times, and therefore in a fixed order. If the man page is difficult to understand, look for tutorials on it; plenty exist.
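A minimal crontab sketch along those lines, staggering the start times so the applications come up in a fixed order (the paths and times are assumptions):
# crontab -e
0 6 * * * /applicationDir/start1.sh > /applicationDir/start1.log 2>&1
5 6 * * * /applicationDir/start2.sh > /applicationDir/start2.log 2>&1
# cron jobs run without a controlling terminal, so nohup is not needed here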

pkill with -f flag in crontab not running command after semi colon

I wanted to kill a process and remove a flag file indicating that the process is running. The cron entry:
00 22 * * 1-5 pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
This works perfectly when I run it in a terminal, but in crontab the rm is not run. All I can think of is that the whole line after the -f flag is being taken as the argument for pkill.
Any reason why this is happening?
Keeping them as separate cron entries works. Also, pkill without the -f flag runs (though it doesn't kill the process, since I need the pattern to be matched against the whole command line).
Ran into this problem today and just wanted to post a working example for those who run into this:
pkill -f ^'python3 /Scripts/script.py' > /dev/null 2>&1 ; python3 /Scripts/script.py > /tmp/script.log 2>&1
This runs pkill and searches the whole command (-f) that starts with (regex ^) python3 /Scripts/script.py. As such, it'll never kill itself because it does not start with that command (it starts with pkill).
The short answer: it simply killed itself!
My answer explained:
If you let a command be started by crond, it is executed in a subshell. Most probably the line you'll find in ps or htop will look like this:
/bin/sh -c pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
(details may vary, e.g. you might have bash instead of sh)
The point is that the whole line gets one PID (process ID) and is one of the command lines that pgrep/pkill parses when the -f parameter is used. As stated in the man page:
-f, --full
The pattern is normally only matched against the process name. When -f is set, the full command line is used.
Now your pkill is looking for any command line in your running process list that somehow contains the expression 'script.sh', and it will find that line at some point. As a result, it gets that PID and terminates it. Unfortunately, the very same PID holds the rest of your command chain, which just got killed by itself.
So you basically wrote a 'suicide line of commands' ;)
BTW: I just did the same thing today and that's how I found your question.
Hope this answer helps, even if it comes a little late.
Kind regards
3.141592's and nanananananananananananaBATMAN's answers are correct.
I worked around this problem like this.
00 22 * * 1-5 pkill -f script.[s][h] >log 2>&1 ; rm lock >log 2>&1
This works because the literal string script.[s][h] (which is what appears in the cron shell's command line) is not matched by the regex script.[s][h] (which matches script.sh, not script.[s][h]).
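A quick way to see the difference, assuming the procps version of pgrep (the patterns are the ones from the crontab lines above):
pgrep -af 'script.sh'        # -a lists the full command line of every match; run from inside the
                             # cron job, this would list the /bin/sh -c ... line itself
pgrep -af 'script.[s][h]'    # the bracketed pattern does not match its own command line,
                             # but still matches a running script.sh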

How to run a script in background (linux openwrt)?

I have this script:
#!/bin/sh
while [ true ] ; do
    urlfile=$( ls /root/wget/wget-download-link.txt | head -n 1 )
    dir=$( cat /root/wget/wget-dir.txt )
    if [ "$urlfile" = "" ] ; then
        sleep 30
        continue
    fi
    url=$( head -n 1 $urlfile )
    if [ "$url" = "" ] ; then
        mv $urlfile $urlfile.invalid
        continue
    fi
    mv $urlfile $urlfile.busy
    wget -b $url -P $dir -o /www/wget.log -c -t 100 -nc
    mv $urlfile.busy $urlfile.done
done
The script basically checks wget-download-link.txt for new URLs every 30 seconds, and if there is a new URL it downloads it with wget. The problem is that when I try to run this script from PuTTY like this:
/root/wget/wget_download.sh --daemon
it still runs in the foreground; I can still see the terminal output. How do I make it run in the background?
In OpenWRT there is neither nohup nor screen available by default, so a solution with only builtin commands would be to start a subshell with brackets and put that one in the background with &:
(/root/wget/wget_download.sh >/dev/null 2>&1 )&
You can test this structure easily on your desktop, for example with
(notify-send one && sleep 15 && notify-send two)&
... and then close your console before those 15 seconds are over; you will see that the commands in the brackets continue to execute after the console is closed.
The following command will also work:
((/root/wget/wget_download.sh)&)&
This way you don't have to install the nohup command in the tight memory space of the router running OpenWrt.
I found this somewhere several years ago. It works.
The & at the end of the script should be enough. If you still see output from the script, it means that stdout and/or stderr is not closed, or not redirected to /dev/null.
You can use this answer:
How to redirect all output to /dev/null
I am using OpenWrt Merlin and the only way to get it working was using the cru cron manager [1]. nohup and screen are not available as solutions.
cru a pinggw "0 * * * * /bin/ping -c 10 -q 192.168.2.254"
works like a charm.
[1] https://www.cyberciti.biz/faq/how-to-add-cron-job-on-asuswrt-merlin-wifi-router/
https://openwrt.org/packages/pkgdata/coreutils-nohup
opkg update
opkg install coreutils-nohup
nohup yourscript.sh &
You can use nohup.
nohup yourscript.sh
or
nohup yourscript.sh &
Your script will keep running even if you close your PuTTY session, and all the output will be written to a text file (nohup.out) in the same directory.
nohup is often used in combination with the nice command to run processes at a lower priority.
nohup nice yourscript.sh &
See: http://en.wikipedia.org/wiki/Nohup
For BusyBox on an OpenWrt Merlin system, I found a better solution that combines the cru and date commands:
cru a YOUR_UNIQUE_CRON_NAME "`date -D '%s' +'%M %H %d %m *' -d $(( \`date +%s\`+2*60 ))` YOUR_CMD_HERE"
which adds a cron job that runs 2 minutes later, and only runs once.
Inspired by PlagTag's idea.
Another approach that can be tried:
ssh admin@192.168.1.1 "/jffs/your_script.sh &"
Simple, and without any extra programs like nohup or screen...
(BTW: worked on Asus-Merlin firmware)
Try this:
nohup /root/wget/wget_download.sh >/dev/null 2>&1 &
It will go to the background, so when you close your PuTTY session it will still be running, and it won't send messages to the terminal.
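A quick way to confirm that the detached script is still alive after you reconnect, assuming the BusyBox ps applet on the router (the script path is the one from the question):
ps w | grep '[w]get_download.sh'
# BusyBox ps has no -ef; "w" gives wide output so the full command line is visible,
# and the [w] bracket trick keeps the grep process itself out of the results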

cd doesn't work when redirecting output?

Here's a puzzler: can anyone explain why cd fails when the output is redirected to a pipe?
E.g.:
james#machine:~$ cd /tmp # fine, no problem
james#machine:~$ cd /tmp | grep 'foo' # doesn't work
james#machine:~$ cd /tmp | tee -a output.log # doesn't work
james#machine:~$ cd /tmp >out.log # does work
Verified on OSX, Ubuntu and RHEL.
Any ideas?
EDIT: Does it seem strange that I'm piping the output of cd? The reason is that it comes from a function that wraps arbitrary shell commands, adds log entries, and deals with their output.
When you pipe the output, the shell spawns a child process (a subshell) for the pipeline, changes the directory in that child, and the child exits; the parent shell's working directory never changes. When you don't pipe it, no new process is spawned, because cd is a built-in shell command that runs in the current shell. A plain redirection such as >out.log does not create a subshell either, which is why that case works.
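A minimal sketch of the subshell effect (the directories are just examples):
cd /
cd /tmp | cat      # the cd runs in a subshell created for the pipeline
pwd                # prints /, the parent shell never changed directory
cd /tmp > out.log  # a plain redirection runs in the current shell
pwd                # prints /tmp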
