Is there a program in Linux which will execute a command and send an email if it fails? [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
I have a process which is started by cron like this:
timeout 1h /app/longprocess.sh
Now I want to be notified by email if something goes wrong with it. Imagine this:
notifyme maintainer@example.org timeout 1h /app/longprocess.sh
where notifyme is a supposed command which will send an email to maintainer@example.org with the output of the command in case the command exits with a non-zero status. Is there something like this?

No, but (untested)
notifyme() {
    local recipient=$1
    shift
    local output rc
    # run the remaining arguments as a command, capturing stdout and stderr
    output=$( "$@" 2>&1 )
    rc=$?
    echo "$output"
    if [[ $rc -ne 0 ]]; then
        mailx -s "Non-zero exit status ($rc) for command: $*" "$recipient" <<< "$output"
    fi
}
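A self-contained way to try the same pattern: the sketch below replaces mailx with a plain echo (a stand-in, since a configured mailer is not assumed), so it can run anywhere.

```shell
#!/bin/sh
# Capture-and-report pattern; "mailx" replaced by echo for testing.
notifyme() {
    recipient=$1
    shift
    output=$( "$@" 2>&1 )   # run the remaining arguments as a command
    rc=$?
    echo "$output"          # pass the command's output through as usual
    if [ "$rc" -ne 0 ]; then
        echo "would mail $recipient: exit status $rc for: $*"
    fi
}

notifyme admin@example.org sh -c 'echo boom; exit 3'
```

Running it prints the command's output ("boom") followed by the notification line, since the stand-in command exits with status 3.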

Use the logical OR operator ||. When || is added to a command, whatever is listed after it will only get executed if the command listed before it returns a non-zero status (that is, if it errors out.) For example, using the code you provided:
timeout 1h /app/longprocess.sh || mail -s "job failed" maintainer@example.org
In the above command, mail -s "job failed" maintainer@example.org will only execute if timeout 1h /app/longprocess.sh is not successful.

Instead of using the || operator as suggested in another answer, I recommend using && and always pinging an external monitoring service. The monitoring service would alert you if a ping is missing. This has the benefit of also getting alerted if your cronjob didn't start up for whatever reason. One such monitor is wdt.io.
Your cronjob would look something like
timeout 1h /app/longprocess.sh && curl somemonitor.com/longprocess

Related

Got error "passwords do not match" using passwd in docker [closed]

Closed 3 years ago.
I want to create a user with a random password for FTP in Docker. The relevant section of the Dockerfile is below:
RUN useradd web
RUN export ftppass=`cat /proc/sys/kernel/random/uuid`
RUN echo -e "$ftppass\n$ftppass" | passwd web
But I got an error on the last line:
New password: Retype new password: Sorry, passwords do not match.
passwd: Authentication token manipulation error
passwd: password unchanged
Why do the passwords not match even though I am using the same variable?
update:
I found the output of echo -e "$ftppass\n$ftppass" is:
Step 9/15 : RUN echo -e "$ftppass\n$ftppass"
---> Running in ddd97df41d85
-e
Removing intermediate container ddd97df41d85
---> a64b606ea898
Step 10/15 : ...
Why doesn't echo -e work here, and where did my $ftppass go?
Resolved; the new section of the Dockerfile is:
RUN useradd web
RUN cat /proc/sys/kernel/random/uuid > /etc/vsftp-password
RUN echo "web:"`cat /etc/vsftp-password` | chpasswd
RUN echo "ftp info web:"`cat /etc/vsftp-password`
Thanks anyone and happy new year~
Instead of using passwd in a creative way, you should rather look at its manual page, which mentions chpasswd in the See Also section.
The passwd command is meant for interactive use, and it doesn't read its input like most other programs, since the input is not echoed to the screen.
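As for where $ftppass went: each RUN instruction starts a fresh shell, so a variable exported in one RUN is gone in the next (and the literal -e in the output appears because the image's /bin/sh has an echo builtin that does not recognize that option). The scope issue can be reproduced locally with two sub-shells standing in for two RUN instructions:

```shell
# Two separate shells, like two RUN lines in a Dockerfile: the
# exported variable does not survive into the second one.
sh -c 'export ftppass=$(cat /proc/sys/kernel/random/uuid)'
sh -c 'echo "ftppass is: [$ftppass]"'   # prints an empty value
```

This is why the resolved Dockerfile writes the password to a file instead: the filesystem, unlike the shell environment, persists between RUN instructions.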

Bash script check permissions to run command on remote [closed]

Closed 4 years ago.
I have a local development machine, and from my bash script I am sending commands to a remote server.
How can I write bash code to check if I am allowed to run the remote command so that I can handle the success/failure response from my script?
Alternatively, how can I capture the output so that I can parse it and detect if it succeeded. The difficulty with parsing is that the ssh command might trigger a password prompt so I can't interfere with that.
That bash script uses ssh -qt to send the remote commands.
Command
ssh user@host -qt "sudo -u www /usr/local/bin/php /mnt/data/script.php"
Output:
[sudo] password for xxx:
Sorry, user xxx is not allowed to execute '/usr/local/bin/php /mnt/data/script.php' as www on host.domain.com
Assuming that user != root above: you can't - there's no way to read /etc/sudoers or /etc/sudoers.d/* in a normally set-up Linux box if you're not root, so apart from trial & error there's nothing to be done.
As for capturing the result - that's fairly simple (parsing it, of course, is a different story, depending on what you're doing over there).
output=$( ssh user@host -qt "sudo -u www /usr/local/bin/php /mnt/data/script.php" 2>&1 )
After the execution (and after you type the password for sudo):
echo $?         # gives you the return code of what happened on the far end; on success it should be 0
echo "$output"  # gives you the captured strings to parse
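The same capture pattern can be tried without a remote host by swapping ssh for a local command that fails (the command below is a stand-in, echoing a message similar to the sudo refusal):

```shell
# A failing local command stands in for the remote ssh invocation.
output=$( sh -c 'echo "Sorry, user xxx is not allowed"; exit 1' 2>&1 )
rc=$?
echo "$rc"       # 1 -- the far end's exit status
echo "$output"   # the captured text, ready for parsing
if [ "$rc" -ne 0 ]; then
    echo "remote command failed"
fi
```

The 2>&1 inside the command substitution is what folds the error text into $output; without it, only stdout would be captured.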

Keep process running on remote machine after exiting ssh session inside bash script [closed]

Closed 5 years ago.
I have a bash script in which I login to a remote machine over ssh and run iperf and then logout and do other things locally. I want iperf to keep running after the bash script logs out. I have tried nohup, disown and setsid, but they don't seem to work when I use them inside the bash script. I have also tried running iperf inside another script, that didn't work either.
Here's the part of the script with nohup example:
ssh root@10.101.10.35 &>/dev/null & << EOF
nohup iperf -s -B 192.168.99.1 &>/dev/null &
EOF
You need to redirect stdin, stdout and stderr to somewhere else as opposed to your terminal like so:
ssh root@10.101.10.35 'iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined.log 2>&1 &'
stdin is taken from /dev/null (nothing is entered)
stdout and stderr goes to /tmp/iperf_combined.log
The process will run on the remote machine until you manually kill it or until it exits on its own.
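The detach pattern can be sketched locally, with a plain sh standing in for the ssh session and a temporary file for the iperf log:

```shell
# The inner shell exits immediately, but the backgrounded job keeps
# running and writing to its log, because its stdin/stdout/stderr no
# longer point at the parent's terminal.
log=$(mktemp)
sh -c "(sleep 0.2; echo done) < /dev/null > $log 2>&1 &"
sleep 0.5        # the parent shell is long gone by now
cat "$log"       # prints: done
rm -f "$log"
```

The key point is the same as with ssh: once all three standard streams are redirected and the job is backgrounded, nothing ties it to the departing session.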
Edit (as a reply to the poster's comment):
If you want to run multiple commands in the same ssh session, you may use:
ssh -T root@10.101.10.35 << EOF
iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined_1.log 2>&1 &
iperf -s -B random_ip2 < /dev/null > /tmp/iperf_combined_2.log 2>&1 &
EOF
As per ssh man page:
-T Disable pseudo-tty allocation.
A detailed explanation of pseudo-ttys is available elsewhere.

How to kill a certain process running more than 36 hours and containing a certain phrase in its command? [closed]

Closed 6 years ago.
On Linux (CentOS 6), I want to kill a process containing "pkgacc" in its command (so no exact command, just a partial match) if it has been running for more than 36 hours.
There is a related question: How do you kill all Linux processes that are older than a certain age? but none of the solutions provided work for me.
When executing:
if [[ "$(uname)" = "Linux" ]];then killall --older-than 1h someprocessname;fi
it just returns the killall usage page; in its manual page there is no mention of an "--older-than" switch.
It is infinitely easier to invoke a program in a wrapper like timeout from GNU coreutils than to go hunting for them after the fact. In particular because timeout owns its process, there is no ambiguity that it kills the right process. Thus
timeout 36h pkgaccess --pkg_option --another_option package_name
where I made up the names and options for the pkgaccess command since you didn't give them. This process will run no longer than 36 hours.
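As a quick check of the behaviour, GNU timeout exits with status 124 when the limit is hit, so success and expiry are easy to tell apart (a 1-second limit stands in for the 36-hour one):

```shell
timeout 1s sleep 5   # killed after 1 second
echo $?              # prints 124: the time limit was reached
timeout 5s true
echo $?              # prints 0: the command finished in time
```

A cron wrapper can therefore test for 124 specifically if it needs to distinguish "ran too long" from an ordinary failure of the wrapped command.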
I think you could do something like
ps -eo pid,cmd,etime
and then parse the output with grep, searching for your process, something like this:
ps -eo pid,cmd,etime | grep pkgacc
You will get one or more matching lines, and the last column of the output is the elapsed running time of the process, so with a little more bash effort
you can check whether that time is greater than 36 hours.
#!/bin/bash
# etimes reports elapsed time in plain seconds, which avoids parsing
# etime's [[dd-]hh:]mm:ss format (stripping the colons breaks once a
# process passes 24 hours and etime gains a "dd-" prefix).
# The [p] in the pattern stops grep from matching its own command line.
FOO=$(ps -eo pid,etimes,cmd | grep -m 1 '[p]kgacc' | awk '{ print $1" "$2 }')
IFS=' ' read -r -a array <<< "$FOO"
if [ -n "$FOO" ] && [ "${array[1]}" -gt "129600" ]; then   # 129600 s = 36 h
    echo "kill the process: ${array[0]}"
else
    echo "process was not found or running for less than 36 hours"
fi
I think that could solve part of your problem. Note that I do not explicitly kill the process, but just indicate which one it is; you could build on the idea from there.

I keep on getting error in rc.local [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I have the following script added to my rc.local file:
#wait until the internet connection is ready
wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null
while [ $? -ne 0 ]; do
printf "Waiting for internet connection"
sleep 10;
wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null;
done
If I am connected to the internet everything works fine, but if I am offline I get this error:
[FAIL] startpar: service(s) returned failure: rc.local ... failed!
What is wrong here and how could I fix it? I want rc.local to wait for an internet connection before it continues to the next script.
As it appears to have solved your problem, this is how I would suggest writing your code:
while ! wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null; do
    echo 'Waiting for internet connection'
    sleep 10
done
Remember that [ is not syntax, it is a command whose exit status determines whether your while loop continues or not. You should just test the exit status of the command you are trying to run directly.
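The same pattern works with any command whose exit status signals readiness; in the sketch below, a file-existence check stands in for wget, with a counter simulating the connection coming up on the third try:

```shell
# `while ! cmd` loops until cmd succeeds; no [ $? -ne 0 ] test needed.
flag=$(mktemp -u)        # a path that does not exist yet
n=0
while ! test -e "$flag"; do
    n=$((n + 1))
    if [ "$n" -ge 3 ]; then
        touch "$flag"    # the "connection" appears on attempt 3
    fi
done
echo "succeeded after $n attempts"   # prints: succeeded after 3 attempts
rm -f "$flag"
```

Because the loop tests the command directly, there is no window between running the command and inspecting a stale $?.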
As for why this has solved your problem, to be honest I'm not sure. The error message that you have shown isn't really specific enough to indicate where the problem lies and I can't see anything actually wrong with what you did before. Perhaps you should try adding set -xv to your rc.local file to enable debugging.
