I'm writing a little script to use the webcam on my laptop and then email the photo across to me. The ffmpeg command has to have an exit code after it for it to work, so with this exit in place the mail function never gets called. What am I doing wrong?
#!/bin/bash
MAIL_ADDR=user@example.com
ts=`date +%s`
list=$(ls | tail -n 1)
function mcheese(){
mkdir /tmp/cheese
cd /tmp/cheese
echo -e "Cheese " | mutt -s "$TS Cheese" $MAIL_ADDR -a $list
}
function cheese(){
ffmpeg -f video4linux2 -s vga -i /dev/video0 -vframes 3 /tmp/cheese/vid-$ts.%01d.jpg
exit 0
}
cheese
mcheese
You set up list in one directory, then change directory and use it.
This is unlikely to work.
Run the script with bash -x to work out where it is actually failing.
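For reference, a minimal corrected sketch (untested; it assumes a mutt build that supports the -a file -- address form, and reuses the paths and address from the question):

#!/bin/bash
MAIL_ADDR=user@example.com
ts=$(date +%s)

cheese() {
    mkdir -p /tmp/cheese
    # capture three frames; no bare 'exit' here, so the caller keeps running
    ffmpeg -f video4linux2 -s vga -i /dev/video0 -vframes 3 "/tmp/cheese/vid-$ts.%01d.jpg"
}

mcheese() {
    cd /tmp/cheese || exit 1
    # pick the newest capture *after* it exists, in the right directory
    list=$(ls -t | head -n 1)
    echo "Cheese" | mutt -s "$ts Cheese" -a "$list" -- "$MAIL_ADDR"
}

cheese && mcheese   # only mail if the capture succeeded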
I'm a bash noob, and I am trying to set up a sort of "hot reload" functionality for a project I'm working on using inotifywait. Ubuntu 20.04 if that matters.
Here is what I hoped would have worked:
inotifywait -m -r ../.. -e modify,create,delete |
while read line; do
custom_command
done
I'm having two problems:
Issue #1 is that custom_command takes some time to run, so if I make more changes to the directory in the meantime, the events appear to queue up further runs of custom_command, when really I just want to keep the most recent one and drop the others.
Issue #2 is that I'm getting some sort of "double output." For example, if I run bash auto-exec.sh and auto-exec.sh looks like this:
inotifywait -m -r . -q -e modify,create,delete
Then each time a change registers, I get this as output (the doubling is not a mistake -- I get two identical lines each time there is a modification):
./ MODIFY auto-exec-testfile.txt
./ MODIFY auto-exec-testfile.txt
I should note I've tried making changes both with Visual Studio Code and gedit, with the same results.
If I modify the bash file like so:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "$line"
echo "..."
done
I get the following output each time there is a change:
./ MODIFY auto-exec-testfile.txt
...
./ MODIFY auto-exec-testfile.txt
...
If I modify bash_test.sh to the following:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "help me..."
done
Then I get the following each time a change is made:
help me...
help me...
What happened to the ./ MODIFY ... line?? Presumably there's something I don't understand about bash, stdout, or similar related concepts here?
And finally, if I change the .sh file to the following:
inotifywait -m -r . -q -q -e modify,create,delete |
while read _; do
echo "help me..."
done
Then I get no output at all. This one I think I understand, because the -q -q means that inotifywait is in "super silent" mode, so there is no log output and therefore nothing to trigger the while loop.
What I'd love to do is just trigger the code once when something changes, and drop all but the most recent execution. I'm not sure doing this using a while is entirely necessary, but I tried inotifywait -m -r . -q -q -e modify,create,delete | echo "help me..", and the script printed "help me..." once at startup, then exited on modification.
Assistance very much appreciated.
EDIT - 2021-Mar-23
I removed -m and create from the inotifywait line, and it appears to work as expected, except that it doesn't stay "up" in monitor mode. So this at least only gives me one entry from inotifywait:
inotifywait -r .. -q -e modify,delete |
while read line1; do
echo ${line1}
done
Related:
inotifywait - pause monitoring while executing command
https://unix.stackexchange.com/questions/140679/using-inotify-to-monitor-a-directory-but-not-working-100
inotifywait not performing the while loop in bash script
while inotifywait -e close_write,delete .; do   # without -m, inotifywait exits after one event, so each pass reacts to the latest change
pkill custom_command                            # drop any run still in flight
custom_command &
done
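If bursts of events still stack up runs, a hedged variation is to add a short settle delay before restarting (custom_command is the placeholder from the question, assumed to be a process pkill -x can match by name; -r added to match the recursive watching the question wants):

while inotifywait -q -r -e close_write,delete .; do
sleep 0.5                  # let a burst of related events settle
pkill -x custom_command    # kill any run still in flight
custom_command &           # start a fresh run on the latest state
done

Events that arrive while custom_command is running are simply missed, which here is the desired behaviour: only the most recent change triggers a run.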
I'm creating a sh script on my Raspberry Pi for a timelapse.
I've included in the script 4 commands that take place in succession, each command tested and working. Now my question is: how do I come back to the first command after the last one, indefinitely?
#!/bin/bash
sudo raspistill -w 1024 -h 768 -o /home/pi/timelapse/a%04d.jpg -t 600000 -tl 30000
sudo kill $(ps ax | grep 'timelapse' | awk '{print $1}')
sudo avconv -r 10 -i /home/pi/timelapse/a%04d.jpg -r 10 -vcodec libx264 -crf 20 -g 15 timelaps$
sudo rm /home/pi/timelapse/*.jpg
So after sudo rm /home/pi/timelapse/*.jpg I want to go back to the first command.
Would you have any idea?
Thanks.
You can use a loop:
#!/bin/sh
while true; do
...
done
or, re-invoke the script:
#!/bin/sh
...
exec "$0" "$@"
Frankly, either one of these seems risky in your case, since you're doing no error checking at all and you run the risk of entering a relatively fast loop of continuously failing commands. At the very least, you should pause for a bit on each iteration by using while sleep 1; do instead of while true; do.
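For instance, a sketch of the loop form with the question's commands filled in (untested; the output name timelapse-$(date +%s).mp4 is a stand-in for the truncated one in the question, the kill line is omitted since raspistill already exits after -t, and || break supplies the minimal error checking mentioned above):

#!/bin/sh
while sleep 1; do
    sudo raspistill -w 1024 -h 768 -o /home/pi/timelapse/a%04d.jpg -t 600000 -tl 30000 || break
    sudo avconv -r 10 -i /home/pi/timelapse/a%04d.jpg -r 10 -vcodec libx264 -crf 20 -g 15 "timelapse-$(date +%s).mp4" || break
    sudo rm /home/pi/timelapse/*.jpg
done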
So I am running into a problem with Unix scripts that use curl to make REST calls. I have one script that runs two other scripts inside of it.
cat example.sh
FILE="file1.txt"
RECIP="wilfred@blamagam.com"
rm -f $FILE
./script1.sh > $FILE
mail -s "subject" $RECIP < $FILE
RECIP="bob@blamagam.com"
rm -f $FILE
./script2.sh > $FILE
mail -s "subject" $RECIP < $FILE
exit 0
Each script makes REST calls to the same service. It is my understanding that script1.sh should completely finish before script2.sh is run; however, that is not the case. In the logs for the REST service I see a call from the second script arrive while the first one is still executing. The second script then fails because of this (it does not get any data returned).
I am modifying this process, so I am not the one who originally wrote it. I am not seeing any forked processes or background processes at all, and I have been banging my head against the wall.
I do know that script2.sh works. Whenever script1.sh takes under a minute, script2.sh works just fine, but more often than not script1.sh takes over a minute, causing the second script to fail.
This is run by a cron job, and the contents of the files are mailed out, so I can't just default to running them manually. Any suggestions for what to look into would be much appreciated!
EDIT: Here is a high-level pseudocode example
script1.sh
ITEMS=`/usr/bin/curl -m 10 -k -u userName:passWord -L https://server/rest-service/rest?where=clause=value;clause2=value2&sel=field 2>/dev/null | sed s/<\/\?Attribute[^>]*>/\n/g | grep -v '^<' | grep -v '^$' | sed 's/ //g'`
echo "\n Subject for these metrics"
echo "$ITEMS"
Both scripts have lots of entries like this. There are two or three for loops, but they are simple and I do not see any background processes being called. It's a large script, so I can only provide a snippet. Could the REST call into the pipes be causing an issue?
Edit:
Just tested this on my system and it seems to work.
cat example.sh
FILE="file1.txt"
RECIP="wilfred#blamagam.com"
rm -f "$FILE"
(./script1.sh > "$FILE") &
procscript1=$!
wait "$procscript1"
mail -s "subject" "$RECIP" < "$FILE"
RECIP="bob#blamagam.com"
rm -f "$FILE"
(./script2.sh > "$FILE") &
procscript2=$!
wait "$procscript2"
mail -s "subject" "$RECIP" < "$FILE"
exit 0
Put the script executions in the background with the &.
Get the process IDs for each script execution.
Use the wait command to block until the execution is done.
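The pattern in isolation, for clarity (long_task here is a hypothetical stand-in for either script):

long_task > out.txt &   # run in the background; $! holds the PID of that job
pid=$!
wait "$pid"             # block until that specific process has exited
echo "long_task finished with status $?"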
I have an old Syno NAS and wish to use the shred command to wipe the disks inside. The idea is to let the command run to completion on the box itself, without the need of a computer.
So far I have managed...
1) to get the right parameters for shred
* runs in the background using the &
2) to get that command to output the progress (-v option) to a file shred.txt
* to see from the file what the progress is
shred -v -f -z -n 2 /dev/hdd 2>&1 | tee /volume1/backup/shred.txt &
3) to ssh tunnel the command so I can turn off my laptop while it's running
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd > /dev/null 2>&1 &'"
The problem is that I can't combine 2) and 3).
I tried to combine them like this, but the resulting file remained empty:
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd 2>&1 | tee /volume1/backup/shred.txt > /dev/null &'"
It might be a case of me being a noob, but I can't figure out how to get this done.
Any suggestions?
Thanks, Vince
The sh and tee commands are not needed here (note the redirection order: the file must be opened before stderr is duplicated, or shred's progress output never reaches it):
ssh -n root@host 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd >/volume1/backup/shred.txt 2>&1 &' >/dev/null
The final >/dev/null is optional; it just discards any greeting messages the remote host prints.
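With the progress going to a file on the NAS, you can then check on it later from any machine, e.g.:

ssh root@host 'tail -f /volume1/backup/shred.txt'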
Tried the following command (based on Grzegorz's suggestion) and included the opening date stamp and the previously mentioned, stupidly forgotten, verbose switch. Last version of the command string:
ssh -n root@host 'date > /volume1/backup/shred_sda.txt; nohup /opt/bin/shred -v -f -z -n 4 /dev/sda 2>&1 >> /volume1/backup/shred_sda.txt # >/dev/null'
The last thing to figure out is how to include the date stamp when the shred command has completed.
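A hedged sketch of one way to do that (untested on the Syno): keep the closing date inside the nohup'd shell, so it only runs after shred exits:

ssh -n root@host 'nohup sh -c "date > /volume1/backup/shred_sda.txt; /opt/bin/shred -v -f -z -n 4 /dev/sda >> /volume1/backup/shred_sda.txt 2>&1; date >> /volume1/backup/shred_sda.txt" >/dev/null 2>&1 &'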
I'm a newbie to Linux scripting and am having an issue with a script that I got from the web and am trying to modify.
Here is the script
#!/bin/bash
if (($# ==0))
then
echo "Usage: flvto3gp [flv files] ..."
exit
fi
while (($# !=0 ))
do
ffmpeg -ss 00:00:10 -t 1 -s 400x300 -i $1 -f mjpeg /home/zavids/rawvids/thumbs/$1.jpg
shift
done
echo "Finished"
echo "\"fakap all those nonsense!\""
echo ""
So I'm grabbing a screenshot from a video and saving it as a JPEG. The problem is that the extension of the video file is retained, so the finished file is video.flv.jpg (for example). How can I get rid of that video extension?
Change this line
ffmpeg -ss 00:00:10 -t 1 -s 400x300 -i $1 -f mjpeg /home/zavids/rawvids/thumbs/$1.jpg
to this
ffmpeg -ss 00:00:10 -t 1 -s 400x300 -i $1 -f mjpeg /home/zavids/rawvids/thumbs/${1%.*}.jpg
That strips the extension from the input file before using it to create the name of the output file, using bash parameter expansion.
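A quick demonstration of that expansion (the argument value is made up for illustration):

set -- video.flv          # pretend the script was called with video.flv as $1
echo "${1%.*}"            # prints: video
echo "/home/zavids/rawvids/thumbs/${1%.*}.jpg"   # .../video.jpg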
You can try to use this:
${string%substring}
It deletes the shortest match of $substring from the back of $string.
For your case:
${1%.flv}
This will remove .flv from the end of your first argument.
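The difference from ${1%.*} above is that this only strips a literal .flv suffix (filenames here are made up):

f=clip.flv; echo "${f%.flv}"   # prints: clip
g=clip.mp4; echo "${g%.flv}"   # prints: clip.mp4 (no match, nothing removed)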
You can find a lot more detail here, too: http://tldp.org/LDP/abs/html/string-manipulation.html