How to run a script multiple times, waiting after each run until the device is ready to execute again? - linux

I have this bash script:
#!/bin/bash
rm /etc/stress.txt
cat /dev/smd10 | tee /etc/stress.txt &
for ((i=0; i< 1000; i++))
do
echo -e "\nRun number: $i\n"
# wait until the module restarts and is ready for the next restart
dmesg | grep ERROR
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
echo -e "\nADB device booted successfully\n"
done
I want to restart the module 1000 times using this script.
The module is like an Android device with Linux inside it, but my host machine runs Windows.
AT+CFUN=1,1 - reset
After I push the script, I need a command that waits for the module to come back up after every restart, so that the script can run 1000 times. Then I pull the .txt file and save all the output.
Which command should I use?
I tried commands like wait, sleep, watch, adb wait-for-device, ps aux | grep ...; nothing works.
Can someone help me with this?

I found the solution. This is how my script actually looks:
#!/bin/bash
cat /dev/smd10 &
TEST=$(cat /etc/output.txt)
RESTART_TIMES=1000
if [[ $TEST != $RESTART_TIMES ]]
then
echo $((TEST+1)) > /etc/output.txt
dmesg
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
fi
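The counter logic the script relies on can be exercised locally before pushing it to the device. A minimal sketch, assuming a temporary file in place of /etc/output.txt and a small hypothetical limit:

```shell
#!/bin/bash
# Sketch of the restart-counter pattern: read the count from a file,
# increment it until a limit is reached, then stop touching it.
counter_file=$(mktemp)        # stand-in for /etc/output.txt
echo 0 > "$counter_file"
limit=3                       # stand-in for RESTART_TIMES=1000

for _ in 1 2 3 4 5; do        # five simulated "boots"
    count=$(cat "$counter_file")
    if [ "$count" -lt "$limit" ]; then
        echo $((count + 1)) > "$counter_file"
        # a real run would issue the AT+CFUN=1,1 reset here
    fi
done

final=$(cat "$counter_file")
echo "$final"                 # prints 3: the counter stops at the limit
rm -f "$counter_file"
```

Note that the string comparison [[ $TEST != $RESTART_TIMES ]] only works because the counter increments by exactly 1 each boot; a numeric -lt test as above is the safer form.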
These are the steps that you need to do:
adb push /path/to/your/script /etc/init.d
cd /etc
echo 0 > output.txt - create the output file and write 0 inside it
cd init.d
ls - you should see rc5.d
cd .. then cd rc5.d - go inside rc5.d
ln -s ../init.d/yourscript.sh S99yourscript.sh
ls - you should see S99yourscript.sh
cd .. - return to the init.d directory
chmod +x yourscript.sh - make your script executable
./yourscript.sh

Related

Need help using the pipe command in terminal (Linux / shell file)

Doing an assignment for class that needs to be done using commands in the terminal. I have a shell file (temp1.sh) created in the home directory, and a shell file (temp2.sh) created in a folder (randomFolder). When I run temp2.sh I need to display the number of characters in temp1.sh. I need to use a pipe to accomplish this.
So I figure I need to change directory to the home directory, then open the file temp1.sh and use the wc -c command to display the characters. I have been trying many different ways to execute this task and somehow can't get it to work. Any help would be appreciated. Without using a pipe I can get it to work, but I can't seem to write out this command line properly while using a pipe.
What I have done so far:
cd ~
touch temp1.sh
chmod 755 temp1.sh
echo 'This file has other commands that are not relevant and work' >> temp1.sh
mkdir randomFolder
cd randomFolder
touch temp2.sh
chmod 755 temp2.sh
echo cd ~ | wc -c temp1.sh >> temp2.sh
This last line tells me there is no such file "temp1.sh" after I run it. If I change to the home directory and then type wc -c temp1.sh, I get the desired output. I want this output to happen when I run temp2.sh.
Example without using pipe command:
echo wc -c ~/temp1.sh >> temp2.sh
This gives me the desired output when I run temp2.sh. However I need to accomplish this while using the pipe command.
Your code is close to working. The first part is fine:
cd ~
touch temp1.sh
chmod 755 temp1.sh
echo 'This file has other commands that are not relevant and work' >> temp1.sh
mkdir randomFolder
cd randomFolder
touch temp2.sh
chmod 755 temp2.sh
All of that should work. Your problem is this part:
echo cd ~ | wc -c temp1.sh >> temp2.sh
You need to separate the cd ~ from something that runs some command and pipes the output to wc, and get the whole lot stored in temp2.sh. That could be something like:
echo "cd $HOME" > temp2.sh
echo "cat temp1.sh | wc -c" >> temp2.sh
The key point here is using separate lines for the cd command and the wc command. Using > for the first command ensures that you don't have stray garbage from previous failed attempts in temp2.sh. You can achieve the same result in multiple ways, including:
echo "cd; cat temp1.sh | wc -c" > temp2.sh
echo 'cd ~; while read -r line; do echo "$line"; done < temp1.sh | wc -c' > temp2.sh
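To see concretely why the filename argument defeats the pipe, compare the two forms directly; this is a minimal demo with a made-up temp file, not part of the assignment itself:

```shell
#!/bin/bash
# wc -c FILE prints the count *and* the filename;
# reading from a pipe (or a redirect) prints only the count.
demo=$(mktemp)
printf 'hello\n' > "$demo"         # 6 bytes including the newline

wc -c "$demo"                      # e.g. "6 /tmp/tmp.XXXXXX"
just_count=$(cat "$demo" | wc -c)
echo "$just_count"                 # prints 6, no filename
rm -f "$demo"
```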
And then, finally, you need to execute temp2.sh. You might use any of these, though some (which?) depend on how your PATH is set:
./temp2.sh
temp2.sh
sh temp2.sh
sh -x temp2.sh
$HOME/randomFolder/temp2.sh
~/randomFolder/temp2.sh

Ubuntu: Starting a shell script with args at boot

I'm running Ubuntu Server 14.04 and I'm trying to get a shell script to run at boot. The problem is the script requires args, one of which is a file (a database) and the other is a port number which is found in the same folder as the script. I'm fairly new to this.
When I go in the folder in terminal for example, I can type:
sh script.sh potato 1234
script.sh is the script, potato is the filename, and 1234 is the port number. Works fine when I do it manually.
I tried adding a crontab, #reboot script.sh potato 1234, of course it didn't work, it couldn't find the script.
So I tried:
#reboot /path/to/my/script.sh potato 1234
again, didn't work. Figured it couldn't find the database.
So I tried:
#reboot path/to/my/script.sh /path/to/my/potato 1234
Still no dice.
I tried running it in the terminal with paths as well:
sh path/to/my/script.sh potato 1234
It failed, of course; the script told me it couldn't find the db, as it should.
sh /path/to/my/script.sh /path/to/my/potato 1234 returned no errors, but it didn't start either.
This is what the script I'm trying to start looks like:
if [ $# -lt 1 -o $# -gt 2 ]; then
echo 'Usage: restart dbase-prefix [port]'
exit 1
fi
if [ ! -r $1.db ]; then
echo "Unknown database: $1.db"
exit 1
fi
if [ -r $1.db.new ]; then
mv $1.db $1.db.old
mv $1.db.new $1.db
rm -f $1.db.old.Z
compress $1.db.old &
fi
if [ -f $1.log ]; then
cat $1.log >> $1.log.old
rm $1.log
fi
echo `date`: RESTARTED >> $1.log
nohup ./moo $1.db $1.db.new $2 >> $1.log 2>&1 &
Any clues?
You need to see the output of your program. cron mails the output to the owner of the crontab, or to the address specified in the MAILTO variable at the top of the crontab.
Be careful about the execution environment: typically, most of the basic environment variables (PATH, HOME, etc.) will not be set, which can lead to execution errors.
For more information, see the cron man page:
man 5 crontab
These posts might also help you:
crontab PATH and USER
Where are cron errors logged?
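As a sketch of both points together, a crontab with the environment set explicitly and the output captured to a log might look like this (the address and log path are placeholders, not taken from your setup):

```
MAILTO=you@example.com
PATH=/usr/local/bin:/usr/bin:/bin
@reboot /path/to/my/script.sh /path/to/my/potato 1234 >> /tmp/myscript.log 2>&1
```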

Shell script to run two scripts when server load is above 20

I need a script that I can run on a cron every 5 minutes that will check if server load is above 20 and if it is it will run two scripts.
#!/bin/bash
EXECUTE_ON_AVERAGE="15" # if cpu load average for last 60 secs is
# greater or equal to this value, execute script
# change it to whatever you want :-)
while true; do
if [ $(echo "$(uptime | cut -d " " -f 13 | cut -d "," -f 1) >= $EXECUTE_ON_AVERAGE" | bc) = 1 ]; then
sudo s-
./opt/tomcat-latest/shutdown.sh
./opt/tomcat-latest/startup.sh
else
echo "do nothing"
fi
sleep 60
done
I then chmod +x the file.
When I run it I get this:
./script.sh: line 10: ./opt/tomcat-latest/shutdown.sh: No such file or directory
./script.sh: line 11: ./opt/tomcat-latest/startup.sh: No such file or directory
From the looks of it, your script is trying to execute the two scripts relative to the current working directory (./opt/tomcat-latest/), which doesn't exist. You should confirm the full file paths for the two shell scripts and use those instead of relative paths.
Also, I'd recommend that you create a cron to do this task. Here's some documentation about the crontab. https://www.gnu.org/software/mcron/manual/html_node/Crontab-file.html
Check the permissions to execute the files shutdown.sh and startup.sh.
It's sudo -s, not sudo s-.
I also recommend putting a sleep (in seconds) between them:
sudo -s /opt/tomcat-latest/shutdown.sh
sleep 15
sudo -s /opt/tomcat-latest/startup.sh
Or better
sudo -s /opt/tomcat-latest/shutdown.sh && sudo -s /opt/tomcat-latest/startup.sh
startup.sh will be executed only if shutdown.sh completed successfully.
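One more caveat on the original script: uptime | cut -d " " -f 13 is fragile, because the field position of the load average shifts as the uptime string changes. A sketch that reads /proc/loadavg instead and does the comparison in awk (the threshold and messages here are illustrative, not from the question):

```shell
#!/bin/bash
# Succeeds (exit 0) when the 1-minute load average is >= the given threshold.
load_at_least() {
    # the first field of /proc/loadavg is the 1-minute load average
    awk -v t="$1" '{ exit !($1 >= t) }' /proc/loadavg
}

if load_at_least 20; then
    echo "high load: run the shutdown/startup scripts here"
else
    echo "load is below 20; do nothing"
fi
```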

Restart process on file change in Linux

Is there a simple solution (using common shell utils, a utility provided by most distributions, or some simple Python/... script) to restart a process when some files change?
It would be nice to simply call something like watch -cmd "./the_process -arg" deps/*.
Update:
A simple shell script together with the proposed inotify-tools (nice!) fits my needs (works for commands without arguments):
#!/bin/sh
while true; do
$@ &
PID=$!
inotifywait $1
kill $PID
done
This is an improvement on the answer provided in the question. When one interrupts the script, the run process should be killed.
#!/bin/sh
sigint_handler()
{
kill $PID
exit
}
trap sigint_handler SIGINT
while true; do
$@ &
PID=$!
inotifywait -e modify -e move -e create -e delete -e attrib -r `pwd`
kill $PID
done
Yes, you can watch a directory via the inotify system using inotifywait or inotifywatch from the inotify-tools.
inotifywait will exit upon detecting an event. Pass option -r to watch directories recursively. Example: inotifywait -r mydirectory.
You can also specify the event to watch for instead of watching all events. To wait only for file or directory content changes use option -e modify.
Check out iWatch:
iWatch is a realtime filesystem monitoring program. It is a tool for detecting changes in the filesystem and reporting them immediately. It uses a simple config file in XML format and is based on inotify, a file change notification system in the Linux kernel.
Then you can watch files easily:
iwatch /path/to/file -c 'run_your_script.sh'
I find that this suits the full scenario requested by the OP quite well:
ls *.py | entr -r python my_main.py
To run in the background, you'll need the -n non-interactive mode.
ls | entr -nr go run example.go &
See also http://eradman.com/entrproject/, although a bit oddly documented. Yes, you need to ls the file pattern you want matched, and pipe that into the entr executable. It will run your program and rerun it when any of the matched files change.
PS: It's not doing a diff on the piped text, so you don't need anything like ls --time-style
There's a perl script called lrrr (little restart runner, really) that I'm a contributor on. I use it daily at work.
You can install it with cpanm App::lrrr if you have a perl and cpanm installed, and then use it as follows:
lrrr -w dirs,or_files,to-watch your_cmd_here
The -w flag marks off files or directories to watch. Currently, it kills the process you ran if a file is changed, but I'm going to add a feature soon to toggle that.
Please let me know if there's anything to be added!
I use this "one liner" to restart long-running processes based on file changes
trap 'kill %1' 1 2 3 6; while : ; do YOUR_EXE & inotifywait -r YOUR_WATCHED_DIRECTORY -e create,delete,modify || break; kill %1; sleep 3; done
This will start the process, keep its output in the same console, and watch for changes; if there is one, it will shut down the process, wait three seconds (to allow for further writes within the same second and for process shutdown time), then do the above again.
Ctrl-C and SSH disconnects will be respected, and the process will exit once you're done.
For legibility:
trap 'kill %1' 1 2 3 6
while :
do
YOUR_EXE &
inotifywait \
-r YOUR_WATCHED_DIRECTORY \
-e create,delete,modify \
|| break
kill %1
sleep 3
done
E.g. for a project run via package.json:
"module" : "./dist/server.mjs",
"scripts" : {
"build" : "cd ./src && rollup -c ",
"watch" : "cd ./src && rollup -c -w",
"start" : "cd ./dist && node --trace-warnings --enable-source-maps ./server.mjs",
"test" : "trap 'kill %1' 1 2 3 6; while : ; do npm run start & inotifywait -r ./dist -e create,delete,modify || break; kill %1; sleep 3; done"
},
"dependencies" : {
Now you can run npm run watch (which compiles from src to dist) in one terminal and npm run test (the server runner and restarter) in another; as you edit ./src files, the builder process updates ./dist and the server restarts for you to test.
I needed a solution for golang's go run command, which spawns a subprocess. So combining the answers above with pidtree gave me this script.
#!/bin/bash
# go file to run
MAIN=cmd/example/main.go
# directories to recursively monitor
DIRS="cmd pkg"
# Based on pidtree from https://superuser.com/a/784102/524394
pidtree() {
declare -A CHILDS
while read P PP; do
CHILDS[$PP]+=" $P"
done < <(ps -e -o pid= -o ppid=)
walk() {
echo $1
for i in ${CHILDS[$1]}; do
walk $i
done
}
for i in "$@"; do
walk $i
done
}
sigint_handler()
{
kill $(pidtree $PID)
exit
}
trap sigint_handler SIGINT
while true; do
go run $MAIN &
PID=$!
inotifywait -e modify -e move -e create -e delete -e attrib -r $DIRS
PIDS=$(pidtree $PID)
kill $PIDS
wait $PID
sleep 1
done
-m switch of the inotifywait tool
As no answer here addresses the -m switch of inotifywait, I will share this: think parallel!
How I do this:
If I want to trigger when a file is modified, I use the close_write event.
Instead of while true, I use the -m switch of inotifywait.
As many editors write to a new file and then rename it, I watch the directory instead and wait for an event with the correct filename.
#!/bin/sh
cmdFile="$1"
tempdir=$(mktemp -d)
notif="$tempdir/notif"
mkfifo "$notif"
inotifywait -me close_write "${cmdFile%/*}" >"$notif" 2>&1 &
notpid=$!
exec 5<"$notif"
rm "$notif"
rmdir "$tempdir"
"$@" & cmdPid=$!
trap "kill $notpid \$cmdPid; exit" 0 1 2 3 6 9 15
while read dir evt file <&5;do
case $file in
${cmdFile##*/} )
date +"%a %d %b %T file '$file' modified."
kill $cmdPid
"$@" & cmdPid=$!
;;
esac
done

launch process in background and modify it from bash script

I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing
chmod /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busywait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist; so just loop on ls until it returns 0, and when it does you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
echo -n "."
sleep 2
done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once wrap it in a function, but otherwise as is should do what you need.
EDIT:
As pointed out in the comments, using ls like this is inferior to -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
echo -n "."
sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
sleep 1
done
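If the socket might never appear (say, node crashes on startup), an unbounded loop hangs forever. A variant with a timeout; the function name, demo path and default are made up for illustration:

```shell
#!/bin/bash
# Wait for a path to exist, giving up after N one-second tries (default 30).
wait_for_file() {
    local file=$1 tries=${2:-30}
    while [ ! -e "$file" ] && [ "$tries" -gt 0 ]; do
        sleep 1
        tries=$((tries - 1))
    done
    [ -e "$file" ]   # exit status reports whether it appeared in time
}

touch /tmp/demo.sock                             # simulate the socket appearing
wait_for_file /tmp/demo.sock 5 && echo "found"   # prints found
rm -f /tmp/demo.sock
```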
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change that.
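For example, with the mask cleared, a newly created file comes up mode 666 with no chmod needed. A small demo, assuming GNU stat on Linux and a made-up path:

```shell
#!/bin/bash
umask 0                       # clear the mask: new files get the full 666
rm -f /tmp/demo_umask.sock
: > /tmp/demo_umask.sock      # create an empty file under the new umask
stat -c '%a' /tmp/demo_umask.sock   # prints 666
rm -f /tmp/demo_umask.sock
```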
