Basic bash shell scripting with nmon on Linux

I'm having a problem trying to run nmon from my own script, with nmon deployed in a Linux environment.
Based on this script, I am required to execute the command "test.sh 2 5", with the variables represented by the values 2 and 5:
#!/bin/bash
#sh test.sh variable1 variable2
./nmon -f -s$1 -c $2
total=$(( $1 * $2 ))
echo "------------------------------------------------"
echo -e "Providing $2 snapshots with interval of $1s"
echo -e "Saving into $HOSTNAME. Completing in $total seconds\n\n"
However, I am receiving the following output:
[osmusr@bssosmappv4001 ~]$ sh nmonscript2.sh 2 4
------------------------------------------------
Providing 4 snapshots with interval of 2s
secondsnto bssosmappv4001. Completing in 8
May I know which part I missed out? Why is it not displaying the output correctly?

total has a carriage return (0x0D/\r/^M) after it. Most likely the script has Windows line endings (\r\n), and the \r is getting tacked onto the total assignment. Run the file through dos2unix.
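For example, a quick check and fix might look like this (using the nmonscript2.sh file from the output above; the sed line is only a fallback if dos2unix isn't installed):

cat -A nmonscript2.sh | head      # DOS line endings show up as ^M$ at the end of each line
dos2unix nmonscript2.sh           # convert CRLF to LF in place
sed -i 's/\r$//' nmonscript2.sh   # alternative if dos2unix is not available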

Related

Script in crontab to be executed only if is equal or exceeds a value

I currently have a script in crontab for rsync (and some other small stuff). Right now the script is executed every 5 minutes. I modified the script to look for a specific line from the rsync part (example from my machine, not the actual code):
#!/bin/bash
Number=`/usr/bin/rsync -n --stats -avz -e ssh 1/ root@127.0.0.1 | grep "Number of regular files transferred" | cut -d':' -f 2 | tr -d 040\054\012`
echo $Number
Let's say the number is 10. If the number is 10 or below I want the script executed through the crontab. But if the number is bigger I want it to be executed ONLY manually.
Any ideas?
Maybe you can use an argument to execute it manually, for example:
if [[ $Number -le 10 || $1 == true ]];then
echo "executing script..."
fi
This will execute if $Number is less or equal to 10 or if you execute it with true as the first positional argument, so if $Number is greater than 10 it won't execute in your crontab and you can execute your script manually with ./your_script true.
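Putting the two pieces together, the whole script might look something like this sketch (the rsync source, host, and the threshold of 10 are just the placeholders from the question; the real work goes where the comment is):

#!/bin/bash
# Dry-run rsync to count how many files would be transferred
Number=$(/usr/bin/rsync -n --stats -avz -e ssh 1/ root@127.0.0.1 \
    | grep "Number of regular files transferred" \
    | cut -d':' -f2 | tr -d ' ,')

# Run automatically only for small transfers, or when forced with "./your_script true"
if [[ $Number -le 10 || $1 == true ]]; then
    echo "executing script..."
    # the real rsync (without -n) and the rest of the job go here
fi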

clear my script logs every 10 second

I have script with name : run.sh
This is my script code :
#!/usr/bin/env bash
install() {
sudo apt-get update
sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
install
else
if [ ! -f ./tg/tgcli ]; then
echo "tg not found"
echo "Run $0 install"
exit 1
fi
#sudo service redis-server restart
#./tg/tgcli -s ./bot/bot.lua -l 1 -E $@
./tg/tgcli -s ./bot/bot.lua $@
fi
and when I run this script it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more like this (thousands per hour!),
and my server gets slow within 5 hours.
I checked the print commands in bot.lua but there is no way to remove them.
Can you add some code to clear my script logs every 10 seconds?
Thanks a lot.
My script output doesn't get saved anywhere; it just shows in the terminal.
I want something like the clear command on the Linux terminal, to clear my script logs every 5 or 10 minutes.
After 5 days of the script running I can (sometimes can't) log in to my server; the server gets very slow and I have to wait 3 to 5 minutes to log in, and the amazing part is that after I log in the server gets fast again!
I also forgot to say that I use byobu/screen to run my scripts, and I think screen is slowing my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
line_count=$(wc -l $log_file | awk '{print $1}')
size_check=$(du -ax $log_file | awk '{print $1}')
max_file_size="1500"
max_file_length="1000"
if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
echo "" > $log_file
fi
}
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't plan on it being POSIX compliant and only plan on using it with bash, [[ ]] is always better than [ ].
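As a usage sketch (the log path is just an example, not something from the original script), the wrapper would set the variable, trim the log if needed, and then append the bot's output to it:

log_file="$HOME/bot.log"    # example path; point this at your real log
touch "$log_file"           # make sure the log exists before checking it
log_check                   # empty the log if it has grown past the limits
./tg/tgcli -s ./bot/bot.lua "$@" >> "$log_file" 2>&1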
EDIT:
Since you are logging output to the terminal and not a file you can literally use the clear command in your script.
Try this out and see how the functionality works
for i in {1..20}; do
echo $i
if (( i == 10 )); then
clear
fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure what part of your code is actually printing anything to stdout; I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua $@
You could try something like this, which will background your initial process and then run clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua $@ &
pid="$!"
check_pid() {
ps -ef |grep "$pid"|grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
if (( cnt == 6 )); then
clear
cnt=1
fi
sleep 10
((cnt++))
done
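If you do decide to write the output to a log file instead of the terminal, a variation on the same loop could truncate the log periodically instead of clearing the screen. This is only a sketch: the log path is made up, and it checks the process with kill -0 rather than ps:

#!/bin/bash
log_file="$HOME/bot.log"                        # example path
./tg/tgcli -s ./bot/bot.lua "$@" >> "$log_file" 2>&1 &
pid="$!"

while kill -0 "$pid" 2>/dev/null; do            # loop while the bot process is alive
    sleep 600                                   # every 10 minutes...
    : > "$log_file"                             # ...empty the log file
done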

awk output when run in background

I'm wondering why awk prints different output when run in the background.
My script:
#!/bin/bash
echo "Name of shell is $SHELL"
release=`uname -r`
echo "Release is: $release"
if [ $SHELL != "/bin/bash" ] || [ $release != "3.13.0-32-generic" ] ; then
echo "Warning, different configuration"
fi
if [ $# -eq 0 ] ; then
echo "Insert name of shell"
read sname
else
sname=$1
fi
awk -v sname="$sname" 'BEGIN {FS=":"} {if ($7 == sname) print $1 }' </etc/passwd &
When I run awk without the ampersand, the output is:
petr@PetrLinux-VirtualBox:~/Documents$ ./script1 /bin/bash
Name of shell is /bin/bash
Release is: 3.13.0-32-generic
root
petr
but when I run awk with the ampersand - in the background - the output is the following:
petr@PetrLinux-VirtualBox:~/Documents$ ./script1 /bin/bash
Name of shell is /bin/bash
Release is: 3.13.0-32-generic
petr@PetrLinux-VirtualBox:~/Documents$ root
petr
The first record (root) is not printed on its own line. Please tell me why, and whether there is a way to print it on a single line while running in the background. Thanks.
What you see is a mix of two outputs. The first output is of your shell, printing the command prompt (petr@PetrLinux-VirtualBox:~/Documents$). The second output is root from your script.
As your shell script runs in the background, you now have two processes writing to your terminal window: the bash (printing the prompt), and your script, printing the awk-output. This then just mixes up.
The only way to prevent that is to redirect the output of the script to a file or other device, instead of your console. For example:
$ ./script1 /bin/bash &> output.txt &
The output is the same. It just appears to be different because two processes write on the same channel (your terminal) and mix their output. One process is the awk script and the other is your shell which prints a new prompt.
There is no way to determine the precise point in which the output will switch from one process to the other. It can be different on different systems (with the same software), it can also depend on the load of the computer and lots of other things.
The only decent solution is to redirect the output into a different stream, e.g. a file using > outfile.
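For example, redirecting just the awk command inside the script keeps its output out of the terminal entirely (shells.txt is an arbitrary file name for this sketch):

awk -v sname="$sname" 'BEGIN {FS=":"} {if ($7 == sname) print $1 }' </etc/passwd > shells.txt &
wait    # optional: block here until the background awk has finished writing the file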

bash script - can't get for loop working

Background Info:
I'm trying to follow the example posted here: http://www.cyberciti.biz/faq/bash-for-loop/
I would like to loop 9 times using a control variable called "i".
Problem Description
My code looks like this:
for i in {0..8..1}
do
echo "i is $i"
tmpdate=$(date -d "$i days" "+%b %d")
echo $tmpdate
done
When I run this code, the debug prints show me:
"i is {0..8..1}"
instead of being a value between 0 and 8.
What I've Checked So Far:
I've tried to check my version of bash to make sure it supports this type of syntax. I'm running version 4.2.25(1).
I also tried using C-like syntax where you do for (i=0;i<=8;i++) but that doesn't work either.
Any suggestions would be appreciated.
Thanks.
EDIT 1
I've also tried the following code:
for i in {0..8};
do
echo "i is $i"
tmpdate=$(date -d "$i days" "+%b %d")
echo $tmpdate
done
And...
for i in {0..8}
do
echo "i is $i"
tmpdate=$(date -d "$i days" "+%b %d")
echo $tmpdate
done
They all fail with the same results.
I also tried:
#!/bin/bash
for ((i=0;i<9;i++));
do
echo "i is $i"
tmpdate=$(date -d "$i days" "+%b %d")
echo $tmpdate
done
And that gives me the error:
test.sh: 4: test.sh: Syntax error: Bad for loop variable
FYI. I'm running on ubuntu 12
EDIT 2
OK... so I think Weberick tipped me off to the issue...
To execute the script, I was running "sh test.sh"
when in the code I had defined it as a BASH script! My bad!
But here's the thing. Ultimately, I need it to work in both bash and sh.
So now that I'm being careful to make sure that I invoke the script the right way... I've noticed the following results:
When defined as a bash script and I execute it using bash, the C-style version works!
When defined as an sh script and I execute it using sh, the C-style version fails:
me#devbox:~/tmp/test$ sh test.sh
test.sh: 5: test.sh: Syntax error: Bad for loop variable
When defined as an sh script and I execute it using sh with the NON-C-style version (aka for i in {n..x}), I get the "i is {0..8}" output.
PS. The ";" doesn't make a difference if you have the do on the next line...just FYI.
On Ubuntu, /bin/sh is dash, which doesn't recognise either of the bashisms (brace expansion, C-style for loop) you tried. Try running your script using bash explicitly:
bash myscript.sh
or by setting the shebang to #!/bin/bash. Make sure NOT to run the script with sh myscript.sh.
dash should work if you use seq:
for i in $(seq 0 1 8); do
echo "$i"
done
Just {0..8} should work in bash; the default increment is 1. If you want to use a C-style for loop in bash:
for ((i=0;i<9;i++)); do
echo "$i"
done
I'm confident that
#!/bin/bash
for ((i=0;i<9;i++))
do
echo "i is $i"
tmpdate=$(date -d "$i days" "+%b %d")
echo $tmpdate
done
works on Ubuntu 12.04.
If you still have an error, can you please try running
chmod +x test.sh
then
./test.sh
And the result is
i is 0
Apr 04
i is 1
Apr 05
i is 2
Apr 06
i is 3
Apr 07
i is 4
Apr 08
i is 5
Apr 09
i is 6
Apr 10
i is 7
Apr 11
i is 8
Apr 12
I'm no expert at bash, but according to TLDP you need a ; after the for statement. There are many ways to do a range. This is one of them.
#!/bin/bash
for i in `seq 1 8`; do
echo $i
done
The site you quote says
Bash v4.0+ has inbuilt support for setting up a step value using {START..END..INCREMENT} syntax:
So you can just use {0..8..1} when you have a bash version greater than 4.0, which I guess is not the case (try bash --version in your terminal). Instead of {0..8..1} you can also use {0..8}.
If you have an older version, you can use the command $(seq START INCREMENT END) in the for loop instead of {START..END..INCREMENT}.

Bash script does not continue to read the next line of file

I have a shell script that saves the output of a command that is executed to a CSV file. It reads the commands it has to execute from a file which is in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once line is read it will run processLine() function
#Function processLine
processLine(){
line="$@"
START=$(date +%s.%N)
eval $line > /dev/null 2>&1
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
echo "It took $DIFF seconds"
echo $line
}
# Store file name
FILE=""
# get file name as command line argument
# Else read it from standard input device
if [ "$1" == "" ]; then
FILE="/dev/stdin"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b" as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$@" | while read line; do
echo "Executing '$line'"
START=$(date +%s)
eval $line &> /dev/null
END=$(date +%s)
let DIFF=$END-$START
echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
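Applied to the loop from the question, either form stops ffmpeg from swallowing the lines that read still has to process. A minimal sketch, where commands.txt stands in for whatever file holds the ffmpeg lines:

while read -r line; do
    # stdin from /dev/null (or -nostdin on ffmpeg 1.0+) keeps ffmpeg
    # from consuming the rest of commands.txt
    eval "$line" </dev/null >/dev/null 2>&1
done < commands.txt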
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line
2) Since the script will now not wait for the ffmpeg process to complete, we have to prevent our script from starting several ffmpeg processes. We achieve this by delaying the loop pass while there is at least one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
<place your ffmpeg command line here> &
FFMPEGStillRunning="true"
while [ "$FFMPEGStillRunning" = "true" ]; do
Process=$(ps -C ffmpeg | grep -o -e "ffmpeg" )
if [ -n "$Process" ]; then
FFMPEGStillRunning="true"
else
FFMPEGStillRunning="false"
fi
sleep 2s
done
done
I would add echos before and after the eval to see what it's about to eval (in case it's treating the whole file as one big long line) and after (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
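A quick illustration of that splitting behaviour (the sample words are arbitrary):

echo "one two three four" | while read var1 var2 var3; do
    echo "var1=$var1"    # one
    echo "var2=$var2"    # two
    echo "var3=$var3"    # three four (the rest of the line)
done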
Inside the process line function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the error. Is the command 'chatty'; if so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$@" |
while read line
do
processLine "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
START=$(date +%s.%N)
eval "$@" > /dev/null
STATUS=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
echo "${DIFF}s: $STATUS: $line"
}
cat "$@" |
while read line
do
processLine "$line"
done
