I am using a KSH script to execute a binary (program) that requires the following syntax to execute correctly:
myprog [-v | --verbose (optional)] [input1] [input2]
The program prints nothing and returns exit code 0 (zero) on success. On failure it prints ERROR messages to STDERR and returns an exit status > 0. If the -v option is specified, it prints verbose details to STDOUT in both the success and failure cases.
To make this easier to use, reduce the chance of swapped arguments, and allow user-controlled logging, I invoke the binary through a ksh shell script. The syntax to run the ksh shell script is:
myshell.sh [-v (optional)] [-a input1] [-b input2]
If -v option is specified, ksh redirects STDOUT to <execution_date_time>_out.log and STDERR to <execution_date_time>_err.log. My ksh script is as follows:
myshell.sh :
#!/bin/ksh
verbopt=""
log=""
arg1=""
arg2=""
dateTime=`date +%y-%m-%d_%H:%M:%S`
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
log="1>${dateTime}_out.log 2>${dateTime}_err.log"
;;
a) # Input 1
arg1=$OPTARG
;;
b) # Input 2
arg2=$OPTARG
;;
*) # usage
echo "USAGE: myshell.sh [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $arg1 || -z $arg2 ]]
then
echo "Missing arguments"
exit 2
fi
myprog $verbopt $arg1 $arg2 $log
exit $?
The problem here is that all the STDOUT & STDERR output is printed to the screen (i.e., no redirection took place), and no *.log files were created after successful or unsuccessful execution (i.e., exit status 0 or >0, respectively).
Can anyone help me out on this?
Thanks.
Rather than trying to monkey patch redirections into the command line, just redirect the streams when you parse the flags. That is:
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
exec 1>${dateTime}_out.log 2>${dateTime}_err.log
;;
...
You need to be a little careful, since you do some error checking after this and you probably don't want those later error messages going to the *_err.log, but that's fairly trivial to fix (e.g., error-check sooner, or do a test -n "$verbopt" && exec > ... after the error check, or similar).
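For example, a minimal sketch of the deferred variant, reusing the question's variable names:
# ... getopts loop as before, but v) only sets verbopt="-v" ...
if [[ -z $arg1 || -z $arg2 ]]
then
echo "Missing arguments" >&2
exit 2
fi
# only now, after the error checks, redirect the script's own streams
test -n "$verbopt" && exec 1>${dateTime}_out.log 2>${dateTime}_err.log
myprog $verbopt "$arg1" "$arg2"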
The problem is that > is not expanded in the value of $log.
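You can see this for yourself; after expansion, the redirection characters are passed to the command as ordinary arguments:
log='1>out.log'
echo hello $log
# prints: hello 1>out.log  (and no out.log file is created)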
I'm afraid you will need to use a conditional for this, for example:
cmd="myprog $verbopt $arg1 $arg2"
if [ "$log" ]; then
$cmd 1>${dateTime}_out.log 2>${dateTime}_err.log
else
$cmd
fi
I would use the exec redirection idiom, which runs the rest of the script as if the given redirection had been supplied when it was started:
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
If you need to restore stdout and stderr afterward for the script to do more unlogged things, you can just run the logging part in a subshell:
(
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
)
this command will not be logged regardless
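If a subshell is inconvenient, another option is to duplicate the original descriptors and restore them afterward; a minimal sketch:
exec 3>&1 4>&2                    # save the original stdout/stderr on fds 3 and 4
exec >stdout_file 2>stderr_file
this command will be logged
exec 1>&3 2>&4 3>&- 4>&-          # restore them and close the saved copies
this command will not be logged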
I would also build the command in an array, so you can add things like -v to it without having to have a separate variable for each possible parameter. If the order in which the -a and -b arguments are supplied to myprog doesn't matter, you can just add those to the array instead of having separate variables as well.
You can see my version below. Besides the above changes, I also don't bother getting the timestamp if not logging, since it's unneeded, and send error messages to standard error instead of standard out using the ksh builtin print.
Here's what I put together:
#!/usr/bin/env ksh
# new array syntax requires ksh93+; for older ksh, use this:
# set -A cmd myprog
cmd=(myprog) # build up the command to run in an array
log_flag=0 # nonzero if the command should be logged
input_a= # the two input filenames
input_b=
while getopts 'va:b:' arg; do
case $arg in
v) # verbose output
# older ksh: set -A cmd "${cmd[@]}" -v
cmd+=(-v)
log_flag=1
;;
a) # Input 1
input_a=$OPTARG
;;
b) # Input 2
input_b=$OPTARG
;;
*) # usage
print -u2 "USAGE: $0 [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $input_a || -z $input_b ]]; then
print -u2 "$0: Missing arguments"
exit 2
fi
if (( log_flag )); then
timestamp=$(date +%y-%m-%d_%H:%M:%S)
exec >"${timestamp}_out.log" 2>"${timestamp}_err.log"
fi
"${cmd[#]}" "$input_a" "$input_b"
Your timestamp uses the two-digit year (%y); that and the underscore between the components are the only deviations from the ISO 8601 standard, so I would recommend you go ahead and adopt the standard format. That'd be %Y-%m-%dT%H:%M:%S, or, in C libraries with newer versions of strftime, %FT%T.
You could also be a little more clever and make log_flag a string that is either empty or -v, pass that to the command, and test it against the empty string to determine whether or not to open the log files, but I find the logic easier to follow with the simple 0/1 value treated as a Boolean.
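That variant might look something like this (a sketch only; verbopt replaces both cmd+=(-v) and log_flag):
verbopt=    # empty, or -v for verbose, logged runs
...
v) verbopt=-v ;;
...
if [[ -n $verbopt ]]; then
timestamp=$(date +%FT%T)
exec >"${timestamp}_out.log" 2>"${timestamp}_err.log"
fi
"${cmd[@]}" $verbopt "$input_a" "$input_b"    # unquoted, so an empty verbopt disappears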
Take a look at the eval command.
Replace ...
myprog $verbopt $arg1 $arg2 $log
with:
eval myprog $verbopt $arg1 $arg2 $log
I don't know what your myprog does but here's a simple example using eval to run date (valid command) and date xyz (invalid command), redirecting output to log.stdout/log.stderr accordingly:
$ cat logout
log='1>log.stdout 2>log.stderr'
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date ${log}'
eval date ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date xyz ${log}'
eval date xyz ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
Now run the script:
$ logout
eval date ${log}
++++++++++++ log.stdout
Sun Jul 23 15:56:01 CDT 2017
++++++++++++ log.stderr
++++++++++++
eval date xyz ${log}
++++++++++++ log.stdout
++++++++++++ log.stderr
date: invalid date `xyz'
++++++++++++
I have a script with the name run.sh.
This is my script's code:
#!/usr/bin/env bash
install() {
sudo apt-get update
sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
install
else
if [ ! -f ./tg/tgcli ]; then
echo "tg not found"
echo "Run $0 install"
exit 1
fi
#sudo service redis-server restart
#./tg/tgcli -s ./bot/bot.lua -l 1 -E $@
./tg/tgcli -s ./bot/bot.lua $@
fi
When I run this script it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more like this (thousands per hour!),
and my server gets slow within 5 hours.
I checked the print commands in bot.lua but there is no way to remove them.
Can you add some code to clear my script's output every 10 seconds?
Thanks a lot.
My script's output isn't saved anywhere; it is just shown in the terminal.
I want something like the clear command on a Linux terminal that clears my script's output every 5 or 10 minutes.
After 5 days of the script running I sometimes can't log in to my server, and when I can, the server is very slow and I must wait 3 to 5 minutes to log in; amazingly, after I log in, the server gets fast again!
I forgot to say that I use a byobu screen to run my scripts, and I think screen is what slows my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
line_count=$(wc -l "$log_file" | awk '{print $1}')
size_check=$(du -ax "$log_file" | awk '{print $1}')
max_file_size="1500"    # in du blocks (typically 1 KiB each)
max_file_length="1000"  # in lines
if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
echo "" > "$log_file"
fi
}
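For example (the log path here is hypothetical; point log_file at wherever you actually redirect the output):
log_file="$HOME/bot.log"    # hypothetical location
./tg/tgcli -s ./bot/bot.lua "$@" >>"$log_file" 2>&1 &
while kill -0 "$!" 2>/dev/null; do    # while the bot is still running
log_check
sleep 10
done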
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't need it to be POSIX-compliant and only plan on using it with bash, [[ ]] is generally better than [ ].
EDIT:
Since you are logging output to the terminal and not a file you can literally use the clear command in your script.
Try this out and see how the functionality works
for i in {1..20}; do
echo $i
if (( i == 10 )); then
clear
fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure what part of your code is actually printing anything to stdout; I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua $@
You could try something like this, which will background your initial process and then run clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua $@ &
pid="$!"
check_pid() {
ps -ef |grep "$pid"|grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
if (( cnt == 6 )); then
clear
cnt=1
fi
sleep 10
((cnt++))
done
I'm having a problem trying to run nmon using my own script, where nmon is deployed in the Linux environment.
Based on this script, I am required to execute the command "test.sh 2 5", with the variables represented by the values 2 and 5:
#!/bin/bash
#sh test.sh variable1 variable2
./nmon -f -s$1 -c $2
total=$(( $1 * $2 ))
echo "------------------------------------------------"
echo -e "Providing $2 snapshots with interval of $1s"
echo -e "Saving into $HOSTNAME. Completing in $total seconds\n\n"
However, I am receiving the following output:
[osmusr#bssosmappv4001 ~]$ sh nmonscript2.sh 2 4
------------------------------------------------
Providing 4 snapshots with interval of 2s
secondsnto bssosmappv4001. Completing in 8
May I know which part I missed? Why is it not displaying the output correctly?
total has a carriage return (0x0D/\r/^M) after it. Most likely the script has windows line endings (\r\n), and the \r is getting tacked onto the total assignment. Run the file through dos2unix.
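You can confirm the line endings before converting; with GNU cat -A, a CRLF ending shows up as ^M$ at the end of each line:
$ cat -A nmonscript2.sh | head -n 1
#!/bin/bash^M$
$ dos2unix nmonscript2.sh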
I would like to store a command to use at a later time in a variable (not the output of the command, but the command itself).
I have a simple script as follows:
command="ls";
echo "Command: $command"; #Output is: Command: ls
b=`$command`;
echo $b; #Output is: public_html REV test... (command worked successfully)
However, when I try something a bit more complicated, it fails. For example, if I make
command="ls | grep -c '^'";
The output is:
Command: ls | grep -c '^'
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
ls: cannot access '^': No such file or directory
How could I store such a command (with pipes/multiple commands) in a variable for later use?
Use eval:
x="ls | wc"
eval "$x"
y=$(eval "$x")
echo "$y"
Do not use eval! It has a major risk of introducing arbitrary code execution.
BashFAQ-50 - I'm trying to put a command in a variable, but the complex cases always fail.
Put the command in an array and expand all the words with the double-quoted "${arr[@]}" to keep IFS from splitting the words (word splitting).
cmdArgs=()
cmdArgs=('date' '+%H:%M:%S')
You can see the contents of the array with declare -p, which shows each command parameter in a separate index. If an argument contains spaces, quoting it while adding it to the array prevents it from being split by word splitting.
declare -p cmdArgs
declare -a cmdArgs='([0]="date" [1]="+%H:%M:%S")'
and execute the commands as
"${cmdArgs[#]}"
23:15:18
Or, altogether, use a bash function to run the command:
cmd() {
date '+%H:%M:%S'
}
and call the function as just
cmd
POSIX sh has no arrays, so the closest you can come is to build up a list of elements in the positional parameters. Here's a POSIX sh way to run a mail program
# POSIX sh
# Usage: sendto subject address [address ...]
sendto() {
subject=$1
shift
first=1
for addr; do
if [ "$first" = 1 ]; then set --; first=0; fi
set -- "$#" --recipient="$addr"
done
if [ "$first" = 1 ]; then
echo "usage: sendto subject address [address ...]"
return 1
fi
MailTool --subject="$subject" "$@"
}
Note that this approach can only handle simple commands: no redirections, pipelines, for/while loops, if statements, etc.
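If you do need a pipeline or a redirection, the usual workaround in POSIX sh is a function rather than a variable:
# POSIX sh: a function body can contain pipelines, redirections, loops, etc.
count_entries() {
ls | grep -c '^'
}
count_entries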
Another common use case is when running curl with multiple header fields and payload. You can always define args like below and invoke curl on the expanded array content
curlArgs=('-H' "keyheader: value" '-H' "2ndkeyheader: 2ndvalue")
curl "${curlArgs[#]}"
Another example,
payload='{}'
hostURL='http://google.com'
authToken='someToken'
authHeader='Authorization:Bearer "'"$authToken"'"'
now that variables are defined, use an array to store your command args
curlCMD=(-X POST "$hostURL" --data "$payload" -H "Content-Type:application/json" -H "$authHeader")
and now do a proper quoted expansion
curl "${curlCMD[#]}"
var=$(echo "asdf")
echo $var
# => asdf
Using this method, the command is immediately evaluated and its return value is stored.
stored_date=$(date)
echo $stored_date
# => Thu Jan 15 10:57:16 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 10:57:16 EST 2015
The same with backtick
stored_date=`date`
echo $stored_date
# => Thu Jan 15 11:02:19 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 11:02:19 EST 2015
Using eval in the $(...) will not make it evaluated later:
stored_date=$(eval "date")
echo $stored_date
# => Thu Jan 15 11:05:30 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 11:05:30 EST 2015
Using eval, it is evaluated when eval is used:
stored_date="date" # < storing the command itself
echo $(eval "$stored_date")
# => Thu Jan 15 11:07:05 EST 2015
# (wait a few seconds)
echo $(eval "$stored_date")
# => Thu Jan 15 11:07:16 EST 2015
# ^^ Time changed
In the above example, if you need to run a command with arguments, put them in the string you are storing:
stored_date="date -u"
# ...
For Bash scripts this is rarely relevant, but one last note: be careful with eval. Only eval strings you control, never strings coming from an untrusted user or built from untrusted user input.
For bash, store your command like this:
command="ls | grep -c '^'"
Run your command like this:
echo "$command" | bash
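Note that this runs the command in a child shell, so it cannot affect the calling shell's state:
command='cd /tmp'
echo "$command" | bash    # the cd happens in the child shell
pwd                       # the current shell's directory is unchanged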
Not sure why so many answers make it complicated!
use alias name='string to execute'
example:
alias dir='ls -l'
dir
[pretty list of files]
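Be aware that aliases are not expanded in non-interactive bash scripts by default; inside a script you would need something like:
shopt -s expand_aliases    # enable alias expansion inside a script
alias dir='ls -l'
dir
In a script, a function (dir() { ls -l; }) is usually the simpler choice.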
I tried various different methods:
printexec() {
printf -- "\033[1;37m$\033[0m"
printf -- " %q" "$#"
printf -- "\n"
eval -- "$#"
eval -- "$*"
"$#"
"$*"
}
Output:
$ printexec echo -e "foo\n" bar
$ echo -e foo\\n bar
foon bar
foon bar
foo
bar
bash: echo -e foo\n bar: command not found
As you can see, only the third one, "$@", gave the correct result.
I faced this problem with the following command:
awk '{printf "%s[%s]\n", $1, $3}' "input.txt"
I need to build this command dynamically:
The target file name input.txt is dynamic and may contain spaces.
The awk script inside {} braces printf "%s[%s]\n", $1, $3 is dynamic.
Challenge:
Avoid extensive quote escaping logic if there are many " inside the awk script.
Avoid parameter expansion for every $ field variable.
The solutions elsewhere on this page using the eval command and arrays do not work for this, due to bash variable expansion and quoting.
Solution:
Build the bash variable dynamically, avoid bash expansions, and use a printf template.
# dynamic variables, values change at runtime.
input="input file 1.txt"
awk_script='printf "%s[%s]\n" ,$1 ,$3'
# static command template, preventing double-quote escapes and avoid variable expansions.
awk_command=$(printf "awk '{%s}' \"%s\"\n" "$awk_script" "$input")
echo "awk_command=$awk_command"
awk_command=awk '{printf "%s[%s]\n" ,$1 ,$3}' "input file 1.txt"
Executing the command stored in the variable:
bash -c "$awk_command"
An alternative that also works:
bash <<< "$awk_command"
As you don't specify any scripting language, I would recommend tcl, the Tool Command Language, for this kind of purpose.
Then in the first line, add the appropriate shebang:
#!/usr/local/bin/tclsh
with the appropriate location, which you can retrieve with which tclsh.
In tcl scripts, you can call operating system commands with exec.
#!/bin/bash
#Note: this script works only when you use Bash, so don't remove the first line.
TUNECOUNT=$(ifconfig |grep -c -o tune0) #Some command with "Grep".
echo $TUNECOUNT #This will return 0
#if you don't have tune0 interface.
#Or count of installed tune0 interfaces.
First of all, there are functions for this. But if you prefer variables then your task can be done like this:
$ cmd=ls
$ $cmd # works
file file2 test
$ cmd='ls | grep file'
$ $cmd # not works
ls: cannot access '|': No such file or directory
ls: cannot access 'grep': No such file or directory
file
$ bash -c $cmd # works
file file2 test
$ bash -c "$cmd" # also works
file
file2
$ bash <<< $cmd
file
file2
$ bash <<< "$cmd"
file
file2
Or via a temporary file
$ tmp=$(mktemp)
$ echo "$cmd" > "$tmp"
$ chmod +x "$tmp"
$ "$tmp"
file
file2
$ rm "$tmp"
Be careful when storing a command with X=$(Command):
the command is executed immediately, even before X is ever used. To check and confirm this, you can do:
echo test;
X=$(for ((c=0; c<=5; c++)); do
sleep 2;
done);
echo note the 12 seconds elapsed
It is not necessary to store commands in variables even when you need to use them later. Just execute them as normal. If you store them in variables, you will need some kind of eval statement or an unnecessary extra shell process to "execute your variable".
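If the goal is simply deferred execution, a function is the natural tool; a minimal sketch:
slow_job() {
for ((c=0; c<=5; c++)); do
sleep 2
done
}
echo test
slow_job    # the 12-second delay happens only here, when the function is called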
I'm in doubt about the difference, and about which quoting style is better for executing a command in a shell script.
For example, I have these two examples:
echo "The name of the computer is `uname -n`"
echo "The name of the computer is $(uname -n)"
Which one is better? Or is there no difference?
The $(...) one is generally recommended because it nests easier. Compare:
date -d "1970-01-01 $(echo "$(date +%s)-3600"|bc) sec UTC"
date -d "1970-01-01 `echo \"\`date +%s\`-3600\"|bc` sec UTC "