How to capture a Linux command's log into a file?

Let's say I have the below command.
STATE_NOT_C_COUNT=`mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" --eval "db.$MONGO_DATABASE.count({\"state\" : {"'"$ne"'":\"C\"},\"physicalTableName\":\"table_name\"},{nolock:true})" | tail -1`
When I run the above command, I get an exception like
exception: connect failed
I want to capture this exception into a file via the error function.
error(){
    if [ "$?" -ne "0" ]; then
        echo "$1" 2>&1 error_log
        exit 1
    fi
}
I'm using the above function like this:
error $STATE_NOT_C_COUNT
But I'm not able to capture the exception into the file through the function.

What you are doing is terrible. Let the program that fails print its error messages to stderr, and ensure that stderr is pointed at the right place. That said, the immediate issue you are having is just a lack of quotes. Try:
error "$STATE_NOT_C_COUNT"
The issue is that the command error $STATE_NOT_C_COUNT is subject to field splitting, so if $STATE_NOT_C_COUNT contains any whitespace it is split into multiple arguments, and you are only writing the first one. Another alternative is to write echo "$*" in the function, but that squashes runs of whitespace.
However, it cannot be stressed enough that this is a terrible approach, completely against the Unix philosophy. The program should write its errors to stderr, and you should let them go there; just make sure stderr is pointed where you want it. The only good reason to capture stderr is if you want to write it to multiple locations, in which case you might pipe it to tee, a syslogger, or some other message bus, but even doing that is questionable.
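As a concrete sketch of that advice (using the question's variable names; note also that echo "$1" 2>&1 error_log does not redirect anything, it passes error_log to echo as a literal word), you could let mongo write its own message to the log and test the pipeline status:
set -o pipefail   # in bash, make $? reflect mongo's failure rather than tail's
STATE_NOT_C_COUNT=$(mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" \
    --eval "db.$MONGO_DATABASE.count({state: {\$ne: \"C\"}, physicalTableName: \"table_name\"}, {nolock: true})" \
    2>>error_log | tail -1)   # stderr (e.g. "exception: connect failed") is appended to error_log
if [ $? -ne 0 ]; then
    echo "mongo query failed; see error_log" >&2
    exit 1
fi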

Related

How do I use another command on the same line with echo

I want the output of a command to be printed on the same line as a string.
For example, I want to print something like:
hostname: cpu1
but when I use the command like this, it doesn't work:
echo 'hostname:' hostname
You need to use $() to evaluate the command:
echo 'hostname:' $(hostname)
Two answers are already given saying that you "need to" and "should" use command substitution by doing:
echo "hostname: $(hostname)"
but I will disagree with both. Although that works, it is aesthetically unappealing. That command instructs the shell to run hostname and read its output, and then pass that output as part of the argument to echo. But all that work is unnecessary. For this simple use case, it is "simpler" to do:
printf "hostname: "; hostname
(Using printf to suppress the newline, and avoiding echo -n, since echo really should be treated as deprecated.) This way, the output of hostname goes directly to the shell's stdout without any additional work being done by the shell. I put "simpler" in quotes because an argument could be made that humans find printf "hostname: %s\n" "$(hostname)" or echo "hostname: $(hostname)" simpler, and perhaps looking at a lot of code written that way warps your mind and even makes it look simpler, but a few moments' reflection should reveal that it is not.
OTOH, there are valid reasons for collecting the output and writing the message with echo/printf. In particular, by doing it that way, the message will (most likely) be written with one system call and not be subject to interleaving with messages from other processes. If you printf first and then execute hostname, other processes' data may get written between the output of printf and the output of hostname. As always, YMMV.
This is what you should do:
echo "hostname: `hostname`"
Text enclosed in backticks is executed as a command and replaced by its output (command substitution), even inside a double-quoted string, as in this case.
Have a great day! :)

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout returned by the script.
The GitLab CI runners stop logging after 100MB of log; the job says "Job's log exceeded limit of 10240000 bytes."
I know the log can only grow.
How can I reduce the size of the output log?
I don't need all the stdout. I could keep only stderr, but then it would be a long-running script with no information.
Is there a way to display the commands being run, as set -x does?
Edit
Reading the answers, I was not able to solve my issue. I should add that I am using Node.js to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
scripts:
- node my_script.js
Within my_script.js, I have:
const { spawn } = require('child_process');  // Node.js built-ins used below
const path = require('path');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`ps process exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within my_script.sh, I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file handle with m>&n, where n is the number of the descriptor to duplicate and m is the number of the new copy (choose a big number like 99 for m so you don't accidentally clobber an existing handle).
exec 98>&1 # save stdout
exec 99>&2 # save stderr
exec >/dev/null 2>&1
:
To re-enable output,
exec 1>&98 2>&99
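Putting the pieces together (a sketch; 98 and 99 are the arbitrary high descriptors chosen above):
exec 98>&1 99>&2        # save stdout and stderr
exec >/dev/null 2>&1    # silence everything from here on
echo "this goes nowhere"
exec 1>&98 2>&99        # restore the saved descriptors
echo "visible again"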
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original command here, backgrounded with its output captured
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" >"$t"/stdout 2>"$t"/stderr &
buildpid=$!
while kill -0 $buildpid 2>/dev/null; do  # stop looping once the build exits
    sleep 180
    date
    tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
Option 1
If your script outputs its error messages to stderr, you can ignore all output to stdout by using command > /dev/null, where /dev/null is a black hole that discards anything written to it.
Option 2
If there's any pattern to your error messages, you can use grep to filter for just those lines.
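For instance (a sketch; the ERROR|WARN pattern is hypothetical and would need to match what your scripts actually print):
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" 2>&1 | grep -E 'ERROR|WARN'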
Edit 1:
To show each command as it runs, you can pass the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
bash will print each command it executes to stderr.
Edit 2:
If you want to reduce the size of the output file, you can pipe the output through gzip:
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile
To read the content of the logfile, you can use zcat logfile.

What does this if-statement from a bash script do?

I am new to bash scripting and learning through some examples. One of the examples I saw uses an if-statement to test whether a previously assigned output file is valid, like this:
if [ -n "$outputFile" ] && ! 2>/dev/null : >> $outputFile ; then
exit 1
fi
I understand what [ -n "$outputFile" ] does, but not the rest of the conditional. Can someone explain what ! 2>/dev/null : >> $outputFile means and does?
I have googled for answers, but most links found were explanations of I/O redirection, which are definitely relevant but still leave the ! : >> structure unclear.
That's some oddly written code!
The : command is built into bash. It's equivalent to true; it does nothing, successfully.
: >> $outputFile
attempts to do nothing, and appends the (empty) output to $outputFile -- which has already been confirmed to be a non-empty string. The >> redirection operator will create the file if it doesn't already exist.
I/O redirections such as 2>/dev/null can appear anywhere in a simple command; they don't have to be at the end. So the stdout of the : command (which is empty) is appended to $outputFile, and any stderr output is redirected to /dev/null. Any such stderr output would be the result of a failure in the redirection itself, since the : command does nothing and won't fail to do so. I don't know why the redirection of stdout (onto the end of $outputFile) and the redirection of stderr (to /dev/null) are on opposite sides of the : command.
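For instance, these two lines are equivalent; the placement of the redirections is purely stylistic:
2>/dev/null : >> $outputFile
: >> $outputFile 2>/dev/null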
The ! operator is a logical "not"; it checks whether the following command succeeded, and inverts the result.
The net result, written in English-ish text is:
if "$outputFile" is set and is not an empty string, and if we don't have permission to write to it, then terminate the script with a status of 1.
In short, it tests whether we're able to write to $outputFile, and bails out if we don't.
The script is attempting to make sure $outputFile is writable in a not-so-obvious way.
: is the null command in bash; it does nothing. Redirecting its stderr to /dev/null simply suppresses the permission-denied error, should one occur.
If the file is not writable, then the command fails, which makes the condition true since it's negated with ! and the script exits.
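A more explicit way to write the same check, as a sketch (keeping the question's $outputFile variable and adding a diagnostic message that the original omits):
if [ -n "$outputFile" ] && ! : >> "$outputFile" 2>/dev/null; then
    echo "error: cannot write to $outputFile" >&2
    exit 1
fi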

egrep command with piped variable in ssh throwing No Such File or Directory error

OK, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens. I'm trying to pass multiple tokens to an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like below, but its length is dynamic, meaning it can contain many file names separated by pipes.
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
It's worth noting that my log file data has spaces in it. So, in the code piece I've given, the actual comments I want to extract start after the jar file name, e.g.: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comments are 'New Life Starts here', but the output shows that we only get as far as 'New' and then it breaks at the space. I tried setting IFS, but to no avail. Probably I need to set it on the remote side, but I don't know how to do that.
Any help?
Your command is trying to run the egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine, and pass the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
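Because the matching log lines contain spaces, keep the IFS=$'\n' setting from the question so each line becomes one array element; with bash 4+, mapfile does the same thing more robustly (a sketch, not part of the original answer):
mapfile -t commentsArray < <(ssh "$sourceUser@$sourceHost" "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'")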
You should use fgrep to avoid regex special interpretation of your input:
commentsArray=($(ssh $sourceUser@$sourceHost "$(fgrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))

read command is not taking input from the terminal

I don't know if it is weird that read is not taking input from the terminal.
The configure script, which is used in the source build process, should ask the user to select the database type, either MySQL or Oracle (below is the code).
MYSQLLIBPATH="/usr/lib/mysql"
echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
read in
echo $? >> /tmp/error.log
if test -z "$in" -o "$in" = "2"
then
    DATABASE=-DDB_MYSQL
    if true; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    echo "Enter Mysql Library Path: (eg: $MYSQLLIBPATH (default))"
    read in
    echo $? >> /tmp/error.log
    if test -n "$in"
    then
        MYSQLLIBPATH=`echo $in`
    fi
    echo "Mysql Lib path is $MYSQLLIBPATH"
else
    if false; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    DATABASE=-DDB_ORACLE
    LD_PATH=
fi
But the read command is not asking for user input; it's failing to take input from stdin.
When I checked the exit status of the commands in error.log, it showed
1
1
Could anyone tell me why read is failing to take input from stdin?
Are there any builtin variables which can block read from taking input?
Most likely read executes with standard input redirected from a file that has reached EOF. If the above is not the whole of your configure code, check that there are no input redirections. Could the code above be part of a function which was invoked with input from a pipe or a file? Otherwise, check how configure is executed: are there any redirections?
Otherwise, the universal advice applies: try simplifying and stripping down your code until it is obvious what's happening.
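A quick way to reproduce the symptom (a sketch): when stdin is redirected from an empty source, read fails immediately with status 1, matching the statuses logged to error.log above:
read in < /dev/null; echo $?   # prints 1: read hit EOF without reading anything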
BTW, it is not a good idea to make configure interactive if you want your program packaged for a distribution; it's not easy to control the execution of interactive programs. Consider adding support for supplying parameters through command-line options.
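If the script must remain interactive even when its stdin is redirected, one common workaround (a sketch, not part of the original answer) is to read from the controlling terminal directly:
if [ -t 0 ]; then
    read in               # stdin is a terminal; read normally
else
    read in < /dev/tty    # stdin is redirected; ask the controlling terminal (fails if there is none, e.g. in CI)
fi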
