How to keep newlines in error messages from the "exec" command in Tcl on Linux

I am trying to execute a binary (generated from C code) through the Tcl "exec" command. The binary throws an exception after executing some code and printing some output. I want to see the error messages printed by the binary, but they all arrive on a single line, with the newline characters removed.
I have already tried the -keepnewline and -ignorestderr switches of exec, but neither helps.
This is how I am executing the binary
exec abc.out
I have tried
exec -keepnewline -ignorestderr abc.out
The C file (from which the binary is generated) contains some 100 printf statements, each followed by a newline character. But all the newline characters are stripped by exec, and the 100 lines come out as one long line. I guess the messages are going to stderr, which is where the newlines are lost, but I am not sure. Is there a way to get each message on its own line?

You might want to redirect stdout and stderr of the child process to the parent (Tcl) process:
exec abc.out >@stdout 2>@stderr

Related

How to capture a Linux command's log into a file?

Let's say I have the below command.
STATE_NOT_C_COUNT=`mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" --eval "db.$MONGO_DATABASE.count({\"state\" : {"'"$ne"'":\"C\"},\"physicalTableName\":\"table_name\"},{nolock:true})" | tail -1`
When I run the above command, I get an exception like:
exception: connect failed
I want to capture this exception into a file via the error function below.
error(){
    if [ "$?" -ne "0" ]; then
        echo "$1" 2>&1 error_log
        exit 1
    fi
}
I'm using the above function like this:
error $STATE_NOT_C_COUNT
But I'm not able to capture the exception into the file through the function.
What you are doing is terrible. Let the program that fails print its error messages to stderr, and ensure that stderr is pointed to the right thing. However, the major issue you are having is just lack of quotes. Try:
error "$STATE_NOT_C_COUNT"
The issue is that the command error $STATE_NOT_C_COUNT is subject to field splitting: if $STATE_NOT_C_COUNT contains any whitespace, it is split into multiple arguments, and you are only writing the first one. Another alternative is to write echo "$@" in the function, but this will squash whitespace. However, it cannot be stressed enough that this is a terrible approach, completely against the Unix philosophy. The program should write its errors to stderr, and you should let them go there; just make sure stderr points where you want it. The only real reason to capture stderr is if you want to write it to multiple locations, so you might pipe it to tee or to a syslogger, or some other message bus, but doing such a thing is questionable.
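For reference, here is a minimal sketch of the repaired function and call; the switch to >> error_log is my assumption about the intent, since the original 2>&1 error_log never writes to a file at all (error_log is just passed to echo as a literal word):

error() {
    if [ "$?" -ne 0 ]; then
        echo "$1" >> error_log   # append the whole message to the log file
        exit 1
    fi
}

# Stand-in for the mongo pipeline from the question: a command that fails
# after printing a multi-word message.
STATE_NOT_C_COUNT=$(echo "exception: connect failed"; false)
error "$STATE_NOT_C_COUNT"   # the quotes keep the message as one argument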

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout returned by the script.
The GitLab CI runners stop logging after about 10 MB of log; the job fails with Job's log exceeded limit of 10240000 bytes.
I know the log can only grow.
How can I reduce the size of the output log?
I don't need all the stdout. I could keep only stderr, but then it would be a long-running script that shows no information.
Is there a way to display the commands that are running, as set -x does?
Edit
Reading the answers, I was not able to solve my issue. I should add that I am using Node.js to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
script:
- node my_script.js
Within my_script.js, I have:
const { spawn } = require('child_process');
const path = require('path');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`ps process exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within my_script.sh, I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file handle with m>&n, where n is the number of the file descriptor to duplicate and m is the number of the new copy (choose a big number like 99 so as not to accidentally clobber an existing handle).
exec 98<&1 # stdout
exec 99<&2 # stderr
exec >/dev/null 2>&1
:
To re-enable output,
exec 1<&98 2<&99
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
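For example, if the wrapper simply runs the one build script, a single redirection like this is enough (build.log is an illustrative name; release.sh is the script from the question):

./release.sh > build.log 2>&1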
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid 2>/dev/null; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original commands here, backgrounded with their output captured
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" >"$t"/stdout 2>"$t"/stderr &
buildpid=$!
# Emit a sign of life every few minutes while the build runs
while kill -0 $buildpid 2>/dev/null; do
    sleep 180
    date
    tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
Option 1
If your script writes its error messages to stderr, you can discard all of stdout by using command > /dev/null, where /dev/null is a black hole that discards anything written to it.
Option 2
If there's any pattern on your error message, you can use grep to filter out those error messages.
Edit 1:
To show the command that is running, you can supply the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
bash will print each command it executes to stderr.
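If you also need stderr kept clean, newer Bash (4.1+) can route the -x trace to a dedicated file descriptor via BASH_XTRACEFD; a minimal sketch, with trace.log as an illustrative name:

exec 9> trace.log    # open a descriptor for the trace output
BASH_XTRACEFD=9      # send the set -x trace there instead of stderr
set -x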
Edit 2:
If you want to reduce the size of the output file, you can pass it to gzip by using ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile.
To read the content of the logfile, you can use zcat logfile.
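Note that cmd | gzip compresses stdout only; a hedged variant that captures both streams (logfile.gz is an illustrative name):

./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" 2>&1 | gzip > logfile.gz
zcat logfile.gz | tail -n 100   # inspect the end of the compressed log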

Prevent script running with same arguments twice

We are looking into building a logcheck script that will tail a given log file and email when the given arguments are found. I am having trouble accurately determining if another version of this script is running with at least one of the same arguments against the same file. Script can take the following:
logcheck -i <filename(s)> <searchCriterion> <optionalEmailAddresses>
I have tried to use ps aux with a series of grep, sed, and cut, but it always ends up being more code than the script itself and seldom works very efficiently. Is there an efficient way to tell if another version of this script is running with the same filename and search criteria? A few examples of input:
EX1 ./logcheck -i file1,file2,file3 "foo string 0123" email@address.com
EX2 ./logcheck -s file1 Hello,World,Foo
EX3 ./logcheck -i file3 foo email@address1.com,email@address2.com
In this case EX3 should not run, because EX1 is already running with the parameters file3 and foo.
There are many solutions for your problem; I would recommend creating a lock file with the following format:
arg1Ex1 PID#(Ex1)
arg2Ex1 PID#(Ex1)
arg3Ex1 PID#(Ex1)
arg4Ex1 PID#(Ex1)
arg1Ex2 PID#(Ex2)
arg2Ex2 PID#(Ex2)
arg3Ex2 PID#(Ex2)
arg4Ex2 PID#(Ex2)
When your script starts:

Search the lock file for all the arguments the script has received (awk or grep).
If one of the arguments is present in the list, fetch that line's process PID (awk '{print $2}', for example) and check whether the process is still running (ps). This double check guards against concurrency, and against garbage left in the file by a process that previously ended abnormally.
If the PID is still running, do not run the script.
Otherwise, append the arguments together with the current process PID to the lock file and run the script.
At the end of the execution, remove the lines containing the arguments that were used, or remove all lines with the script's PID, as in the sketch below.
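As a rough sketch of those steps, assuming a lock file at /tmp/logcheck.lock with the "argument PID" format shown above (the check-then-append below is not atomic; a real implementation would want flock or similar around it):

#!/bin/sh
LOCKFILE=/tmp/logcheck.lock

# Refuse to start if any of our arguments is registered by a live process.
for arg in "$@"; do
    pid=$(awk -v a="$arg" '$1 == a { print $2; exit }' "$LOCKFILE" 2>/dev/null)
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "already running with argument '$arg' (PID $pid)" >&2
        exit 1
    fi
done

# Register our arguments under our PID, do the real work, then clean up.
for arg in "$@"; do
    printf '%s %s\n' "$arg" "$$" >> "$LOCKFILE"
done
# ... the actual logcheck work goes here ...
sed -i "/ $$\$/d" "$LOCKFILE"   # drop every line carrying our PID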

What is the length limit of the file `/proc/{processId}/cmdline`?

I have a Java process whose classpath includes many JARs, so the startup command is very long.
Assume that the process id is 110101. When I check the command with cat /proc/110101/cmdline, I find it is incomplete; it only contains about 4000 characters.
Is there any other method to show the original startup command?
man proc
/proc/[pid]/cmdline
This holds the complete command line for the process, unless
the process is a zombie. In the latter case, there is nothing
in this file: that is, a read on this file will return 0
characters. The command-line arguments appear in this file as
a set of strings separated by null bytes ('\0'), with a
further null byte after the last string.
So cat on that file may not show you all the arguments: they are separated by null bytes rather than spaces, and on older kernels (before roughly Linux 4.2) a read of this file returns at most one page (4096 bytes), which matches the ~4000 characters you are seeing.
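To display the command line readably, using the question's example PID (on a kernel that exposes the full file, these show everything):

tr '\0' ' ' < /proc/110101/cmdline; echo   # render the NUL separators as spaces
ps -p 110101 -o args=                      # let ps reassemble the argument list

For a Java process specifically, jcmd 110101 VM.command_line (shipped with the JDK) prints the JVM's own record of the launch command.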

Is it possible to make stdout and stderr output be of different colors in XTerm or Konsole?

Is it even achievable?
I would like the output from a command’s stderr to be rendered in a different color than stdout (for example, in red).
I need such a modification to work with the Bash shell in the Konsole, XTerm, or GNOME Terminal terminal emulators on Linux.
Here's a solution that combines some of the good ideas already presented.
Create a function in a bash script:
color() ( set -o pipefail; "$@" 2>&1>&3 | sed $'s,.*,\e[31m&\e[m,' >&2 ) 3>&1
Use it like this:
$ color command -program -args
It will show the command's stderr in red.
Keep reading for an explanation of how it works. There are some interesting features demonstrated by this command.
color()... — Creates a bash function called color.
set -o pipefail — This is a shell option that preserves the error return code of a command whose output is piped into another command. This is done in a subshell, which is created by the parentheses, so as not to change the pipefail option in the outer shell.
"$#" — Executes the arguments to the function as a new command. "$#" is equivalent to "$1" "$2" ...
2>&1 — Redirects the stderr of the command to stdout so that it becomes sed's stdin.
>&3 — Shorthand for 1>&3, this redirects stdout to a new temporary file descriptor 3. 3 gets routed back into stdout later.
sed ... — Because of the redirects above, sed's stdin is the stderr of the executed command. Its function is to surround each line with color codes.
$'...' — A bash construct that causes it to understand backslash-escaped characters.
.* — Matches the entire line.
\e[31m — The ANSI escape sequence that causes the following characters to be red.
& — The sed replace character that expands to the entire matched string (the entire line in this case).
\e[m — The ANSI escape sequence that resets the color.
>&2 — Shorthand for 1>&2, this redirects sed's stdout to stderr.
3>&1 — Redirects the temporary file descriptor 3 back into stdout.
Here's an extension of the same concept that also makes STDOUT green:
function stdred() (
    set -o pipefail;
    (
        "$@" 2>&1>&3 |
        sed $'s,.*,\e[31m&\e[m,' >&2
    ) 3>&1 |
    sed $'s,.*,\e[32m&\e[m,'
)
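Usage is the same as for color; for example (make is just an illustrative command):

$ stdred make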
You can also check out stderred: https://github.com/sickill/stderred
I can't see that there is any way for the terminal emulator to do this.
The interface between the terminal emulator and the shell/app is a pseudo-tty, with the terminal emulator on the master side and the shell/app on the other. The shell/app has both stdout and stderr connected to the same pty, so when the terminal emulator reads the shell/app's output from the pty, it can no longer tell which part was written to stdout and which to stderr.
You will have to use one of the solutions that intercepts the data between the application and the slave-pty and inserts escape codes to control the terminal output colo(u)r.
Here is a little Awk script that will print everything you pass it in red.
#!/usr/bin/awk -f
{ printf("%c[%dm%s%c[0m\n", 0x1B, 31, $0, 0x1B); fflush() }
It simply prints each line it receives on stdin within the necessary escape codes to display it in red. It is followed by an escape code to reset the terminal.
(If you need a different color, change the second argument in the above printf call from 31 to the number corresponding to the desired color.)
Save it to colr.awk, do a chmod a+x, and use it like so:
$ my_program | ./colr.awk
It has the drawback that lines may not be displayed in order, because stderr goes directly to the console, while stdout is piped through an additional process.
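To color stderr instead with the same awk script, the descriptor swap used by the color function above applies here too; a hedged one-liner:

(my_program 2>&1 1>&3 | ./colr.awk >&2) 3>&1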
A simple solution to color stdout in red is to pipe it through grep:
program | grep .
This should not require installing anything, as grep should be already installed everywhere.
Taken from Dennis’s comment on superuser.com.
I think you should use the standard escape sequences on stderr. Have a look at this.
Hilite will do this. It's a lightweight solution, but you have to invoke it for each command, e.g. hilite gcc myprog.c. A more radical approach is built into my experimental shell Gush, which shows stderr from all commands in red and stdout in black. Either way is very useful for software builds, where you have lots of output and a few error messages that could easily be missed if not highlighted.
