catch error message and create Journal entry - linux

I put the following code at the top of many of the scripts I use:
#!/bin/bash
# redirect all error messages to the protocol,
# then print the same message to stdout
exec 1> >(logger -i -s -t $0 -p 4) 2>&1
This line of code creates protocol (log) entries for errors that occur while my scripts run.
But this code doesn't work the way I want when I communicate via stdin and stdout: I want to log error messages only.
And to be honest, I don't know how I managed to get this line of code to work.
Nonetheless, I am searching for code to replace this combination of exec and logger with a function installed via trap "createErrorMessage" ERR. But I don't know how to catch/receive error messages this way.
My goal is to create protocol entries of all error messages only.
To be clear, I don't want to use $? after every piece of code and I don't want to catch every piece of code with Variable=$().
Is this even possible?

While logger sends output to syslog, systemd-cat performs the same kind of function for systemd. See for example:
echo "hello" | systemd-cat
journalctl | tail -10
If you are running your scripts as systemd service units, then there's no need to use systemd-cat: by default, systemd sends the STDOUT and STDERR of the services it controls to the journal.
See man systemd-cat for more about that tool.
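If the goal is to get only error messages into the journal (as in the original question) while stdout stays untouched, a minimal sketch is to redirect just fd 2 into systemd-cat; the tag mytag is an arbitrary placeholder:
#!/bin/bash
# send only stderr to the journal; stdout is left alone
exec 2> >(systemd-cat -t mytag -p err)
echo "this stays on stdout"
ls /nonexistent   # this error message ends up in the journal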

Maybe I don't understand what you want, but let's say you have this script mytest.sh:
date > jj
cat jj - jjj # jj + stdin + (nonexistent) jjj, e.g. an error too
mkdir jj #error
So, when you use this script with redirections,
echo "Hello world" | bash mytest.sh > output
you will get:
1.) in the file output
Thu Apr 6 21:21:13 CEST 2017
Hello world
and on the screen, the errors:
cat: jjj: No such file or directory
mkdir: jj: File exists
Now, change the above script to
((
date > jj
cat jj - jjj # jj + stdin + (nonexistent jjj)
mkdir jj #error
) 3>&1 1>&2 2>&3 | tee >(logger -i -t $0 -p 4)) 3>&1 1>&2 2>&3
Note that the -s has been removed from the logger args.
Now, when you use it again:
echo "Hello world" | bash mytest.sh > output
the file output will contain the stdout, as you expected,
the stderr will go to the screen (and you can redirect it again),
and the logger will log all errors.
As you surely know, here is how it works:
it swaps stdout and stderr,
pipes stdout (which now carries the old stderr) to the logger process,
and swaps stdout/stderr back.
It could probably be simpler, because logger with -s can duplicate the messages to stderr itself, but this works universally. Unfortunately, it is a bit inefficient, costing two more forks; note the ((.
Using the:
somefunc() { some actions...; }
trap 'somefunc' ERR
will not help you as you expect. Doing some fancy redirections in somefunc is too late, because somefunc is triggered after the error happens, i.e. the error message has already been printed to stderr.
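If the goal is really just "protocol entries of all error messages only", a hedged variation of the question's own exec line is to redirect only stderr into logger through a process substitution (a minimal sketch; ls /nonexistent is just an arbitrary failing command):
#!/bin/bash
# log only stderr; stdout keeps flowing wherever it already points
# (-s also echoes each logged line back to the original stderr)
exec 2> >(logger -i -s -t "$0" -p 4)
echo "normal output, not logged"
ls /nonexistent   # this error is logged (and still shown on stderr)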

Related

Running a process with the TTY detached

I'd like to run a Linux console command from a terminal while preventing it from accessing the TTY by itself (which often happens, for example, when the command tries to request a password from the user; this should just fail). The closest I got to a solution is this wrapper:
temp=$(mktemp -d)
echo "$@" > "$temp/run.sh"
mkfifo "$temp/out" "$temp/err"
# run the command in a new session so it has no controlling TTY
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &
cat "$temp/err" 1>&2 &
cat "$temp/out"
rm -f "$temp/out" "$temp/err" "$temp/run.sh"
rmdir "$temp"
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
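For reference, a minimal way to exercise the wrapper might look like this (assuming it was saved as notty.sh; the host name is a placeholder):
# a command that would otherwise prompt for a password on the TTY
# should now fail instead of prompting, while its output still
# arrives through the FIFOs
bash notty.sh ssh user@example.com true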

bash: what to do when stdout does not exist

In a very simplified scenario, I have a script that looks like this:
mv test _test
sleep 10
echo $1
mv _test test
and if I execute it with:
ssh localhost "test.sh foo"
the test file will have an underscore in its name as long as the script is running, and when the script finishes, it will send foo back. The script SHOULD keep running even if you terminate the ssh command by pressing ctrl+c or if you lose the connection to the server, but it doesn't (the file is not renamed back to "test"). So, I tried the following:
nohup ssh localhost "test.sh foo"
and it makes ssh immune to ctrl+c but flaky connection to the server still causes trouble. After some debugging, it turns out that the script WILL actually reach the end IF THERE IS NO ECHO IN IT. And when you think about it, it makes sense - when the connection is dropped, there is no more stdout (ssh socket) to echo to, so it will fail, silently.
I can, of course, echo to a file and then get the file, but I would prefer something smarter, along the lines of test tty && echo $1 (but tty invoked like this always returns false). Any suggestions are greatly appreciated.
The following command does what you want:
ssh -t user@host 'nohup ~/test.sh foo > nohup.out 2>&1 & p1=$!; tail -f ~/nohup.out & wait $p1'
... test.sh is located in the user's home directory
Explanation:
1.) "ssh -t user@host" ... pretty clear ... starts the remote session
2.) "nohup ~/test.sh foo > nohup.out 2>&1" ... starts the test.sh script with nohup in the background
3.) "p1=$!;" ... stores the pid of the previous command's child in p1
4.) "tail -f ~/nohup.out &" ... tails nohup.out in the background to show the output of test.sh
5.) "wait $p1" ... waits for the process test.sh (whose pid is stored in p1) to finish
The above command works even if you interrupt it with ctrl+c.
you can use ...
ssh -t localhost "test.sh foo"
... to force a tty allocation
As st0ne suggested, tail fails but does not cause the script to terminate, as opposed to cat and echo (presumably because echo is a shell builtin, so the failed write hits the script's own shell, while tail is a separate process that absorbs the failure). So there is no need for nohup, redirecting stdout to a temporary file, etc.; just plain and simple:
mv test _test
sleep 10
echo $1 | tail
mv _test test
and execute it with:
ssh localhost "test.sh foo"

Redirect all output in a bash script when using set -x, capture pid and all output

I'm modifying an old script and for some reason it uses a subshell. I'm not sure if maybe the subshell is what's tripping me up. What I really want is to start a service and capture all of STDOUT and STDERR to a file, as well as its PID. Additionally, however, I want some debug information in the log file. Consider the script below (startFoo.sh):
#!/bin/bash
VARIABLE=$(something_dynamic)
echo "Some output"
(
# Enable debugging
set -x
foo -argument1=bar \
-argument2=$VARIABLE
# Disable debugging
set +x
) > /tmp/foo_service.log 2>&1 &
OUTER_PID=$!
echo $OUTER_PID > foo.pid
This seems to work in that I'm capturing most of the output to the log as well as the PID, but for some reason not all of the output is redirected. When I run the script, I see this in my terminal:
[me@home ~]$ sudo startFoo.sh
Some output
[me@home ~]$ + foo -argument1=bar -argument2=value
How can I squash the output in my prompt that says [me@home ~]$ + foo...?
Note: this question may be related to another question, redirect all output in a bash script when using set -x, but my specific usage is different.
UPDATE: My script now looks like this, but something is still not quite right:
#!/bin/bash
VARIABLE=$(something_dynamic)
echo "Some output"
(
# Enable debugging
set -x
foo -argument1=bar \
-argument2=$VARIABLE
# Disable debugging
set +x
) > /tmp/foo_service.log 2>&1 &
PID=$!
echo $PID > foo.pid
However, when I do this, the PID file contains the PID for startFoo.sh, not the actual invocation of foo which is what I really want to capture and be able to kill. Ideally I could kill both startFoo.sh and foo with one PID, but I'm not sure how to do that. How is this normally handled?
UPDATE: The solution (thanks to a conversation with @konsolebox) is below:
#!/bin/bash
VARIABLE=$(something_dynamic)
echo "Some output"
{
# Enable debugging
set -x
foo -argument1=bar \
-argument2="$VARIABLE" &
PID=$!
echo $PID > foo.pid
# Disable debugging
set +x
} > /tmp/foo_service.log 2>&1
Change 2>&1> /tmp/foo_service.log to >/tmp/foo_service.log 2>&1.
You should first redirect fd 1 to the file, then let fd 2 duplicate it. With the former order you first make fd 2 a copy of fd 1 while fd 1 still points to the default stdout, so fd 2 never picks up the file, which is only opened afterwards.
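A quick way to see the difference (ls /nonexistent is just an arbitrary command that writes to stderr):
ls /nonexistent > out.log 2>&1   # the error message lands in out.log
ls /nonexistent 2>&1 > out.log   # the error message still lands on the terminal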

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child-scripts and send an email upon overall completion, e.g. topscript.sh
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group@mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user#host. Could the pseudo-tty be interfering?
I just tried the following: I have three files, t1.sh, t2.sh, and t3.sh, all with this content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the programs run by your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms and may require installing a package (it ships as part of the expect package).
I hope this helps.
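If unbuffer isn't available, GNU coreutils' stdbuf can often force line buffering instead; a sketch, under the assumption that the child programs use normal C stdio buffering and don't override it themselves:
# line-buffer stdout of topscript.sh and the programs it runs
stdbuf -oL topscript.sh >/var/log/topscript.log 2>&1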

Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?
One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
Port 12345 represents the request.
This starts a nc server on port 12345 and all the connections get redirected to google.com:80. If a web browser makes a request to nc, the request will be sent to google but the response will not be sent to the web browser. That is because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output.
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
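To try the proxy, point an HTTP client at the local port, for example (assuming curl is installed):
curl -s -H 'Host: www.google.com' http://localhost:12345/ | head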
Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking). The & puts this into the background so you can continue to type commands in the same shell. It will exit when the FIFO is emptied by the next command.
The last command reads from the pipe.
Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you any prompt back until you execute the second part of the code in the second shell, because reads and writes on a FIFO block.
You can also have a look at the FIFO file type by doing ls -al myPipe and inspecting the details of this special type of file.
The next step would be to embed the code in a script!
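For example, a minimal sketch of such a script, running the reader and the writer from a single shell (the pipe path /tmp/myPipe is an arbitrary choice):
#!/bin/bash
pipe=/tmp/myPipe
[ -p "$pipe" ] || mkfifo "$pipe"
# reader in the background; it blocks until the writer opens the pipe
while read -r line; do
    echo "What has been passed through the pipe is ${line}"
done < "$pipe" &
# writer; returns once the reader has picked the line up
echo "IPC_example_between_two_shells" > "$pipe"
wait          # wait for the background reader to finish
rm "$pipe"    # remove the FIFO again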
Creating a named pipe
$ mkfifo pipe_name
On Unix-like systems, a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it) but doesn't open it. You need to open and close it separately, like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs writer in a background
$ cat pipe_name
Hello pipe!
Here the writer runs alongside the reader, allowing real-time communication between the processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (buffer size).
It's preferable to use descriptors to transfer multiple blocks of data in order to reduce overhead.
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
Opening the FD in advance allows the data transfer to start before the shell is ready to receive it; this is required when the pipe is used sequentially.
The signal should be sent before the data to prevent a deadlock in case the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (this can be done while the pipe is still open) you can use the rm console command (it's the opposite of the mkfifo command):
$ rm pipe_name
Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 created a named pipe.
It wrote data in it using echo.
It is blocked because there is no reading end yet (pipes, both named and unnamed, need both a reading and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a reading end for the data is added.
It reads the data using cat.
Since both the reading and writing ends now exist for new_named_pipe, the data is displayed and the blocking stops.
Named pipes are used in many places on Linux; like the character and block device files you see under /dev with ls -l, they are special file types rather than regular files.
These pipes can be blocking or non-blocking, and their main advantage is that they provide one of the simplest ways to do IPC.
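As a quick check of the file type, ls -l marks a FIFO with a leading p (the owner, size and timestamp below are only illustrative):
$ mkfifo demo_pipe
$ ls -l demo_pipe
prw-r--r-- 1 user user 0 Apr  6 21:21 demo_pipe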
