Bash: how to cleanly log processed lines of ssh/bash output? - linux

I wrote a linux bash script with tee and grep to log and timestamp the actions I take in my various ssh sessions. It works, but the logged lines are sometimes mixed together and are full of control characters. How can I properly escape the control characters and other characters not visible in the original sessions, and log each line separately?
I am learning bash and the linux interface, so any other suggestions to improve the script would be extremely welcome!
Here is my script (used as a wrapper for the ssh command):
#! /bin/bash
logfile=~/logs/ssh.log
desc="sshlog ${@}"
tab="\t"
format_line() {
while IFS= read -r line; do
echo -e "$(date +"%Y-%m-%d %H:%M:%S %z")${tab}${desc}${tab}${line}"
done
}
echo "[START]" | format_line >> ${logfile}
# grep is used to filter out command line output while keeping commands
ssh "$#" | tee >(grep -e '\#.*\:.*\$' --color=never --line-buffered | format_line >> ${logfile})
echo "[END]" | format_line >> ${logfile}
And here is a screenshot of the garbled output in the log file:
A note on the solution: Tiago's answer took care of the nonprinting characters very well. Unfortunately, I just realized that the jumbling is being caused by backspaces and using the up and down keys for command completion. That is, the characters are being piped to grep as soon as they appear, and not line-by-line. I will have to ask about this in another question.
Update: I figured out a way to (almost always) handle up/down completion, backspace completion, and control characters.
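For anyone hitting the same problem, one possible cleanup pass (a sketch only, assuming col and GNU sed are available; not necessarily the fix the OP settled on) is to strip cursor-movement escape sequences and let backspaces be interpreted before a line reaches format_line:
# Hypothetical helper: remove CSI escape sequences (arrow-key history recall,
# erase-line, etc.), then let col -b interpret any remaining backspaces.
clean_keys() {
    sed -E $'s/\x1b\\[[0-9;]*[A-Za-z]//g' | col -b
}
# e.g. ... | clean_keys | format_line >> ${logfile}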

You can remove those characters with:
perl -lpe 's/[^[:print:]]//g'
Not filtered:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}' | cat -A
^@^A^B^C^D^E^F^G^H^I$
^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z^[^\^]^^^_ !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~^?M-^@M-^AM-^BM-^CM-^DM-^EM-^FM-^GM-^HM-^IM-^JM-^KM-^LM-^MM-^NM-^OM-^PM-^QM-^RM-^SM-^TM-^UM-^VM-^WM-^XM-^YM-^ZM-^[M-^\M-^]M-^^M-^_M- M-!M-"M-#M-$M-%M-&M-'M-(M-)M-*M-+M-,M--M-.M-/M-0M-1M-2M-3M-4M-5M-6M-7M-8M-9M-:M-;M-<M-=M->M-?M-@M-AM-BM-CM-DM-EM-FM-GM-HM-IM-JM-KM-LM-MM-NM-OM-PM-QM-RM-SM-TM-UM-VM-WM-XM-YM-ZM-[M-\M-]M-^M-_M-`M-aM-bM-cM-dM-eM-fM-gM-hM-iM-jM-kM-lM-mM-nM-oM-pM-qM-rM-sM-tM-uM-vM-wM-xM-yM-zM-{M-|M-}M-~M-^?
Filtered:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}' | perl -lpe 's/[^[:print:]]//g' | cat -A
$
!"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~$
Explanation:
I am printing the whole ASCII table with:
perl -e 'for($i=0; $i<=255; $i++){print chr($i);}'
I am identifying non printable chars with:
cat -A
I am filtering non printable chars with:
perl -lpe 's/[^[:print:]]//g'
Edit: It seems to me that you also need to remove ANSI color codes:
Example:
perl -MTerm::ANSIColor -e 'print colored("yellow on_magenta","yellow on_magenta"),"\n"'| sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" | perl -lpe 's/[^[:print:]]//g'
Adapting to your code:
format_line() {
while IFS= read -r line; do
line=$(sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" <<< "$line")
line=$(perl -lpe 's/[^[:print:]]//g' <<< "$line")
echo -e "$(date +"%Y-%m-%d %H:%M:%S %z")${tab}${desc}${tab}${line}"
done
}
I also edited your grep command:
ssh "$#" | tee >(grep -Po '(?<=\$).*' --color=never --line-buffered | format_line >> ${logfile})
Below is the output of my test:
2014-06-26 10:11:10 +0100 sshlog tiago@localhost [START]
2014-06-26 10:11:15 +0100 sshlog tiago@localhost whoami
2014-06-26 10:11:16 +0100 sshlog tiago@localhost exit
2014-06-26 10:11:16 +0100 sshlog tiago@localhost [END]

While writing your own script is a great learning experience, you can also use script to record everything printed on your terminal to a file.
The resulting file will still contain the control characters, but there are multiple ways to get rid of them, as described in How to clean up output of linux 'script' command.
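For example, something along these lines (a sketch; the paths and user@host are placeholders, and the flags are the util-linux ones):
script -q -c "ssh user@host" ~/logs/ssh.typescript   # record the whole session
col -b < ~/logs/ssh.typescript > ~/logs/ssh.clean.log   # interpret backspaces afterwards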

Related

Is it possible to do watch logfile with tail -f and pipe updates/changes over netcat to another local system? [duplicate]

This question already has answers here:
Piping tail output though grep twice
(2 answers)
Closed 4 years ago.
There is a file located at $filepath, which grows gradually. I want to print every line that starts with an exclamation mark:
while read -r line; do
if [ -n "$(grep ^! <<< "$line")" ]; then
echo "$line"
fi
done < <(tail -F -n +1 "$filepath")
Then, I rearranged the code by moving the comparison expression into the process substitution to make the code more concise:
while read -r line; do
echo "$line"
done < <(tail -F -n +1 "$filepath" | grep '^!')
Sadly, it doesn't work as expected; nothing is printed to the terminal (stdout).
I prefer to write grep ^\! after tail. Why doesn't the second code snippet work? Why does putting the pipe into the process substitution make things different?
PS1. This is how I manually produce the gradually growing file by randomly executing one of the following commands:
echo ' something' >> "$filepath"
echo '!something' >> "$filepath"
PS2. Test under GNU bash, version 4.3.48(1)-release and tail (GNU coreutils) 8.25.
grep is not line-buffered when its stdout isn't connected to a tty, so it collects a full block (usually 4 KiB or 8 KiB or so) before producing any output.
You need to tell grep to buffer its output by line. If you're using GNU grep, this works:
done < <(tail -F -n +1 "$filepath" | grep '^!' --line-buffered)
^^^^^^^^^^^^^^^
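If your grep doesn't have --line-buffered, a rough alternative (my sketch, assuming GNU coreutils' stdbuf is installed) is to force line buffering from the outside:
while read -r line; do
    echo "$line"
done < <(tail -F -n +1 "$filepath" | stdbuf -oL grep '^!')   # -oL = line-buffer grep's stdout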

Shell script : sed command is ignoring INFO lines given main 2>&1

main 2>&1 gives INFO, ERROR and Running status lines.
But when I pipe it through sed, it still gives the INFO and Running status lines along with the ERROR lines, just not in the order they should be in.
The jumbled status lines appear at the end, after the job has executed successfully.
main 2>&1 | sed "/ERROR.*[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}-[0-9]\{3\}.*20[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}_[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}.*(stderr)/ s/ERROR/NOTE/" | tee -a $log_file
These are the lines which are missing after I use the sed command:
If you extend the sed address with \(Running\|INFO\|ERROR...\) as below, it will also print the Running and INFO lines along with the ERROR lines:
main 2>&1 | sed "/\(Running\|INFO\|ERROR.*[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}-[0-9]\{3\}.*20[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}_[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}.*(stderr)\)/ s/ERROR/NOTE/" | tee -a $log_file
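A quick way to sanity-check that alternation is to feed the sed command a few fabricated lines (the version and timestamp values below are made up, not real job output):
printf '%s\n' \
  'Running step 1' \
  'INFO starting job' \
  'ERROR 1.2.3.4-001 foo 2014-06-26_10-11-12 bar (stderr)' |
sed "/\(Running\|INFO\|ERROR.*[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}.[0-9]\{1\}-[0-9]\{3\}.*20[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}_[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}.*(stderr)\)/ s/ERROR/NOTE/"
# The Running and INFO lines pass through unchanged; ERROR becomes NOTE on the matching line.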

Bash output to screen and logfile differently

I have been trying to get a bash script to output different things on the terminal and logfile but am unsure of what command to use.
For example,
#!/bin/bash
freespace=$(df -h / | grep -E "/" | awk '{print $4}')
greentext="\033[32m"
bold="\033[1m"
normal="\033[0m"
logdate=$(date +"%Y%m%d")
logfile="$logdate"_report.log
exec > >(tee -i $logfile)
echo -e $bold"Quick system report for "$greentext"$HOSTNAME"$normal
printf "\tSystem type:\t%s\n" $MACHTYPE
printf "\tBash Version:\t%s\n" $BASH_VERSION
printf "\tFree Space:\t%s\n" $freespace
printf "\tFiles in dir:\t%s\n" $(ls | wc -l)
printf "\tGenerated on:\t%s\n" $(date +"%m/%d/%y") # US date format
echo -e $greentext"A summary of this info has been saved to $logfile"$normal
I want to omit the last output (echo "A summary...") in the logfile while displaying it in the terminal. Is there a command to do so? It would be great if a general solution can be provided instead of a specific one because I want to apply this to other scripts.
EDIT 1 (after applying >&6):
Files in dir: 7
A summary of this info has been saved to 20160915_report.log
Generated on: 09/15/16
One option:
exec 6>&1 # save the existing stdout
exec > >(tee -i $logfile) # like you had it
#... all your outputs
echo -e $greentext"A summary of this info has been saved to $logfile"$normal >&6
# writes to the original stdout, saved in file descriptor 6 ------------^^^
The >&6 sends echo's output to the saved file descriptor 6 (the terminal, if you're running this from an interactive shell) rather than to the output path set up by tee (which is on file descriptor 1). Tested on bash 4.3.46.
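If you want to tidy up at the end of the script, you can also put stdout back and close the spare descriptor (an optional extra, not required by the answer above):
exec 1>&6 6>&-   # restore the original stdout and close file descriptor 6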
References: "Using exec" and "I/O Redirection"
Edit: As the OP found, the >&6 message is not guaranteed to appear after the lines printed by tee off stdout. One option is to use script, e.g., as in the answers to this question, instead of tee, and then print the final message outside of the script. Per the docs, the stdbuf answers to that question won't work with tee.
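A rough sketch of that script-based variant (report_body.sh is a placeholder for whatever produces the report body, and the flags assume util-linux script):
script -q -c ./report_body.sh "$logfile"   # everything the body prints is shown and logged
echo -e $greentext"A summary of this info has been saved to $logfile"$normal   # terminal only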
Try a dirty hack:
#... all your outputs
echo >&6 # <-- New line
echo -e $greentext ... >&6
Or, equally hackish (note that, per the OP, this worked):
#... all your outputs
sleep 0.25s # or whatever time you want <-- New line
echo -e ... >&6

cat file_name | grep "something" results in "cat: grep: No such file or directory" in shell scripting

I have written a shell script which reads commands from an input file and executes them. I have a command like:
cat linux_unit_test_commands | grep "dmesg"
in the input file. I am getting the error messages below while executing the shell script:
cat: |: No such file or directory
cat: grep: No such file or directory
cat: "dmesg": No such file or directory
Script:
#!/bin/bash
while read line
do
output=`$line`
echo $output >> logs
done < $1
Below is the input file (example_commands):
ls
date
cat linux_unit_test_commands | grep "dmesg"
Execute: ./linux_unit_tests.sh example_commands
Please help me to resolve this issue.
Special characters like | and " are not parsed after expanding variables; the only processing done after variable expansion is word splitting and wildcard expansions. If you want the line to be parsed fully, you need to use eval:
while read line
do
output=`eval "$line"`
echo "$output" >> logs
done < $1
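Since eval runs whatever is in the file in the current shell, the command file needs to be trusted. A variant (my sketch, not part of the original answer) that parses each line in a child shell instead looks like this:
while read -r line
do
    output=$(bash -c "$line")   # bash -c parses pipes and quotes in a separate shell
    echo "$output" >> logs
done < "$1"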
You might be wondering why it's not working with the cat command.
Here is the answer to your question.
output=`$line` i.e. output=`cat linux_unit_test_commands | grep "dmesg"`
Here the cat command takes everything that follows it (linux_unit_test_commands, |, grep, "dmesg") as arguments, i.e. file names.
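You can see that word splitting for yourself (illustration only):
line='cat linux_unit_test_commands | grep "dmesg"'
printf '<%s> ' $line; echo
# prints: <cat> <linux_unit_test_commands> <|> <grep> <"dmesg">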
From Man page:
SYNTAX : cat [OPTION]... [FILE]...
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Script is OK!
#!/bin/bash
while read line;
do
output=`$line`
echo $output >> logs
done < $1
To make it work you need to change the line cat linux_unit_test_commands | grep "dmesg" in the input file to grep "dmesg" linux_unit_test_commands. It will work!
cat linux_unit_test_commands
ls
date
grep "dmesg" linux_unit_test_commands

Find and highlight text in linux command line

I am looking for a linux command that searches for a string in a text file,
and highlights (colors) every occurrence of it in the file, WITHOUT omitting text lines (like grep does).
I wrote this handy little script. It could probably be expanded to handle args better:
#!/bin/bash
if [ "$1" == "" ]; then
echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
grep -E --color "$1|$" /dev/stdin
else
grep -E --color "$1|$" $2
fi
The |$ alternative also matches the empty end of every line, so no lines are filtered out. It's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight. The empty alternative after the | matches every line, so nothing is dropped and only DEBUG gets colored.
command | grep -iz -e "keyword1" -e "keyword2" (drop the extra -e switch if just searching for a single word; -i ignores case; -z treats the input as NUL-separated, so a file without NUL bytes is matched as one block)
Alternatively,while reading files
grep -iz -e "keyword1" -e "keyword2" 'filename'
OR
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" (drop the extra -e switch if just searching for a single word; -i ignores case; -A and -B set the number of lines before/after the keyword to be displayed)
Alternatively,while reading files
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
The ack command with its --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script such as this one (a simple solution that takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second):
#!/usr/bin/env bash
ifs_store=$IFS;
IFS=$'\n';
for line in $(cat $1);
do if [ $(echo $line | grep -c $2) -eq 0 ]; then
echo $line;
else
echo $line | grep --color=always $2;
fi
done
IFS=$ifs_store
Save it as, for instance, colorcat.sh, set the permissions appropriately (so you can execute it), and call it as:
colorcat.sh filename searchstring
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, you can output differences with colors and line numbers:
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt
