I have deployed a post-receive hook script and I want the whole output logged to a file as a single block that starts with the date. I have tried one method, but it logs line by line, meaning I have to put that command on every line to get the log. Is there any way to log the whole script's output at once, starting with the date?
Script file looks like this:
#!/bin/bash
any_command | ts '[%F %T]' >> /home/Man/Man_log.log
second_command | ts '[%F %T]' >> /home/Man/Man_log.log
third_command | ts '[%F %T]' >> /home/Man/Man_log.log
As you can see, I have to append | ts '[%F %T]' >> /home/Man/Man_log.log to every command to get a log. And I have 90 lines, so this is not practical. I need an efficient way: a single line in my script that captures the output of the whole script and stores it in Man_log.log, starting with the date.
What I want is something like this:
#!/bin/bash
ts '[%F %T]' >> /home/Man/Man_log.log #a command which can store logs of every command below this to a separate file starting with date
any_command
second_command
third_command
Probably the easiest way would be to modify your script to print the date every time it is called:
#!/bin/sh
date -u +"%Y-%m-%dT%H:%M:%SZ"
# your commands below
...
If you can't modify the script, you could group your commands, for example:
(date && anycommand) >> out.log
Grouping a list of commands in parentheses causes them to be executed as a single unit, in a subshell.
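A related approach, not shown above and offered only as a sketch: redirect the script's own stdout and stderr once at the top with exec, so no per-command piping is needed. A temporary file stands in for /home/Man/Man_log.log here so the sketch is runnable anywhere:

```shell
#!/bin/bash
# Sketch: send everything this script prints into one log file.
# In the real hook LOGFILE would be /home/Man/Man_log.log; a temp
# file is used here only for demonstration.
LOGFILE=$(mktemp)

exec >> "$LOGFILE" 2>&1          # from here on, all output goes to the log
date -u +"%Y-%m-%dT%H:%M:%SZ"    # one timestamp header for the whole run
echo "first command output"
echo "second command output"
```

If per-line timestamps are still wanted, the same idea should work with exec > >(ts '[%F %T]' >> /home/Man/Man_log.log) 2>&1, assuming ts from moreutils is installed.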
Let's say this is the script that should give us some logs:
#!/bin/bash
echo "log_line1
log_line2
log_line3
log_line4"
And let's call it script.sh.
Running it as ./script.sh | printf "%(%F)T {\n$(cat)\n}" >> Man_log.log, the content of Man_log.log will be:
2018-05-04 {
log_line1
log_line2
log_line3
log_line4
}
Let me now explain what exactly the pipe does.
%(%F)T is replaced with current date in YEAR-MONTH-DAY format
$(cat) expands to the logs produced by ./script.sh - in general, to whatever data arrives on standard input.
Basically, ./script.sh writes its logs to standard output. The pipe then passes that output to the standard input of the printf command, where the command substitution $(cat) reads it. Running cat without arguments is the same as cat /dev/stdin.
Related
Can someone fix this for me?
It should copy a version log file to a backup after moving to a repo directory.
Then it should append a line, given as input, to the log file with some formatting.
That's it.
Assume the log file and test directory exist.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG |
VHENTRY="- **${LOGDATE}** | ${VHMSG}"
cat ${VHENTRY} >> versionlog.MD
Shell output:
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
EOF
EOL]
EOL
e
E
(Ctrl+C to get out of being stuck reading lines of input)
virufac@box:~/Git/test$ cat versionlog.MD
directly outputs the markdown
# Version Log
## version 0.0.1 established 01-22-2020
*Working Towards Working Mission 1 Demo in 0.1 *
- **01-22-2020** | discovered faker.Faker and deprecated old namelessgen
EOF
EOL]
EOL
e
E
I finally got it to save the input lines to the file instead of just echoing the command on the screen without executing it. But why isn't it adding the lines built from the VHENTRY variable, and why does it sometimes stop reading after one line and sometimes not? As you can see, I was trying various things to tell it to stop reading input.
After realizing that one thing in the script was there by accident, I tried to fix it and saw that the | at the end of the read command was seemingly the only reason the script saved anything to the file in the first place.
I would have done this in python3 if I had known this script wouldn't be the simplest thing I had ever done. Now, after all the time spent on it, I just want to understand how it works, so that I remember never to assume a shell script will save time again.
Use printf to write a string to a file. cat tries to read from a file named in the argument list. And when the argument is - it means to read from standard input until EOF. So your script is hanging because it's waiting for you to type all the input.
Don't put quotes around the path when it starts with ~, as the quotes make it a literal instead of expanding to the home directory.
Get rid of | at the end of the read line. read doesn't write anything to stdout, so there's nothing to pipe to the following command.
There isn't really any need for the VHENTRY variable, you can do that formatting in the printf argument.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG
printf -- '- **%s** | %s\n' "${LOGDATE}" "$VHMSG" >> versionlog.MD
I'm trying to convert this batch file that runs a python script into a bash script. I needed help converting a wait function in the batch file that waits for an action to complete into bash.
script.py wait-for-job <actionID> is the actual call that waits for the specific action to complete. The wait function basically assigns a value from the log file to a variable and then passes that variable as a parameter to a python script (script.py).
The log file is written continuously after each action and the last line (from which the action ID is fetched) looks something like this:
02/10/2019 00:00:00 AM Greenwich Mean Time print_action_id():250 INFO Action ID: 123456
The wait function in the batch file is as follows:
:wait
#echo off
for /f "tokens=11" %%i in (C:\Users\DemoUser\Dir\file.log) do ^
set ID=%%i
#echo on
script.py wait-for-job --action-id %ID%
EXIT /B 0
I tried implementing the same thing in bash like below but it did not seem to work (I'm new to shell scripting and I'm sure it's all wrong):
for $a in (tail -n1 /home/DemoUser/Dir/file.log); do
ID=$($a | awk { print $12})
script.py wait-for-job --action-id $ID
done
The following reads each line of the file, pulls out the ID, and uses it to call the Python script. First we declare the paths and variables, then we run a loop.
#!/bin/bash
typeset file=/home/DemoUser/Dir/file.log
typeset py_script=/path/to/script.py
readonly PY=/path/to/python
while IFS= read -r line; do
    ${PY} "${py_script}" wait-for-job --action-id "$(printf '%s\n' "$line" | awk '{ print $NF }')"  # the action ID is the last field
done < "${file}"
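Note that the batch loop above effectively keeps only the last ID it reads, since set ID=%%i overwrites the variable on every iteration. If only the most recent action ID is needed, a shorter sketch (assuming the ID is the last whitespace-separated field of the last log line, as in the sample) avoids the per-line loop entirely:

```shell
#!/bin/bash
# Grab the action ID from the last line of the log and pass it on.
# /home/DemoUser/Dir/file.log and script.py are the names from the question.
logfile=/home/DemoUser/Dir/file.log

ID=$(tail -n1 "$logfile" | awk '{ print $NF }')   # $NF = last field on the line
script.py wait-for-job --action-id "$ID"
```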
I am new to shell scripting. I am working on a POC in which a script should read a log file and then append to an existing file for alerting purposes. It should work as below.
There is a predefined format which decides whether a line is appended to the file or not. For example:
WWXXX9999XS message
**XXX** - is a 3-letter acronym (application code), e.g. **tom** for a Tomcat application
9999 - is a 4 numeric digit in the range 1001-1999
**E or X** - the notification type. For notification X, if open/active alerts already exist for the same error code and the same message, no new alert is raised. Once you have closed the existing alerts, it will raise an alarm for the new error. If the message changes for the same error code, it will raise an alarm even though open/active alerts are present.
The X option only drops duplicates on code and message; otherwise all alert mechanisms are the same.
**S** - is the severity level, i.e. 2 or 3
**message** - is any text that will be displayed
1. The script will examine the log file and look for an error like "cloud server is down"; if it's a new alert, it should append 'wwclo1002X2 cloud server is down'.
2. If the same alert comes again, it should append 'wwclo1002E2 cloud server is down'.
There are some very handy commands you can use for this type of file manipulation. I've updated this in response to your comment, to check whether the error has already been appended to the new file.
My suggestion would be that there is enough functionality here to warrant saving it in a bash script.
My approach would be to use a combination of less, grep and > to read and parse the file and then append to the new file. First save the following into a bash script (e.g. a file named script.sh)
#!/bin/bash
result=$(less "$1" | grep "$2")
exists=$(less "$3" | grep "$2")
if [[ "$exists" == "$result" ]]; then
echo "error, already present in file"
exit 1
else
echo "$result" >> "$3"
exit 0
fi
Then use this file in the command passing in the log file as the first argument, the string to search for as the second argument and the target results file as the third argument like this:
./script.sh <logFileName> "errorToSearchFor" <resultsTargetFileName>
Don't forget that to run the file you will need to change its permissions - you can do this using:
chmod u+x script.sh
Just to clarify, as you have mentioned you are new to scripting: the less command will output the entire file; the | (a pipe) passes this output to the grep command, which searches it for the expression in quotes and returns all lines containing that expression. The output of the grep command is then appended to the new file with >>.
You may need to tailor the expression in quotes after grep to get exactly the output you want from the log file.
The filenames are just placeholders, be sure to update these with the correct file names. Hope this helps!
Note: updated > to >> (a single angle bracket overwrites, a double angle bracket appends).
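As an aside, less is not strictly needed in a script, because grep can read files directly, and grep -q is the usual way to test for a match without capturing output. A minimal sketch with the same positional arguments, using a simpler duplicate check on the search string itself:

```shell
#!/bin/bash
# $1 = log file, $2 = string to search for, $3 = results file
if grep -qF -- "$2" "$3" 2>/dev/null; then   # -q: no output, -F: literal match
    echo "error, already present in file"
    exit 1
fi
grep -- "$2" "$1" >> "$3"                    # append matching log lines
```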
I saw the line data=$(cat) in a bash script (which I took for just declaring an empty variable) and am mystified as to what it could possibly do.
I read the man pages, but it doesn't have an example or explanation of this. Does this capture stdin or something? Any documentation on this?
EDIT: Specifically how the heck does doing data=$(cat) allow for it to run this hook script?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
test -x "$hook" || continue
echo "$data" | "$hook"
exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat will concatenate its input to its output.
In the context of the variable capture you posted, the effect is to assign the statement's (or containing script's) standard input to the variable.
The command substitution $(command) will return the command's output; the assignment will assign the substituted string to the variable; and in the absence of a file name argument, cat will read and print standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
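The temporary-file variant mentioned above might look like this (a sketch, not the hook's actual code):

```shell
#!/bin/bash
# Capture stdin once into a temp file so it can be replayed per hook.
data_file=$(mktemp)
trap 'rm -f "$data_file"' EXIT    # clean up the temp file on exit
cat > "$data_file"                # drain stdin into the file

for hook in "$GIT_DIR"/hooks/pre-commit-*; do
    test -x "$hook" || continue
    "$hook" < "$data_file"        # each hook gets a fresh copy of stdin
done
```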
Doing:
t#t:~# temp=$(cat)
hello how
are you?
t#t:~# echo $temp
hello how are you?
(A single Ctrl+D on a line by itself following "are you?" terminates the input.)
As the manual says:
cat - concatenate files and print on the standard output
Also
cat Copy standard input to standard output.
Here, cat will read your STDIN into a single string, which is assigned to the variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR
I wrote a script that gets load and memory information for a list of servers by ssh'ing to each one. However, since there are around 20 servers, it's not very practical to wait for the script to finish. That's why I thought it would be better to have a crontab write the output of the script to a file, so all I need to do is cat this file whenever I need the load and memory information for the 20 servers. However, when I cat this file during execution of the cron job, I get incomplete information. That's because the output of my script is written to the file line by line instead of all at once at termination. What needs to be done to make this work?
My crontab:
* * * * * (date;~/bin/RUP_ssh) &> ~/bin/RUP.out
My bash script (RUP_ssh):
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca
done
Thanks,
niefpaarschoenen
You can buffer the output to a temporary file and then output all at once like this:
outputbuffer=`mktemp` # Create a new temporary file, usually in /tmp/
trap "rm '$outputbuffer'" EXIT # Remove the temporary file if we exit early.
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca >> "$outputbuffer" # gather info to buffer file
done
cat "$outputbuffer" # print buffer to stdout
# rm "$outputbuffer" # delete temporary file, not necessary when using trap
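For the cron setup in the question, a reader can still catch RUP.out half-written while the final cat runs. Writing into a temporary file on the same filesystem and then renaming it into place makes the update atomic, so a reader never sees a partial file (a sketch; ~/bin/servers and ~/bin/ca are the paths from the question):

```shell
#!/bin/bash
# Collect all results in a temp file, then atomically replace RUP.out.
outputbuffer=$(mktemp ~/bin/RUP.out.XXXXXX)   # same filesystem as the target
{
    date
    for comp in $(cat ~/bin/servers); do
        ssh "$comp" ~/bin/ca
    done
} >> "$outputbuffer"
mv "$outputbuffer" ~/bin/RUP.out   # rename is atomic; readers never see a partial file
```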
Assuming there is a string identifying which host the mem/load data came from, you can update your txt file as each result comes in. Assuming each data block is one line long, you could use:
for comp in `cat ~/bin/servers`; do
output=$( ssh $comp ~/bin/ca )
# remove old mem/load data for $comp from RUP.out
sed -i '/'"$comp"'/d' RUP.out # this assumes that the string "$comp" is
# integrated into the output from ca, and
# not elsewhere
echo "$output" >> RUP.out
done
This can be adapted depending on the output of ca. There is lots of help on sed across the net.