I'm trying to make a shell script that searches for certain lines in a .log file.
I need to first search for the number and place the matches into a file, then search that file for each ID and store the IDs in an array, in order to search the file again for the responses.
Code:
#!/bin/bash
logFile='/path/to/file.log'
if [[ $1 = "" ]]; then
echo "Use ./analize.sh <number>";
exit 0
fi
result=rs-$1.log;
touch result;
grep -B 7 -A 1 $1 $logFile >> result;
IDarr[i]=($(cat result | grep 'ID:'))
while [ -n "$IDarr[i]" ]
do grep -B 2 -A 6 ${IDarr[n]} $result >> idmatched.log
done
exit 0
Thanks
logfile example:
2017-06-13 12:00:32 - Outbound Request
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
ID: 132
Request: [FW1;14dwa90266681;12eqw28631898;99514;321;634579;224943746;2]
Encription: YES
Export-Value: 2048
Type: String
Content: {Status=[Failed], Target={"number":"9892213994","asID":"a321kpok41hadw-0-391-12391421.00","version":"09e2da8d-7379-39d9-9c52-be4aba0d49d1", t-qgs-ms=[4], java 1.7, h-ucc-dn=[***************************]}, accept-encoding=[identity], Content-Length=[300],}
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
2017-06-13 12:10:51 - Inbound Response
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
ID: 132
Response-Code: 200
Encription: YES
Export-Value: 2048
Headers: {Content-Length=[39], charset=UTF-8], Date=[13 Jun 2017 12:10:51 GMT]}
Load: {"result":"E9W2KE3MNA","Message":"Successful"}
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
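For reference, a minimal corrected sketch of the script above (assuming bash 4 for mapfile and GNU grep for -oP; searching the whole log again for the response blocks is my reading of the intent):
#!/bin/bash
# Find the blocks around the given number, collect their IDs,
# then pull each ID's surrounding block into idmatched.log.
logFile='/path/to/file.log'
if [[ -z $1 ]]; then
    echo "Use ./analize.sh <number>"
    exit 1
fi
result="rs-$1.log"
grep -B 7 -A 1 -- "$1" "$logFile" > "$result"
# collect the numeric IDs ("ID: 132" -> "132"), de-duplicated
mapfile -t IDarr < <(grep -oP '^ID:\s*\K[0-9]+' "$result" | sort -u)
for id in "${IDarr[@]}"; do
    grep -B 2 -A 6 -- "ID: $id" "$logFile" >> idmatched.log
done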
Related
Below is my bash script.
#!/bin/bash
message=$1
hostname=$2
severity=$3
eventname=$4
tagpath=$5
appname=$6
data="{"action":"EventsRouter","method":"add_event","data":[{"summary":"$message"},"device":"$hostname","message": "$message $eventname $tagpath","component":"$appname","severity":"$severity","evclasskey":"nxlog","evclass":"/nxlog/perf","monitor":"localhost"}],"type":"rpc","tid":1}"
echo "Total number of args : $#"
echo "message = $message"
echo "hostname = $hostname"
echo "appname = $appname"
echo "data = $data"
curl -u uname:password -k https://myurlcom/zport/dmd/evconsole_router -d $data
and when I try to run it with sh tcp.sh value value value value value value I get:
host:
'host,component:host,severity:host,evclasskey:nxlog,evclass:/nxlog/perf,monitor:localhost}],type:rpc,tid:1}'
is not a legal name (unexpected end of input) Total number of args : 6
message = message hostname = test appname = host data = curl: option
-d: requires parameter
I see that data ends up with no value.
This JSON has to be sent in this order for it to be accepted by the endpoint. Help me correct this.
Using jq to safely generate the desired JSON:
#!/bin/bash
parameters=(
--arg message "$1"
--arg hostname "$2"
--arg severity "$3"
--arg eventname "$4"
--arg tagpath "$5"
--arg appname "$6"
)
data=$(jq -n "${parameters[@]}" '
{action: "EventsRouter",
method: "add_event",
data: [ {summary: $message,
device: $hostname,
message: "\($message) \($eventname) \($tagpath)",
component: $appname,
severity: $severity,
evclasskey: "nxlog",
evclass: "/nxlog/perf",
monitor: "localhost"
}
],
type: "rpc",
tid: 1
}')
curl -u uname:password -k https://myurlcom/zport/dmd/evconsole_router -d "$data"
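For illustration, if the script were invoked as ./tcp.sh 'disk full' web01 3 ev1 /tag/path myapp (made-up values), the jq call above would emit roughly:
{
  "action": "EventsRouter",
  "method": "add_event",
  "data": [
    {
      "summary": "disk full",
      "device": "web01",
      "message": "disk full ev1 /tag/path",
      "component": "myapp",
      "severity": "3",
      "evclasskey": "nxlog",
      "evclass": "/nxlog/perf",
      "monitor": "localhost"
    }
  ],
  "type": "rpc",
  "tid": 1
}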
Assuming:
message="my_message"
hostname="my_host"
severity="my_severity"
eventname="my_event"
tagpath="my_path"
appname="my_app"
If you run:
data="{"action":"EventsRouter","method":"add_event","data":[{"summary":"$message"},"device":"$hostname","message": "$message $eventname $tagpath","component":"$appname","severity":"$severity","evclasskey":"nxlog","evclass":"/nxlog/perf","monitor":"localhost"}],"type":"rpc","tid":1}"
you will get an error, because there is an unescaped white space before the string "my_event":
my_event: command not found
What happened? Since your JSON input has a lot of words between double quotes, you have to enclose the whole string in single quotes in order to preserve the double quotes inside the string. But between single quotes, bash variables are not replaced by their values, so you need to close the single quotes before each variable and reopen them immediately after.
So that line of your script must become:
data='{"action":"EventsRouter","method":"add_event","data":[{"summary":"'$message'"},"device":"'$hostname'","message": "'$message $eventname $tagpath'","component":"'$appname'","severity":"'$severity'","evclasskey":"nxlog","evclass":"/nxlog/perf","monitor":"localhost"}],"type":"rpc","tid":1}'
If you execute:
echo "$data"
you will get:
{"action":"EventsRouter","method":"add_event","data":[{"summary":"my_message"},"device":"my_host","message": "my_message my_event my_path","component":"my_app","severity":"my_severity","evclasskey":"nxlog","evclass":"/nxlog/perf","monitor":"localhost"}],"type":"rpc","tid":1}
which is correct, I assume: the double quotes didn't disappear from your JSON data structure.
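The same close-and-reopen trick in miniature:
name="world"
s='prefix "'$name'" suffix'
echo "$s"    # prints: prefix "world" suffix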
I need a bash script to count the processes of SPECIFIC users, or of all users. We can enter 0, 1 or more arguments. For example
./myScript.sh root daemon
should execute like this:
root 92
daemon 8
2 users has total processes: 100
If nothing is entered as a parameter, then all users should be listed:
uuidd 1
awkd 2
daemon 1
root 210
kklmn 6
5 users has total processes: 220
What I have so far is a script for all users, and it works fine (with some warnings). I just need the part where arguments are entered (some kind of results filter). Here is the script for all users:
cntp = 0 #process counter
cntu = 0 #user counter
ps aux |
awk 'NR>1{tot[$1]++; cntp++}
END{for(id in tot){printf "%s\t%4d\n",id,tot[id]; cntu++}
printf "%4d users has total processes:%4d\n", cntu, cntp}'
#!/bin/bash
users=$*
args=()
if [ $# -eq 0 ]; then
# all processes
args+=(ax)
else
# user processes, comma-separated list of users
args+=(-u${users// /,})
fi
# print the user field without header
args+=(-ouser=)
ps "${args[#]}" | awk '
{ tot[$1]++ }
END{ for(id in tot){ printf "%s\t%4d\n", id, tot[id]; cntu++ }
printf "%4d users has total processes:%4d\n", cntu, NR}'
The ps arguments are stored in the array args and list either all processes with ax or user processes in the form -uuser1,user2, and -ouser= lists only the user field without a header.
In the awk script I only removed the NR>1 test and the variable cntp, which can be replaced by NR.
Possible invocations:
./myScript.sh
./myScript.sh root daemon
./myScript.sh root,daemon
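Run against the counts from the question's example, the second invocation would print something like:
root      92
daemon     8
   2 users has total processes: 100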
The following seems to work:
ps axo user |
awk -v args="$(IFS=,; echo "$*")" '
BEGIN {
# split args on comma
split(args, users, ",");
# associative array with user as indexes
for (i in users) {
enabled[users[i]] = 1
}
}
NR > 1 {
  tot[$1]++;
}
END {
  for(id in tot) {
    # if we passed some arguments
    # and this user is not among them, skip it
    if (length(args) && enabled[id] == 0) {
      continue
    }
    printf "%s\t%4d\n", id, tot[id];
    cntu++;
    cntp += tot[id];
  }
  printf "%4d users has total processes:%4d\n", cntu, cntp
}
'
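The -v args="$(IFS=,; echo "$*")" part joins the positional parameters with commas before awk ever sees them. In isolation:
set -- root daemon
joined=$(IFS=,; echo "$*")
echo "$joined"    # prints: root,daemon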
Tested in repl.
I have a DB error log file, and it will grow continuously.
Now I want to set up some error monitoring on that file, run every 5 minutes.
The problem is that I don't want to scan the whole file every 5 minutes (when the monitoring cron job executes), because it may grow very big in the future. Scanning the whole (big) file every 5 minutes would consume too many resources.
So I just want to scan only the lines that were written to the log during the last 5-minute interval.
Each error recorded in the log has a timestamp prepended to it, like below:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search for the pattern [ERROR] only in the lines added during the last 5 minutes (not the whole file), and place the output in another file.
Please help me here.
Feel free to ask if you need more clarification on my question.
I'm using RHEL 7, and I'm trying to implement the above monitoring through a bash shell script.
Serializing the Byte Offset
This picks up where the last instance left off. If you run it every 5 minutes, then it'll scan the last 5 minutes of data.
Note that this implementation knowingly can scan data added during an invocation's run twice. This is a little sloppy, but it's much safer to scan overlapping data twice than to never read it at all, which is a risk that can be run if relying on cron to run your program on schedule (likewise, sleeps can run over the requested time if the system is busy).
#!/usr/bin/env bash
file=$1; shift # first input: filename
grep_opts=( "$#" ) # remaining inputs: grep options
dir=$(dirname -- "$file") # extract directory name to use for offset storage
basename=${file##*/} # pick up file name w/o directory
size_file="$dir/.$basename.size" # generate filename to use to store offset
if [[ -s $size_file ]]; then # ...if we already have a file with an offset...
old_size=$(<"$size_file") # ...read it from that file
else
old_size=0 # ...otherwise start at the front.
fi
new_size=$(stat --format=%s -- "$file") || exit # Figure out current size
if (( new_size < old_size )); then
old_size=0 # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
exit 0 # no new contents, so no point in trying to search
fi
# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" ); grep_retval=${pipe_status[1]}
# if the read failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1
# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit
# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }
# ...and if that write succeeded, overwrite the last place where we serialized output.
mv -- "$tempfile" "$new_size" || exit
exit "$grep_retval"
Alternate Mode: Bisect For The Timestamp
Note that this can miss content if you're relying on, say, cron to invoke your code every 5 minutes on-the-dot; storing byte offsets can thus be more accurate.
Using the bsearch tool by Ole Tange:
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date")
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)
if [[ ! -e SOME_PATH_OF_YOUR_CHOICE/last_size_db_file ]] ; then
tail --bytes "$current_db_size" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
else
if [[ $(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file) -gt $current_db_size ]] ; then
previously_read_bytes=0
else
previously_read_bytes=$(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file)
fi
new_bytes=$((current_db_size - previously_read_bytes))
tail --bytes "$new_bytes" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
fi
printf '%s' "$current_db_size" > SOME_PATH_OF_YOUR_CHOICE/last_size_db_file
This prints all bytes of DB_FILE not previously printed to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S).
Note that $(date +%Y-%m-%d_%H-%M-%S) will be the current 'full' date at the time the log file is created.
You can make this a script and use cron to execute it every five minutes; something like this:
*/5 * * * * PATH_TO_YOUR_SCRIPT
Here is my approach:
First, read the whole log as written so far.
When you reach the end, collect new lines for a timespan (in my example 9 seconds, for faster testing, while my dummy server appends to the logfile every 3 seconds).
After the timespan, echo the cache, clear the cache (an array arr), then loop and sleep for some time so that this process doesn't consume all the CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
s=$(( $(date +%s) % 3600))
echo $s server msg
sleep 3
done >> seconds.log
Started via ./seconds-out.sh &.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=0
while true
do
read -r line
printf -v secnow '%(%s)T' -1
now=$(( secnow % (3600*24*365) ))
# consume every line created in the past
if (( ! init ))
then
# assume reading a line might not take longer than a second (rounded to whole seconds)
while (( ${#line} > 0 && (now - start) < 2 ))
do
read -r line
start=$now
echo -n "." # for debugging purpose, remove
printf -v secnow '%(%s)T' -1
now=$(( secnow % (3600*24*365) ))
done
init=1
echo "init=$init" # for debugging purpose, remove
# collect new lines, display them every $interval seconds
else
if ((${#line} > 0 ))
then
echo -n "-" # for debugging purpose, remove
arr+=("read: $line \n")
fi
if (( (now - start) > interval ))
then
echo -e "${arr[#]]}"
arr=()
start=$now
fi
fi
sleep .1
done < seconds.log
Output with the logfile generator appending every 3 seconds and already running for some time before the read-seconds.sh script starts, with debugging output activated:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
Every dot represents a logfile line from the past, and is therefore skipped.
Every dash represents a logfile line collected.
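For comparison, here is a sketch of the same collect-and-flush idea built on GNU tail -F instead of re-reading the file in bash; seconds.log and the 9-second interval are taken from above:
#!/bin/bash
# Sketch: follow the log, buffer new lines, flush the buffer every $interval seconds.
interval=9
arr=()
start=$SECONDS
while true; do
    # -t 1: wake up at least once per second even if no new line arrived
    if IFS= read -r -t 1 line; then
        arr+=("read: $line")
    fi
    if (( SECONDS - start >= interval )); then
        (( ${#arr[@]} )) && printf '%s\n' "${arr[@]}"
        arr=()
        start=$SECONDS
    fi
done < <(tail -n 0 -F seconds.log)
tail -F handles rotation and truncation, and -n 0 skips everything written before the script starts.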
Looking for a way to extract the volume from
pactl list sink-inputs
Output example:
Sink Input #67
Driver: protocol-native.c
Owner Module: 12
Client: 32
Sink: 0
Sample Specification: s16le 2ch 44100Hz
Channel Map: front-left,front-right
Format: pcm, format.sample_format = "\"s16le\"" format.channels = "2" format.rate = "44100" format.channel_map = "\"front-left,front-right\""
Corked: no
Mute: no
Volume: front-left: 19661 / 30% / -31.37 dB, front-right: 19661 / 30% / -31.37 dB
balance 0.00
Buffer Latency: 100544 usec
Sink Latency: 58938 usec
Resample method: n/a
Properties:
media.name = "'Alerion' by 'Asking Alexandria'"
application.name = "Clementine"
native-protocol.peer = "UNIX socket client"
native-protocol.version = "32"
media.role = "music"
application.process.id = "16924"
application.process.user = "gray"
application.process.host = "gray-kubuntu"
application.process.binary = "clementine"
application.language = "en_US.UTF-8"
window.x11.display = ":0"
application.process.machine_id = "54f542f950a5492c9c335804e1418e5c"
application.process.session_id = "3"
application.icon_name = "clementine"
module-stream-restore.id = "sink-input-by-media-role:music"
media.title = "Alerion"
media.artist = "Asking Alexandria"
I want to extract the
30
from the line
Volume: front-left: 19661 / 30% / -31.37 dB, front-right: 19661 / 30% / -31.37 dB
Note: There may be multiple sink inputs, and I need to extract the volume only from Sink Input #67
Thanks
P.S. I need this for a script of mine which should increase or decrease the volume of my music player. I'm completely new to both Linux and bash, so I couldn't figure out a way to resolve the problem.
Edit:
My awk version
gray@gray-kubuntu:~$ awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
compiled limits:
max NF 32767
sprintf buffer 2040
Since you are pretty new to standard text processing tools, I will provide an answer with a detailed explanation; feel free to reuse it in the future.
I am basing this answer on the GNU Awk I have installed, but it should also work with the mawk installed on your system.
pactl list sink-inputs | \
mawk '/Sink Input #67/{f=1; next} f && /Volume:/{ n=split($0,matchGroup,"/"); val=matchGroup[2]; gsub(/^[[:space:]]+/,"",val); gsub(/%/,"",val); print val; f=0}'
Awk processes one line at a time, based on a /pattern/{action1; action2} syntax. In our case we match the line /Sink Input #67/ and set a flag (f) to mark the next occurrence of the Volume: string in the lines below; without the flag set, we could match the Volume: lines of other sink inputs.
So once we match the line, we split it on the delimiter / and take the second element, which is stored in the array matchGroup. Then we call gsub() twice: once to strip the leading whitespace and once to remove the % sign after the number.
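The same program laid out across lines, for readability (identical logic to the one-liner above):
pactl list sink-inputs | mawk '
  /Sink Input #67/ { f = 1; next }      # inside the block for sink input #67
  f && /Volume:/ {
    n = split($0, matchGroup, "/")      # pieces around the "/" separators
    val = matchGroup[2]                 # e.g. " 30% "
    gsub(/^[[:space:]]+/, "", val)      # strip leading whitespace
    gsub(/%/, "", val)                  # strip the percent sign
    print val
    f = 0
  }'
If the result is captured in a shell variable, say vol, it could then drive something like pactl set-sink-input-volume 67 "$((vol + 5))%" to raise that one stream's volume; the stream index 67 is just the example from the question.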
This script I wrote might be what you want. It lets me adjust the volume easily using the pacmd and pactl commands. It seems to work well on a GNOME desktop (Wayland or Xorg), and it works on RHEL/Fedora and Ubuntu so far. I haven't tried it with other desktops/distros, or with surround sound systems, etc.
Drop it in your path and run it without any values to see the current volume, or set the volume by passing it a percentage. A single value sets both speakers; two values set left and right separately. In theory you shouldn't use a value outside of 0%-200%, but the script doesn't check for that (and neither does PulseAudio, apparently), so be careful: a volume higher than 200% may harm your speakers.
[~]# volume
L R
20% 20%
[~]# volume 100% 50%
[~]# volume
L R
100% 50%
[~]# volume 80%
[~]# volume
L R
80% 80%
#!/bin/bash
[ ! -z "$1" ] && [ $# -eq 1 ] && export LVOL="$1" && export RVOL="$1"
[ ! -z "$1" ] && [ ! -z "$2" ] && [ $# -eq 2 ] && export LVOL="$1" && export RVOL="$2"
SINK=$(pacmd list-sinks | grep -e '* index:' | grep -Eo "[0-9]*$")
if [ -z "$LVOL" ] || [ -z "$RVOL" ]; then
# pacmd list-sinks | grep -e '* index:' -A 20 | grep -e 'name:' -e '^\s*volume:.*\n' -e 'balance' --color=none
printf "%-5s%-4s\n%-5s%-4s\n" "L" "R" $(pacmd list-sinks | grep -e '* index:' -A 20 | grep -e '^\s*volume:.*\n' --color=none | grep -Eo "[0-9]*%" | tr "\n" " " | sed "s/ $/\n/g")
exit 0
elif [[ ! "$LVOL" =~ ^[0-9]*%$ ]] || [[ ! "$RVOL" =~ ^[0-9]*%$ ]]; then
printf "The volume should specified as a percentage, from 0%% to 200%%.\n"
exit 1
elif [ "$SINK" == "" ]; then
printf "Unable to find the default sound output.\n"
exit 1
fi
pactl -- set-sink-volume $SINK $LVOL $RVOL
Hello: I have a lot of files called test-MR3000-1.nt to test-MR4000-1.nt, where the number in the name changes by 100 (i.e. I have 11 files),
$ ls test-MR*
test-MR3000-1.nt test-MR3300-1.nt test-MR3600-1.nt test-MR3900-1.nt
test-MR3100-1.nt test-MR3400-1.nt test-MR3700-1.nt test-MR4000-1.nt
test-MR3200-1.nt test-MR3500-1.nt test-MR3800-1.nt
and also a file called resonancia.kumac, which in a couple of lines contains the string XXXX.
$ head resonancia.kumac
close 0
hist/delete 0
vect/delete *
h/file 1 test-MRXXXX-1.nt
sigma MR=XXXX
I want to execute a bash script which substitutes the string XXXX in the file with the set of numbers obtained from the command ls *MR* | cut -b 8-11.
I found a post with some suggestions, and I tried my own code:
for i in `ls *MR* | cut -b 8-11`; do
sed -e "s/XXXX/$i/" resonancia.kumac >> proof.kumac
done
however, in the substitution the numbers are surrounded by single quotes (e.g. '3000').
Q: What should I do to avoid the single quotes around the numbers? Thank you.
This is a reproducer for the environment described:
for ((i=3000; i<=4000; i+=100)); do
touch test-MR${i}-1.nt
done
cat >resonancia.kumac <<'EOF'
close 0
hist/delete 0
vect/delete *
h/file 1 test-MRXXXX-1.nt
sigma MR=XXXX
EOF
This is a script which will run inside that environment:
content="$(<resonancia.kumac)"
for f in *MR*; do
substring=${f:7:4}
echo "${content//XXXX/$substring}"
done >proof.kumac
...and the output looks like so:
close 0
hist/delete 0
vect/delete *
h/file 1 test-MR3000-1.nt
sigma MR=3000
There are no quotes anywhere in this output; the problem described is not reproduced.
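If you would rather keep the sed loop from the question, a minimal fixed version (assuming the number is always the 4 characters after test-MR):
for f in test-MR*-1.nt; do
    num=${f:7:4}    # "test-MR3000-1.nt" -> "3000"
    sed "s/XXXX/$num/g" resonancia.kumac
done > proof.kumac
Double quotes around the sed expression let $num expand; no command output is re-quoted anywhere, so no stray single quotes can appear.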
Or, if it could be Perl:
#!/usr/bin/perl
@ls = glob('*MR*');
open (FILE, 'resonancia.kumac') || die("not good\n");
@cont = <FILE>;
$f = shift(@ls);
$f =~ /test-MR([0-9]*)-1\.nt/;
$nr = $1;
@out = ();
foreach $l (@cont){
  if($l =~ s/XXXX/$nr/){
    # after each substitution, move on to the next file's number
    $f = shift(@ls);
    $f =~ /test-MR([0-9]*)-1\.nt/;
    $nr = $1;
  }
  push @out, $l;
}
close FILE;
open(FILE, '>resonancia.kumac') || die("not good\n");
print FILE @out;
That replaces each XXXX in turn with the number taken from the next filename, which seemed to be the question before the edit.