How to make a change with sed in a bash script - linux

Hi, below is my bash script. It takes a source file and a token file; the token file contains lines of the form servicename:usage.
I have to search the source file line by line for each service name; if it is found, I calculate the memory usage and then replace -Xmx\d{1,3}m with -Xmx${heapMB}m. In the script below, the sed line is the one that needs to do this.
You can first see the issue from this small part of the script:
line="Superviser.childOpts:-Xmx128m"
heapMB=750
line=($(echo $line|sed "s/${-Xmx\d{1,3}m}/$-Xmx{$heapMB}m/g"))
So what is wrong in the line above?
#!/bin/bash
sourceFile=$1
tokenFile=$2
if [ -z $sourceFile ]
then
    echo "Please provide a valid source file"
    exit 0
fi
if [ -z $tokenFile ]
then
    echo "Please provide a valid token file"
    exit 0
fi
# read the token file and split on : to get the service name at index 0 and the percentage usage at index 1
declare arr_token_name
declare arr_token_usage
count=0
while read line
do
    # here line contains servicename:percentage usage
    OIFS="$IFS"
    IFS=$':'
    arr=($line)
    IFS="$OIFS"
    if [ ! -z $line ]
    then
        arr_token_name[$count]=${arr[0]}
        arr_token_usage[$count]=${arr[1]}
        count=`expr $count + 1`
    fi
done
# read the source file line by line and test against all the tokens
totalMemKB=$(awk '/MemTotal:/ { print $2 }' /proc/meminfo)
echo "total mem = $totalMemKB"
while read line
do
    result_token_search=""
    #for j in "${arr_token_name[@]}"
    #do
    #    echo "index=$j"
    #done
    count2=0
    for i in "${arr_token_name[@]}"
    do
        # search for the token in the line; if found,
        # calculate the memory for it: take the percentage usage from arr_token_usage, apply the formula, then divide by 1024
        # then replace -Xmx\d{1,5}m with -Xmx${heapMB}m
        echo "line1=$line"
        result_token_search=$(echo $line|grep -P "$i")
        if [ -n "$result_token_search" ]
        then
            percent_usage=${arr_token_usage[$count2]}
            let heapKB=$totalMemKB*$percent_usage/100
            let heapMB=$heapKB/1024
            echo "before sed=$line"
            line=($(echo $line|sed "s/${-Xmx\d{1,3}m}/$-Xmx{$heapMB}m/g"))
            echo "new line=$line"
            echo "token found in line $line , token = $i"
        fi
        result_token_search=""
        count2=`expr $count2 + 1`
        cat "$line" >> tmp.txt
    done
done

try this line:
line=$( sed "s/-Xmx[0-9]\+/-Xmx$heapMB/" <<<$line )
test with your example:
kent$ line="Superviser.childOpts:-Xmx128m"
kent$ heapMB=750
kent$ line=$( sed "s/-Xmx[0-9]\+/-Xmx$heapMB/" <<<$line )
kent$ echo $line
Superviser.childOpts:-Xmx750m
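As for why the original line fails: ${-Xmx\d{1,3}m} is not a valid parameter expansion (bash reports a "bad substitution"), $- in the replacement expands to the shell's option flags rather than literal text, sed's default regex dialect has no \d class (use [0-9]), and wrapping the command substitution in (...) turns line into an array and word-splits the result. Plugged back into the loop, the fixed block might look like this (just a sketch, reusing the variable names from the script above):
if [ -n "$result_token_search" ]
then
    percent_usage=${arr_token_usage[$count2]}
    heapKB=$(( totalMemKB * percent_usage / 100 ))
    heapMB=$(( heapKB / 1024 ))
    # [0-9]\{1,3\} is the BRE spelling of \d{1,3}; quoting $line keeps it a single string
    line=$(sed "s/-Xmx[0-9]\{1,3\}m/-Xmx${heapMB}m/g" <<< "$line")
    echo "new line=$line"
fi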

Related

Linux Bash script isn't printing out correctly

GOAL: My goal in this assignment is to create a script that will take in a student id as an input and will output a matching student's name OR an error message saying there is no one by that name in this class. I'm fairly new to Linux and it is kinda tough for me, but I would love all the help I can get. Thanks!
My script is printing out everyone's name in the file rather than just the one I am searching for.
#!/bin/bash
# findName.sh
searchFile="/acct/common/CSCE215-Fall17"
if [[ $1 = "" ]] ; then
echo "Sorry that person is not in CSCE215 this semester"
exit 2
fi
while read LINE
do
firstNameIndex=0
middleNameIndex=1
lastNameIndex=2
userIDIndex=3
IFS=', ' read -r -a lineArray <<< "$LINE"
if [[ $1 -eq ${lineArray[$userIDIndex]} ]] ; then
echo ${lineArray[$firstNameIndex]} ${lineArray[$middleNameIndex]} ${lineArray[$lastNameIndex]}
fi
done < "$searchFile"
VERSION 3:
Here is how I would do it with grep. This prevents you from looping through the input file.
#!/bin/bash
searchFile="sample.txt"
function notincourse()
{
echo "Sorry that person is not in CSCE215 this semester"
exit 2
}
# Verify arguments, 1 argument, name to search for
if [ $# -ne 1 ]
then
echo "findName.sh <NAME>"
exit 1
else
searchfor=$1
fi
# Verify if the name is in the file
nameline=$(grep $searchfor $searchFile)
#if [ $(echo $nameline | wc -l) -eq 0 ]
if [ $? -eq 1 ]
then
notincourse
else
idvalue=$(echo $nameline | cut -d',' -f1)
if [ "$idvalue" == "$searchfor" ]
then
IFS=', ' read -r -a lineArray <<< "$nameline"
echo ${lineArray[1]} ${lineArray[2]} ${lineArray[3]}
else
notincourse
fi
fi
I tried it with the following test input file:
111, firstname1, middlename1, lastname1
222, firstname2, middlename2, lastname2
333, firstname3, middlename3, lastname3
VERSION 3: it now verifies that the id is indeed the first word in the line. I realized that if the student id is somehow included in a name (unlikely, but better safe than sorry!) my grep would return true!
One line of code to change:
if [[ "$1" == "${lineArray[$userIDIndex]}" ]] ; then

Using variables with grep -q doesn't find matches

I need to extract entries from a log file and put them into an errors file.
I don't want to duplicate the entries in the errors file every time the script is run, so I created this:
grep $1 $2 | while read -r line ; do
echo "$line"
if [ ! -z "$line" ]
then
echo "Line is NOT empty"
if grep -q "$line" $3; then
echo "Line NOT added"
else
echo $line >> $3
echo "Line added"
fi
fi
done
And is run using:
./log_monitor.sh ERROR logfile.log errors.txt
The first time the script runs it finds the entries and creates the errors file (there is no errors file beforehand).
The next time, this line never finds the lines that were just added to the errors file,
if grep -q "$line" $3;
therefore, the script adds the same entries to the errors file.
Any ideas of why this is happening?
This most likely happens because you are not searching for the line itself, but treating the line as a regex. Let's say you have this file:
$ cat file
[ERROR] This is a test
O This is a test
and you try to find the first line:
$ grep "[ERROR] This is a test" file
O This is a test
As you can see, it does not match the line we're looking for (causing duplicates) and does match a different line (causing dropped entries). You can instead use -F -x to search for literal strings matching the full line:
$ grep -F -x "[ERROR] This is a test" file
[ERROR] This is a test
Applying this to your script:
grep $1 $2 | while read -r line ; do
echo "$line"
if [ ! -z "$line" ]
then
echo "Line is NOT empty"
if grep -F -x -q "$line" $3; then
echo "Line NOT added"
else
echo $line >> $3
echo "Line added"
fi
fi
done
And here with additional fixes and cleanup:
grep -e "$1" -- "$2" | while IFS= read -r line ; do
printf '%s\n' "$line"
if [ "$line" ]
then
echo "Line is NOT empty"
if grep -Fxq -e "$line" -- "$3"; then
echo "Line NOT added"
else
printf '%s\n' "$line" >> "$3"
echo "Line added"
fi
fi
done
PS: this could be shorter, faster and have a better time complexity with a snippet of awk.
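For completeness, here is a rough sketch of that awk idea (my guess at what the PS has in mind, keeping the same arguments: $1 pattern, $2 log file, $3 errors file):
#!/bin/bash
# log_monitor.sh <pattern> <logfile> <errorsfile> - single-pass awk sketch
touch -- "$3"    # make sure the errors file exists on the first run
awk -v pat="$1" '
FILENAME == ARGV[1] { seen[$0] = 1; next }         # errors file: remember existing lines
$0 ~ pat && !seen[$0] { print; seen[$0] = 1 }      # log file: print new matching lines once
' "$3" "$2" >> "$3"
# the errors file is fully read before any output is appended to it
Unlike the usual NR==FNR trick, the FILENAME test still works when the errors file is empty on the first run.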
There's no need to check for blank lines; the first grep only passes along lines containing the search pattern (here "ERROR"), which cannot be blank.
If you can do without the diagnostic echo messages, pretty much the whole of that code boils down to two greps, a bash process substitution, sponge, and touch for the first-run case:
[ ! -f "$3" ] && touch "$3" ; grep -vFxf "$3" <(grep -e "$1" -- "$2") | sponge -a "$3"
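sponge comes from the moreutils package; if it isn't available, the same idea works with a temporary file (again just a sketch, same positional arguments):
[ -f "$3" ] || touch -- "$3"
tmp=$(mktemp) || exit 1
# keep only new matching lines; with GNU grep an empty errors file matches nothing, so -v lets everything through on the first run
grep -e "$1" -- "$2" | grep -vFxf "$3" > "$tmp"
cat -- "$tmp" >> "$3" && rm -f -- "$tmp"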

Bash: Counting instances of a string in text file with a loop

I am trying to write a simple bash script that takes in a text file, loops through the file and tells me how many times a certain string appears in it. I want to eventually use this for a custom log searcher (for instance, search for the words 'log in' in a particular log file, etc.), but am having some difficulty as I am relatively new to bash. I want to be able to quickly search different logs for different terms at will and see how many times they occur. Everything works perfectly until I get down to my loops. I think that I am using grep wrong, but am unsure if that is the issue. My loop code may seem a little strange because I have been at it for a while and have been constantly tweaking things. I have done a bunch of searching but I feel like I am the only one who has ever had this issue (hopefully not because it is incredibly simple and I just suck). Any and all help is greatly appreciated; thanks in advance everyone.
Edit: I would like to account for every instance of the string and not just one instance per line.
#!/bin/bash
echo "This bash script counts the instances of a user-defined string in a file."
echo "Enter a file to search:"
read fileName
echo " "
echo $path
if [ -f "$fileName" ] || [ -d "$fileName" ]; then
echo "File Checker Complete: '$fileName' is a file."
echo " "
echo "Enter a string that you would like to count the occurances of in '$fileName'."
read stringChoice
echo " "
echo "You are looking for '$stringChoice'. Counting...."
#TRYING WITH A WHILE LOOP
count=0
cat $fileName | while read line
do
if echo $line | grep $stringChoice; then
count=$[ count + 1 ]
done
echo "Finished processing file"
#TRYING WITH A FOR LOOP
# count=0
# for i in $(cat $fileName); do
# echo $i
# if grep "$stringChoice"; then
# count=$[ $count + 1 ]
# echo $count
# fi
# done
if [ $count == 1 ] ; then
echo " "
echo "The string '$stringChoice' occurs $count time in '$fileName'."
elif [ $count > 1 ]; then
echo " "
echo "The string '$stringChoice' occurs $count times in '$fileName'."
fi
elif [ ! -f "$fileName" ]; then
echo "File does not exist, please enter the correct file name."
fi
To find and count all occurrences of a string, you could use grep -o, which prints only each match instead of the entire line, and pipe the result to wc -l (add -F if the string should be matched literally rather than as a regular expression):
read string; grep -o "$string" yourfile.txt | wc -l
You made a basic syntax error in the code: the if inside the while loop is missing its closing fi. Also, the count variable never updated because the pipe from cat runs the while loop in a subshell, so the incremented value never reflects back to the parent shell.
Please change your code to the following to get the desired result.
#!/bin/bash
echo "This bash script counts the instances of a user-defined string in a file."
echo "Enter a file to search:"
read fileName
echo " "
echo $path
if [ -f "$fileName" ] ; then
echo "File Checker Complete: '$fileName' is a file."
echo " "
echo "Enter a string that you would like to count the occurances of in '$fileName'."
read stringChoice
echo " "
echo "You are looking for '$stringChoice'. Counting...."
#TRYING WITH A WHILE LOOP
count=0
while read line
do
if echo $line | grep $stringChoice; then
count=`expr $count + 1`
fi
done < "$fileName"
echo "Finished processing file"
echo "The string '$stringChoice' occurs $count time in '$fileName'."
elif [ ! -f "$fileName" ]; then
echo "File does not exist, please enter the correct file name."
fi
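Note that this loop still counts matching lines rather than every occurrence. For the "every instance" requirement from the edit in the question, a sketch that counts per-line occurrences (assuming a fixed-string search):
count=0
while IFS= read -r line
do
    # grep -o prints each match on its own line; wc -l counts them (0 when there is no match)
    n=$(printf '%s\n' "$line" | grep -oF -- "$stringChoice" | wc -l)
    count=$((count + n))
done < "$fileName"
echo "The string '$stringChoice' occurs $count time(s) in '$fileName'."
Without the loop, the grep -o answer above gives the same total in one line: count=$(grep -oF -- "$stringChoice" "$fileName" | wc -l).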

Bash scripting: why is the last line missing from this file append?

I'm writing a bash script to read a set of files line by line and perform some edits. To begin with, I'm simply trying to move the files to backup locations and write them out as-is, to test the script is working. However, it is failing to copy the last line of each file. Here is the snippet:
while IFS= read -r line
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
I obviously want to preserve whitespace when I copy the files, which is why I have set the IFS to null. I can see from the output that the last line of each file is being read, but it never appears in the output.
I've also tried an alternative variation, which does print the last line, but adds a newline to it:
while IFS= read -r line || [ -n "$line" ]
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
What is the best way to do this read-write operation, so that the files are written out exactly as they are, with the correct whitespace and no extra newlines added?
The command that is adding the line feed (LF) is not the read command but the echo command. read does not return the line with the delimiter still attached to it; rather, it strips the delimiter off (that is, it strips it off if it was present in the line, in other words if it just read a complete line).
So, to solve the problem, you have to use echo -n to avoid adding the delimiter back, but only when you have an incomplete line.
Secondly, when read is given a NAME (in your case line), it trims leading and trailing IFS whitespace, which you don't want (your IFS= already prevents that, though). This can also be solved by not providing a NAME at all and using the default return variable REPLY, which preserves all whitespace.
So, this should work:
#!/bin/bash
inFile=in;
outFile=out;
rm -f "$outFile";
rc=0;
while [[ $rc -eq 0 ]]; do
read -r;
rc=$?;
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
echo "$REPLY" >>"$outFile";
elif [[ -n "$REPLY" ]]; then ## incomplete line
echo "incomplete=\"$REPLY\"";
echo -n "$REPLY" >>"$outFile";
fi;
done <"$inFile";
exit 0;
Edit: Wow! Three excellent suggestions from Charles Duffy, here's an updated script:
#!/bin/bash
inFile=in;
outFile=out;
while { read -r; rc=$?; [[ $rc -eq 0 || -n "$REPLY" ]]; }; do
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
printf '%s\n' "$REPLY" >&3;
else ## incomplete line
echo "incomplete=\"$REPLY\"";
printf '%s' "$REPLY" >&3;
fi;
done <"$inFile" 3>"$outFile";
exit 0;
After review, I wonder if:
{
line=
while IFS= read -r line
do
echo "$line"
line=
done
echo -n "$line"
} <$INFILE >$OUTFILE
is not simply enough...
Here is my initial proposal:
#!/bin/bash
INFILE=$1
if [[ -z $INFILE ]]
then
echo "[ERROR] missing input file" >&2
exit 2
fi
OUTFILE=$INFILE.processed
# a way to know if last line is complete or not :
lastline=$(tail -n 1 "$INFILE" | wc -l)
if [[ $lastline == 0 ]]
then
echo "[WARNING] last line is incomplete -" >&2
fi
# we add a newline ANYWAY; if the file was already complete, the end of the file will just be seen as an extra empty line
echo | cat $INFILE - | {
first=1
while IFS= read -r line
do
if [[ $first == 1 ]]
then
echo "First Line is ***$line***" >&2
first=0
else
echo "Next Line is ***$line***" >&2
echo
fi
echo -n "$line"
done
} > $OUTFILE
if diff $OUTFILE $INFILE
then
echo "[OK]"
exit 0
else
echo "[KO] processed file differs from input"
exit 1
fi
The idea is to always add a newline at the end of the file and to print newlines only BETWEEN the lines that are read.
This should work for pretty much all text files, provided they contain no NUL byte (the \0 character), in which case that byte would be lost.
The initial test can be used to decide whether an incomplete text file is acceptable or not.
Print the newline only when read actually found one, that is, when the line was complete. Like this:
while { IFS= read -r line; rc=$?; [ $rc -eq 0 ] || [ -n "$line" ]; }
do
    echo "Line is ***$line***"
    if [ $rc -eq 0 ]
    then
        printf '%s\n' "$line" >&3    # complete line: put the newline back
    else
        printf '%s' "$line" >&3      # incomplete last line: no newline
    fi
done < "$POM.backup" 3>"$POM"

Finding max lines in a file while printing file name and lines separately?

So I keep messing this up, and I think where I was going wrong is that the code I'm writing needs to return only the file name and the number of lines from an argument.
So, using wc, I need something that accepts either 0 or 1 arguments and prints out something like "The file findlines.sh has 4 lines", or, if they give it ./findlines.sh Desktop/testfile, "the file testfile has 5 lines".
I have a few attempts and all of them have failed. I can't seem to figure out how to approach it at all.
Should I echo "The file" and then toss the argument name in and then add another echo for "has the number of lines [lines]"?
Sample input from the terminal would be something like:
>findlines.sh
Output: the file findlines.sh has 18 lines
Or maybe:
>findlines.sh /home/directory/user/grocerylist
Output: the file grocerylist has 16 lines
#! /bin/sh -
file=${1-findfiles.sh}
lines=$(wc -l < "$file") &&
printf 'The file "%s" has %d lines\n' "$file" "$lines"
This should work:
#!/bin/bash
file="findfiles.sh"
if [ $# -ge 1 ]
then
file=$1
fi
if [ -f $file ]
then
lines=`wc -l "$file" | awk '{print $1}'`
echo "The file $file has $lines lines"
else
echo "File not found"
fi
See sch's answer for a shorter example that doesn't use awk.
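One detail from the question's sample output: for /home/directory/user/grocerylist it should print just "grocerylist", not the full path. A sketch of sch's version with basename added (defaulting to $0 is my assumption):
#! /bin/sh -
file=${1-$0}                          # default to the script itself when no argument is given
lines=$(wc -l < "$file") &&           # wc -l < file prints just the count, no file name
printf 'the file %s has %d lines\n' "$(basename -- "$file")" "$lines"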
