Editing a .CSV file through shell automation script - linux

I'm getting an error when trying to execute the script below, around the done statement. The point of the code is for the while loop to run once per file listed in the filenames log, grabbing the revision number for each file location in my branches folder.
filea=/home/filenames.log
fileb=/home/actions.log
filec=/home/revisions.log
filed=/home/final.log
count=1
while read Path do
Status=`sed -n "$count"p $fileb`
Revision=`svn info ${WORKSPACE}/$Path | grep "Revision" | awk '{print $2}'`
if `echo $Path | grep "UpgradeScript"` then
Results="Reverted - ROkere"
Details="Reverted per process"
else if `echo $Path | grep "tsu_includes/shell_scripts"` then
Results="Reverted - ROkere"
Details="Reverted per process"
else
Results="Verified - ROkere"
Details=""
fi
echo "$Path,$Status,$Revision,$Results,$Details" > $filed
count=`expr $count + 1`
done < $filea

You need a semicolon or newline before do and then.
Change else if to elif.
Change
if `echo $Path | grep "UpgradeScript"` then
to (removing the backticks, using a here-string, and the -q option for grep):
if grep -q "UpgradeScript" <<< "$Path"; then
Also, $filed will only ever contain one line, because > truncates the file on every iteration; you probably want to append with >> instead.
Actually, here's a quick rewrite. You're reading corresponding lines from two files, and it's faster to do that entirely within the shell than to invoke sed once for every line in the file:
#!/bin/bash
filea=/home/filenames.log
fileb=/home/actions.log
filec=/home/revisions.log   # not used?
filed=/home/final.log

exec 3<"$filea"   # open $filea on fd 3
exec 4<"$fileb"   # open $fileb on fd 4

while read -u3 Path && read -u4 Status; do
    Revision=$(svn info "$WORKSPACE/$Path" | awk '/Revision/ {print $2}')
    if [[ "$Path" == *"UpgradeScript"* ]]; then
        Results="Reverted - ROkere"
        Details="Reverted per process"
    elif [[ "$Path" == *"tsu_includes/shell_scripts"* ]]; then
        Results="Reverted - ROkere"
        Details="Reverted per process"
    else
        Results="Verified - ROkere"
        Details=""
    fi
    echo "$Path,$Status,$Revision,$Results,$Details"
done > "$filed"

exec 3<&-   # close fd 3
exec 4<&-   # close fd 4
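If the two logs are guaranteed to stay in step line for line, another option is to let paste pair them up and read both fields with a single read. A minimal sketch of that idea (untested; it assumes neither log contains tab characters):

#!/bin/bash
filea=/home/filenames.log
fileb=/home/actions.log
filed=/home/final.log

# paste joins line N of $filea with line N of $fileb, separated by a tab
paste "$filea" "$fileb" | while IFS=$'\t' read -r Path Status; do
    Revision=$(svn info "$WORKSPACE/$Path" | awk '/Revision/ {print $2}')
    if [[ "$Path" == *"UpgradeScript"* || "$Path" == *"tsu_includes/shell_scripts"* ]]; then
        Results="Reverted - ROkere"
        Details="Reverted per process"
    else
        Results="Verified - ROkere"
        Details=""
    fi
    echo "$Path,$Status,$Revision,$Results,$Details"
done > "$filed"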

Related

sed is not working for commenting a line in a file using bash script

I have created a bash script that is used to modify the ulimit for open files on a RHEL server.
It reads the lines in the file /etc/security/limits.conf, and if the soft/hard limit for open files is less than 10000 for the '*' domain, it comments out the line and adds a new line with the soft/hard limit set to 10000.
The script is working as designed, but the sed command used to comment out a line is not working.
Please find the full script below:
#!/bin/sh
#This script would be called by '' to set ulimit values for open files in unix servers.
#
configfile=/etc/security/limits.conf
help(){
echo "usage: $0 <LimitValue>"
echo -e "where\t--LimitValue= No of files you want all the users to open"
exit 1
}
modifyulimit()
{
grep '*\s*hard\s*nofile\s*' $configfile | while read -r line ; do
firstChar="$(echo $line | xargs | cut -c1-1)"
if [ "$firstChar" != "#" ];then
hardValue="$(echo $line | rev | cut -d ' ' -f1 | rev)"
if [[ "$hardValue" -ge "$1" ]]; then
echo ""
else
sed -i -e 's/$line/#$line/g' $configfile
echo "* hard nofile $1" >> $configfile
fi
else
echo ""
fi
done
grep '*\s*soft\s*nofile\s*' $configfile | while read -r line ; do
firstChar="$(echo $line | xargs | cut -c1-1)"
if [ "$firstChar" != "#" ];then
hardValue="$(echo $line | rev | cut -d ' ' -f1 | rev)"
if [[ "$hardValue" -ge "$1" ]]; then
echo ""
else
sed -i -e 's/$line/#$line/g' $configfile
echo "* hard nofile $1" >> $configfile
fi
else
echo ""
fi
done
}
deleteEofTag(){
sed -i "/\b\(End of file\)\b/d" $configfile
}
addEofTag()
{
echo "#################End of file###################" >> $configfile
}
#-------------Execution of the script starts here ----------------------
if [ $# -ne 1 ];
then
help
else
modifyulimit $1
deleteEofTag
addEofTag
fi
The command sed -i -e 's/$line/#$line/g' $configfile works absolutely fine when executed from the terminal and comments out the line, but it is not working when I execute it from the shell script.
Interpolation does not work inside single quotes. Use double quotes and change
sed -i -e 's/$line/#$line/g'
to
sed -i -e "s/$line/#$line/g"
You might also try:
sed -i -e s/${line}/#${line}/g
as this tells the shell to substitute the value of the variable rather than the literal text $line.
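As a quick illustration of the difference (a standalone sketch with a made-up value for line):

line="* hard nofile 4096"
echo 's/$line/#$line/g'    # single quotes: prints the literal text s/$line/#$line/g
echo "s/$line/#$line/g"    # double quotes: prints s/* hard nofile 4096/#* hard nofile 4096/g

Note that if $line contains slashes or sed metacharacters (such as the leading *), the pattern might still need escaping or a different delimiter, but that is a separate issue from the quoting.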

bash script run to send process to background

Hi, I'm making a script for an rsync process. The sysadmin has created the rsync script, and when it runs it asks you to select options, so I want to create a wrapper script that passes those answers in and runs it from cron.
The list of directories to rsync is taken from a file.
filelist=$(cat filelist.txt)
for i in filelist;do
echo -e "3\nY" | ./rsync.sh $i
#This will create a rsync log file
Then I check some value in the log file; if it is empty I move on to the next file. If it is not empty, I have to start the real rsync process as below, which takes more than 2 hours.
if [ a != 0 ];then
echo -e "3\nN" | ./rsync.sh $i
That rsync process needs to be sent to the background so the loop can take the next file. I checked the screen command, but screen is not working on the server. I also need to get the duration of the process and pass it to the log; when I use the time command I am unable to pipe in the echoed answers. So I need to send this to the background and take the next file. I'd appreciate any suggestions for getting this working.
Questions:
1. How do I pass the piped input together with the time command?
echo -e "3\nY" | time ./rsync.sh $i
The above is not working.
2. How do I send this to the background and move on to the next file while the previous rsync process is still running?
Full Code
#!/bin/bash
filelist=$(cat filelist.txt)
Lpath=/opt/sas/sas_control/scripts/Logs/rsync_logs
date=$(date +"%m-%d-%Y")
timelog="time_result/rsync_time.log-$date"
for i in $filelist;do
#echo $i
b_i=$(basename $i)
echo $b_i
echo -e "3\nY" | ./rsync.sh $i
f=$(cat $Lpath/$(ls -tr $Lpath| grep rsync-dry-run-$b_i | tail -1) | grep 'transferred:' | cut -d':' -f2)
echo $f
if [ $f != 0 ]; then
#date=$(date +"%D : %r")
start_time=`date +%s`
echo "$b_i-start:$start_time" >> $timelog
#time ./rsync.sh $i < echo -e "3\nY" 2> "./time_result/$b_i-$date" &
time { echo -e "3\nY" | ./rsync.sh $i; } 2> "./time_result/$b_i-$date"
end_time=`date +%s`
s_time=$(cat $timelog|grep "$b_i-start" |cut -d ':' -f2)
duration=$(($end_time-$s_time))
echo "$b_i duration:$duration" >> $timelog
fi
done
Your question is not very clear, but I'll try:
(1) If I understand you correctly, you want to time the rsync.
My first attempt would be to use echo xxxx | time rsync. On my bash, however, this was broken (or is it not supposed to work?). I normally use zsh instead of bash, and on zsh this indeed runs fine.
If it is important for you to use bash, an alternative (since the time taken by the echo can likely be neglected) would be to time the whole pipe, i.e. time (echo xxxx | rsync), or even simpler time rsync < <(echo xxxx)
(2) To send a process to the background, add an & to the end of the line. However, the time command of course produces output (that's its purpose), and you don't want to receive output from a program running in the background. The solution is to redirect the output:
(time rsync < <(echo xxxx) >output.txt 2>error.txt) &
If you want to time something, you can use:
time sleep 3
If you want to time two things, you can do a compound statement like this (note semicolon after second sleep):
time { sleep 3; sleep 4; }
So, you can do this to time your echo (which will take no time at all) and your rsync:
time { echo "something" | rsync something ; }
If you want to do that in the background:
time { echo "something" | rsync something ; } &
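Putting this together for your loop, a minimal sketch might look like the following (untested; it assumes ./rsync.sh reads its menu answers from stdin, as in your script):

#!/bin/bash
while read -r i; do
    b_i=$(basename "$i")
    # time each transfer in a backgrounded group; time writes to stderr,
    # so redirect the group's stderr to a per-file log
    { time { echo -e "3\nN" | ./rsync.sh "$i"; }; } 2> "./time_result/$b_i" &
done < filelist.txt
wait    # block here until all background transfers have finished

Each transfer is timed and backgrounded in its own group, so the loop immediately moves on to the next file, and the final wait keeps the script alive until every transfer has finished.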

updating a file using tee randomly fails in linux bash script

When using sed -e to update some parameters of a config file and piping the result to tee (to write the updated content back to the file), this randomly breaks and leaves the file invalid (size 0).
In summary, this code is used for updating parameters:
# based on the provided linenumber, add some comments, add the new value, delete old line
sed -e "$lineNr a # comments" -e "$lineNr a $newValue" -e "$lineNr d" $myFile | sudo tee $myFile
I set up a script which calls this update command 100 times:
In an Ubuntu VM (Parallels Desktop) on a directory shared with OSX, this behaviour occurs up to 50 times.
In an Ubuntu VM (Parallels Desktop) on the Ubuntu partition, this behaviour occurs up to 40 times.
On a native system (Intel NUC with Ubuntu), this behaviour occurs up to 15 times.
Can someone explain why this is happening?
Here is a fully functional script with which you can run the experiment yourself. (All necessary files are generated by the script, so you can simply copy/paste it into a bash script file and run it.)
#!/bin/bash
# main function at bottom
#====================
#===HELPER METHOD====
#====================
# This method updates parameters with a new value. The replacement is performed linewise.
doUpdateParameterInFile()
{
local valueOfInterest="$1"
local newValue="$2"
local filePath="$3"
# stores all matching linenumbers
local listOfLines=""
# stores the linenumber which is going to be replaced
local lineToReplace=""
# find value of interest in all non-commented lines and store related lineNumber
lineToReplace=$( grep -nr "^[^#]*$valueOfInterest" $filePath | sed -n 's/^\([0-9]*\)[:].*/\1/p' )
# Update parameters
# replace the matching line with the desired value
oldValue=$( sed -n "$lineToReplace p" $filePath )
sed -e "$lineToReplace a # $(date '+%Y-%m-%d %H:%M:%S'): replaced: $oldValue with: $newValue" -e "$lineToReplace a $newValue" -e "$lineToReplace d" $filePath | sudo tee $filePath >/dev/null
# Sanity check to make sure file did not get corrupted by updating parameters
if [[ ! -s $filePath ]] ; then
echo "[ERROR]: While updating file it turned invalid."
return 31
fi
}
#===============================
#=== Actual Update Function ====
#===============================
main_script()
{
echo -n "Update Parameter1 ..."
doUpdateParameterInFile "Parameter1" "Parameter1 YES" "config.txt"
if [[ "$?" == "0" ]] ; then echo "[ OK ]" ; else echo "[FAIL]"; return 33 ; fi
echo -n "Update Parameter2 ..."
doUpdateParameterInFile "Parameter2" "Parameter2=90" "config.txt"
if [[ "$?" == "0" ]] ; then echo "[ OK ]" ; else echo "[FAIL]"; return 34 ; fi
echo -n "Update Parameter3 ..."
doUpdateParameterInFile "Parameter3" "Parameter3 YES" "config.txt"
if [[ "$?" == "0" ]] ; then echo "[ OK ]" ; else echo "[FAIL]"; return 35 ; fi
}
#=================
#=== Main Loop ===
#=================
#generate file config.txt
printf "# Configfile with 3 Parameters\n#[Parameter1]\n#only takes YES or NO\nParameter1 NO \n\n#[Parameter2]\n#Parameter2 takes numbers\nParameter2 = 100 \n\n#[Parameter3]\n#Parameter3 takes YES or NO \nParameter3 YES\n" > config.txt
cp config.txt config.txt.bkup
# Start the experiment and let it run 100 times
cnt=0
failSum=0
while [[ $cnt != "100" ]] ; do
echo "==========run: $cnt; fails: $failSum======="
main_script
if [[ $? != "0" ]] ; then cp config.txt.bkup config.txt ; failSum=$(($failSum+1)) ; fi
cnt=$((cnt+1))
sleep 0.5
done
regards
DonPromillo
The problem is that you're using tee to overwrite $filePath at the same time as sed is trying to read from it. If tee truncates it first, then sed reads an empty file and you end up with a 0-length file at the other end.
If you have GNU sed you can use the -i flag to have sed modify the file in place (other versions support -i but require an argument to it). If your sed doesn't support it, you can have it write to a temp file and move that back to the original name, like:
tmpname=$(mktemp)
sed -e "$lineToReplace a # $(date '+%Y-%m-%d %H:%M:%S'): replaced: $oldValue with: $newValue" -e "$lineToReplace a $newValue" -e "$lineToReplace d" "$filePath" > "$tmpname"
sudo mv "$tmpname" "$filePath"
or if you want to preserve the original permissions you could do
sudo sh -c "cat '$tmpname' > '$filePath'"
rm "$tmpname"
or use your tee approach like
sudo tee "$filePath" >/dev/null <"$tmpname"
rm "$tmpname"

Why does this sample script keep outputting "syntax error near unexpected token"?

I was trying to see how shell scripts work and how to run them, so I took some sample code from a book I picked up from the library called "Wicked Cool Shell Scripts".
I rewrote the code verbatim, but when I run it on Linux I get the error:
'd.sh: line 3: syntax error near unexpected token `{
'd.sh: line 3:`gmk() {
Before this I had the curly bracket on a new line, but I was still getting:
'd.sh: line 3: syntax error near unexpected token
'd.sh: line 3:`gmk()
#!/bin/sh
#format directory- outputs a formatted directory listing
gmk()
{
#Give input in Kb, output converted to Kb, Mb, or Gb for best output format
if [$1 -ge 1000000]; then
echo "$(scriptbc -p 2 $1/1000000)Gb"
elif [$1 - ge 1000]; then
echo "$$(scriptbc -p 2 $1/1000)Mb"
else
echo "${1}Kb"
fi
}
if [$# -gt 1] ; then
echo "Usage: $0 [dirname]" >&2; exit 1
elif [$# -eq 1] ; then
cd "$#"
fi
for file in *
do
if [-d "$file"] ; then
size = $(ls "$file"|wc -l|sed 's/[^[:digit:]]//g')
elif [$size -eq 1] ; then
echo "$file ($size entry)|"
else
echo "$file ($size entries)|"
fi
else
size ="$(ls -sk "$file" | awk '{print $1}')"
echo "$file ($(gmk $size))|"
fi
done | \
sed 's/ /^^^/g' |\
xargs -n 2 |\
sed 's/\^\^\^/ /g' | \
awk -F\| '{ printf "%39s %-39s\n", $1, $2}'
exit 0
if [$#-gt 1]; then
echo "Usage :$0 [dirname]" >&2; exit 1
elif [$# -eq 1]; then
cd "$#"
fi
for file in *
do
if [ -d "$file" ] ; then
size =$(ls "$file" | wc -l | sed 's/[^[:digit:]]//g')
if [ $size -eq 1 ] ; then
echo "$file ($size entry)|"
else
echo "$file ($size entries)|"
fi
else
size ="$(ls -sk "$file" | awk '{print $1}')"
echo "$file ($(convert $size))|"
fi
done | \
sed 's/ /^^^/g' | \
xargs -n 2 | \
sed 's/\^\^\^/ /g' | \
awk -F\| '{ printf "%-39s %-39s\n", $1, $2 }'
exit 0
sh is very sensitive to spaces. In particular, assignment takes no spaces around the =, and a test must have spaces inside the [ ].
This version runs, although it fails on my machine due to the lack of scriptbc.
You also had an elif in a spot where it was supposed to be a nested if.
Be careful of the column alignment between the starts and ends of blocks. If you mismatch them, it will easily lead you astray in thinking about how the script works.
Also, adding a set -x near the top of a script is a very good way of debugging what it is doing: it causes the interpreter to print each line it is about to run before executing it.
#!/bin/sh
# format directory - outputs a formatted directory listing
gmk()
{
    # Give input in Kb; output is converted to Kb, Mb, or Gb for best output format
    if [ $1 -ge 1000000 ]; then
        echo "$(scriptbc -p 2 $1/1000000)Gb"
    elif [ $1 -ge 1000 ]; then
        echo "$(scriptbc -p 2 $1/1000)Mb"
    else
        echo "${1}Kb"
    fi
}

if [ $# -gt 1 ] ; then
    echo "Usage: $0 [dirname]" >&2; exit 1
elif [ $# -eq 1 ] ; then
    cd "$1"
fi

for file in *
do
    if [ -d "$file" ] ; then
        size=$(ls "$file" | wc -l | sed 's/[^[:digit:]]//g')
        if [ $size -eq 1 ] ; then
            echo "$file ($size entry)|"
        else
            echo "$file ($size entries)|"
        fi
    else
        size="$(ls -sk "$file" | awk '{print $1}')"
        echo "$file ($(gmk $size))|"
    fi
done | \
    sed 's/ /^^^/g' | \
    xargs -n 2 | \
    sed 's/\^\^\^/ /g' | \
    awk -F\| '{ printf "%39s %-39s\n", $1, $2}'
exit 0
By the way, with respect to the book telling you to modify your PATH variable, that's really a bad idea, depending on what exactly it advised you to do. Just to be clear, never add your current directory to the PATH variable unless you intend to make that directory a permanent location for all of your scripts. If you are making it a permanent location for your scripts, make sure you add the location to the END of your PATH variable, not the beginning; otherwise you are creating a major security problem.
Linux and Unix do not add your current location, commonly called your PWD (present working directory), to the path, because someone could create a script called 'ls', for example, which could run something malicious instead of the actual 'ls' command. The proper way to execute something in your PWD is to prepend it with './' (e.g. ./my_new_script.sh). This indicates that you really do want to run something from your PWD. Think of it as telling the shell "right here". The '.' represents your current directory, in other words "here".
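For example (treating ~/bin as a hypothetical permanent script directory):

# appended to the end of ~/.bashrc or ~/.profile; ~/bin is just an example location
export PATH="$PATH:$HOME/bin"

# running a script from the current directory without touching PATH at all
./d.sh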

Create new file but add number if filename already exists in bash

I found similar questions, but not for Linux/Bash.
I want my script to create a file with a given name (via user input), but add a number at the end if the filename already exists.
Example:
$ create somefile
Created "somefile.ext"
$ create somefile
Created "somefile-2.ext"
The following script can help you. You should not run several copies of the script at the same time, in order to avoid a race condition.
name=somefile
if [[ -e $name.ext || -L $name.ext ]] ; then
    i=0
    while [[ -e $name-$i.ext || -L $name-$i.ext ]] ; do
        let i++
    done
    name=$name-$i
fi
touch -- "$name".ext
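Wrapped up as a small create script that takes the base name as an argument (a sketch of the usage shown in the question; the printed message and the .ext suffix are assumptions):

#!/bin/bash
# usage: ./create somefile  ->  somefile.ext, then somefile-0.ext, somefile-1.ext, ...
name=$1
if [[ -e $name.ext || -L $name.ext ]] ; then
    i=0
    while [[ -e $name-$i.ext || -L $name-$i.ext ]] ; do
        let i++
    done
    name=$name-$i
fi
touch -- "$name".ext
echo "Created \"$name.ext\""

Note that the numbering starts at 0 here rather than at 2 as in the question's example; adjust the initial value of i if that matters.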
Easier:
touch file`ls file* | wc -l`.ext
You'll get:
$ ls file*
file0.ext file1.ext file2.ext file3.ext file4.ext file5.ext file6.ext
To avoid the race conditions:
name=some-file
n=
set -o noclobber
until
    file=$name${n:+-$n}.ext
    { command exec 3> "$file"; } 2> /dev/null
do
    ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
And in addition, you have the file open for writing on fd 3.
With bash-4.4+, you can make it a function like:
create() { # fd base [suffix [max]]
    local fd="$1" base="$2" suffix="${3-}" max="${4-}"
    local n= file
    local -   # ash-style local scoping of options in 4.4+
    set -o noclobber
    REPLY=
    until
        file=$base${n:+-$n}$suffix
        eval 'command exec '"$fd"'> "$file"' 2> /dev/null
    do
        ((n++))
        ((max > 0 && n > max)) && return 1
    done
    REPLY=$file
}
To be used for instance as:
create 3 somefile .ext || exit
printf 'File: "%s"\n' "$REPLY"
echo something >&3
exec 3>&- # close the file
The max value can be used to guard against infinite loops when the files can't be created for other reason than noclobber.
Note that noclobber only applies to the > operator, not >> nor <>.
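A quick illustration of that point (demo.txt is just a throwaway example name):

set -o noclobber
echo one > demo.txt       # creates the file
echo two > demo.txt       # fails: bash refuses to overwrite an existing file
echo three >> demo.txt    # still allowed: appending does not clobber
echo four >| demo.txt     # >| explicitly overrides noclobber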
Remaining race condition
Actually, noclobber does not remove the race condition in all cases. It only prevents clobbering regular files (not other types of files, so that cmd > /dev/null for instance doesn't fail) and has a race condition itself in most shells.
The shell first does a stat(2) on the file to check if it's a regular file or not (fifo, directory, device...). Only if the file doesn't exist (yet) or is a regular file does 3> "$file" use the O_EXCL flag to guarantee not clobbering the file.
So if there's a fifo or device file by that name, it will be used (provided it can be opened write-only), and a regular file may be clobbered if it gets created as a replacement for a fifo/device/directory... in between that stat(2) and the open(2) without O_EXCL!
Changing the
{ command exec 3> "$file"; } 2> /dev/null
to
[ ! -e "$file" ] && { command exec 3> "$file"; } 2> /dev/null
would avoid using an already existing non-regular file, but would not address the race condition.
Now, that's only really a concern in the face of a malicious adversary that would want to make you overwrite an arbitrary file on the file system. It does remove the race condition in the normal case of two instances of the same script running at the same time. So, in that, it's better than approaches that only check for file existence beforehand with [ -e "$file" ].
For a working version without race condition at all, you could use the zsh shell instead of bash which has a raw interface to open() as the sysopen builtin in the zsh/system module:
zmodload zsh/system
name=some-file
n=
until
    file=$name${n:+-$n}.ext
    sysopen -w -o excl -u 3 -- "$file" 2> /dev/null
do
    ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
Try something like this
name=somefile
path=$(dirname "$name")
filename=$(basename "$name")
extension="${filename##*.}"
filename="${filename%.*}"
if [[ -e $path/$filename.$extension ]] ; then
    i=2
    while [[ -e $path/$filename-$i.$extension ]] ; do
        let i++
    done
    filename=$filename-$i
fi
target=$path/$filename.$extension
Use touch or whatever you want instead of echo:
echo file$((`ls file* | sed -n 's/file\([0-9]*\)/\1/p' | sort -rh | head -n 1`+1))
Parts of expression explained:
list files by pattern: ls file*
take only number part in each line: sed -n 's/file\([0-9]*\)/\1/p'
apply reverse human sort: sort -rh
take only first line (i.e. max value): head -n 1
combine all in pipe and increment (full expression above)
Try something like this (untested, but you get the idea):
filename=$1

# If file doesn't exist, create it
if [[ ! -f $filename ]]; then
    touch $filename
    echo "Created \"$filename\""
    exit 0
fi

# If file already exists, find a similar filename that is not yet taken
digit=1
while true; do
    temp_name=$filename-$digit
    if [[ ! -f $temp_name ]]; then
        touch $temp_name
        echo "Created \"$temp_name\""
        exit 0
    fi
    digit=$(($digit + 1))
done
Depending on what you're doing, replace the calls to touch with whatever code is needed to create the files that you are working with.
This is a much better method I've used for creating directories incrementally.
It could be adjusted for filenames too.
LAST_SOLUTION=$(echo $(ls -d SOLUTION_[[:digit:]][[:digit:]][[:digit:]][[:digit:]] 2> /dev/null) | awk '{ print $(NF) }')
if [ -n "$LAST_SOLUTION" ] ; then
    mkdir SOLUTION_$(printf "%04d\n" $(expr ${LAST_SOLUTION: -4} + 1))
else
    mkdir SOLUTION_0001
fi
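The same idea adapted to filenames might look like this (a sketch; the FILE_ prefix and .ext suffix are made up for illustration):

LAST_FILE=$(echo $(ls FILE_[[:digit:]][[:digit:]][[:digit:]][[:digit:]].ext 2> /dev/null) | awk '{ print $(NF) }')
if [ -n "$LAST_FILE" ] ; then
    # strip the FILE_ prefix and .ext suffix, then increment the 4-digit counter
    num=${LAST_FILE#FILE_}
    num=${num%.ext}
    touch FILE_$(printf "%04d" $(expr $num + 1)).ext
else
    touch FILE_0001.ext
fi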
A simple repackaging of choroba's answer as a generalized function:
autoincr() {
    f="$1"
    ext=""
    # Extract the file extension (if any), with preceding '.'
    [[ "$f" == *.* ]] && ext=".${f##*.}"
    if [[ -e "$f" ]] ; then
        i=1
        f="${f%.*}"
        while [[ -e "${f}_${i}${ext}" ]]; do
            let i++
        done
        f="${f}_${i}${ext}"
    fi
    echo "$f"
}
touch "$(autoincr "somefile.ext")"
Without looping, and without using a regex or shell expr:
last=$(ls $1* | tail -n1)
last_wo_ext=$(basename $last .ext)
n=$(echo $last_wo_ext | rev | cut -d - -f 1 | rev)
if [ x$n = x ]; then
    n=2
else
    n=$((n + 1))
fi
echo $1-$n.ext
More simply, without the extension handling and the "-1" exception:
n=$(ls $1* | tail -n1 | rev | cut -d - -f 1 | rev)
n=$((n + 1))
echo $1-$n.ext
