Comparing two environment properties files and migrating only the added properties from the new file to the old file - linux

Hello, here is a setup of what I'm trying to accomplish.
The user has one environment properties file on their machine; it has a list of, say, 300 properties.
We then deploy a new build to that same system with an updated version of that file that has a few more properties added. I do not want to get rid of the old environment properties file; I just want to add those new properties without the user having to do it. Example:
File A (New Environment Properties file)
DB_CONNECTION=
DB_REPO=
DB_TEST=
DB_USER=
File B (Old environment properties)
DB_REPO=
DB_USER=
I just need DB_CONNECTION and DB_TEST added to that file; there may be more to add, this is just an example.
I have tried multiple grep and diff commands, but they just output to the screen or replace the whole file. I don't want that, since the user has saved values; I just need the new properties added.
Thanks in advance

I propose to source each file in its own subshell, run set to show the complete set of shell variables, and compare both outputs. It sounds complex but is typed rather easily:
diff <(source a.sh; set) <(source b.sh; set)
The output is a typical diff output, in my case:
19d18
< DB_CONNECTION=
21d19
< DB_TEST=
87c85
< _=a.sh
---
> _=b.sh
The last two lines (_=a.sh and _=b.sh) are not interesting; they just show the last used argument (which differs of course).
Now, to add the found stuff to the file you want to patch, you can use this:
diff <(source a.sh; set) <(source b.sh; set) | grep '^<' | cut -c3- | grep -v '^_=' >> b.sh
This solution does not consider changed values, though. If the user changed a value in their old config file, the new config file will have the default value again; this counts as a difference, so it will be added to the old config file.
You might not want this, so you might want to specify further requirements on how to handle changed values.
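An alternative sketch, for what it's worth, that avoids sourcing the files at all: compare just the key names (the text before the first =) with comm, and append the full lines for keys that only exist in the new file. The file names here are placeholders.

```shell
#!/bin/bash
# Placeholder file names; point these at the deployed file and the user's file.
new=new.properties
old=old.properties

# comm -23 keeps lines unique to the first (sorted) input, i.e. the key
# names that appear only in the new file.
comm -23 <(cut -d= -f1 "$new" | sort) <(cut -d= -f1 "$old" | sort) |
while read -r key; do
    # Append the full "key=value" line from the new file to the old one.
    grep "^${key}=" "$new" >> "$old"
done
```

Because only missing keys are copied, any values the user has saved in the old file are left untouched.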

I went with the following. It basically merges the old properties, keeping the values for each, and adds all the new properties. At the end of the script I just have the following command (note the leading slash in the sed address):
sed -i '/property/d' environment.properties
Here is the merge script a co-worker helped me with
#!/bin/bash
old_file=
new_file=

while read -r new_line; do
    # Check to see if the current line is a comment.
    comment=$(echo "${new_line}" | grep '^[ \t]*[#!]')

    # Print comments as-is.
    if [ -n "${comment}" ]; then
        echo "${new_line}"
        continue
    fi

    # Get the property key.
    key=$(echo "${new_line}" | sed -e 's/^[ \t]*\([^=:]\+\).*/\1/')

    # If the line isn't a comment or a property, skip it.
    if [ -z "${key}" ]; then
        echo "ERROR: invalid property: ${new_line}" 1>&2
        continue
    fi

    # Get the old value for the key.
    old_line=$(grep "^[ \t]*${key}[ \t]*[=:]" "${old_file}")
    if [ -n "${old_line}" ]; then
        # If there is an old value, use it.
        echo "${old_line}" | sed -e 's/^[ \t]*//'
    else
        # Otherwise, use the new value.
        echo "${new_line}"
        echo "WARN: new property: ${key}" 1>&2
    fi
done < "${new_file}"
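As a quick sanity check of the merge logic (hypothetical example data; only the key-lookup part of the script is exercised), the two files from the question merge like this:

```shell
#!/bin/bash
# Hypothetical scratch files mirroring the question's example.
old_file=old.properties
new_file=new.properties
printf 'DB_CONNECTION=\nDB_REPO=\nDB_TEST=\nDB_USER=\n' > "$new_file"
printf 'DB_REPO=repo1\nDB_USER=alice\n' > "$old_file"

# The core of the merge loop: old values win, new keys fall through.
while read -r new_line; do
    key=$(echo "${new_line}" | sed -e 's/^[ \t]*\([^=:]\+\).*/\1/')
    old_line=$(grep "^[ \t]*${key}[ \t]*[=:]" "${old_file}")
    if [ -n "${old_line}" ]; then
        echo "${old_line}"
    else
        echo "${new_line}"
    fi
done < "${new_file}"
# Expected output:
#   DB_CONNECTION=
#   DB_REPO=repo1
#   DB_TEST=
#   DB_USER=alice
```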
Thank you for the help

Related

Unable to pass string from each field into for loop

I'm new to Bash scripting and have been writing a script to check whether different log files exist or not, and I'm a bit stuck here.
clientlist=/path/to/logfile/which/consists/of/client/names
# I grepped only the client name from the logfile,
# and piped into awk to add ".log" to each client name
clients=$(grep -i 'list of client assets:' $clientlist | cut -d":" -f1 | awk '{print $NF".log"}')
echo "Clients : $clients"
#For example "Clients: Apple.log
# Samsung.log
# Nokia.log
# ...."
export alertfiles="*_$clients" #path/to/each/client/logfiles
for file in $alertfiles
do
    # I will test each ".log" file of each client, if it exists or not
    test -f "$file" && echo $file exists || { echo Error: $file does not exist && exit; }
done
The code above greps the client name from the log file and, using awk, adds .log at the end of each client field. From the output, I'm trying to pass each clientname.log value from each field into one variable, i.e. alertfiles, and construct a path to be tested for file existence.
The number of clients is indefinite and may vary from time to time.
The code I have returns the client names as a whole:
"Clients: Apple.log
Samsung.log
Nokia.log
....."
I'm unsure how to pass each client name one by one into the loop, so that each client's log file can be tested for existence. How can I do this?
export alertfiles="*_$clients" #path/to/each/client/logfiles
I want the $clients output listed here one by one, so that it returns each client name individually rather than as a whole block, and I can pass that into the loop so the client log filenames get checked one by one.
Use bash arrays.
(BTW: I can't test this as you have not supplied an example of the input data)
clientlist=/path/to/logfile/which/consists/of/client/names
logfilebase=/path/to/where/the/logfiles/should/exist
declare -a clients=($(grep -i 'list of client assets:' $clientlist | cut -d":" -f1))
for item in "${clients[@]}"; do
    if [ -e "${logfilebase}/${item}.log" ]; then
        echo "$item exists"
    else
        echo "$item does not exist - quit"
        exit 1
    fi
done
It's really not clear what you are asking. $clients is already a list of tokens which you can loop over, though saving it in a variable seems like an unnecessary waste of memory.
Also, why are you looping over the wildcard and then checking whether the files exist? With nullglob you can make sure the loop is not entered at all if there are no matches for the wildcard.
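To illustrate the nullglob point with a tiny, self-contained example (the directory here is deliberately nonexistent):

```shell
#!/bin/bash
shopt -s nullglob
# With nullglob set, an unmatched glob expands to nothing instead of
# staying as the literal pattern, so this loop body never runs.
for f in /no/such/dir/*.log; do
    echo "never printed: $f"
done

# The same applies when collecting matches into an array.
files=(/no/such/dir/*.log)
echo "${#files[@]} matches"    # prints: 0 matches
```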
I'm guessing your actual question is how to check whether the log files exist in the directory you nominated.
I have refactored your code to do the grep and cut in Awk, too. See useless use of grep
shopt -s nullglob # bash feature
awk -F: 'tolower($0) ~ /list of client assets:/ {
    print tolower($1) ".log" }' "$clientlist" |
while read -r client; do
    # some heavy guessing here
    for file in path/to/each/"$client"/logfiles/*; do
        test -f "$file" && echo "$file" exists || { echo "Error: $file does not exist" && exit; }
    done
done

Alias to record all commands and std output to a file

I am looking to figure out a way to get my documentation done quicker for my project work. One thing that would help me would be to record my history and each command's output to a file. However, I don't want to have this on all the time, and I would rather not have it as a toggle option, for the risk of forgetting to turn it off and recording a load of junk that I will just have to go and delete later.
The idea I had was to create an alias, let's say 'verbatim', so that I could enter the command like so:
verbatim <command>
And then the alias would remove 'verbatim', take whatever was entered, and prepend/append it with:
echo -n \[\$(date)\] >> output_file | echo "<command>" >> output_file | <command> | tee -a output_file | echo " " >> output_file
where the output will be:
<timestamp>
<command>
<outputOfTheCommand>
<newLine>
I could also add comments with:
verbatim #some comment to go in line
example:
verbatim #deploying the production stack upgrade
verbatim <someDeployCommand>
This way, by typing just one extra word per line, I can record everything that happens as I am doing a deployment, for example. That can do basically all of my documentation for me, since it is saved to a file in order; all I have to do is remove anything that is irrelevant in hindsight. And the fact that all the data is timestamped means it could also speed up RCA if something goes wrong.
Thanks in advance, any and all advice welcome
I would just do my deployment as usual, and then add
tail -n 20 ~/.bash_history
and edit the 20 depending on the history size I want (new commands are appended to the end of the file, so tail rather than head shows the most recent ones).
You should probably just use script, but you could do something like:
v() { { date; echo "$@"; "$@" | tee /dev/tty; echo; } >> "${OUTPUT-/tmp/output}" 2>&1; }
or
v() { { date; echo "$@"; "$@"; echo; } 2>&1 | tee -a "${OUTPUT-/tmp/output}"; }
(verbatim is too long, so I shortened it to v)
This won't handle comments at all; that would require writing a new parser, since the comment will never be seen by the function. But you can always echo "# some comment" >> $OUTPUT
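A quick usage sketch of the tee -a variant, logging to a temp file so nothing real is touched: the recorded command line shows up once as the echoed command and once in its own output.

```shell
#!/bin/bash
OUTPUT=$(mktemp)
v() { { date; echo "$@"; "$@"; echo; } 2>&1 | tee -a "$OUTPUT"; }

v echo hello     # runs the command, shows it on screen, and logs it
# $OUTPUT now holds a timestamp, the line "echo hello", and the line "hello".
grep -c hello "$OUTPUT"    # prints: 2
```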

How do I extract the date from multiple files with dates in their names?

Let's say I have multiple filenames, e.g. R014-20171109-1159.log.20171109_1159.
I want to create a shell script which creates a folder for every given date and moves the files matching that date into it.
Is this possible?
For the example a folder "20171109" should be created and has the file "R014-20171109-1159.log.20171109_1159" on it.
Thanks
This is a typical application of a for loop in bash to iterate through files.
At the same time, this solution uses [ shell parameter expansion ].
for file in /path/to/files/*.log.*
do
    foldername=${file#*-}
    foldername=${foldername%%-*}
    mkdir -p "${foldername}" # -p suppresses errors if the folder already exists
    [ $? -eq 0 ] && mv "${file}" "${foldername}" # check last cmd status and move
done
Since you want to write a shell script, use commands. To get the date, use the cut command, for example:
cat 1.txt
R014-20171109-1159.log.20171109_1159
cat 1.txt | cut -d "-" -f2
Output
20171109
That is your date; use it to create the folder. This way you can loop and create as many folders as you want.
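Putting the cut idea into a loop (assuming the names follow the R014-YYYYMMDD-HHMM.log.* pattern from the question):

```shell
#!/bin/bash
# Run this in the directory where the log files live (hypothetical location).
for f in *-*.log.*; do
    d=$(echo "$f" | cut -d- -f2)   # second '-'-separated field is the date
    mkdir -p "$d"                  # create the folder if needed
    mv "$f" "$d"/                  # move the file into its date folder
done
```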
It's actually quite easy (my Bash syntax might be a bit off):
for f in /path/to/your/files*; do
    ## Check if the glob gets expanded to existing files.
    ## If not, f here will be exactly the pattern above
    ## and the exists test will evaluate to false.
    [ -e "$f" ] && echo "$f"
    # grep the file name for ".log."
    # and extract the 8 characters after ".log.".
    # Next check if a folder with that 8-character name already exists.
    # If not, create it;
    # else just move the file to that folder path.
    break
done
Main idea is from this post: link. Sorry for not providing the actual code, as I haven't worked on Bash recently.
The commands below can be put in a script to achieve this.
Assign a variable with the current date as below (use the --date='n days ago' option if you need an older date); if you need to get it from the file name itself, loop over the files and use the cut command to get the date string:
dirVar=$(date +%Y%m%d)                    # for the current day
dirVar=$(date +%Y%m%d --date='1 day ago') # for yesterday
dirVar=$(echo $fileName | cut -c6-13)     # or
dirVar=$(echo $fileName | cut -d- -f2)    # to get it from $fileName
Create a directory with the variable's value as below (-p: no error if the directory already exists):
mkdir -p ${dirVar}
Move files into that directory with the line below:
mv *log.${dirVar}* ${dirVar}/

Delete entry from /etc/fstab using sed

I am scripting a solution wherein, when a shared drive is removed from the server, we need to remove its entry from fstab as well.
What I have done till now :
MP="${ServerAddress}:${DirectoryPath} ${MountPoint} nfs defaults 0 0"
while read line; do
    if [ "${MP}" = "${line}" ]; then
        sed "/${line}/d"
    fi
done < /etc/fstab
This is giving an error: sed: 1: "/servername.net...": command I expects \ followed by text
Please suggest how this can be deleted.
PS: I am running this as part of a script, so I don't have to run it individually. When running with the suggested options I am able to delete, but during the script this does not work and gives that error. To the people commenting on the " or the formatting: it's just that I cannot copy from there, since that is remote through a terminal server.
Try the following:
sed -i.bak "\#^$SERVERADDR:$DIRPATH#d" /etc/fstab
After setting meaningful values for SERVERADDR and DIRPATH. That line will also make a backup of the old file (named fstab.bak). But since /etc/fstab is such an important file, please make sure to have more backups handy.
Let me point out that you only need that single line, no while loop, no script!
Note that the shell is case-sensitive and expects " as double quotes and not ” (a problem now fixed in the question). You need to configure your editor not to capitalize words randomly for you (also now fixed). Note that experimenting on system configuration files is A Bad Idea™. Make a copy of the file and experiment on the copy. Only when you're sure that the script works on the copy do you risk the live file. And then you make a backup of the live file before making the changes. (And avoid doing the experimenting as root if at all possible; that limits the damage you can do.)
Your script as written might as well not use sed. Change the comparison to != and simply use echo "$line" to echo the lines you want to standard output.
MP="${ServerAddress}:${DirectoryPath} ${MountPoint} nfs defaults 0 0"
while read line; do
    if [ "${MP}" != "${line}" ]; then
        echo "$line"
    fi
done < /etc/fstab
Alternatively, you can use sed in a single command. The only trick is that the pattern probably contains slashes, so the \# notation says 'use # as the search marker'.
MP="${ServerAddress}:${DirectoryPath} ${MountPoint} nfs defaults 0 0"
sed -e "\#$MP#d" /etc/fstab
If you have GNU sed, when you're satisfied with that, you can add the -i.bak option to overwrite the /etc/fstab file (preserving a copy of the original in /etc/fstab.bak). Or use -i".$(date +'%Y%m%d.%H%M%S')" to get a backup extension that is the current date/time; note that GNU sed requires the suffix to be attached to -i. Or use version control on the file.
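To see the \# address delimiter in action without touching the real /etc/fstab, here is a sketch against a scratch copy (the entries are made up):

```shell
#!/bin/bash
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
proc /proc proc defaults 0 0
server.net:/export/data /mnt/data nfs defaults 0 0
EOF

MP='server.net:/export/data /mnt/data nfs defaults 0 0'
# '\#pattern#d' makes '#' the address delimiter, so the slashes in the
# mount spec need no escaping. (-i here is the GNU sed form.)
sed -i "\#$MP#d" "$tmp"
cat "$tmp"      # prints only: proc /proc proc defaults 0 0
```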

Make SED command work for any variable

deploy.sh
USERNAME="Tom"
PASSWORD="abc123"
FILE="config.conf"
sed -i "s/PLACEHOLDER_USERNAME/$USERNAME/g" $FILE
sed -i "s/PLACEHOLDER_PASSWORD/$PASSWORD/g" $FILE
config.conf
deloy="PLACEHOLDER_USERNAME"
pass="PLACEHOLDER_PASSWORD"
This deploy script puts the variables it defines into my config file. I can't source the file, so I want to pass my variables in this way.
Question
I want a command that is generic and works for all placeholder variables, using some sort of loop, rather than needing one command per variable. This means any term starting with PLACEHOLDER_ in the file will be replaced with the value of the variable already defined in deploy.sh.
All variables should be set and not empty. I guess if there is the ability to print a warning if it can't find the variable that would be good but it isn't mandatory for this.
Basically, use shell code to write a sed script and then use sed -i .bak -f sed.script config.conf to apply it:
trap "rm -f sed.script; exit 1" 0 1 2 3 13 15
for var in USERNAME PASSWORD
do
    echo "s/PLACEHOLDER_$var/${!var}/"
done > sed.script
sed -i .bak -f sed.script config.conf
rm -f sed.script
trap 0
The main 'tricks' here are:
knowing that ${!var} expands to the value of the variable named by $var, and
knowing that sed will take a script full of commands via -f sed.script, and
knowing how to use trap to ensure temporary files are cleaned up.
You could also use sed -e "s/.../.../" -e "s/.../.../" -i .bak config.conf too, but the script file is easier, I think, especially if you have more than 2 values to substitute. If you want to go down this route, use a bash array to hold the arguments to sed. A more careful script would use at least $$ in the script file name, or use mktemp to create the temporary file.
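The ${!var} indirection is worth seeing in isolation: var holds the name of another variable, and ${!var} yields that variable's value.

```shell
#!/bin/bash
USERNAME="Tom"
var=USERNAME       # var contains the *name* USERNAME
echo "${!var}"     # prints: Tom
```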
Revised answer
The trouble is, although much closer to being generic, it is still not generic since I have to manually put in what variables I want to change. Can it not be more like "for each placeholder_, find the variable in deploy.sh and add that variable, so it can work for any number of variables.
So, find what the variables are in the configuration file, then apply the techniques of the previous answer to solve that problem:
tmp=$(mktemp) || exit 1
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
for file in "$@"
do
    for var in $(sed 's/.*PLACEHOLDER_\([A-Z0-9_]*\).*/\1/' "$file")
    do
        value="${!var}"
        [ -z "$value" ] && { echo "$0: variable $var not set for $file" >&2; exit 1; }
        echo "s/PLACEHOLDER_$var/$value/"
    done > $tmp
    sed -i .bak -f $tmp "$file"
    rm -f $tmp
done
trap 0
This code still pulls the values from the environment. You need to clarify what is required if you want to extract the settings from the shell script, but it can be done — the script will have to be sufficiently self-aware to find its source so it can search it for the names. But the basics are in this answer; the rest is a question of tinkering until it does what you need.
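A compact sketch of the fully generic variant, pulling the placeholder names out of the file itself and looking each one up with ${!var}. The file name, sample variables, and warning text here are illustrative, not part of the original script.

```shell
#!/bin/bash
USERNAME="Tom"          # in the real setup these would come from deploy.sh
PASSWORD="abc123"
file=config.conf

# Discover every PLACEHOLDER_<NAME> token, strip the prefix, de-duplicate.
for var in $(grep -o 'PLACEHOLDER_[A-Za-z0-9_]*' "$file" | sed 's/^PLACEHOLDER_//' | sort -u); do
    value="${!var}"
    if [ -z "$value" ]; then
        echo "warning: $var is not set; leaving its placeholder alone" >&2
        continue
    fi
    sed -i "s/PLACEHOLDER_$var/$value/g" "$file"   # GNU sed in-place form
done
```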
#!/bin/ksh
TemplateFile=$1
SourceData=$2
(sed 's/.*/#V0r:PLACEHOLDER_&:r0V#/' ${SourceData}; cat ${TemplateFile}) | sed -n "
s/$/²/
H
$ {
x
s/^\(\n *\)*//
# also reset t flag
t varxs
:varxs
s/^#V0r:\([a-zA-Z0-9_]\{1,\}\)=\([^²]*\):r0V#²\(\n.*\)\"\1\"/#V0r:\1=\2:r0V#²\3\2/
t varxs
# clean the line when there is no more occurrence in the text
s/^[^²]*:r0V#²\n//
# and next
t varxs
# clean the marker
s/²\(\n\)/\1/g
s/²$//
# display the result
p
}
"
Call it like this: YourScript.ksh YourTemplateFile YourDataSourceFile, where:
YourTemplateFile is the file that contains the structure with generic values like deloy="PLACEHOLDER_USERNAME"
YourDataSourceFile is the file that contains all the pairs of generic value = specific value, like USERNAME="Tom"
