What is the cleanest/easiest way to read in a config file - linux

I want to create a config file with key=value pairs in groupings so that I can iterate through the config file in groups of key=value pairs.
Example config file:
#group1
var1=test1
var2=test2
var3=test3
#group2
var1=text4
var2=text5
var3=test6
var4=test7
#group3
var3=test8
Is there a simple way to parse a config file similar to this layout, where each group may include or exclude parameters, and each iteration of the parsing loop pulls in that specific group's key=value pairs?
Does bash have a built-in config parser? This is for an OpenRC init script.

Building on the answers in this thread, you could do something like this:
#!/bin/bash
if [ -f "${HOME}/.${0##*/}rc" ]; then
    config="${HOME}/.${0##*/}rc"
else
    config="/etc/${0##*/}"
fi

if [ -f "$config" ]; then
    section=global
    while read -r line; do
        # skip comments and blank lines
        if [[ $line =~ ^(#|$) ]]; then continue; fi
        # a [section] header switches the current prefix
        if [[ $line =~ ^\[[[:alpha:]_][[:alnum:]_]*\]$ ]]; then
            section=${line#[}
            section=${section%]}
        # a key=value line defines a prefixed variable
        elif [[ $line =~ ^[[:alpha:]_][[:alnum:]_]*= ]]; then
            eval "${section}_${line%%=*}"=\${line#*=}
        fi
    done <"$config"
fi
This assumes bash, and parses config files like this:
# comment
global1=gval1
global2=gval2
[section1]
variable_1=value_11
variable_2=value_12
[section2]
variable_1=value_21
variable_2=value_22
It sets the variables named in the config file, with the name prefixed by the name of the section. Comments and blank lines are ignored.
Proof of concept:
set | egrep '^(global|section)[^=]' | \
while read -r line; do
    key=${line%%=*}
    eval "val=\${$key}"
    printf '%s = [%s]\n' "$key" "$val"
done
Output:
global_global1 = [gval1]
global_global2 = [gval2]
section1_variable_1 = [value_11]
section1_variable_2 = [value_12]
section2_variable_1 = [value_21]
section2_variable_2 = [value_22]

You can use the cut command with the equals sign = as the delimiter.
If $line holds each valid line (you can skip comment lines and empty lines):
key=$(echo "$line" | cut -f1 -d '=')
value=$(echo "$line" | cut -f2 -d '=')
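Putting that together, a minimal sketch of the full parsing loop (config.txt is a stand-in filename; -f2- rather than -f2 keeps values that themselves contain =):
#!/bin/bash
while IFS= read -r line; do
    # skip blank lines and comment lines
    case $line in ''|'#'*) continue ;; esac
    key=$(echo "$line" | cut -f1 -d '=')
    value=$(echo "$line" | cut -f2- -d '=')
    printf 'key=%s value=%s\n' "$key" "$value"
done < config.txt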

Execute a process with the same environment variables as another process

I would like to make a script which allows me to execute a command that inherits the environment variables of any PID.
Here is the script I made:
#!/bin/sh
VARS=$(cat -A /proc/1/environ | tr "^@" "\n")
COMMAND=""
# sh compatible loop on a variable containing multiple lines
printf %s "$VARS" | while IFS='\n' read -r var
do
    if [ "$var" != "" ]; then
        export "$var"
    fi
done
exec "$@"
I thought exported variables would be available to the child process (created by exec), but this is obviously not the case, because sh my_script.sh printenv doesn't show the environment variables that are in /proc/1/environ.
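As a side note, the usual reason this pattern fails: in most sh implementations each command in a pipeline runs in its own subshell, so anything exported inside the while loop vanishes when the loop ends. A minimal sketch of the effect:
#!/bin/sh
printf 'FOO=bar\n' | while read -r var; do
    export "$var"
done
# FOO was exported only inside the pipeline's subshell, so this prints nothing
printenv FOO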
I also tried the following script:
#!/bin/sh
VARS=$(cat -A /proc/1/environ | tr "^@" "\n")
COMMAND=""
# sh compatible loop on a variable containing multiple lines
printf %s "$VARS" | while IFS='\n' read -r var
do
    if [ "$var" != "" ]; then
        # Replace 'VAR=var' by 'VAR="var"' for eval
        # sed replaces only the first occurrence of '=' because the /g flag is omitted
        escaped=$(echo $var | sed -e 's/=/="/')\"
        COMMAND="${COMMAND} ${escaped}"
    fi
done
COMMAND="${COMMAND} $@"
eval $COMMAND
However, it looks like eval doesn't export the variables, even if the evaluated command looks like VAR=value my_command.
How am I supposed to achieve this?
Thanks in advance.
This one should work (tested on RHEL 7):
#!/bin/bash
locPROC=$1
locCMD=$2
if [[ -z $locPROC || -z $locCMD ]]; then
    exit
fi
if [[ -r /proc/${locPROC}/environ ]]; then
    while IFS= read -r -d '' line; do
        # Making sure it's properly quoted
        locVar="${line/=/=\"}\""
        # You probably don't want to mess with those
        if [[ ${locVar:0:1} != "_" && ${locVar} != A__z* ]]; then
            eval "$locVar"
            eval "export ${locVar%%=*}"
        fi
    done < "/proc/${locPROC}/environ"
    $locCMD
else
    echo "Environment file is either nonexistent or unreadable"
fi
EDITED: according to the comments (still uses eval... got to read more :) )
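Usage would look something like this (assuming the script above is saved as inherit_env.sh; the name is just for illustration):
# run printenv with the environment of PID 1
./inherit_env.sh 1 printenv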

Unix - Replace column value inside while loop

I have a comma-separated (sometimes tab-separated) text file as below:
parameters.txt:
STD,ORDER,ORDER_START.xml,/DML/SOL,Y
STD,INSTALL_BASE,INSTALL_START.xml,/DML/IB,Y
With the code below I try to loop through the file and do something:
while read -r line; do
    if [[ $1 = "$(echo "$line" | cut -d, -f1)" ]] && [[ "$(echo "$line" | cut -d, -f5)" = "Y" ]]; then
        : # do something...
        if [[ $? -eq 0 ]]; then
            : # code to replace the final flag
        fi
    fi
done < <text_file_path>
I want to update the last column of the file to N if the above operation is successful; however, the approaches below are not working for me:
sed 's/$f5/N/'
'$5=="Y",$5=N;{print}'
$(echo "$line" | awk '$5=N')
Update: a few considerations I missed at first that should add clarity, apologies!
The parameters file may also contain lines whose last field flag is "N".
The final flag needs to be updated only if the "do something" code has executed successfully.
After looping through all lines, i.e. after exiting the while loop, the flags for all rows are to be set back to "Y".
Perhaps invert the operation and do the processing in awk.
$ awk -v f1="$1" 'BEGIN {FS=OFS=","}
    f1==$1 && $5=="Y" { # do something
        $5="N" }1' file
Not sure what the "do something" operation is; if you need to call another command/script, that's possible as well.
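For example, a sketch of that per-line call via awk's system(), which runs a shell command and returns its exit status; ./process.sh is a hypothetical stand-in for the real "do something" step (the second field is passed unquoted, for brevity):
awk -v f1="$1" 'BEGIN {FS=OFS=","}
f1==$1 && $5=="Y" {
    # flip the flag only if the hypothetical external step succeeded
    if (system("./process.sh " $2) == 0) $5 = "N"
} 1' file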
with bash:
(
    IFS=,
    while read -ra fields; do
        if [[ ${fields[0]} == "$1" ]] && [[ ${fields[4]} == "Y" ]]; then
            # do something
            fields[4]="N"
        fi
        echo "${fields[*]}"
    done < file | sponge file
)
I run that in a subshell so the effects of altering IFS are localized.
This uses sponge to write back to the same file. You need the moreutils package to use it, otherwise use
done < file > tmp && mv tmp file
Perhaps a bit simpler and less bash-specific:
while IFS= read -r line; do
    case $line in
        "$1",*,Y)
            # do something
            line="${line%Y}N"
            ;;
    esac
    echo "$line"
done < file
To replace ,N at the end of the line ($) with ,Y:
sed 's/,N$/,Y/' file

List only common parent directories for files

I am searching for one file, say "file1.txt", and the output of the find command is like below.
/home/nicool/Desktop/file1.txt
/home/nicool/Desktop/dir1/file1.txt
/home/nicool/Desktop/dir1/dir2/file1.txt
In the above case I want only the common parent directory, which is "/home/nicool/Desktop" here. How can that be achieved using bash? Please help me find a general solution for such problems.
This script reads lines and stores the common prefix in each iteration:
# read a line into the variable "prefix", split at slashes
IFS=/ read -r -a prefix
# while there are more lines, one after another read them into "next",
# also split at slashes
while IFS=/ read -r -a next; do
    new_prefix=()
    # for all indexes in prefix
    for ((i=0; i < "${#prefix[@]}"; ++i)); do
        # if the word in the new line matches the old one
        if [[ "${prefix[i]}" == "${next[i]}" ]]; then
            # then append to the new prefix
            new_prefix+=("${prefix[i]}")
        else
            # otherwise break out of the loop
            break
        fi
    done
    prefix=("${new_prefix[@]}")
done
# join an array
function join {
    # copied from: http://stackoverflow.com/a/17841619/416224
    local IFS="$1"
    shift
    echo "$*"
}
# join the common prefix array using slashes
join / "${prefix[@]}"
Example:
$ ./x.sh <<eof
/home/nicool/Desktop1/file1.txt
/home/nicool/Desktop2/dir1/file1.txt
/home/nicool/Desktop3/dir1/dir2/file1.txt
eof
/home/nicool
I don't think there's a bash builtin for this, but you can use this script, and pipe your find into it.
read -r FIRSTLINE
DIR=$(dirname "$FIRSTLINE")
while read -r NEXTLINE; do
    until [[ "${NEXTLINE:0:${#DIR}}" = "$DIR" || "$DIR" = "/" ]]; do
        DIR=$(dirname "$DIR")
    done
done
echo "$DIR"
For added safety, use -print0 on your find, and adjust your read statements to have -d '\0'. This will work with filenames that have newlines.
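A sketch of that NUL-safe variant (note that in bash, read -d '' is how a NUL delimiter is spelled):
find / -name file1.txt -print0 | {
    read -r -d '' FIRSTLINE
    DIR=$(dirname "$FIRSTLINE")
    while read -r -d '' NEXTLINE; do
        until [[ "${NEXTLINE:0:${#DIR}}" = "$DIR" || "$DIR" = "/" ]]; do
            DIR=$(dirname "$DIR")
        done
    done
    echo "$DIR"
}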
lcp() {
    local prefix path
    read -r prefix
    while read -r path; do
        while ! [[ $path =~ ^"$prefix" ]]; do
            [[ $prefix == $(dirname "$prefix") ]] && return 1
            prefix=$(dirname "$prefix")
        done
    done
    printf '%s\n' "$prefix"
    return 0
}
This finds the longest common prefix of all of the lines of standard input.
$ find / -name file1.txt | lcp
/home/nicool/Desktop

shell bash replacing tags in a line with values from a different file

I am trying to read lines from a file; if a line contains a tag, the text within the tag is used to look up a value in a separate properties file, and the tag is replaced with that value. Once all tags are replaced, the line is written to a different file.
So the initial file being read have lines that would adhere to the following format:
testkey "TEST-KEY" "[#key_location#]:///[#key_name#]"
Where [# and #] house the tag text.
The properties file would then contain lines like:
key_location=location_here
key_name=test_key_name
So the end result I am trying to achieve is that the line is written to a new file, but the tags are replaced with the values from the property file, so using the above content:
testkey "TEST-KEY" "loaction_here:///test_key_name"
I am not sure how best to handle the tags and deal with multiple tags in one line and am pretty lost. Any help would be greatly appreciated.
Skeleton code:
while read -r line; do
    if [[ $line == *"[#"* ]]; then
        : # echo found a tag and need to deal with it
    else
        echo "$line" >> "$NEW_FILE"
    fi
done < "$INITIAL_FILE"
EDIT
Lines within the file could contain one or more tags, not always two like in the example given.
You'll have to do some looping and global sed replacements. The following is probably not optimal but it will get you started:
#!/bin/bash
declare -A props
while read -r line; do
    key=$(echo $line | sed -r 's/^(.*)=.*/\1/')
    value=$(echo $line | sed -r 's/^.*=(.*)/\1/')
    props[$key]=$value
done < values.properties

replace() {
    line=$1
    for key in "${!props[@]}"; do
        line=$(echo $line | sed "s/\[#$key#\]/${props[$key]}/g")
    done
    echo $line
}
while read -r line; do
    while [[ $line == *"[#"*"#]"* ]]; do
        line=$(replace "$line")
        echo "Iter: $line"
    done
    echo "DONE: $line"
done < $INITIAL_FILE
The snippet prints to stdout and it includes intermediate results so that you can check how it works. I think you will easily be able to modify it to write to a file, etc.
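For instance, a sketch of that modification, writing each fully-substituted line to a file instead (using the $INITIAL_FILE and $NEW_FILE variables from the question's skeleton, and dropping the Iter debug output):
while read -r line; do
    while [[ $line == *"[#"*"#]"* ]]; do
        line=$(replace "$line")
    done
    echo "$line" >> "$NEW_FILE"
done < "$INITIAL_FILE"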
There are a number of ways to do this (e.g. counting the # characters), but the simplest is a glob pattern with * around the tag markers:
if [[ $line == *"[#"*"#]"*"[#"*"#]"* ]]; then
    echo "has tags"
else
    echo "does not have tags"
fi
This would work in your case, e.g.
$ echo "$line"
testkey "TEST-KEY" "[#key_location#]:///[#key_name#]"
$ if [[ $line == *"[#"*"#]"*"[#"*"#]"* ]]; then echo "has tags"; fi
has tags

find string in file using bash

I need to find strings matching some regexp pattern and represent the search results as an array, so I can iterate through them with a loop. Do I need to use sed? In general I want to replace some strings, but analyse them before replacing.
Using sed and diff:
sed -i.bak 's/this/that/' input
diff input input.bak
GNU sed will create a backup file before substitutions, and diff will show you those changes. However, if you are not using GNU sed:
mv input input.bak
sed 's/this/that/' input.bak > input
diff input input.bak
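Either way, if you want to analyse what would change before replacing anything, a sketch using process substitution (nothing is modified):
# show what the substitution would change, without touching the file
diff input <(sed 's/this/that/' input)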
Another method using grep:
pattern="/X"
subst=that
while IFS='' read -r line; do
if [[ $line = *"$pattern"* ]]; then
echo "changing line: $line" 1>&2
echo "${line//$pattern/$subst}"
else
echo "$line"
fi
done < input > output
The best way to do this would be to use grep to get the lines, and populate an array with the result using newline as the internal field separator:
#!/bin/bash
# get just the desired lines
results=$(grep "mypattern" mysourcefile.txt)
# change the internal field separator to be a newline
IFS=$'/n'
# populate an array from the result lines
lines=($results)
# return the third result
echo "${lines[2]}"
You could build a loop to iterate through the results of the array, but a more traditional and simple solution would just be to use bash's iteration:
for line in "${lines[@]}"; do
    echo "$line"
done
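On bash 4+, the mapfile builtin (an alternative to the IFS approach above, not part of the original answer) reads the matches straight into an array:
# one array element per matching line; -t strips the trailing newlines
mapfile -t lines < <(grep "mypattern" mysourcefile.txt)
for line in "${lines[@]}"; do
    printf '%s\n' "$line"
done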
FYI: here is a similar concept I created for fun. I thought it would be good to show how to loop over a file with it. This is a script that inspects a Linux sudoers file and checks that each line contains one of the words in my valid_words array. It ignores comment ("#") and blank ("") lines with sed. In this example we would probably want to print only the invalid lines, but this script prints both.
#!/bin/bash
# -- Inspect a sudoer file, look for valid and invalid lines.
file="${1}"
declare -a valid_words=( _Alias = Defaults includedir )
actual_lines=$(cat "${file}" | wc -l)
functional_lines=$(cat "${file}" | sed '/^\s*#/d;/^\s*$/d' | wc -l)
while read -r line; do
    # -- set the line to nothing "" if it has a comment or is an empty line.
    line="$(echo "${line}" | sed '/^\s*#/d;/^\s*$/d')"
    # -- if not set to nothing "", check if the line is valid from our list of valid words.
    if ! [[ -z "$line" ]]; then
        unset found
        for each in "${valid_words[@]}"; do
            found="$(echo "$line" | egrep -i "$each")"
            [[ -z "$found" ]] || break
        done
        [[ -z "$found" ]] && { echo "Invalid=$line"; sleep 3; } || echo "Valid=$found"
    fi
done < "${file}"
echo "actual lines: $actual_lines functional lines: $functional_lines"
