convert comma separated command line arguments to json in shell script - linux

I'm using the script below to generate JSON data from comma-separated values to feed Zabbix, but I'm getting one extra comma. Please help me get rid of the comma at the end of the last line.
#/bin/bash
IFS=':, ' read -r -a array <<< "$1"
idx=0
echo {\"data\":[
while [ -n "${array[$idx]}" ]; do
echo -n \{\"{#R_IP}\":\""${array[$idx]}"\"}
let idx=$idx+1
[ -n "$array[idx]}" ] && echo "," || echo
done
echo ]}
exit
input
./test.sh embimsrv.exe,emcms.exe,emcmsg.exe,emforecastsrv.exe,emgtw.exe,emguisrv.exe,emmaintag.exe,emselfservicesrv.exe,Naming_Service.exe,p_ctmce.exe,p_ctmcs.exe,p_ctmrt.exe,p_ctmtr.exe,p_ctmwd.exe
output
{"data":[
{"{#R_IP}":"embimsrv.exe"},
{"{#R_IP}":"emcms.exe"},
{"{#R_IP}":"emcmsg.exe"},
{"{#R_IP}":"emforecastsrv.exe"},
{"{#R_IP}":"emgtw.exe"},
{"{#R_IP}":"emguisrv.exe"},
{"{#R_IP}":"emmaintag.exe"},
{"{#R_IP}":"emselfservicesrv.exe"},
{"{#R_IP}":"Naming_Service.exe"},
{"{#R_IP}":"p_ctmce.exe"},
{"{#R_IP}":"p_ctmcs.exe"},
{"{#R_IP}":"p_ctmrt.exe"},
{"{#R_IP}":"p_ctmtr.exe"},
{"{#R_IP}":"p_ctmwd.exe"},
]}

Use a proper tool, like jq, to generate your JSON.
printf '%s' "$1" | jq -R 'split(",") | map({"{#R_IP}": .}) | {data: .}'

Manually piecing together JSON like this is pretty brittle. But here goes. A very common trick is to prefix each string except the first with a comma.
#!/bin/bash
IFS=':, ' read -r -a array <<< "$1"
prefix=''
printf '%s' '{"data":['
for item in "${array[@]}"; do
printf '%s%s' "$prefix" "{\"{#R_IP}\":\"$item\"}"
prefix=','
done
printf '%s\n' ']}'
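For example, saved as test.sh:
$ ./test.sh emcms.exe,emcmsg.exe
{"data":[{"{#R_IP}":"emcms.exe"},{"{#R_IP}":"emcmsg.exe"}]}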
Notice also how no explicit exit is required at the end of a script; the shell simply stops executing and terminates when it reaches the end of the script.
Also, the shebang needs to start with exactly the two single-byte characters #!.
Finally, a much better overall design is probably to not require the arguments to be comma-separated; a quick sketch of that idea follows.
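For illustration only, a minimal sketch of that design, assuming the process names are passed as separate arguments (./test.sh embimsrv.exe emcms.exe ...) instead of one comma-separated string:
#!/bin/bash
prefix=''
printf '%s' '{"data":['
for item in "$@"; do
printf '%s{"{#R_IP}":"%s"}' "$prefix" "$item"
prefix=','
done
printf '%s\n' ']}'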

Related

Linux script reading an ini file and splitting into variables by a specified character

I'm stuck on the following task: let's pretend we have an .ini file in a folder. The file contains lines like this:
eno1=10.0.0.254/24
eno2=172.16.4.129/25
eno3=192.168.2.1/25
tun0=10.10.10.1/32
I had to choose the biggest subnet mask. So my attempt was:
declare -A data
for f in datadir/name
do
while read line
do
r=(${line//=/ })
let data[${r[0]}]=${r[1]}
done < $f
done
This is how far I got. (Yeah, I know the file named name is not an .ini file but a .txt, since I had problems even with creating an ini file; the teacher didn't even give us a file like that for our exam.)
It splits the line at the =, but it won't read the IP number because of the (first) . character
(the error message I got was "invalid arithmetic operator").
If someone could help me and explain how I can write a script for tasks like this, I would be really thankful!
Both previously presented solutions work (and do what they're designed to do); I thought I'd add something left-field, as the specification is fairly loose.
$ cat freasy
eno1=10.0.0.254/24
eno2=172.16.4.129/25
eno3=192.168.2.1/25
tun0=10.10.10.1/32
I'd argue that the biggest subnet mask is the one with the lowest numerical value (holds the most hosts).
$ sort -t/ -k2,2nr freasy| tail -n1
eno1=10.0.0.254/24
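Here -t/ tells sort to split fields on /, -k2,2nr sorts on the second field (the mask length) numerically in reverse order, and tail -n1 then picks the last line, i.e. the one with the smallest numeric mask and therefore the biggest subnet.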
Don't use let. It's for arithmetic.
$ help let
let: let arg [arg ...]
Evaluate arithmetic expressions.
Evaluate each ARG as an arithmetic expression.
Just use straight assignment:
declare -A data
for f in datadir/name
do
while read line
do
r=(${line//=/ })
data[${r[0]}]=${r[1]}
done < $f
done
Result:
$ declare -p data
declare -A data=([tun0]="10.10.10.1/32" [eno1]="10.0.0.254/24" [eno2]="172.16.4.129/25" [eno3]="192.168.2.1/25" )
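If you then want the biggest subnet out of that array (the smallest numeric mask, per the reasoning above), a minimal sketch:
best=''
for k in "${!data[@]}"; do
mask=${data[$k]##*/}   # strip everything up to the last '/'
if [ -z "$best" ] || [ "$mask" -lt "${data[$best]##*/}" ]; then
best=$k
fi
done
echo "$best=${data[$best]}"   # eno1=10.0.0.254/24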
awk provides a simple solution to find the max value following the '/' that will be orders of magnitude faster than a bash script or a Unix pipeline:
awk -F"=|/" '$3 > max { max = $3 } END { print max }' file
Example Use/Output
$ awk -F"=|/" '$3 > max { max = $3 } END { print max }' file
32
The awk script above separates the fields using either '=' or '/' as the field separator, then keeps the max of the 3rd field $3 and outputs that value in the END {...} rule.
Bash Solution
If you did want a bash script solution, then you can isolate the wanted parts of each line using [[ .. =~ .. ]] to populate the BASH_REMATCH array and then compare ${BASH_REMATCH[3]} against a max variable. The [[ .. ]] expression with =~ considers everything on the right side an Extended Regular Expression and will isolate each grouping ((...)) as an element in the array BASH_REMATCH, e.g.
#!/bin/bash
[ -z "$1" ] && { printf "filename required\n" >&2; exit 1; }
declare -i max=0
while read -r line; do
[[ $line =~ ^(.*)=(.*)/(.*)$ ]]
((${BASH_REMATCH[3]} > max)) && max=${BASH_REMATCH[3]}
done < "$1"
printf "max: %s\n" "$max"
Using Only POSIX Parameter Expansions
Using parameter expansion with substring removal supported by POSIX shell (Bourne shell, dash, etc..), you could do:
#!/bin/sh
[ -z "$1" ] && { printf "filename required\n" >&2; exit 1; }
max=0
while read line; do
[ "${line##*/}" -gt "$max" ] && max="${line##*/}"
done < "$1"
printf "max: %s\n" "$max"
Example Use/Output
After making yourscript.sh executable with chmod +x yourscript.sh, you would do:
$ ./yourscript.sh file
max: 32
(same output for both shell script solutions)
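As a quick illustration of the ${line##*/} expansion both scripts rely on (## removes the longest prefix matching */):
$ line='eno1=10.0.0.254/24'
$ echo "${line##*/}"
24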
Let me know if you have further questions.

Shell script that filters command output and saves it in Json formated list

I've never worked with shell scripts before, but I need to for my current task.
So i have to run a command that returns output like this:
awd54a7w6ds54awd47awd refs/heads/SomeInfo1
awdafawe23413f13a3r3r refs/heads/SomeInfo2
a8wd5a8w5da78d6asawd7 refs/heads/SomeInfo3
g9reh9wrg69egs7ef987e refs/heads/SomeInfo4
And I need to loop over every line of output, get only the "SomeInfo" part, and write it to a file in a format like this:
["SomeInfo1","SomeInfo2","SomeInfo3"]
I've tried things like this:
for i in $(some command); do
echo $i | cut -f2 -d"heads/" >> text.txt
done
But I don't know how to format it into an array without using a temporary file.
Sorry if the question is dumb and probably too easy; I'm sure I can figure it out on my own, but I just don't have the time for it, because it's just an extra convenience feature that I personally want to implement.
Try this
# json_encoder.sh
arr=()
while read -r line; do
arr+=(\"$(basename "$line")\")
done
printf "[%s]" $(IFS=,; echo "${arr[*]}")
And then invoke
./your_command | json_encoder.sh
PS. I personally do this kind of data massaging with Vim.
Using Perl one-liner
$ cat petar.txt
awd54a7w6ds54awd47awd refs/heads/SomeInfo1
awdafawe23413f13a3r3r refs/heads/SomeInfo2
a8wd5a8w5da78d6asawd7 refs/heads/SomeInfo3
g9reh9wrg69egs7ef987e refs/heads/SomeInfo4
$ perl -ne ' { /.*\/(.*)/ and push(@res,"\"$1\"") } END { print "[".join(",",@res)."]\n" }' petar.txt
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]
While you should rarely piece together JSON in a shell script, in your case you are simply parsing output into a comma-separated line with added end-caps of [...]. You can use bash parameter expansion to avoid spawning any additional subshells to obtain the last field of information in each line as follows:
#!/bin/bash
[ -z "$1" -o ! -r "$1" ] && { ## validate file given as argument
printf "error: file doesn't exist or not readable.\n" >&2
exit 1
}
c=0 ## simple flag variable
while read -r line; do ## read each line
if [ "$c" -eq '0' ]; then ## is flag 0?
printf "[\"%s\"" "${line##*/}" ## output ["last"
else ## otherwise
printf ",\"%s\"" "${line##*/}" ## output ,"last"
fi
c=1 ## set flag 1
done < "$1"                         ## redirect the file argument to loop
echo "]" ## append closing ]
Example Use/Output
Using your given data as the input file, you would get the following:
$ bash script.sh file
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]
Look things over and let me know if you have any questions.
You can also use awk without any loops I guess:
cat prev_output | awk -v ORS=',' -F'/' '{print "\042"$3"\042"}' | \
sed 's/^/[/g ; s/,$/]\n/g' > new_output
cat new_output
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]

bash separate parameters with specific delimiter

I am searching for a command that separates all given parameters with a specific delimiter and outputs them quoted.
Example (delimiter is set to be a colon :):
somecommand "this is" "a" test
should output
"this is":"a":"test"
I'm aware that the shell interprets the "" quotes before passing the parameters to the command. So what the command should actually do is to print out every given parameter in quotes and separate all these with a colon.
I'm not necessarily seeking a bash-only solution, but the most elegant one.
It is very easy to just loop over an array of these elements and do that, but the problem is that I have to use this inside a GNU makefile, which only allows single-line shell commands and uses sh instead of bash.
So the simpler the better.
How about
somecommand () {
printf '"%s"\n' "$#" | paste -s -d :
}
Use printf to add the quotes and print every entry on a separate line, then use paste with the -s ("serial") option and a colon as the delimiter.
Can be called like this:
$ somecommand "this is" "a" test
"this is":"a":"test"
apply_delimiter () {
(( $# )) || return
local res
printf -v res '"%s":' "$@"
printf '%s\n' "${res%:}"
}
Usage example:
$ apply_delimiter hello world "how are you"
"hello":"world":"how are you"
As indicated in a number of the comments, a simple "loop-over" approach, looping over each of the strings passed as arguments, is a fairly straightforward way to approach it:
delimit_colon() {
local first=1
for i in "$#"; do
if [ "$first" -eq 1 ]; then
printf "%s" "$i"
first=0
else
printf ":%s" "$i"
fi
done
printf "\n"
}
Which when combined with a short test script could be:
#!/bin/bash
delimit_colon() {
local first=1
for i in "$#"; do
if [ "$first" -eq 1 ]; then
printf "%s" "$i"
first=0
else
printf ":%s" "$i"
fi
done
printf "\n"
}
[ -z "$1" ] && { ## validate input
printf "error: insufficient input\n"
exit 1
}
delimit_colon "$#"
exit 0
Test Input/Output
$ bash delimitargs.sh "this is" "a" test
this is:a:test
Here is a solution using the Z shell:
#!/usr/bin/zsh
# this is "somecommand"
echo '"'${(j_":"_)#}'"'
If you have them in an array already, you can use this command
MYARRAY=("this is" "a" "test")
joined_string=$(IFS=:; echo "$(MYARRAY[*])")
echo $joined_string
Setting IFS (the internal field separator) determines the separator character. Expanding the array with [*] inside quotes joins the elements using the first character of the newly set IFS, and wrapping those commands in $() puts the output of the echo into joined_string.
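For example, the expected output here:
$ echo "$joined_string"
this is:a:test
(If you need the elements quoted, as in the question, you would have to add the quotes yourself, e.g. MYARRAY=('"this is"' '"a"' '"test"').)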

find string in file using bash

I need to find strings matching some regexp pattern and represent the search result as an array, for iterating through it with a loop. Do I need to use sed? In general I want to replace some strings, but analyse them before replacing.
Using sed and diff:
sed -i.bak 's/this/that/' input
diff input input.bak
GNU sed will create a backup file before substitutions, and diff will show you those changes. However, if you are not using GNU sed:
mv input input.bak
sed 's/this/that/' input.bak > input
diff input input.bak
Another method using grep:
pattern="/X"
subst=that
while IFS='' read -r line; do
if [[ $line = *"$pattern"* ]]; then
echo "changing line: $line" 1>&2
echo "${line//$pattern/$subst}"
else
echo "$line"
fi
done < input > output
The best way to do this would be to use grep to get the lines, and populate an array with the result using newline as the internal field separator:
#!/bin/bash
# get just the desired lines
results=$(grep "mypattern" mysourcefile.txt)
# change the internal field separator to be a newline
IFS=$'\n'
# populate an array from the result lines
lines=($results)
# return the third result
echo "${lines[2]}"
You could build a loop to iterate through the results in the array, but a more traditional and simple solution is to use bash's iteration:
for line in "${lines[@]}"; do
echo "$line"
done
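A minimal alternative sketch, assuming bash 4+ for mapfile: it reads the grep results straight into an array, one line per element, without touching IFS:
#!/bin/bash
mapfile -t lines < <(grep "mypattern" mysourcefile.txt)
for line in "${lines[@]}"; do
echo "$line"
done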
FYI: here is a similar concept I created for fun; I thought it would be good to show how to loop over a file with it. This is a script where I look at a Linux sudoers file and check that each line contains one of the valid words in my valid_words array. It ignores comment ("#") and blank ("") lines with sed. In this example we would probably want to print only the invalid lines, but this script prints both.
#!/bin/bash
# -- Inspect a sudoer file, look for valid and invalid lines.
file="${1}"
declare -a valid_words=( _Alias = Defaults includedir )
actual_lines=$(cat "${file}" | wc -l)
functional_lines=$(cat "${file}" | sed '/^\s*#/d;/^\s*$/d' | wc -l)
while read line ;do
# -- set the line to nothing "" if it has a comment or is empty line.
line="$(echo "${line}" | sed '/^\s*#/d;/^\s*$/d')"
# -- if not set to nothing "", check if the line is valid from our list of valid words.
if ! [[ -z "$line" ]] ;then
unset found
for each in "${valid_words[@]}" ;do
found="$(echo "$line" | egrep -i "$each")"
[[ -z "$found" ]] || break;
done
[[ -z "$found" ]] && { echo "Invalid=$line"; sleep 3; } || echo "Valid=$found"
fi
done < "${file}"
echo "actual lines: $actual_lines funtional lines: $functional_lines"

Multiple variables in loop input?

When using the following:
for what in $@; do
read -p "Where?" where
grep -H "$what" $where -R | cut -d: -f1
How can I, instead of using read to define a user-variable, have a second variable input along with the first variable when calling the script?
For example, the ideal usage I believe I can get is something like:
sh scriptname var1 var2
But my understanding is that the for... line is for looping the subsequent entries into the one variable; what would I need to change to input multiple variables?
As an aside: using | cut -d: -f1 is not safe, because grep does not escape colons in filenames. To see what I mean, you can try this:
ghoti@pc:~$ echo bar:baz > foo
ghoti@pc:~$ echo baz > foo:bar
ghoti@pc:~$ grep -Hr ba .
./foo:bar:baz
./foo:bar:baz
Clarity .. there is not.
So ... let's clarify what you're looking for.
Do you want to search for one string in multiple files? Or,
Do you want to search for multiple strings in one file?
If the former, then the following might work:
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "Usage: `basename $0` string file [file ...]
exit 1
fi
what="$1"
shift # discard $1, move $2 to $1, $3 to $2, etc.
for where in "$#"; do
grep -HlR "$what" "$where"
done
And if the latter, then this would be the way:
#!/bin/bash
if [[ "$#" -lt 2 ]]; then
echo "Usage: `basename $0` file string [string ...]
exit 1
fi
where="$1"
shift
for what in "$#"; do
grep -lR "$what" "$where"
done
Of course, this one might be streamlined if you concatenated your strings with an OR bar (|), then used egrep; see the sketch below. Depends on what you're actually looking for.
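For example, a rough sketch of that streamlining (using grep -E, the modern spelling of egrep; the joining trick assumes the search strings contain no regex metacharacters):
where="$1"
shift
pattern=$(IFS='|'; printf '%s' "$*")   # join remaining args as str1|str2|...
grep -ElR "$pattern" "$where"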
You can get parameters passed on the command line with $1 $2 etc.
Read up on positional parameters: http://www.linuxcommand.org/wss0130.php. You don't need a for loop to parse them.
sh scriptname var1 var2
v1=$1 # contains var1
v2=$2 # contains var2
$@ is basically just a list of all the positional parameters: $1 $2 $3 etc.
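Putting that together, a minimal sketch of the script (the variable names are just placeholders):
#!/bin/bash
what=$1    # first positional parameter: the search string
where=$2   # second positional parameter: where to search
grep -lR "$what" "$where"   # -l prints just the matching filenames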
