Formatting window values with jq - linux

I would like to take the JSON output of window values from a program. The output currently is:
[
3067,
584
]
[
764,
487
]
But I need it to be formatted like: 3067,584 764x487. How would I go about doing this using jq or other commands?
I'm not very experienced with jq or JSON formatting in general, so I'm not sure where to start. I have tried looking this up but am still unsure how to do it.

A solution that does not need --slurp would be to use input to fetch every other array:
jq -r 'join(",")+" "+(input|join("x"))'
3067,584 764x487

If your input is a stream of JSON arrays, you could use jq's -s/--slurp command-line option. Join the first array with a comma and the second array with "x"; finally, join both strings with a space:
$ jq -sr '[(.[0]|join(",")), (.[1]|join("x"))] | join(" ")' <<JSON
[
3067,
584
]
[
764,
487
]
JSON
3067,584 764x487
Alternatively, simply use string interpolation:
jq -sr '"\(.[0][0]),\(.[0][1]) \(.[1][0])x\(.[1][1])"'
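Run against the same slurped input, this yields the identical result:
3067,584 764x487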

Related

How do I grep and replace string in bash

I have a file which contains my json
{
"type": "xyz",
"my_version": "1.0.1.66~22hgde",
}
I want to edit the value for the key my_version and every time replace the number after the third dot with another number stored in a variable, so it becomes something like 1.0.1.32~22hgde. I am using sed to replace it:
sed -i "s/\"my_version\": \"1.0.1.66~22hgde\"/\"my_version\": \"1.0.1.$VAR~22hgde\"/g" test.json
This works, but the issue is that the my_version string doesn't remain constant; it can change to something like 1.0.2.66 or 2.0.1.66. So how do I handle such cases in bash?
how do I handle such cases?
You write a regular expression that matches any possible combination of characters that can appear there. You can have fun learning regex with regex crosswords online.
Do not edit JSON files with sed - sed is for lines. Consider using JSON-aware tools - like jq, which will handle any possible case.
A jq answer: file.json contains
{
"type": "xyz",
"my_version": "1.0.1.66~22hgde",
"object": "can't end with a comma"
}
then, replacing the last octet before the tilde:
VAR=32
jq --arg octet "$VAR" '.my_version |= sub("[0-9]+(?=~)"; $octet)' file.json
outputs
{
"type": "xyz",
"my_version": "1.0.1.32~22hgde",
"object": "can't end with a comma"
}
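Note that jq has no in-place editing option, so to persist the change you write to a temporary file and move it over the original; a minimal sketch, using the same file.json as above:
tmp=$(mktemp)
jq --arg octet "$VAR" '.my_version |= sub("[0-9]+(?=~)"; $octet)' file.json > "$tmp" && mv "$tmp" file.json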

Bash: Loop to read N lines at a time from a CSV

I have a CSV file of 100000 IDs:
wef7efwe1fwe8
wef7efwe1fwe3
ewefwefwfwgrwergrgr
that are being transformed into a JSON object using jq:
output=$(jq -Rsn '
{"id":
[inputs
| . / "\n"
| (.[] | select(length > 0) | . / ";") as $input
| $input[0]]}
' <$FILE)
Output:
{
"id": [
"wef7efwe1fwe8",
"wef7efwe1fwe3",
....
]
}
Currently, I need to manually split the file into smaller 10000-line files because the API call has a limit.
I would like a way to automatically loop through the large file, using only 10000 lines at a time as $FILE, up until the end of the list.
I would use the split command and write a little shell script around it:
#!/bin/bash
input_file=ids.txt
temp_dir=splits
api_limit=10000
# Make sure that there are no leftovers from previous runs
rm -rf "${temp_dir}"
# Create temporary folder for splitting the file
mkdir "${temp_dir}"
# Split the input file based on the api limit
split --lines "${api_limit}" "${input_file}" "${temp_dir}/"
# Iterate through splits and make an api call per split
for split in "${temp_dir}"/* ; do
jq -Rsn '
{"id":
[inputs
| . / "\n"
| (.[] | select(length > 0) | . / ";") as $input
| $input[0]]
}' "${split}" > api_payload.json
# now do something ...
# curl -d @api_payload.json http://...
rm -f api_payload.json
done
# Clean up
rm -rf "${temp_dir}"
Here's a simple and efficient solution that at its core just uses jq. It takes advantage of the -c command-line option. I've used xargs printf ... for illustration - mainly to show how easy it is to set up a shell pipeline.
< data.txt jq -Rnc '
def batch($n; stream):
def b: [limit($n; stream)]
| select(length > 0)
| (., b);
b;
{id: batch(10000; inputs | select(length>0) | (. / ";")[0])}
' | xargs printf "%s\n"
Parameterizing batch size
It might make sense to set things up so that the batch size is specified outside the jq program. This could be done in numerous ways, e.g. by invoking jq along the lines of:
jq --argjson n 10000 ....
and of course using $n instead of 10000 in the jq program.
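For instance, a sketch of the full invocation with the batch size supplied from the command line (same program as above, hypothetical data.txt as input):
< data.txt jq --argjson n 10000 -Rnc '
def batch($n; stream):
def b: [limit($n; stream)]
| select(length > 0)
| (., b);
b;
{id: batch($n; inputs | select(length>0) | (. / ";")[0])}
' | xargs printf "%s\n"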
Why “def b:”?
For efficiency. jq’s TCO (tail call optimization) only works for arity-0 filters.
Note on -s
In the Q as originally posted, the command-line options -sn are used in conjunction with inputs. Using -s with inputs defeats the whole purpose of inputs, which is to make it possible to process input in a stream-oriented way (i.e. one line of input or one JSON entity at a time).

How to transform a string (with a fixed delimiter) into an array to be appended to a JSON file?

I have a string input by the user, such as:
read -p "define module tags (example: TAG1, TAG2): " -r module_tags
if [ "$module_tags" = "" ]; then module_tags="TAG1, TAG2"; fi
which is a list of tags separated by ,
Then I need to append these tags to a JSON array field:
{
"modules": [
{
"tags": "<user-custom-tags>"
}
]
}
I would do it in this way:
args='.modules[0].tags = '$module_tags''
tmp=$(mktemp -u)
jq --indent 4 "$args" $plugin_file > "$tmp" && mv "$tmp" $plugin_file
But for this, I need to transform the input TAG1, TAG2 to [ "TAG1", "TAG2" ]
How would you do this?
for tag in $module_tags_array
This splits the value of $module_tags_array on IFS, i.e. you are looping first over TAG1, and then over TAG2, not over the list of tags.
What you are describing can easily be accomplished with
module_tags="[ $module_tags_array ]"
Notice also that the proper way to echo the value of the variable is with quotes:
echo "$module_tags"
unless you specifically require the shell to perform whitespace tokenization and wildcard expansion on the unquoted value. See also When to wrap quotes around a shell variable?
However, a more natural and obvious solution is to actually use an array to store the values.
tags=("TAG1" "TAG2")
printf "\"%s\", " "${tags[#]}" | sed 's/^/[/;s/, $/]/'
The printf produces a string like
"TAG1", "TAG2",
which we then massage into a JSON array expression with a quick bit of sed postprocessing.
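For the tags above, the final result is:
["TAG1", "TAG2"]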
Not using bashism, but using jq command line JSON parser:
<<< "TAG1, TAG2" jq -Rc 'split(", ")'
["TAG1","TAG2"]
-R is for raw input string
-c is for compact JSON output (on 1 line)
As expected, the split function splits the input string into parts and puts them into a JSON array.
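Tying this back to the original task, a sketch that combines the split with the question's jq update (assuming $module_tags and $plugin_file are set as in the question):
tmp=$(mktemp -u)
jq --indent 4 --argjson tags "$(jq -Rc 'split(", ")' <<< "$module_tags")" '.modules[0].tags = $tags' "$plugin_file" > "$tmp" && mv "$tmp" "$plugin_file"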
Use the "Search and Replace" feature of Bash's parameter expansion:
input="TAG1, TAG2"
output='[ "'${input//, /\", \"}'" ]'
printf "%s\n" "$output"
But be aware that this is not a proper way to quote.
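For example, a tag containing a double quote silently produces invalid JSON:
input='TAG"1, TAG2'
output='[ "'${input//, /\", \"}'" ]'
printf "%s\n" "$output"
[ "TAG"1", "TAG2" ]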

Create variables equal to the number of lines in a file, and assign each variable a value from the file sequentially

I want to create a number of variables equal to the number of lines in a file and assign to each of those variables a value from the file sequentially.
Say,
file1 contains device1 device2 device3 .....
file2 contains olddevice1 olddevice2 olddevice3 .....
I want values such that when I do echo $A I get device1.
Similarly, echo $B gives device2 and echo $Z gives device26.
I tried a for loop, and even an array, but couldn't get through it.
I have tried something like below:
iin=0
var=({A..Z})
for jin in `cat file1`
do
array[$iin]="$var=$jin";
iin=$(($iin+1));
var="$(echo $var | tr '[A-Y]Z' '[B-Z]A')"
printf '%s\n' "${array[#]}"
done
I believe you're missing the point: variables have fixed names in programming languages, like $A, $B, ..., $Z. While programming you need to specify those variables inside your program; you can't expect your program to invent its own variables.
What you are looking for are collections, like arrays, lists, ...:
You create a collection A and you can add values to it (A[n]=value_n, or A.SetAt(n, value_n), ..., depending on the kind of collection you're using).
With bash (v4 and later) something like this mapfile code should work:
mapfile -O 1 -t array1 < file1
mapfile -O 1 -t array2 < file2
# output line #2 from both files
echo "${array1[2]}" "${array2[2]}"
# output the last line from both files
echo "${array1[-1]}" "${array2[-1]}"
Notes: mapfile just loads an array, but with a few more options.
-O 1 sets the array subscript to start at 1 rather than the default 0; this isn't necessary, but it makes the code easier to read.
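If you then want to pair lines up (the $A = device1 idea from the question), iterate over the indices rather than inventing variable names; a small sketch using the arrays loaded above:
# print each line number with the matching entries from both files
for i in "${!array1[@]}"; do
echo "device $i: ${array1[i]} -> ${array2[i]}"
done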

Bash: How to convert an array-like-string to array [duplicate]

I have an array-like-string '[ "tag1", "tag2", "tag3" ]' and need to convert it to a bash array. How can this be achieved?
As you know, arrays are declared in bash as follows:
arr=(one two three elements)
So let's try to transform your input into a bash array. Without properly parsing the input, this is quite error-prone and, due to the use of eval, considerably insecure when variable user input is processed.
Nevertheless, here's a starting point:
t='[ "tag1", "tag2", "tag3" ]'
# strip white space
t=${t// /}
# substitute , with space
t=${t//,/ }
# remove [ and ]
t=${t##[}
t=${t%]}
# create an array
eval a=($t)
When run on a console, this yields:
$ echo ${a[2]}
tag3
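Since the input string happens to be valid JSON, a safer alternative (assuming jq is available and bash 4+ for mapfile) is to let jq do the parsing instead of eval:
t='[ "tag1", "tag2", "tag3" ]'
# jq decodes each JSON string, one per line; mapfile reads them into an array
mapfile -t a < <(jq -r '.[]' <<< "$t")
Again, echo "${a[2]}" prints tag3, and embedded spaces or escaped quotes are handled correctly because jq does the decoding.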
The source format is similar to a Python list.
Using python for intermediate processing:
$ src='[ "tag1", "tag2", "tag,\" 3" ]' # With embedded double quotes, spaces and comma for example.
$ IFS=$'\n' bash_array=($(python <<< 'a='"$src"$'\n''for t in a: print(t)'))
$ printf "%s\n" "${bash_array[@]}"
tag1
tag2
tag,"3
