Replacing date within a JSON config file - linux

I'm new to this, but I'm trying to get a script running that needs today's date as a value inside its configuration file before the program can run.
I'm not sure of the best way to implement it. So far, the line below replaces the correct part of the configuration file, but I can't figure out how to make it use today's date, e.g. the output of the date +%F command:
sed -i 's/"to_date":.*/"to_date":"date +%F"/' /config/settings
The config file looks like this:
{
  "username":"admin",
  "password":"redhat",
  "assumeyes":true,
  "to_date": "2011-10-01",
  "skip_depsolve":false,
  "skip_errata_depsolve":false,
  "security_only":false,
  "use_update_date":false,
  "no_errata_sync":false,
  "dry_run":false,
  "errata": ["RHSA-2014:0043", "RHBA-2014:0085"],
  "blacklist": {
  },
  "removelist": {
  },
  "channels":[
    {
      "rhel-x86_64-server-5": {
        "label": "my-rhel5-x86_64-clone",
        "existing-parent-do-not-modify": true
      },
      "rhn-tools-rhel-x86_64-server-5": {
        "label": "my-tools-5-x86_64-clone",
        "name": "My Clone's Name",
        "summary": "This is my channel's summary",
        "description": "This is my channel's description"
      }
    },
    {
      "rhel-i386-server-5": "my-rhel5-i386-clone"
    }
  ]
}

Use a proper JSON parser: jq, with its --arg option to pass in the current date (the command substitution is quoted as a precaution against word splitting):
jq --arg inputDate "$(date +%F)" '.to_date = $inputDate' /config/settings
{
  "username": "admin",
  "password": "redhat",
  "assumeyes": true,
  "to_date": "2017-01-27",
  "skip_depsolve": false,
  "skip_errata_depsolve": false,
  "security_only": false,
  "use_update_date": false,
  "no_errata_sync": false,
  "dry_run": false,
  "errata": [
    "RHSA-2014:0043",
    "RHBA-2014:0085"
  ],
  "blacklist": {},
  "removelist": {},
  "channels": [
    {
      "rhel-x86_64-server-5": {
        "label": "my-rhel5-x86_64-clone",
        "existing-parent-do-not-modify": true
      },
      "rhn-tools-rhel-x86_64-server-5": {
        "label": "my-tools-5-x86_64-clone",
        "name": "My Clone's Name",
        "summary": "This is my channel's summary",
        "description": "This is my channel's description"
      }
    },
    {
      "rhel-i386-server-5": "my-rhel5-i386-clone"
    }
  ]
}
The jq download and usage instructions are pretty straightforward. I recommend using it for manipulating JSON instead of depending upon regex.
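As a hypothetical illustration of why: sed is tied to the exact byte layout of the file, while jq operates on the parsed structure. If the file were ever reformatted with, say, a space before the colon, the regex would silently stop matching, while jq would be unaffected (jq output shown for the example date above):
$ echo '{ "to_date" : "2011-10-01" }' | sed 's/"to_date":.*/"to_date":"X"/'
{ "to_date" : "2011-10-01" }
$ echo '{ "to_date" : "2011-10-01" }' | jq --arg d "$(date +%F)" '.to_date = $d'
{
  "to_date": "2017-01-27"
}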
jq does not edit the file in place; save the output to a temporary file and rename it over the original, using GNU mktemp:
jsonTemp=$(mktemp)
jq --arg inputDate "$(date +%F)" '.to_date = $inputDate' /config/settings > "$jsonTemp"
mv "$jsonTemp" /config/settings
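To confirm the change took effect, you can read the key back (output shown for the example date above):
$ jq -r '.to_date' /config/settings
2017-01-27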

To include the output of a command inside some quoted text, you have to use command substitution and double quotes so that the text gets expanded. Note the trailing comma added back in the replacement; the .* consumes the original one, so omitting it would break the JSON:
sed -i "s/\"to_date\":.*/\"to_date\":\"$(date +%F)\",/" /config/settings
Also, I second Inian's comment: you should be using jq to manipulate JSON data.
For example, the following command should do the modification you need. The date has to be quoted inside the filter; unquoted, jq would evaluate 2017-01-27 as arithmetic (subtraction):
jq ".to_date = \"$(date +%F)\"" /config/settings

Related

jq parsing and linux formatting to desired output

I am trying to format JSON output and exclude an element when a condition is met. In this case, I'd like to use jq to exclude any element that contains "valueFrom":
[
  {
    "name": "var1",
    "value": "var1value"
  },
  {
    "name": "var2",
    "value": "var2value"
  },
  {
    "name": "var3",
    "value": "var3value"
  },
  {
    "name": "var4",
    "value": "var4value"
  },
  {                      # <<< exclude this element as valueFrom exists
    "name": "var5",
    "valueFrom": {
      "secretKeyRef": {
        "key": "var5",
        "name": "var5value"
      }
    }
  }
]
After excluding the element mentioned above I am trying to return a result set that looks like this.
var1: var1value
var2: var2value
var3: var3value
var4: var4value
Any feedback is appreciated. Thanks.
Select the array items that don't have the valueFrom key using a combination of select/1, has/1, and not/0, then format the objects as you please.
$ jq -r '.[] | select(has("valueFrom") | not) | "\(.name): \(.value)"' input.json
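Given the sample input above, this prints exactly the desired result:
var1: var1value
var2: var2value
var3: var3value
var4: var4value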

Removing pattern from multiple lines using sed or awk in two places in the same line

I have a JSON file with 12,166,466 lines.
I want to remove the quotes from the values of the "timestamp" and "score" keys, so that
"timestamp": "1538564256", and "score": "10", become
"timestamp": 1538564256, and "score": 10,.
Input:
{
  "title": "DNS domain", ,
  "timestamp": "1538564256",
  "domain": {
    "dns": [
      "www.google.com"
    ]
  },
  "score": "10",
  "link": "www.bit.ky/sdasd/asddsa"
  "id": "c-1eOWYB9XD0VZRJuWL6"
}, {
  "title": "DNS domain",
  "timestamp": "1538564256",
  "domain": {
    "dns": [
      "google.de"
    ]
  },
  "score": "10",
  "link": "www.bit.ky/sdasd/asddsa",
  "id": "du1eOWYB9XD0VZRJuWL6"
}
}
Expected output:
{
  "title": "DNS domain", ,
  "timestamp": 1538564256,
  "domain": {
    "dns": [
      "www.google.com"
    ]
  },
  "score": 10,
  "link": "www.bit.ky/sdasd/asddsa"
  "id": "c-1eOWYB9XD0VZRJuWL6"
}, {
  "title": "DNS domain",
  "timestamp": 1538564256,
  "domain": {
    "dns": [
      "google.de"
    ]
  },
  "score": 10,
  "link": "www.bit.ky/sdasd/asddsa",
  "id": "du1eOWYB9XD0VZRJuWL6"
}
}
I have tried:
sed -E '
s/"timestamp": "/"timestamp": /g
s/"score": "/"score": /g
'
The first part is quite straightforward, but how do I remove the ", at the end of the lines that contain "timestamp" and "score"? How do I do that with sed, or even awk or some other tool, bearing in mind that I have 12 million lines to process?
Assuming that you fix your JSON input file like this:
<file jq .
[
  {
    "title": "DNS domain",
    "timestamp": "1538564256",
    "domain": {
      "dns": [
        "www.google.com"
      ]
    },
    "score": "10",
    "link": "www.bit.ky/sdasd/asddsa",
    "id": "c-1eOWYB9XD0VZRJuWL6"
  },
  {
    "title": "DNS domain",
    "timestamp": "1538564256",
    "domain": {
      "dns": [
        "google.de"
      ]
    },
    "score": "10",
    "link": "www.bit.ky/sdasd/asddsa",
    "id": "du1eOWYB9XD0VZRJuWL6"
  }
]
You can use jq and its tonumber function to convert the wanted strings into numbers:
<file jq '.[].timestamp |= tonumber | .[].score |= tonumber'
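A minimal demonstration of the |= tonumber update on a single object:
$ echo '{"timestamp": "1538564256", "score": "10"}' | jq '.timestamp |= tonumber | .score |= tonumber'
{
  "timestamp": 1538564256,
  "score": 10
}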
If the JSON structure roughly matches your example (e.g., there won't be any other whitespace characters between "timestamp", the colon, and the value), then this awk should be OK. If available, using jq for the JSON transformation is the better choice by far!
awk '{print gensub(/("(timestamp|score)": )"([0-9]+)"/, "\\1\\3", "g")}' file
Be warned that tonumber can lose precision: jq stores numbers as IEEE doubles, which hold only about 15 to 17 significant digits. If using tonumber is inadmissible, and if the output is produced by jq (or is otherwise linearized vertically), then using awk as proposed elsewhere on this page is a good way to go. (If your awk does not have gensub, the awk program can easily be adapted.) Here is the same thing using sed, assuming its flag for extended regex processing is -E:
sed -E -e 's/"(timestamp|score)": "([0-9]+)"/"\1": \2/'
For reference, if there's any doubt about where the relevant keys are located, here's a filter in jq that is agnostic about that:
walk(if type == "object"
     then if has("timestamp") then .timestamp |= tonumber else . end
        | if has("score") then .score |= tonumber else . end
     else . end)
If your jq does not have walk/1, then simply snarf its def from the web, e.g. from https://raw.githubusercontent.com/stedolan/jq/master/src/builtin.jq
If you wanted to convert all number-valued strings to numbers, you could write:
walk(if type=="object" then map_values(tonumber? // .) else . end)
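As a quick sanity check (tonumber? fails quietly on non-numeric strings, and // . then keeps the original value):
$ echo '{"score": "10", "link": "www.bit.ky/sdasd/asddsa"}' | jq 'walk(if type=="object" then map_values(tonumber? // .) else . end)'
{
  "score": 10,
  "link": "www.bit.ky/sdasd/asddsa"
}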
This might work for you (GNU sed):
sed ':a;/"timestamp":\s*"1538564256",/{s/"//3g;:b;n;/timestamp/ba;/"score":\s*"10"/s/"//3g;Tb}' file
On encountering a line that contains "timestamp": "1538564256", remove the third and all subsequent double quotes. Then read on until either another timestamp line appears (and repeat), or a line containing "score": "10" appears, in which case remove the third and all subsequent double quotes from it too.
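Since that one-liner is dense, here is the same GNU sed program split into a commented script; a sketch, to be run as sed -f fix.sed file:
# fix.sed
:a
# On a "timestamp" line, delete the 3rd and all later double quotes
/"timestamp":\s*"1538564256",/ {
  s/"//3g
  :b
  # Print the current line and read in the next one
  n
  # Another "timestamp" line: start the cycle over
  /timestamp/ba
  # A "score" line: delete the 3rd and all later double quotes
  /"score":\s*"10"/s/"//3g
  # If that substitution did not happen, keep reading lines
  Tb
}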

How can I remove these special characters from a JSON output file

^[[0;32m ?~V? ^[[0m
The JSON file is being written by a shell script, and the text processing produces these special characters. I have tried dos2unix, as well as replacing the characters globally with a %s substitution, without success.
Check this out. I introduced some control characters into a sample JSON file; they can be displayed with the cat -v command, and show up as ^B, ^A, and ^D.
Use perl to remove the control characters completely. You can redirect the result to a new file:
> cat -v json_control.txt
^B{"menu": {
"id": "file",
"value": "File",
"popup": ^B{
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}}^D
^A
> perl -pe 's/[\x00-\x09\x0B-\x1F]//g' json_control.txt | cat -v
{"menu": {
"id": "file",
"value": "File",
"popup": {
"menuitem": [
{"value": "New", "onclick": "CreateNewDoc()"},
{"value": "Open", "onclick": "OpenDoc()"},
{"value": "Close", "onclick": "CloseDoc()"}
]
}
}}
>
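Note that ANSI colour sequences such as ^[[0;32m are the ESC control character followed by printable text ([0;32m), so stripping control characters alone leaves that printable tail behind. A sketch for deleting entire colour sequences instead, using the same sample file:
> perl -pe 's/\e\[[0-9;]*m//g' json_control.txt > clean.json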

Parse JSON response to extract specific values in Bash without using any utility [duplicate]

This question already has answers here:
Parsing JSON with Unix tools
(45 answers)
Closed 4 years ago.
I tried the approach from Parse JSON to array in a shell script, but I was unable to get the required field. Below is my JSON:
{
  "status": "UP",
  "databaseHealthCheck":
  {
    "status": "UP",
    "dataSource":
    {
      "maxActive": 100,
      "maxIdle": 8,
      "numActive": 0,
      "url": "jdbc:oracle:thin:#hostname:port/db_name",
      "userName": "test_123"
    }
  },
  "JMSHealthCheck":
  {
    "status": "UP",
    "producerTemplate":
    {
      "name": "Test_2",
      "pendingCount": 0,
      "operator": "<"
    }
  },
  "diskSpace":
  {
    "status": "UP",
    "total": 414302519296,
    "free": 16099868672,
    "threshold": 10485760
  }
}
I want to extract the pendingCount value under producerTemplate, under JMSHealthCheck.
I am restricted from using utilities like jq.
Bash version 3.x.
In the absence of jq, you may use this GNU grep command (-z reads the whole file as a single record, -o prints only the match, -P enables Perl-compatible regex, and \K discards everything matched before it):
read -r s < <(grep -zoP '"JMSHealthCheck":\s*{[^{}]*?"producerTemplate":\s*{[^{}]*?"pendingCount":\h*\K\d+' file.json)
echo "$s"
0
However, please keep in mind that parsing JSON using regex is not recommended. If you have jq, then it would be a very simple jq command like this:
jq '.JMSHealthCheck.producerTemplate.pendingCount' file.json
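which, for the JSON above, prints the same value:
0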

Indexing geospatial data in Elasticsearch results in an error?

{ title: 'abcccc',
  price: 3300,
  price_per: 'task',
  location: { lat: -33.8756, lon: 151.204 },
  description: 'asdfasdf'
}
The above is the JSON that I want to index. However, when I index it, the error is:
{"error":"MapperParsingException[Failed to parse [location]]; nested: ElasticSearchIllegalArgumentException[unknown property [lat]]; ","status":400}
If I remove the "location" field, everything works.
How do I index geo data? I read the tutorial and I'm still confused about how it works. It should work like this, right...?
You are getting this error message because the field location is not mapped correctly. It's possible that at some point you tried to index a string in this field, and it's now mapped as a string. Elasticsearch cannot automatically detect that a field contains a geo_point; it has to be explicitly specified in the mapping. Otherwise, Elasticsearch maps such a field as a string, number, or object, depending on the type of geo_point representation that you used in the first indexed record. Once a field is added to the mapping, its type can no longer be changed. So, in order to fix the situation, you will need to delete the mapping for this type and create it again. Here is an example of specifying the mapping for a geo_point field:
curl -XDELETE "localhost:9200/geo-test/"
echo
# Set proper mapping. Elasticsearch cannot automatically detect that something is a geo_point:
curl -XPUT "localhost:9200/geo-test" -d '{
  "settings": {
    "index": {
      "number_of_replicas": 0,
      "number_of_shards": 1
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}'
echo
# Put some test data in Sydney
curl -XPUT "localhost:9200/geo-test/doc/1" -d '{
  "title": "abcccc",
  "price": 3300,
  "price_per": "task",
  "location": { "lat": -33.8756, "lon": 151.204 },
  "description": "asdfasdf"
}'
curl -XPOST "localhost:9200/geo-test/_refresh"
echo
# Search, and calculate the distance to Brisbane
curl -XPOST "localhost:9200/geo-test/doc/_search?pretty=true" -d '{
  "query": {
    "match_all": {}
  },
  "script_fields": {
    "distance": {
      "script": "doc['\''location'\''].arcDistanceInKm(-27.470,153.021)"
    }
  },
  "fields": ["title", "location"]
}'
echo
Since you don't specify how you are parsing it, this question may shed some light:
Parsing through JSON in JSON.NET with unknown property names
