I have JSON data like:
{
  "cse_thumbnail": [
    {
      "width": "188",
      "height": "268",
      "src": "http://abc.dk"
    }
  ],
  "metatags": [
    {
      "referrer": "origin-when-cross-origin",
      "og:image": "http://def.dk"
    }
  ],
  "cse_image": [
    {
      "src": "http://ghi.dk"
    }
  ]
}
There are 3 arrays in the JSON. I want to check whether the corresponding keys exist when I get the response:
cse_thumbnail
metatags
cse_image
I've tried all the key/value pair checks in Python I could find (hasattr, key in list, etc.), but none of them work.
Please help me get this resolved.
You can use the in operator to check whether a key is present in the parsed dictionary:
import json
with open('data.json') as thing:
    data = json.load(thing)
keys = ('cse_thumbnail', 'metatags', 'cse_image')
for key in keys:
    print(key in data)
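If you need all three keys to be present before you use the response, a small follow-up check on the same data dictionary could look like this (the field access in the print is just an illustration based on the sample JSON above):
required = ('cse_thumbnail', 'metatags', 'cse_image')
if all(key in data for key in required):
    # Safe to index into the arrays now that the keys are known to exist.
    print(data['cse_thumbnail'][0]['src'])
else:
    missing = [key for key in required if key not in data]
    print('missing keys:', missing)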
I am in no way an expert with Groovy, so please don't hold that against me.
I have JSON that looks like this:
{
  "metrics": [
    {
      "name": "metric_a",
      "help": "This tracks your A stuff.",
      "type": "GAUGE",
      "labels": [
        "pool"
      ],
      "unit": "",
      "aggregates": [],
      "meta": [
        {
          "category": "CAT A",
          "deployment": "environment-a"
        }
      ],
      "additional_notes": "Some stuff (potentially)"
    },
    ...
  ]
  ...
}
I'm using it as the source for automated documentation of all the metrics, so I'm iterating through it in various ways to get the information I need. So far so good; I'm most of the way there. The problem is that it all needs to be organized per deployment environment, meaning multiple metrics will share the same value for deployment.
My thought was that I could create a map with deployment as the key and, as the value, the name of every metric with a matching deployment. Once I have that map, it should be easy for me to organize things the way they should be, but I can't figure out how to do that. The result is that all the metric names are added, which is expected since I'm not doing anything to filter them out. I was thinking that groupBy would make sense here, but I can't figure out how to use it effectively, and frankly I'm not sure it will solve my problem by itself. Here is my code so far:
parentChild = [:]
children = []
metrics.each { metric ->
    def metricName = metric.name
    def depName = metric.meta.findResult{ it.deployment }
    children.add(metricName)
    parentChild.put(depName, children)
}
What is the best way to create a new map where the values for each key are based on a specific condition?
EDIT: The desired result would be a map in which each key is a unique deployment value from all the metrics (as a string), and each value is the name of every metric that contains that deployment (as an array).
[environment-a: [metric_a, metric_b, metric_c, ...],
 environment-b: [metric_d, metric_e, metric_f, ...],
 ...]
I would use a combination of withDefault() to pre-fill each map-entry value with a fresh TreeSet instance (a sorted, no-duplicates set) and standard inject().
I reduced your sample data to the bare minimum and added some new nodes:
import groovy.json.*
String input = '''\
{
  "metrics": [
    {
      "name": "metric_a",
      "meta": [
        {
          "deployment": "environment-a"
        }
      ]
    },
    {
      "name": "metric_b",
      "meta": [
        {
          "deployment": "environment-a"
        }
      ]
    },
    {
      "name": "metric_c",
      "meta": [
        {
          "deployment": "environment-a"
        },
        {
          "deployment": "environment-b"
        }
      ]
    },
    {
      "name": "metric_d",
      "meta": [
        {
          "deployment": "environment-b"
        }
      ]
    }
  ]
}'''
def json = new JsonSlurper().parseText input
def groupedByDeployment = json.metrics.inject( [:].withDefault{ new TreeSet() } ){ res, metric ->
    metric.meta.each{ res[ it.deployment ] << metric.name }
    res
}
assert groupedByDeployment.toString() == '[environment-a:[metric_a, metric_b, metric_c], environment-b:[metric_c, metric_d]]'
If your metrics.meta array is supposed to have a single value, you can simplify the code by replacing the line:
metric.meta.each{ res[ it.deployment ] << metric.name }
with
res[ metric.meta.first().deployment ] << metric.name
Heyo, I want to get data from a JSON file, but it says undefined.
This is my JSON file:
[
  {
    "channel": [
      "960229917264584736"
    ],
    "info": {
      "cooldown": 3000
    }
  },
  {
    "channel": [
      "960229880405053473"
    ],
    "info": {
      "cooldown": 6000
    }
  }
]
And here is how I'm trying to get it:
let channels = JSON.parse(fs.readFileSync("./channels.json"));
console.log(channels.channel)
Thanks for helping^^
The error in the code comes down to a wrong mapping between the JSON and the JavaScript object.
Note that in the JavaScript code, channels corresponds to a JSON array containing multiple channel objects, like this:
let channels = [{channel1}, {channel2}] // the file JSON is an array [{},{}]
So to access a channel you will need to go to the following path:
channels[0].channel // 0 can be replaced with any valid index number
// output: ["960229917264584736"]
To get a channel id:
channels[0].channel[0]
// output: "960229917264584736"
I am new to using InfluxDB (v1.7.4), and I am using its Python module (influxdb-python).
I want to write data in bulk into InfluxDB, but I am unable to get the correct output when using the write_points method.
point = [
    {
        "fields": {
            "PATH": "/",
            "DISK_USED_PERCENT": "10"
        },
        "measurement": "xxxxxxx"
    },
    {
        "fields": {
            "PATH": "/xxxxxxxxx",
            "DISK_USED_PERCENT": "0"
        },
        "measurement": "xxxxxxx"
    }
]
client.write_points(point)
I want to add multiple points in one go. As per the documentation, this can be done by providing a list of dictionaries.
But only the last item from the list seems to be written, and the ResultSet I get back contains just that point.
ResultSet({'('xxxxxxx', None)': [{'time': '2021-05-10T09:38:20.818555Z', 'DISK_USED_PERCENT': '0', 'PATH': '/xxxxxxxxx'}]})
Any leads would be appreciated.
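A likely cause is that both points share the same measurement, carry no tags, and receive the same server-assigned timestamp, so the second point overwrites the first within the same series. A minimal sketch, assuming a local InfluxDB and a hypothetical database named mydb, that keeps the points distinct by moving PATH into the tag set (an explicit, distinct time value per point would work as well):
from influxdb import InfluxDBClient
client = InfluxDBClient(host="localhost", port=8086, database="mydb")  # assumed connection details
points = [
    {
        "measurement": "xxxxxxx",
        "tags": {"PATH": "/"},  # a distinct tag set keeps each point in its own series
        "fields": {"DISK_USED_PERCENT": "10"},
    },
    {
        "measurement": "xxxxxxx",
        "tags": {"PATH": "/xxxxxxxxx"},
        "fields": {"DISK_USED_PERCENT": "0"},
    },
]
client.write_points(points)
Tagging by PATH also makes it easy to GROUP BY or filter on the path later, since tags are indexed.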
I am new to Elasticsearch. I want to index a JSON file and perform search queries against Elasticsearch.
How can I index this JSON and run queries that return the value when I pass a parameter such as "field3.innerfield": "someval"?
I have tried indexing this file with helpers.bulk and then searching, but the search returns all the fields instead of only the selected field.
Below is the JSON sample:
[
  {
    "id": "someid",
    "metadata": {
      "docType": "value",
      "otherfield": " ",
      morefields
      .
      .
    },
    "field1": "value1",
    "field2": "value2",
    "field3": [
      {
        "innerfield": "someval",
        "innerfield1": [
          "kind of a paragraph"
        ]
      }
    ],
    "field4": [
      {
        "innerfield": "someval",
        "innerfield1": "kind of a paragraph"
      }
    ]
  },
  { again the format repeats with different id but same fields
  },
  {
  }
]
Your question lacks clarity; however, what I understood is that you want to fetch a value by its key from a nested JSON document. You can do that as shown below.
Parse the JSON and walk down to the required key, adjusting as per your need.
import json
# Assuming data is a pandas Series of raw JSON strings: parse each one and pull out metadata.docType (None when the row is empty).
data = data.apply(lambda x: json.loads(x).get("metadata", {}).get("docType") if x else None)
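For the search part of the question, a minimal sketch with the official Python client (assuming elasticsearch-py 7.x, a local cluster on the default port, and a hypothetical index name myindex) that matches on field3.innerfield and uses _source filtering so only the selected fields come back:
from elasticsearch import Elasticsearch
es = Elasticsearch()  # assumes a local cluster on the default port
resp = es.search(
    index="myindex",  # hypothetical index name
    body={
        "query": {"match": {"field3.innerfield": "someval"}},
        "_source": ["id", "field3.innerfield"],  # return only these fields per hit
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])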
I have a nested dictionary with keys and values as shown below.
j = {
    "app": {
        "id": 0,
        "status": "valid",
        "Garden": {
            "Flowers": {
                "id": "1",
                "state": "fresh"
            },
            "Soil": {
                "id": "2",
                "state": "stale"
            }
        },
        "BackYard": {
            "Grass": {
                "id": "3",
                "state": "dry"
            },
            "Soil": {
                "id": "4",
                "state": "stale"
            }
        }
    }
}
Currently, I have a Python method which returns the route of keys needed to reach a value. For example, if I want to access the value "1", the method returns a list of strings with the key route to "1"; it would therefore return ["app", "Garden", "Flowers"].
I am designing a service using Flask, and I want to be able to return a JSON output such as the following, based on that route of keys:
{
  "id": "1",
  "state": "fresh"
}
The Problem:
I am unsure how to output the result shown above, since I will need to walk the dictionary "j" in order to build it.
I tried something like the following.
def build_dictionary(key_chain):
    d_temp = list(d.keys())[0]
    # ...unsure how to continue
    # Here key_chain contains the ["app", "Garden", "Flowers"] sent from the method which parses the dictionary to store the key route to the value, in this case "1".
Can someone please help me build the dictionary which I would send to the jsonify method? Any help would be appreciated.
Hope this is what you are asking for:
def build_dictionary(key_chain, j):
    for k in key_chain:
        j = j.get(k)
    return j
kchain = ["app","Garden", "Flowers"]
>>> build_dictionary(kchain, j)
{'id': '1', 'state': 'fresh'}
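If you want to expose this through Flask, here is a minimal sketch (the endpoint name and URL scheme are made up for illustration, and j is the dictionary from the question):
from flask import Flask, jsonify
app = Flask(__name__)
@app.route("/node/<path:key_path>")
def node(key_path):
    # e.g. GET /node/app/Garden/Flowers -> {"id": "1", "state": "fresh"}
    return jsonify(build_dictionary(key_path.split("/"), j))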