I have the following Python code that replaces low-precision temperatures in a list of JSON trees, ec2_tcs['zones'], with higher-precision temps from a generator, ec1_api.temperatures().
if CONF_HIGH_PRECISION:
    try:
        from evohomeclient import EvohomeClient as EvohomeClientVer1
        ec1_api = EvohomeClientVer1(client.username, client.password)

        for temp in ec1_api.temperatures(force_refresh=True):
            for zone in ec2_tcs['zones']:
                if str(temp['id']) == str(zone['zoneId']):
                    if zone['temperatureStatus']['isAvailable']:
                        zone['temperatureStatus']['temperature'] = temp['temp']
                    break

    # TypeError usually occurs in the client library if there are
    # problems with the vendor's website
    except TypeError:
        _LOGGER.warning(
            "Failed to obtain higher-precision temperatures"
        )
The JSON data looks like this (an array of objects, one per 'zone'):
[
    {
        'zoneId': '3432521',
        'name': 'Main Room',
        'temperatureStatus': {'temperature': 21.5, 'isAvailable': True},
        'setpointStatus': {'targetHeatTemperature': 5.0, 'setpointMode': 'FollowSchedule'},
        'activeFaults': [],
    }, {
        ...
    }
]
and each result from the generator looks like this:
{'thermostat': 'EMEA_ZONE', 'id': 3432521, 'name': 'Main Room', 'temp': 21.55, 'setpoint': 5.0}
I know Python must have a better way of doing this, but I can't seem to make it fly. Any suggestions would be gratefully received.
I could 'massage' the generator, but there are good reasons why the JSON tree's schema should remain unchanged.
The primary goal is to reduce the number of nested code blocks, ideally with a very fancy one-liner!
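One common pattern (a sketch, not the only way): materialise the generator into a dict keyed by zone id, then make a single pass over the zones. This assumes ec1_api and ec2_tcs are set up exactly as in the snippet above.
# Index the v1 temperatures by id, then update each zone in one pass.
temps = {str(t['id']): t['temp']
         for t in ec1_api.temperatures(force_refresh=True)}

for zone in ec2_tcs['zones']:
    if zone['temperatureStatus']['isAvailable'] and str(zone['zoneId']) in temps:
        zone['temperatureStatus']['temperature'] = temps[str(zone['zoneId'])]
This trades the nested zones-times-temps loop for two linear passes and leaves the JSON tree's schema untouched.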
I have the following datasets:
kpi = {
    "latency": 3,
    "cpu_utilisation": 0.98,
    "memory_utilisation": 0.95,
    "MIR": 200,
}
ns_metrics = {
    "timestamp": "2022-10-04T15:24:10.765000",
    "ns_id": "cache",
    "ns_data": {
        "cpu_utilisation": 0.012666666666700622,
        "memory_utilisation": 8.68265852766783,
    },
}
What I'm looking for is an elegant way to compare the cpu_utilisation and memory_utilisation values from each dictionary and, if the utilisation figures from ns_metrics are greater than those in kpi, print (for now) a message saying which utilisation value was greater, i.e. cpu, memory, or both. Naturally, I can do something simple like this:
if ns_metrics["ns_data"]["cpu_utilisation"] > kpi["cpu_utilisation"]:
    print("true: over cpu threshold")
if ns_metrics["ns_data"]["memory_utilisation"] > kpi["memory_utilisation"]:
    print("true: over memory threshold")
But this seems a bit long-winded with many if conditions, and I was hoping there is a more elegant way of doing it. Any help would be greatly appreciated.
Maybe you can use a loop to do this:
check_list = ["cpu_utilisation", "memory_utilisation"]
for i in check_list:
    if ns_metrics["ns_data"][i] > kpi[i]:
        print("true: over {} threshold".format(i.split('_')[0]))
If the key names differ between the two dicts, you can use a mapping dict instead, like this:
# assumes the mapping is {kpi key: ns_metrics key}
check_mapping = {"cpu_utilisation": "cpu_utilisation_1"}
for kpi_key, ns_key in check_mapping.items():
    if ns_metrics["ns_data"][ns_key] > kpi[kpi_key]:
        print("true: over {} threshold".format(kpi_key.split('_')[0]))
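If you also want a single message saying whether cpu, memory, or both were over threshold (as the question asks), one hedged sketch is to collect the offenders first:
# Gather every metric that exceeds its KPI, then report them together.
over = [k for k in ("cpu_utilisation", "memory_utilisation")
        if ns_metrics["ns_data"][k] > kpi[k]]
if over:
    print("true: over {} threshold".format(
        " and ".join(k.split("_")[0] for k in over)))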
So, I am writing a program in Python to fetch data from the Google Classroom API using the requests module. I am getting the full JSON response from the classroom as follows:
{'announcements': [
    {'courseId': '#############', 'id': '###########', 'text': 'This is a test', 'state': 'PUBLISHED',
     'alternateLink': 'https://classroom.google.com/c/##########/p/###########',
     'creationTime': '2021-04-11T10:25:54.135Z', 'updateTime': '2021-04-11T10:25:53.029Z',
     'creatorUserId': '###############'},
    {'courseId': '############', 'id': '#############', 'text': 'Hello everyone', 'state': 'PUBLISHED',
     'alternateLink': 'https://classroom.google.com/c/#############/p/##################',
     'creationTime': '2021-04-11T10:24:30.952Z', 'updateTime': '2021-04-11T10:24:48.880Z',
     'creatorUserId': '##############'},
    {'courseId': '##################', 'id': '############', 'text': 'Hello everyone', 'state': 'PUBLISHED',
     'alternateLink': 'https://classroom.google.com/c/##############/p/################',
     'creationTime': '2021-04-11T10:23:42.977Z', 'updateTime': '2021-04-11T10:23:42.920Z',
     'creatorUserId': '##############'}
]}
What I actually wish to do is request just the first few announcements (say 1, 2, or 3, depending on the requirement) from the service, whereas what I'm getting is every announcement (in the sample, all 3) made since the classroom was created. I believe that fetching all the announcements might make the program slower, so I would prefer to get only the required ones. Is there any way to do this by passing some arguments or anything? There are a few direct functions provided by Google Classroom, but I came across those a little later and have already written everything using the requests module; switching would require changing a lot of things, which I would like to avoid. However, if it's unavoidable, I would go that route as well.
Answer:
Use the pageSize parameter to limit the number of announcements returned by the announcements.list request, together with an orderBy parameter of updateTime asc.
More Information:
As per the documentation:
orderBy: string
Optional sort ordering for results. A comma-separated list of fields with an optional sort direction keyword. Supported field is updateTime. Supported direction keywords are asc and desc. If not specified, updateTime desc is the default behavior. Examples: updateTime asc, updateTime
and:
pageSize: integer
Maximum number of items to return. Zero or unspecified indicates that the server may assign a maximum.
So, let's say you want the first 3 announcements for a course, you would use a pageSize of 3, and an orderBy of updateTime asc:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
service = build('classroom', 'v1', credentials=creds)
asc = "updateTime asc"
pageSize = 3

# Call the Classroom API (course_id identifies the course to query;
# announcements.list requires it)
results = service.courses().announcements().list(
    courseId=course_id, pageSize=pageSize, orderBy=asc).execute()
or an HTTP request example:
GET https://classroom.googleapis.com/v1/courses/[COURSE_ID]/announcements
?orderBy=updateTime%20asc
&pageSize=2
&key=[YOUR_API_KEY] HTTP/1.1
Authorization: Bearer [YOUR_ACCESS_TOKEN]
Accept: application/json
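Since the question already fetches data with the requests module, here is a hedged sketch of the same call made directly over HTTP; COURSE_ID and ACCESS_TOKEN are placeholders you must supply yourself:
import requests

COURSE_ID = "your-course-id"        # placeholder
ACCESS_TOKEN = "your-oauth2-token"  # placeholder

# Ask for only the 3 oldest-updated announcements.
resp = requests.get(
    f"https://classroom.googleapis.com/v1/courses/{COURSE_ID}/announcements",
    params={"pageSize": 3, "orderBy": "updateTime asc"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Accept": "application/json"},
)
announcements = resp.json().get("announcements", [])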
References:
Method: announcements.list | Classroom API | Google Developers
I am using the geopy library for my Flask web app. I want to save the user's location, which I get from my modal (HTML form), in my database (I am using MongoDB), but every single time I am getting this error:
TypeError: Object of type 'Location' is not JSON serializable
Here's the code:
@app.route('/register', methods=['GET', 'POST'])
def register_user():
    if request.method == 'POST':
        login_user = mongo.db.mylogin
        existing_user = login_user.find_one({'email': request.form['email']})
        # final_location = geolocator.geocode(session['address'].encode('utf-8'))
        if existing_user is None:
            hashpass = bcrypt.hashpw(
                request.form['pass'].encode('utf-8'), bcrypt.gensalt())
            login_user.insert({'name': request.form['username'],
                               'email': request.form['email'],
                               'password': hashpass,
                               'address': request.form['add'],
                               'location': session['location']})
            session['password'] = request.form['pass']
            session['username'] = request.form['username']
            session['address'] = request.form['add']
            session['location'] = geolocator.geocode(session['address'])
            flash(f"You are Registered as {session['username']}")
            return redirect(url_for('home'))
        flash('Username is taken !')
        return redirect(url_for('home'))
    return render_template('index.html')
Please help, and let me know if you need more info.
According to the geopy documentation, the geocode function will "Return a location point by address", i.e. a geopy.location.Location object.
JSON serialization supports the following types by default:
Python           | JSON
-----------------|-------
dict             | object
list, tuple      | array
str, unicode     | string
int, long, float | number
True             | true
False            | false
None             | null
All other objects/types are not JSON serializable by default, and therefore you need to define how to serialize them yourself.
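For example, a minimal sketch of teaching json.dumps about Location via its default hook (the function name encode_location is hypothetical):
import json
from geopy.geocoders import Nominatim
from geopy.location import Location

def encode_location(obj):
    # Fallback encoder: represent a geopy Location by its raw dict.
    if isinstance(obj, Location):
        return obj.raw
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

geolocator = Nominatim(user_agent="my-app")
location = geolocator.geocode("175 5th Avenue NYC")
print(json.dumps({'location': location}, default=encode_location))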
geopy.location.Location.raw
Location’s raw, unparsed geocoder response. For details on this,
consult the service’s documentation.
Return type: dict or None
You might be able to use the raw property of the Location (the geolocator.geocode return value), and that value will be JSON serializable.
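Applied to the code in the question, a minimal sketch of that idea (assuming geolocator is configured as in the question; geocode returns None when nothing matches):
# Store the raw geocoder response, not the Location object itself.
loc = geolocator.geocode(request.form['add'])
session['location'] = loc.raw if loc else None  # plain dict (or None), JSON serializable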
Location is indeed not JSON serializable: there are many properties in this object and there is no single way to represent a location, so you'd have to choose one yourself.
What type of value do you expect to see in the location key of the response?
Here are some examples:
Textual address
In [9]: json.dumps({'location': geolocator.geocode("175 5th Avenue NYC").address})
Out[9]: '{"location": "Flatiron Building, 175, 5th Avenue, Flatiron District, Manhattan Community Board 5, Manhattan, New York County, New York, 10010, United States of America"}'
Point coordinates
In [10]: json.dumps({'location': list(geolocator.geocode("175 5th Avenue NYC").point)})
Out[10]: '{"location": [40.7410861, -73.9896298241625, 0.0]}'
Raw Nominatim response
(That's probably not what you want to expose in your API, assuming you want to preserve an ability to change geocoding service to another one in future, which might have a different raw response schema).
In [11]: json.dumps({'location': geolocator.geocode("175 5th Avenue NYC").raw})
Out[11]: '{"location": {"place_id": 138642704, "licence": "Data \\u00a9 OpenStreetMap contributors, ODbL 1.0. https://osm.org/copyright", "osm_type": "way", "osm_id": 264768896, "boundingbox": ["40.7407597", "40.7413004", "-73.9898715", "-73.9895014"], "lat": "40.7410861", "lon": "-73.9896298241625", "display_name": "Flatiron Building, 175, 5th Avenue, Flatiron District, Manhattan Community Board 5, Manhattan, New York County, New York, 10010, United States of America", "class": "tourism", "type": "attraction", "importance": 0.74059885426854, "icon": "https://nominatim.openstreetmap.org/images/mapicons/poi_point_of_interest.p.20.png"}}'
Textual address + point coordinates
In [12]: location = geolocator.geocode("175 5th Avenue NYC")
...: json.dumps({'location': {
...: 'address': location.address,
...: 'point': list(location.point),
...: }})
Out[12]: '{"location": {"address": "Flatiron Building, 175, 5th Avenue, Flatiron District, Manhattan Community Board 5, Manhattan, New York County, New York, 10010, United States of America", "point": [40.7410861, -73.9896298241625, 0.0]}}'
I want to write some lists to data.txt.
The output from the program is:
Triangle
('(a1, b1)', '(a2, b2)', '(a3, b3)')
Triangle
('(a4, b4)', '(a5, b5)', '(a6, b6)')
With these lines of code to write to data.txt:
data = {}
data['shapes'] = []

data['shapes'].append({
    'name': str(triangle.name),
    'Vertices': list(triangle.get_points()),
})
I need the output in my data.txt in JSON format, like this:
{"shapes": [{"name": "Triangle", "Vertices": ["(a1, b1)", "(a2, b2)", "(a3, b3)"]}, {"name": "Triangle", "Vertices": ["(a4, b4)", "(a5, b5)", "(a6, b6)"]}]}
But this is what I get:
{"shapes": [{"name": "Triangle", "Vertices": ["(a4, b4)", "(a5, b5)", "(a6, b6)"]}]}
So, how can I also write the earlier triangle, the one with vertices (a1, b1)...(a3, b3)?
This part of your code should be executed only once:
data = {}
data['shapes'] = []
The following part of your code should be executed repeatedly,
data['shapes'].append({
    'name': str(triangle.name),
    'Vertices': list(triangle.get_points()),
})
probably in a loop similar to this one:
for triangle in triangles:
    data['shapes'].append({
        'name': str(triangle.name),
        'Vertices': list(triangle.get_points()),
    })
It seems like you're overwriting the variable referencing the first triangle object with the next triangle object before appending the first triangle object's information to data['shapes'].
That block of code where you append to your data['shapes'] should be executed twice, once for each triangle object.
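Putting it together, a minimal end-to-end sketch, assuming triangles is an iterable of objects exposing .name and .get_points() as the question implies:
import json

data = {'shapes': []}            # built once
for triangle in triangles:       # one append per triangle
    data['shapes'].append({
        'name': str(triangle.name),
        'Vertices': list(triangle.get_points()),
    })

with open('data.txt', 'w') as f:
    json.dump(data, f)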
I have a database with documents that are roughly of the form:
{"created_at": some_datetime, "deleted_at": another_datetime, "foo": "bar"}
It is trivial to get a count of non-deleted documents in the DB, assuming that we don't need to handle "deleted_at" in the future. It's also trivial to create a view that reduces to something like the following (using UTC):
[
    {"key": ["created_at", 2012, 7, 30], "value": 39},
    {"key": ["deleted_at", 2012, 7, 31], "value": 12},
    {"key": ["created_at", 2012, 8, 2], "value": 6}
]
...which means that 39 documents were marked as created on 2012-07-30, 12 were marked as deleted on 2012-07-31, and so on. What I want is an efficient mechanism for getting the snapshot of how many documents "existed" on 2012-08-01 (0+39-12 == 27). Ideally, I'd like to be able to query a view or a DB (e.g. something that's been precomputed and saved to disk) with the date as the key or index, and get the count as the value or document. e.g.:
[
    {"key": [2012, 7, 30], "value": 39},
    {"key": [2012, 7, 31], "value": 27},
    {"key": [2012, 8, 1], "value": 27},
    {"key": [2012, 8, 2], "value": 33}
]
This can be computed easily enough by iterating through all of the rows in the view, keeping a running counter and summing up each day as I go, but that approach slows down as the data set grows larger, unless I'm smart about caching or storing the results. Is there a smarter way to tackle this?
Just for the sake of comparison (I'm hoping someone has a better solution), here's (more or less) how I'm currently solving it, in untested Ruby pseudocode:
require 'date'

def date_snapshots(rows)
  current_date = nil
  current_count = 0
  rows.inject({}) {|hash, reduced_row|
    type, *ymd = reduced_row["key"]
    this_date = Date.new(*ymd)
    if current_date
      # deal with the days where nothing changed
      (current_date.succ ... this_date).each do |date|
        key = date.strftime("%Y-%m-%d")
        hash[key] = current_count
      end
    end
    # update the counter and deal with the current day
    current_date = this_date
    current_count += reduced_row["value"] if type == "created_at"
    current_count -= reduced_row["value"] if type == "deleted_at"
    key = current_date.strftime("%Y-%m-%d")
    hash[key] = current_count
    hash
  }
end
Which can then be used like so:
rows = couch_server.db(foo).design(bar).view(baz).reduce.group_level(3).rows
date_snapshots(rows)["2012-08-01"]
An obvious small improvement would be to add a caching layer, although it isn't quite as trivial to make that caching layer play nicely with incremental updates (e.g. the changes feed).
I found an approach that seems much better than my original one, assuming that you only care about a single date:
def size_at(date=Time.now.to_date)
  ymd = [date.year, date.month, date.day]
  added = view.reduce.
    startkey(["created_at"]).
    endkey(  ["created_at", *ymd, {}]).rows.first || {}
  deleted = view.reduce.
    startkey(["deleted_at"]).
    endkey(  ["deleted_at", *ymd, {}]).rows.first || {}
  added.fetch("value", 0) - deleted.fetch("value", 0)
end
Basically, let CouchDB do the reduction for you. I didn't originally realize that you could mix and match reduce with startkey/endkey.
Unfortunately, this approach requires two hits to the DB (although those could be parallelized or pipelined). And it doesn't work as well when you want to get a lot of these sizes at once (e.g. view the whole history, rather than just look at one date).
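For the whole-history case, the one-pass running sum above is still the way to go. For comparison, here is a compact Python sketch of the same idea (gap days with no changes are omitted for brevity), assuming rows shaped like the grouped view output:
def date_snapshots(rows):
    # rows: [{"key": ["created_at", y, m, d], "value": n}, ...]
    count = 0
    snapshots = {}
    for row in rows:
        kind, y, m, d = row["key"]
        count += row["value"] if kind == "created_at" else -row["value"]
        snapshots["%04d-%02d-%02d" % (y, m, d)] = count
    return snapshots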