Nest API access for Time-to-Temperature, Optimized Start, Optimized Stop and Thermostat-level Away - nest-api

I'm using the Nest API to monitor an install with 5 Nest Thermostats.
Having reviewed the Nest docs, I seem unable to access the following data. Is there any way to access it via the official API?
Time to temperature - the time the Nest thinks it will take to reach the target temperature
Optimized Start (Mode) - the Nest pre-heats ahead of a desired setpoint because it has learnt how long the room takes to heat
Optimized Stop (Mode) - the Nest has stopped calling for heat because it knows the room will retain heat until the next setpoint
Away/Home per thermostat - the API only seems to provide Away information at the structure (home) level, but I'm specifically interested in occupancy on a per-thermostat basis, e.g. is someone in that room
All feedback appreciated.

Those fields are not currently part of the API.
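For reference, the away state that is exposed lives at the structure level only. A minimal sketch of reading it, assuming the Works with Nest REST endpoint and a valid OAuth access token (the token value here is a hypothetical placeholder):

    import requests

    TOKEN = "c.your-access-token"  # hypothetical; obtained via the Nest OAuth flow
    AUTH = {"Authorization": "Bearer " + TOKEN}

    # 'away' is only exposed per structure, not per thermostat.
    resp = requests.get("https://developer-api.nest.com/structures",
                        headers=AUTH, allow_redirects=False)

    # The Nest API answers with a 307 redirect to another host, and
    # requests drops the Authorization header across hosts, so follow
    # the redirect manually.
    if resp.status_code == 307:
        resp = requests.get(resp.headers["Location"], headers=AUTH)

    for structure_id, structure in resp.json().items():
        print(structure_id, structure.get("away"))  # "home" or "away"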

Related

Best way to implement background “timer” functionality in Python/Django

I am trying to implement a Django web application (on Python 3.8.5) which allows a user to create “activities” where they define an activity duration and then set the activity status to “In progress”.
The POST action to the View writes the new status, the duration and the start time (end time, based on start time and duration is also possible to add here of course).
The back-end should then keep track of the duration and automatically change the status to “Finished”.
User actions can also change the status to “Finished” before the calculated end time (i.e. the timer no longer needs to be tracked).
I am fairly new to Python, so I need some advice on the smartest way to implement such a concept.
It needs to be efficient and scalable – I’m currently using a Heroku Free account so have limited system resources, but efficiency would also be important for future production implementations of course.
I have looked at the Python threading Timer, and this seems to work on a basic level, but I’ve not been able to determine what kind of constraints this places on the system – e.g. whether the spawned Timer thread might prevent the main thread from finishing and releasing resources (i.e. Heroku Dyno threads), etc.
I have read that persistence might be a problem (if the server goes down), and I haven’t found a way to cancel the timer from another process (the .cancel() method seems to rely on having the original object to cancel, and I’m not sure if this is achievable from another process).
I was also wondering about a more “background” approach, i.e. a single process which is constantly checking the database looking for activity records which have reached their end time and swapping the status.
But what would be the best way of implementing such a server?
Is it practical to read the database every second to find records with an end time of “now”? I need the status to change in real-time when the end time is reached.
Is something like Celery a good option, or is it overkill for a single process like this?
As I said I’m fairly new to these technologies, so I may be missing other obvious solutions – please feel free to enlighten me!
Thanks in advance.
To achieve this you need some kind of task-scheduling functionality. For a quick, simple implementation, the Timer object from the threading module is a good solution.
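At its simplest that looks something like this (a sketch; Activity and its duration field are hypothetical stand-ins for your model, and note the timer only lives as long as the process, so it won't survive a dyno restart):

    from threading import Timer
    from myapp.models import Activity  # hypothetical model

    def finish_activity(activity_id):
        # Re-check the status: the user may have finished it manually.
        Activity.objects.filter(
            pk=activity_id, status="In progress"
        ).update(status="Finished")

    # In the POST view, after saving the new activity:
    Timer(activity.duration.total_seconds(),
          finish_activity, args=[activity.pk]).start()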
A more complete solution is to use Celery. If you are new to it, digging in will pay off: Celery acts as a queue manager, distributing your work easily across several threads or processes.
You mentioned that you want it to be efficient and scalable, so I expect you will end up implementing similar functionality that requires multiprocessing and scheduling; for that reason my recommendation is to use Celery.
You can integrate it into your Django application easily by following the documentation: Integrate Django with Celery.
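A sketch of the Celery version of the same idea (again, Activity is a hypothetical model; countdown schedules the task for the activity's end time, and persistence is delegated to the broker):

    from celery import shared_task
    from myapp.models import Activity  # hypothetical model

    @shared_task
    def finish_activity(activity_id):
        # Only flip the status if the user hasn't already finished it.
        Activity.objects.filter(
            pk=activity_id, status="In progress"
        ).update(status="Finished")

    # In the POST view, after saving the new activity:
    result = finish_activity.apply_async(
        args=[activity.pk],
        countdown=activity.duration.total_seconds(),
    )
    # Storing result.id on the model gives you a handle to revoke the
    # task later if the user finishes the activity early.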

Best practices for internal api calls to external apis with buffer

I have different external APIs doing basically the same thing, but each in its own way: adding product information (ext_api).
I would like to make an adapter API that would call, behind the scenes, the different external APIs (adapter_api).
My problem is the following: the external APIs are optimised for calls with a batch of product attributes, whereas my API would be called on a product-by-product basis.
I would like to somehow keep a buffer of product attributes that grows as my adapter_api is called. When the number of product attributes reaches a certain limit, the ext_api would be called and the buffer reset, ready to receive more product attributes.
I'm wondering how to achieve that. I was thinking of making a REST API in Python that would store the buffer of product attributes. I would like this REST API to be able to scale on a Kubernetes cluster: it would need low latency, and several instances of this API would write to the buffer of products until one of them reaches the limit and makes the call to the external API.
Are there any best practices concerning the buffer in this use case? To add some extra information: my main purpose here is to hide from internal business APIs (not shown) the complexity of calling many different external APIs, each of which has its own rules and credentials.
Thank you very much for your help.
You didn't tell us your performance evaluation criteria.
You did tell us this:
don't know how to store the buffer: I would like to avoid databases or files.
which makes little sense, since there's a simple answer to this question:
Are there any best practices for this use case?
Yes. The best practice is to append requests to buffer.txt and send the batch when that file exceeds some threshold. A convenient way to implement the threshold would be to send when getsize() reports a large enough value.
If requests are of quite different sizes and the batch size really matters to you, then append a single byte to a second file, and use the size of that file to indicate how many entries are enqueued.
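A minimal sketch of that (send_batch() is a hypothetical stand-in for the ext_api call):

    import json
    import os

    BUFFER = "buffer.txt"
    THRESHOLD = 64 * 1024  # bytes; tune to the batch size ext_api likes

    def enqueue(product_attrs):
        # One JSON document per line; appends are cheap and the file
        # itself is the persistent buffer.
        with open(BUFFER, "a") as f:
            f.write(json.dumps(product_attrs) + "\n")
        if os.path.getsize(BUFFER) >= THRESHOLD:
            flush()

    def flush():
        with open(BUFFER) as f:
            batch = [json.loads(line) for line in f]
        send_batch(batch)  # hypothetical ext_api call
        os.remove(BUFFER)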
Requirements
The heart of your question seems to revolve around what was left unsaid:
What is the cost function for sending too many "small" batches to ext_api?
What is the cost function for the consumer of the adapter_api? What does it care about? Low-latency return, perhaps?
If ext_api permanently fails (say, a day of downtime), do we have some responsibility for quickly notifying the consumer that its updates are going into a black hole?
And why would using the filesystem be inappropriate? It seems a perfect match for your needs.
Consider using a global in-memory object, such as a list or queue, for the batch you're accumulating. You might want to protect accesses with a lock.
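For instance (a sketch; send_batch() again stands in for the hypothetical ext_api call, and the lock makes the buffer safe across request-handler threads within one instance):

    import threading

    BATCH_LIMIT = 100  # flush to ext_api once this many items accumulate
    _batch = []
    _lock = threading.Lock()

    def enqueue(product_attrs):
        ready = None
        with _lock:
            _batch.append(product_attrs)
            if len(_batch) >= BATCH_LIMIT:
                ready = list(_batch)
                _batch.clear()
        # Call the external API outside the lock so other threads
        # aren't blocked for the duration of the network round trip.
        if ready is not None:
            send_batch(ready)  # hypothetical ext_api call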
Maybe your client doesn't really want a one-product-at-a-time API. Maybe you'd prefer to have your client accumulate items, sending only when its batch size is big enough.

How do you run operations on another thread in flutter?

I have a Flutter app using the geolocator plugin to retrieve coordinate data while the user types in the address. I can see some lag on the screen as I type on my phone, and in my console I see an error that it skipped some number of frames and is doing too much work on its main thread. I plan to switch to using an API from Google instead. I also get this error while I upload images to Firebase (I haven't restricted size yet); I've seen the error pop up randomly, but mostly in these two cases. What is the proper way to run operations on another thread in Flutter? Unless I should be doing something else.
You should create a new Isolate, which corresponds to a new thread.
I suggest you read this article from Didier Boelens' blog, which is very clear about all these concepts.

Using Google map objects within a web worker?

The situation:
Too much stuff is running in the main thread of a page that builds a Google map with overlays representing ZIP territories, coming from US census data, and that groups the territories into discrete groups as the client has asked. While there is no major issue on desktops, mobile devices (iPad) decide that the thread is taking too long (a max of 6 seconds after the data returns) and therefore must have crashed.
Solution: Offload the looping function that gathers the points for the shape from each row to a web worker, which can work as fast or slow as resources allow on a mobile device. (Three for loops: the 1st to select the row, the 2nd to select the column, the 3rd for each point within the column. Execution time: a matter of 3-6 seconds total for 2000+ rows with numerous points.)
The catch: For this to be properly efficient, the points must be made into a shape (polygon) within the web worker. HOWEVER, since it is a google.maps.polygon object made up of google.maps.latlng objects, the web worker needs to have some knowledge of what those items are. Web workers require you not to use window or the DOM, so the worker must import the script, and the intent was to pass back just the object as a JSON-encoded item. The code fails on any reference to Google objects, even with importScripts(), because those items rely on the window element.
Further complications: Google's API is technically proprietary. The web app code this is for is bound by an NDA, so pointed questions can be asked, but not a copy/paste of all the code.
The solution/any vague ideas:???
TLDR: Need to access the google.maps.latlng object and (at minimum) create new instances of it within a web worker. The web worker should either return objects ready to be popped into a google.maps.polygon object or return a google.maps.polygon object itself. How do I reference the Google Maps API if I cannot use the default method of importing scripts, due to an issue requiring the window object?
UPDATE: Since writing this, I've managed to offload the majority of the grunt work from the main thread to the web worker, allowing it to parse through the data asynchronously and assign the data to a custom-made latlng object.
The catch now is getting the returned values to run the function in the proper context to see if the custom latlng is sufficient for google.maps.polygon to work its magic.
Excerpt from the file that calls the web worker and listens for its response (CoffeeScript):
    shapeWorker.onmessage = (event) ->
        console.log "--------------------TESTING---------------"
        data = JSON.parse(event.data)
        console.log data
        generateShapes(data.poly, data.center, data.zipNum)
For some reason, it's trying to evaluate generateShapes in the context of the web worker rather than in the context of the class it's in.
Once again, it was a complication of too many things going on at once. The scope was restricted due to the usage of -> rather than =>, which expands the scope to allow access to the parent class's functions.
Apparently the issue resided with the version of iOS this web app needed to run on and a bug with the storage being set arbitrarily low (a tenth of its previous size). With some shrinking of the data and a fix to the iOS version in question, I was able to get it running without the use of web workers. One day I may be able to come back to it with web workers to increase efficiency.

Probability distribution for SMS answer delays

I'm writing an app that uses SMS for communication.
I have chosen to subscribe to an SMS gateway, which provides me with an API for doing so.
The API has functions for sending as well as pulling new messages. It does, however, not have any kind of push functionality.
In order to do my queries most efficiently, I'm seeking data on how long people wait before they answer a text message - as a probability function.
Extra info:
The application is interactive (as much as it can be), so I suppose the times will be pretty similar to real-life human-to-human communication.
I don't believe differences in personal style will have a big impact on the right times and frequencies to query, so average data should be fine.
Update
I'm impressed and honored by the many great answers received. I have concluded that my best shot will be a few adaptable heuristics, including exponential (or maybe polynomial) backoff.
All along I will be gathering statistics for later analysis. Maybe something will show up. I think I will get a head start on the algorithm for generating poll frequencies from a probability distribution. That'll be fun.
Thanks again many times.
In the absence of any real data, the best solution may be to write the code so that the application adjusts the wait time based on current history of response times.
Basic Idea as follows:
Step 1: Set initial frequency of pulling once every x seconds.
Step 2: Pull messages at the above frequency for y duration.
Step 3: If you discover that messages are always waiting for you when you pull, decrease x; otherwise, increase x.
Several design considerations:
Adjust forever or stop after sometime
You can repeat steps 2 and 3 forever in which case the application dynamically adjusts itself according to sms patterns. Alternatively, you can stop after some time to reduce application overhead.
Adjustment criteria: Per customer or across all customers
You can choose to do the adjustment in step 3 on a per-customer basis or across all customers.
I believe Gmail's SMTP service works along the same lines.
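A minimal sketch of the loop described in the steps above (fetch_messages and handle are hypothetical stand-ins for your gateway client and message handler; the bounds keep x from running away in either direction):

    import time

    def poll_loop(fetch_messages, handle, x=10.0, min_x=1.0, max_x=300.0):
        # x is the current wait between pulls, in seconds.
        while True:
            messages = fetch_messages()
            if messages:
                handle(messages)
                x = max(min_x, x * 0.5)  # messages were waiting: pull sooner
            else:
                x = min(max_x, x * 1.5)  # nothing there: back off
            time.sleep(x)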
Well, I would suggest finding some statistics on daily SMS/text messaging usage by geographical location and age group and coming up with a daily average. It won't be an exact measurement for everyone, though.
Good question.
Consider that people might have multiple tasks and that answering a text message might be one of those tasks. If each of those tasks takes an amount of time that is exponentially distributed, the time to get around to answering the text message is the sum of those task completion times. The sum of n iid exponential random variables has a Gamma distribution.
The number of tasks ahead of the text reply also has a discrete distribution - let's say it's Poisson. I don't have the time to derive the resulting distribution, but simulating it using @Risk, I get either a Weibull or Gamma distribution.
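If you don't have @Risk handy, the same simulation is a few lines of numpy (all parameters are made-up illustrations, not real data):

    import numpy as np

    rng = np.random.default_rng(0)
    n_replies = 100_000
    mean_tasks = 3.0       # Poisson mean: tasks queued ahead of the reply
    mean_task_time = 20.0  # seconds per task, exponentially distributed

    # Number of tasks to finish before the text gets answered,
    # per simulated person; +1 counts the reply itself.
    n_tasks = rng.poisson(mean_tasks, n_replies) + 1

    # Each delay is a sum of exponential task times: Gamma-distributed
    # given the count, mixed over the Poisson counts.
    delays = rng.gamma(shape=n_tasks, scale=mean_task_time)

    print(delays.mean(), np.percentile(delays, [50, 90, 99]))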
SMS is a store-and-forward messaging service, so you have to add in the delay that can be introduced by the various SMSCs (Short Message Service Centers) along the way. If you are connecting to one of the big aggregation houses (Sybase, TNS, mBlox, etc.) or commercial bulk SMS providers (Clickatell, etc.), then you need to allow for the message to traverse their network as well as the carrier's network. If you are using a smaller shop, most likely they are using a GSM modem (or modems), and there is a throughput limit on the messages they can receive and process (as well as push out).
All that said, if you are using a direct connection or one of the big guys, MO (mobile-originated) messages coming to you as a CP (content provider) should take less than 5 seconds. Add to that the time it takes the mobile subscriber to reply.
I would say, from anecdotal evidence from services I've worked on before, that where the mobile subscriber needs to provide a simple reply, it's usually within 10 seconds or not at all.
If you are polling for specific replies, I would poll at 5 and 10 seconds and then apply an exponential back-off (sketched at the end of this answer).
All of this is from a North American point of view. Europe will be fairly close, but places like Africa and Asia will be a bit slower, as the networks are a bit slower (unless you are connected directly to the operator, and even then some of them are slow).
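That polling schedule is simple to generate (a sketch; the 600-second cap is an added assumption so polling never stops entirely):

    def poll_times(cap=600):
        # Yields seconds-since-send at which to poll: 5, 10, then
        # doubling (exponential back-off) up to the cap.
        yield 5
        t = 10
        while t < cap:
            yield t
            t *= 2
        yield cap

    # list(poll_times()) -> [5, 10, 20, 40, 80, 160, 320, 600]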
