Nest-API: Nest Protect and Nest Thermostat Occupancy Sensor Data - nest-api

Is it possible to access data about the occupancy sensor on the Nest Protect and Thermostat, i.e. the last recorded occupancy event?
I've had a look at the API reference but have had no luck:
https://developers.nest.com/documentation/api-reference/

Related

Azure Time Series (TSI) initial considerations and best practices

My apologies for the bad title!
I am in the initial phase of designing an Azure Time Series Insights (TSI) solution and I have run into a number of uncertainties. The background for getting into TSI is that we currently have a rather badly designed Cosmos DB which contains close to 1 TB of IoT data, and it is growing by the minute. By "badly" I mean that the partition key was designed in such a manner that we do not have control of the size of the partitions. Knowing that there is a limit of 10 GB(?) per partition key, we will soon run out of space and need to come up with a new solution. Also, when running historical queries on the Cosmos DB, it does not respond within an acceptable time frame, and experiments with throughput calculations and changes have not improved the response time to an acceptable level.
We are in the business of logging IoT time series data, including metadata, from different sensors. We have a number of clients with anywhere from 30 to 300 sensors each - smaller and larger clients. On the client side the sensors are grouped into locations and sub-locations.
An example of an event could be something like this:
{
  deviceId,
  datetime,
  clientId,
  locationId,
  sub-locationId,
  sensor,
  value,
  metadata: {}
}
Knowing how to better design a partition key in Cosmos DB, would the approach described below also be considered good practice in TSI when composing the TimeSeriesId?
In a totally different Cosmos DB solution we have included eventDate.datepart(YYYY-MM) as part of the partition key to stop partitions from growing out of bounds and to better predict the response time of queries within one partition.
Or will TSI handle time series data differently, making the date part in the TimeSeriesId obsolete?
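For illustration, a minimal sketch of that kind of date-bounded synthetic key (TypeScript; the field names and the clientId component are examples only, not our actual schema):

interface IoTEvent {
  clientId: string;
  deviceId: string;
  datetime: string; // ISO 8601, e.g. "2019-05-14T09:30:00Z"
}

// Combine a stable component with the event month so a single logical key
// never grows without bound.
function partitionKeyFor(event: IoTEvent): string {
  const month = event.datetime.substring(0, 7); // "YYYY-MM"
  return `${event.clientId}_${month}`;          // e.g. "client42_2019-05"
}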
With TSI API queries in mind, should I also consider the simplicity of the composed TimeSeriesId? As far as I can tell, the TimeSeriesId has to be provided in the body of each API request, and when composing a query in a back-end service I do have access to all our client IDs and location/sub-location IDs - these are more accessible than the deviceIds.
And finally, when storing IoT data for multiple clients, would it be best practice to provision a new TSI solution for each client, or does TSI support collections as seen in Cosmos DB?
As stated in this article, when using a composite key you will need to query against all of the key properties, not just one or some of them. That's a consideration when deciding between a single key and a composite key. Also, the article offers this tip:
If your event source is an IoT hub, your Time Series ID will likely be iothub-connection-device-id.
So, I assume you will have at least one IoT Hub sourcing the events reported from the devices, and in this case you can use the iothub-connection-device-id.
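To illustrate the composite-versus-single point (a hedged sketch; the request-body shape is based on my reading of the TSI query reference and the values are made up, so check it against the current API version):

// Composite Time Series ID (e.g. clientId, locationId, deviceId): all parts, in order.
const compositeId = ["client42", "locationA", "device-001"];

// Single-property Time Series ID (e.g. iothub-connection-device-id): one value.
const singleId = ["device-001"];

// Rough shape of a query body referencing a Time Series ID.
const queryBody = {
  getEvents: {
    timeSeriesId: singleId, // or compositeId - always with every key part present
    searchSpan: { from: "2019-05-01T00:00:00Z", to: "2019-05-02T00:00:00Z" },
  },
};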

How can I aggregate data in Time Series Insights preview using the hierarchy?

I am storing 15-minute electricity consumption measurements in a TSI preview environment. Is it possible to aggregate the total energy consumption per day of multiple meters using the TSI query API?
I have configured a hierarchy as Area-Building and the Time Series ID is the 'MeterId' of the Meter.
The query API (https://learn.microsoft.com/en-us/rest/api/time-series-insights/preview-query#aggregate-series-api) allowed me to aggregate the consumption per day for a single meter. I then expected to find an API to aggregate the electricity consumption per Building and Area, but could only find the aggregate operation with a single "timeSeriesId" or "timeSeriesName" as a required parameter. Is aggregation to a level in the hierarchy not possible? If not, what would be a good alternative (within or outside TSI) to obtain these aggregated values?
What you can do is get all the instances you need with the search API (docs). (Note that the documentation is wrong about the URL: it should contain "search" instead of "suggest".)
Then loop through the instances you get in the response and call the aggregate API for each ID, one by one. Finally, sum the results yourself to get a daily result for all the telemetry sensors matching your search.
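A rough sketch of that flow (TypeScript calling the preview REST endpoints; the environment FQDN, API version, request/response shapes, and the "consumption" telemetry property are assumptions to adapt to your environment):

const env = "<environmentFqdn>";          // e.g. "xxxxxxxx.env.timeseries.azure.com"
const apiVersion = "2018-08-01-preview";  // adjust to the API version you are using
const token = "<bearer token>";           // obtained via Azure AD
const headers = { "Content-Type": "application/json", Authorization: `Bearer ${token}` };

async function dailyTotals(searchString: string, from: string, to: string): Promise<Map<string, number>> {
  // 1. Find all instances (meters) matching the search, e.g. a building name.
  const search = await fetch(
    `https://${env}/timeseries/instances/search?api-version=${apiVersion}`,
    { method: "POST", headers, body: JSON.stringify({ searchString, instances: { recursive: true, pageSize: 100 } }) }
  ).then(r => r.json());

  const totals = new Map<string, number>(); // day timestamp -> consumption summed over all meters

  // 2. Call the aggregate API for each instance (sequentially, which also stays
  //    under the concurrent-call limit mentioned below).
  for (const hit of search.instances.hits) {
    const agg = await fetch(`https://${env}/timeseries/query?api-version=${apiVersion}`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        aggregateSeries: {
          timeSeriesId: hit.timeSeriesId,
          searchSpan: { from, to },
          interval: "P1D", // one bucket per day
          inlineVariables: {
            DailyConsumption: {
              kind: "numeric",
              value: { tsx: "$event.consumption.Double" }, // your telemetry property
              aggregation: { tsx: "sum($value)" },
            },
          },
          projectedVariables: ["DailyConsumption"],
        },
      }),
    }).then(r => r.json());

    // 3. Add this meter's per-day values into the overall daily totals.
    const timestamps: string[] = agg.timestamps ?? [];
    const values: (number | null)[] = agg.properties?.[0]?.values ?? [];
    timestamps.forEach((ts, i) => totals.set(ts, (totals.get(ts) ?? 0) + (values[i] ?? 0)));
  }
  return totals;
}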
Note: you can only make 9 aggregate calls at the same time (see limitations).
I hope they fix aggregates soon.
In the meantime, I hope this helps you out.
Good luck,

Correlating Events in Stream Analytics

I have a number of events that are based on values from devices. They are read in intervals, e.g. every hour. The events are delivered to an Event Hub, which is used as an input to a Stream Analytics (SA) job.
I want to aggregate and calculate an average value in SA. Currently, I aggregate and group the events in SA using an origin ID and other properties to create the correct groups and averages. The problem is that the averages are not correct. I think the events are either not complete and/or not correlated correctly.
Using a TumblingWindow will produce a number of static windows based on time, but the events I need to aggregate might fall across two or more windows.
Using a SlidingWindow, as I understand it, will trigger output upon a specific condition and then "look back" over a specified interval. Is this correct? If so, I could attach the same ID, like a JobId, to each event that I need aggregated, plus a value indicating whether it is the last event. When the last event enters SA, the SlidingWindow is triggered and we can "look back" at all the events with the same ID. Is this possible?
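For what it's worth, the correlation idea itself (group by a JobId and emit the average once the flagged last event arrives) looks roughly like this outside of SA - a TypeScript sketch with made-up field names, not a Stream Analytics query:

interface DeviceEvent {
  jobId: string;   // correlation ID attached to every related reading
  value: number;
  isLast: boolean; // marks the final event of the group
}

const pending = new Map<string, number[]>();

// Collect values per JobId; when the last event of a group arrives, return the average.
function onEvent(e: DeviceEvent): number | null {
  const values = pending.get(e.jobId) ?? [];
  values.push(e.value);
  pending.set(e.jobId, values);
  if (!e.isLast) return null;
  pending.delete(e.jobId);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}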
Are there other options in this case? Basically I need to correlate a number of events based on other characteristics than time.
I hope you can help me.

Managing constantly changing data in Database

I need some advice on how to architect my data in MongoDB. I have an app where users can view, add, edit, and remove credit and debit transactions. Below is how the data looks.
The balance column here is dynamic. For example, if someone adds a transaction dated 10-09-2017, all the amounts in the balance field thereafter need to change at that moment to reflect the new transaction. Right now I am not saving this balance field in the database at all; I calculate it every time the user loads or reloads the page, and also when editing, deleting, or adding a transaction. For now this is fast, but I assume that in the future, when the user has a lot of transactions, it will become slow, as these calculations need to be done before the data table can be displayed to the user. Is there a more efficient way to do this?
Also, I am doing the calculations on the client side, so the load is on the client's device and not on the server. I think that if it were done on the server side and a lot of users started using it, the API requests would become much slower and perhaps unusable after a while. Is this the right way?
PS: It was hard to make sure the reader understands my question, but I have tried my best. Please let me know if I should explain this in more detail or add anything else.
This is not a question about MongoDB; it is a question about the user interface.
Will you really display the whole history of transactions at once?
You should either use pagination (simplest) or load-on-scroll to load your data.
Before you run into problems because of the balance-cell calculation, you are more likely to experience problems because of:
Slow loading from network (almost certainly)
Slow page interaction because of DOM size (maybe)
Show the first 100 to 500 transactions and provide the user with some way to load earlier entries.
Update - Regarding server-side balance calculation:
You could calculate the balance on the server side and store it in a second collection which serves as a cache. If a transaction is inserted in the past, you recalculate the cache. To speed this up, you can use snapshots:
Within a third collection, you could store the current balance at certain intervals, e.g. with the following data structure:
{ Balance: 150000, Date: 2017-02-03, LastTransactionId: 546 }
When a transaction is inserted in the past, take the most recent snapshot before that moment and recalculate the cache based on that. This way, you can keep the number of recalculated transactions pretty small.
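A small sketch of that snapshot idea (TypeScript; the names are illustrative and credits/debits are folded into a signed amount):

interface Transaction { id: number; date: string; amount: number } // signed: credit > 0, debit < 0
interface Snapshot { balance: number; date: string; lastTransactionId: number }

// Recompute running balances starting from the most recent snapshot before the
// inserted transaction, instead of replaying the whole history.
function recalcFrom(snapshot: Snapshot, laterTransactions: Transaction[]): Map<number, number> {
  const balances = new Map<number, number>(); // transactionId -> running balance
  let running = snapshot.balance;
  const ordered = [...laterTransactions].sort((a, b) => a.date.localeCompare(b.date));
  for (const tx of ordered) {
    running += tx.amount;
    balances.set(tx.id, running);
  }
  return balances;
}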

CQRS and Event Sourcing Query Historical Data

When using CQRS and Event Sourcing, how does one query historical data? As an example, if I am building a timesheet system that has a report for revenue, I need to query against hours, pay rate, and bill rate for each employee. There is an EMPLOYEE_PAY_RATE table that has EmployeeID, PayRate, and EffectiveDate, as well as a BILL_RATE table which has ClientID, EmployeeID, Rate, and EffectiveDate. The effective date in those tables basically keeps the running history so we can report accurately.
If we were to take a DDD, CQRS, and Event Sourcing route, how would we generate such a report? It's not like we can query the event store in the same way. I've looked at frameworks like Axon but I'm not sure whether they would allow us to do what we need from a reporting perspective.
When using CQRS and Event Sourcing, how does one query historical data?
Pretty much the same way you query live data: you build the views that you want from the event history, and then query the views for the data that you want.
To borrow your example: your view might be supported by an EMPLOYEE_PAY_RATE table and a BILL_RATE table. Replay your events, and as something interesting happens, update the appropriate table. Ta-da.
An important idea that may not be obvious: for something low latency like a history report, you'll probably want the historical aggregator to pull the events from the event store, rather than having a bus push events to the aggregator. The pull approach makes it a lot easier to keep track of where you are, so you don't need to repeat a lot of work or worry about whether you've received all of the events you should, about ordering, and so on.
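A sketch of what that pull-based approach could look like (TypeScript; the store interface, checkpointing, and batch size are assumptions for illustration):

interface StoredEvent { position: number; type: string; data: unknown }

interface EventStore {
  // Read a batch of events strictly after the given position, in order.
  readAfter(position: number, batchSize: number): Promise<StoredEvent[]>;
}

let checkpoint = 0; // persisted somewhere durable in a real projection
const loadCheckpoint = async (): Promise<number> => checkpoint;
const saveCheckpoint = async (position: number): Promise<void> => { checkpoint = position; };

// The aggregator pulls events and tracks its own position, so replays never
// repeat work and ordering stays under its control.
async function runProjection(store: EventStore, apply: (e: StoredEvent) => Promise<void>): Promise<void> {
  let position = await loadCheckpoint();
  for (;;) {
    const batch = await store.readAfter(position, 500);
    if (batch.length === 0) break; // caught up; poll again later
    for (const event of batch) {
      await apply(event);          // e.g. update the EMPLOYEE_PAY_RATE / BILL_RATE views
      position = event.position;
    }
    await saveCheckpoint(position);
  }
}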
Your report is just another read model/projection of the events, for example a SQL table that is populated by listening to the relevant events.
If the table is big, i.e. there are a lot of employees, then in order to be fast you should avoid joins by keeping the data denormalized. So for every employee and day (or whatever granularity you want) you would have a row in a table containing the employee ID and name, the start and end date of the day, and other columns containing relevant data, e.g. the pay rate. You put the employee name here as well in order to avoid joins, and you keep it up to date by listening to the relevant employee events (like EmployeeChangedName).
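A sketch of such a denormalized projection (TypeScript; the event names, row shape, and the in-memory map standing in for a SQL table are all illustrative):

interface RevenueRow {
  employeeId: string;
  employeeName: string; // duplicated here to avoid joins
  day: string;          // e.g. "2019-05-14"
  hours: number;
  payRate: number;
  billRate: number;
}

type DomainEvent =
  | { type: "TimeLogged"; employeeId: string; day: string; hours: number }
  | { type: "PayRateChanged"; employeeId: string; effectiveDate: string; rate: number }
  | { type: "EmployeeNameChanged"; employeeId: string; name: string };

// In practice this would be a SQL table; a map keyed by employee + day stands in here.
const rows = new Map<string, RevenueRow>();

function project(e: DomainEvent): void {
  switch (e.type) {
    case "TimeLogged": {
      const key = `${e.employeeId}:${e.day}`;
      const row = rows.get(key) ?? {
        employeeId: e.employeeId, employeeName: "", day: e.day, hours: 0, payRate: 0, billRate: 0,
      };
      row.hours += e.hours;
      rows.set(key, row);
      break;
    }
    case "PayRateChanged": // apply the new rate to days on or after the effective date
      for (const row of rows.values())
        if (row.employeeId === e.employeeId && row.day >= e.effectiveDate) row.payRate = e.rate;
      break;
    case "EmployeeNameChanged":
      for (const row of rows.values())
        if (row.employeeId === e.employeeId) row.employeeName = e.name;
      break;
  }
}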
