Do we know why a Nest update occurs? - nest-api

I am using the Android SDK for Nest, but I believe this is a general Nest question so I did not tag this question with Android.
When registering a GlobalUpdate listener, I cannot find any API to tell me why it was called, or what changed. The documentation says,
GlobalUpdate contains the state of all devices, structures and metadata in the Nest account when a change is detected in anything
It would be nice if there was a field to know what that change was. Is there a better way than tracking all of the data myself and comparing it?

Hi, I'm the author of the Nest Android SDK. The SDK (and the Nest API itself) provides the entire state, and omits saying what changed, every time, for a number of reasons:
Clients may have older (or newer!) data than what the server has, so stating the changes would result in the client incorrectly determining the "true" state.
This design ensures that everyone (all clients and the server) becomes eventually consistent and doesn't end up combining changes in an order that leaves otherwise up-to-date clients with different states.
You can always calculate the changes yourself and update your UI state accordingly, but each event your client receives should be independent of the others and shouldn't depend on assumed knowledge of the prior state.
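A minimal sketch of that "calculate the changes yourself" idea, written in TypeScript rather than the actual Android SDK types: keep the last snapshot you received, diff the new one against it, and drive UI updates from the changed keys only.

type Snapshot = Record<string, unknown>;          // e.g. deviceId -> device state

function diffSnapshots(previous: Snapshot, current: Snapshot): string[] {
  const changedKeys: string[] = [];
  const allKeys = new Set([...Object.keys(previous), ...Object.keys(current)]);
  for (const key of allKeys) {
    // JSON comparison is enough for plain data objects; swap in a deep-equal
    // helper if your state contains non-serializable values.
    if (JSON.stringify(previous[key]) !== JSON.stringify(current[key])) {
      changedKeys.push(key);
    }
  }
  return changedKeys;
}

let lastSnapshot: Snapshot = {};

function onGlobalUpdate(snapshot: Snapshot): void {
  const changed = diffSnapshots(lastSnapshot, snapshot);
  lastSnapshot = snapshot;                        // always store the full new state
  console.log("changed entries:", changed);       // drive UI updates from this list
}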

Related

Concept of Conversations in Teams Bot development

I am developing a Microsoft Teams bot using the Node.js v4 Bot Framework. This is the first time I have developed a bot, and it seems to me it is missing a core concept: conversations / previous-message context. When the bot asks how I am doing and I answer "good", in the next and following messages it doesn't seem to store anywhere how I am doing.
I have a workaround for this by pushing answers into an array, but it just seems strange that previous-message context hasn't been implemented... Am I missing something?
I think what you might be missing is an understanding of Bot state management. This link gives an overview of the types of state (user vs conversation) as well as places you can store state (e.g. memory, Azure blob storage, etc.). Be aware that Cosmos DB, proposed in the article, can be an expensive option because of the high read rate of bots (every turn results in a read, which is part of what Cosmos pricing is based on), so MongoDB, for instance, could be another possible option.
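Here is a minimal sketch of conversation state with the Bot Framework JavaScript SDK (botbuilder package); the "answers" property and the mood field are just illustrative names, and MemoryStorage would be swapped for blob/Mongo-backed storage in production:

import { ActivityHandler, ConversationState, MemoryStorage, TurnContext } from "botbuilder";

const storage = new MemoryStorage();                       // dev only; not durable
const conversationState = new ConversationState(storage);
const answersProperty = conversationState.createProperty<{ mood?: string }>("answers");

class MyBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      // Load (or initialize) this conversation's stored answers.
      const answers = (await answersProperty.get(context, {})) ?? {};
      if (!answers.mood) {
        answers.mood = context.activity.text;              // remember "good" from this turn
      }
      await context.sendActivity(`You said you are: ${answers.mood}`);
      await next();
    });
  }

  async run(context: TurnContext): Promise<void> {
    await super.run(context);
    await conversationState.saveChanges(context);          // persist state at the end of the turn
  }
}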
Another approach to "state", though, is the concept of "dialogs", where there is a specific "guided conversation" the user might be going through. As an example, in a flight booking scenario you would need departure location, destination, date, time, etc., so this is a multi-turn "mini conversation", and dialogs do their own state management in this context. See "Dialogs within the Bot Framework".
As an aside, the "array" approach you're taking is somewhat similar to the in-memory state option, but it requires you to manage things 100% yourself, it can't easily be scaled (with the built-in state support it's easy to switch memory out for another option), and it might not be multi-user safe (depending on how you're working with the array, e.g. whether you're saving one per user).
Hope that helps

IBM Maximo - Is there a way to get possible work order status transitions via API

We are building a work order management integration layer on top of base Maximo, communicating via the provided REST/OSLC API, but we are stuck on finding all the possible statuses a given work order could transition to.
Is there a REST/OSLC API, or some other way to expose it externally (e.g. some kind of one-time config export), that gives the possible status transitions for a given work order?
This should consider all the customizations we've made to Maximo including additional statuses, extra conditions, etc. We are targeting version 7.6.1.
IBM seems to have dropped some things from the new NextGen REST/JSON API documentation. There is almost no mention of the "getlist" action anymore, something I have really enjoyed using for domain-controlled fields. This should give you exactly what you are looking for: a list of the possible statuses that a given work order could go into. I was unable to verify this call today, but I remember it working as desired when I last used it (many months ago).
<hostname>/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>?action=getlist&attribute=status
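As a hedged sketch (Node 18+ TypeScript, using the global fetch), a client-side call to that getlist action could look like this; the hostname, work order href, and credentials are placeholders, and the maxauth header applies only when native authentication is used:

async function getStatusList(): Promise<unknown> {
  const href = "https://hostname/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>";
  const response = await fetch(`${href}?action=getlist&attribute=status&lean=1`, {
    headers: {
      // maxauth carries base64("user:password") for native authentication.
      maxauth: Buffer.from("maxadmin:maxadmin").toString("base64"),
      Accept: "application/json",
    },
  });
  return response.json();   // expected: the list of statuses the work order can move to
}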
The method you're looking for is psdi.mbo.StatefulMbo.getValidStatusList
See details here:
https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
Now, you want to expose the result through a REST API. You could create an automation script that, given the WONUM, returns the allowed status list. You can leverage the new REST API to achieve that quite easily.
See how you can call an automation script with a REST call here:
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_automation_scripts
Last part: you will need to create a request response based on the mboset returned from getValidStatusList.
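On the calling side, that could look something like the sketch below. WOSTATUSLIST is a hypothetical automation script you would write yourself (it would look up the work order and call getValidStatusList); the hostname and apikey header are placeholders for your environment:

async function fetchValidStatuses(wonum: string): Promise<unknown> {
  const url =
    `https://hostname/maximo/oslc/script/WOSTATUSLIST?wonum=${encodeURIComponent(wonum)}`;
  const response = await fetch(url, {
    headers: { apikey: "<your-api-key>", Accept: "application/json" },
  });
  return response.json();   // whatever response shape your script builds from the MboSet
}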

In an Event-Driven Microservice, how do I update a private database with older data

I'm working on a new project, and I am still learning about how to use Microservice/Domain Driven Design.
If the recommended architecture is to have a Database-Per-Service, and use Events to achieve eventual consistency, how does the service's database get initialized with all the data that it needs?
If the events indicating an update to the database occurred before the new service/db was ever designed, do I need to start with a copy of the previous database?
Or should I publish a 'New Service On The Block' event and allow all the other services to vomit everything back to me again? That could be a LOT of chattiness and cause performance issues.
how does the service's database get initialized with all the data that it needs?
It asks for it; which is to say that you design a protocol so that the service that is spinning up can get copies of all of the information that it needs. That often includes tracking checkpoints, and queries that allow you to ask what has happened since some checkpoint.
Think "pull", rather than "push".
Part of the point of "services" is designing the right data boundaries. Needing to copy a lot of data between services often indicates that the service boundaries need to be reconsidered.
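A minimal sketch of that pull-and-checkpoint protocol, assuming a hypothetical /events?after=<checkpoint> endpoint on the upstream service: the new service pages through history, applies each event to its own database, and records how far it got so it can resume.

interface DomainEvent { id: number; type: string; payload: unknown }

let checkpointStore = 0;   // in practice, persist this alongside the projection
async function loadCheckpoint(): Promise<number> { return checkpointStore; }
async function saveCheckpoint(value: number): Promise<void> { checkpointStore = value; }

async function backfill(baseUrl: string, applyEvent: (e: DomainEvent) => Promise<void>): Promise<void> {
  let checkpoint = await loadCheckpoint();          // last event id we have applied
  while (true) {
    const res = await fetch(`${baseUrl}/events?after=${checkpoint}&limit=500`);
    const events: DomainEvent[] = await res.json();
    if (events.length === 0) break;                 // caught up
    for (const event of events) {
      await applyEvent(event);                      // project into the local database
      checkpoint = event.id;
    }
    await saveCheckpoint(checkpoint);               // resume from here after a crash
  }
}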
There is a streaming platform called Apache Kafka that solves something similar.
With Kafka you would publish events for other services to consume. What makes Kafka special is the fact that events never get deleted (depending on configuration) and can be consumed again by new services spinning up. This feature can be used to initially populate the database (by setting the offset for a topic to 0 and re-reading the history of events).
There is also another feature, called GlobalKTable, which is a table view of all events for a particular topic. The GlobalKTable holds the latest value for each key (like a primary key) and can be turned into a state store (RocksDB under the hood), which makes it queryable. This state store initializes itself whenever the application starts up, so the application does not need a database of its own, because the state store is kept up to date automatically (consistency is still a thing to keep in mind). Only for more complex queries would that state store need to be accompanied by a database (with Kafka you would try to pre-compute the results of those queries and make them accessible in a distinct state store).
This would be a complex endeavor, but if it suits your needs it is a fun thing to do!
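For the "re-read the history" part specifically, here is a minimal sketch using the kafkajs client (the GlobalKTable/state-store features belong to Kafka Streams, which is a Java API). The broker address and the "orders" topic are placeholders; the new consumer group has no committed offsets, so fromBeginning makes it replay the topic from offset 0:

import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "new-service", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "new-service-projection" });

async function rebuildFromHistory(): Promise<void> {
  await consumer.connect();
  // fromBeginning only matters because this group id has never committed offsets.
  await consumer.subscribe({ topic: "orders", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      await applyToLocalDatabase(event);            // idempotent upsert into the service's own DB
    },
  });
}

async function applyToLocalDatabase(event: unknown): Promise<void> {
  // placeholder: write the projected state to this service's private store
  console.log("applying", event);
}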

PouchDB/CouchDB Conflict Resolution Server Side

I'm new to pouch/couch and looking for some guidance on handling conflicts. Specifically, I have an extension running PouchDB (distributed to two users). The idea is then to have a pouchdb-server or CouchDB instance (does it matter for this small a use case?) running remotely. The crux of my concern is handling conflicts: the data will be changing frequently, and though the extensions won't be doing live sync, they will be syncing very often. I have conflict handling written into the data-submission functions, but there could still be conflicts when syncing occurs with multiple users.
I was looking at the pouch-resolve-conflicts plugin and immediately saw the author state:
"Conflict resolution should better be done server side to avoid hard to debug loops when multiple clients resolves conflicts on the same documents".
This makes sense to me, but I am unsure how to implement such conflict resolution. The only way I can think of would be to place a REST API layer in front of the remote database that handles all updates/conflicts etc. with custom logic. But then how could I use the pouch sync functionality? At that point I may as well just use a different database.
I've just been unable to find any resources discussing how to implement conflict resolution server-side; in fact, I've found the opposite.
With your use case, you could probably write to a local PouchDB instance and sync it with the master database. Then you could have a daemon that automatically resolves conflicts on your master database.
Below is my approach to solve a similar problem.
I have made a NodeJS daemon that automatically resolves conflicts. It integrates deconflict, a NodeJS library that allows you to resolve a document in three ways:
Merge all revisions together
Keep the latest revision (based on a custom key, e.g. updated_at)
Pick a certain revision (here you can use your own logic)
Revision deconflict
The way I use CouchDB, every write is partial: we always take some changes and apply them to the latest document. With this approach, we can easily take the merge-all-revisions strategy.
Conflict scanner
When the daemon boots, two processes are executed. One goes through all existing changes; if a conflict is detected, it is added to a conflict queue.
The other process is executed and remains active: a continuous changes scanner.
It listens to all new changes and adds conflicted documents to the conflict queue.
Queue processing
Another process is started and keeps polling the queue for new conflicted documents. It gets conflicted documents in batches and resolves them one by one. If there are no documents, it just waits a certain period and then starts polling again.
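A minimal sketch of the continuous-scanner-plus-resolver idea in TypeScript with PouchDB pointed at the master database. The winner-picking rule here (newest updated_at field wins) is purely illustrative; plug in your own logic or the deconflict library instead:

import PouchDB from "pouchdb";

const db = new PouchDB("http://localhost:5984/mydb");   // the master database

db.changes({ live: true, since: "now", include_docs: true, conflicts: true })
  .on("change", async (change) => {
    const doc = change.doc as any;
    const conflictRevs: string[] = doc?._conflicts ?? [];
    if (conflictRevs.length === 0) return;

    // Load every conflicting revision so they can be compared to the current winner.
    const losers = await Promise.all(conflictRevs.map((rev) => db.get(change.id, { rev })));
    const candidates = [doc, ...losers] as any[];
    const winner = candidates.reduce((a, b) => (a.updated_at >= b.updated_at ? a : b));

    // Delete everything except the winner; the document then has a single live revision.
    for (const candidate of candidates) {
      if (candidate._rev !== winner._rev) {
        await db.remove(change.id, candidate._rev);
      }
    }
  });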
Having worked a little bit with Redux, I realized that the same concept of unidirectional flow would help me avoid the problem of conflicts altogether.
So, my client-side code never writes definitive data to the master database; instead it writes insert/update/delete requests locally, which PouchDB then pushes to the CouchDB master database. On the same server as the master CouchDB I have PouchDB in NodeJS replicating these requests. "Supervisor" software in NodeJS examines each new request, changes its status to "processing", writes the requested updates, inserts, and deletes, then marks the request "processed". To ensure they're processed one at a time, the code that receives each request stuffs it into a FIFO; the processing code pulls requests from the other end.
I'm not dealing with super high volume, so the latency is not a concern.
I'm also not facing a situation where numerous people might be trying to update exactly the same record at the same time. If that's your situation, your client-side update requests will need to specify the rev number and your "supervisors" will need to reject change requests that refer to a superseded version. You'll have to figure out how your client code would get and respond to those rejections.
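A sketch of the supervisor side under the same assumptions: clients only write request documents (status "new" plus whatever the operation needs), and this single process is the only writer of definitive data. The database names and request shape are illustrative:

import PouchDB from "pouchdb";

const requestsDb = new PouchDB("http://localhost:5984/requests");
const queue: any[] = [];                       // FIFO: requests are applied one at a time

requestsDb
  .changes({ live: true, since: "now", include_docs: true })
  .on("change", (change) => {
    const doc = change.doc as any;
    if (doc?.status === "new") queue.push(doc);
  });

async function processQueue(): Promise<void> {
  while (true) {
    const request = queue.shift();
    if (!request) {
      await new Promise((resolve) => setTimeout(resolve, 1000));   // idle poll
      continue;
    }
    request.status = "processing";
    request._rev = (await requestsDb.put(request)).rev;
    await applyToMaster(request);              // the actual insert/update/delete
    request.status = "processed";
    await requestsDb.put(request);
  }
}

async function applyToMaster(request: any): Promise<void> {
  // placeholder for the real write against the master database
  console.log("applying request", request._id);
}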

How to handle domain model updates and immutability of stored events?

I understand that events in event sourcing should never be allowed to change. But what about the in-memory state? If the domain model needs to be updated in some way, shouldn't old events still be replayed into the old models? I mean, shouldn't it be possible to always replay events and get the exact same state as before, or is it acceptable if this state evolves too, as long as the stored events remain the same? Ideally I think I'd like to be able to get a state as it was, with its old models, rules, and whatnot. But other than that, I of course also want to replay old events into new models. What does the theory say about this?
Anticipate event structure changes
You should always try to reflect, in your event application mechanism (i.e. where you read events and apply them to the model), the fact that an event had a different structure. After all, the earlier structure of an event was a valid structure at that time.
This means that you need to be prepared for this situation. Design the event application mechanism flexible enough so that you can support this case.
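One common way to keep the mechanism flexible is versioned events with an upcasting step; a minimal TypeScript sketch, with a hypothetical CustomerRenamed event whose v1 shape stored a single name field:

type StoredEvent = { type: string; version: number; payload: any };

interface CustomerRenamedV2 { firstName: string; lastName: string }

function upcastCustomerRenamed(event: StoredEvent): CustomerRenamedV2 {
  if (event.version === 1) {
    // v1 stored a single "name" field; split it into the shape the current model expects.
    const [firstName, ...rest] = (event.payload.name as string).split(" ");
    return { firstName, lastName: rest.join(" ") };
  }
  return event.payload as CustomerRenamedV2;      // already the current shape
}

function apply(state: { firstName?: string; lastName?: string }, event: StoredEvent) {
  switch (event.type) {
    case "CustomerRenamed": {
      const renamed = upcastCustomerRenamed(event);
      return { ...state, ...renamed };
    }
    default:
      return state;                               // events this model doesn't care about
  }
}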
Migrating stored events
Only as a very last resort should you migrate the stored events. If you do it, make sure you understand the consequences:
Which other systems consumed the legacy events?
Do we have a problem with them if we change a stored event?
Does the migration work for our system (verify in a QA environment with a full data set)?
