How can I keep Dialogflow followup intents nested when adjusting default contexts? - dialogflow-es

We're developing a system that encapsulates content through context. The context names that Dialogflow generates aren't very readable, so we've been adjusting them in the files directly.
This works for a batch of content, but when we add a new followup intent, it inherits an automatically generated context. When we adjust that context directly in the file, the intent dissociates from its parentID and rootParentID.
Is there a way to keep these followup intents nested in the console? It would make large volumes of content easier to manage.
Thank you.

Related

Axon Framework: send command on aggregate load

We're building a microservices system with Axon Framework 4.1. In our domain, we have a label concept where we can attach labels to other entities. While labels are normally created and managed by the user, some of these labels are "special" and need to be hard-coded, but they need to be present in the event stream as well.
We have a bunch of aggregates that represent entities that can be labeled with these labels. Some of these aggregates will be used frequently, while others might be used infrequently or are even abandoned by the user.
Sometimes we come up with new special labels. We add them to the code, and then we also need to add them to the event stream. What is a good way to do that?
We can create a special command that we need to send when the updated service is started for the first time. It goes through all the labels and adds the ones that aren't in the event stream yet. This has two disadvantages. First, we need to actually send that command, which either requires us to not forget it, or to add some infrastructure for it outside of the code (e.g., in our build pipeline). Also, other services could have booted up faster with the new labels and started sending commands before we fired our special command. The other disadvantage is that this command will target all aggregates, including the abandoned ones, which could be wasteful of resources and be confusing to end users who might see activity in a document they thought was abandoned.
Ideally, we would like to be able to send the command when Axon has just loaded the aggregate. That way we would be certain that the labels are only introduced in aggregates that are actually used. Also, we could wire this up in code and it wouldn't require us to add infrastructure outside of the application and/or remember to do it manually.
Unfortunately, this feature doesn't seem to exist in Axon (yet) 😉.
Are there other (better) ways to achieve this?
I've got an idea which might help you out on this.
If I understand the use case correctly, the "Label" in your system, which users can introduce themselves but for which a couple of hard-coded versions also exist, is an Aggregate.
Based on that assumption, I suggest being smart with the Aggregate Identifier you are using.
The sole thing that Axon expects from you is that the Aggregate Identifier is (or can be made into) a String. Typically a UUID is used for Aggregate Identifiers, which is a reasonable starting point.
You can however wrap this UUID in a typed-id object. For your "Label" Aggregate, that would be a LabelId.
That said, let's first go back to verifying whether a given "Label" Aggregate exists within the Event Stream.
The concern you have is valid, I think; reading the entire Event Stream to figure out whether a given Aggregate instance exists is too big of a hassle.
However, the EventStore can be queried through two mechanisms:
The Event Stream from a given point in time (e.g. what the TrackingToken mechanism does).
The Event Stream for a given Aggregate instance, based on the Aggregate Identifier.
It's the second option which is far more ideal in your scenario.
Just query the EventStore for a given "Label" Aggregate's Identifier. If you receive a non-empty Event Stream, you know it already exists.
Vice versa, if no Events are found, you are certain it's a new "Label" that needs to be introduced.
The crux here is in knowing the "Label's" Aggregate Identifier up front, which circles back to the String storage approach for the Aggregate Identifiers using a typed LabelId. What you could do is differentiate in the LabelId object between a custom "Label" (I'd opt for a UUID here) and a hard-coded "Label".
For the latter, you could for example use the label name, plus a UUID/counter if desired.
Doing so will ensure that all the Events published from a hard-coded "Label" will have an Aggregate Identifier you can anticipate during start-up.
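A minimal sketch of such a typed id (Axon itself is Java; this C# sketch, with hypothetical names, only illustrates the identifier scheme, not Axon's API):

    using System;

    // A typed aggregate identifier: user-created labels get a random UUID,
    // while hard-coded labels derive a predictable id from the label name,
    // so the Event Store can be checked for them at start-up.
    public readonly struct LabelId
    {
        public string Value { get; }
        private LabelId(string value) => Value = value;

        // Custom "Label": a random UUID, the usual approach.
        public static LabelId NewCustom() => new LabelId(Guid.NewGuid().ToString());

        // Hard-coded "Label": the id is known up front from the label name.
        public static LabelId ForHardCoded(string labelName) => new LabelId("hardcoded-" + labelName);

        public override string ToString() => Value;
    }

With that in place, the start-up check boils down to reading the Event Stream for each hard-coded LabelId and sending a create command whenever it comes back empty.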
Hope this is clear and all, if not, please comment on my response below.

dialogflow context vs. intent - Design Choice

I started working with Dialogflow a couple of weeks ago. It's nice to learn the concepts of intent and (input/output) context, through which Google models and defines everyday conversation flow in natural language. I understand how intent and context work in the current setup. But to me, the function of context could be achieved by only using intents. You may argue whether the word 'intent' is proper for this usage, but that's another discussion. So instead of input and output contexts, just have input and output intents. In the implementation, make sure the parameters and information of the current conversation are carried over to the following intent. The following intent again has its own output intent, and the talk continues.
Can anyone correct me if I'm wrong?
Intents represent a user action, typically what a user says, including the parameters from that specific utterance.
Contexts serve two purposes:
Hold the parameters from an Intent, or those that have been set through Fulfillment, for some period of time.
When used as an Input Context, limit what Intents can be triggered.
While you can certainly "send the parameters forward" from one Intent to another, this is a very linear way of thinking, and rapidly falls apart in complicated conversations. Using Contexts to store parameters and other info, as the first bullet suggests, makes this a lot easier, so your user can wander around in the conversation, and yet you are still maintaining the overall state.
As for the second bullet, this is used to change how we understand what the user has said based on other parts of our conversation. (This matches how humans handle conversations.)
So my response saying "Yes" means different things depending on whether I'm asking to delete a message or send a message - Contexts help us manage that.
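For illustration, a sketch of a webhook response that parks the pending message id in an output context (hypothetical project/session ids and parameter names; the field names follow the ES webhook response format):

    using System;
    using System.Text.Json;

    class WebhookResponseSketch
    {
        static void Main()
        {
            // After the "delete message" intent fires, store the message id in an
            // output context; a "yes" intent gated by that same input context can
            // then read it back, so "Yes" means the right thing at the right time.
            var response = new
            {
                fulfillmentText = "Are you sure you want to delete this message?",
                outputContexts = new[]
                {
                    new
                    {
                        name = "projects/my-project/agent/sessions/my-session/contexts/awaiting_delete_confirmation",
                        lifespanCount = 2,
                        parameters = new { messageId = "msg-42" }
                    }
                }
            };
            Console.WriteLine(JsonSerializer.Serialize(response));
        }
    }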

Importing data and Event Sourcing

I am currently working on a monolithic system which I would like to bring into the modern day and incorporate DDD and CQRS. I have been presented with a request to re-write the importing mechanism for the solution and feel this could present a good opportunity to start this re-architecting process.
Currently the process is:
User uploads CSV
System parses CSV and shows each row on screen. Validation takes place for each row and errors/warnings associated with each row
User can modify each line and re-validate all rows
User then selects rows that don't have errors and submits the import
Rows import, and any non-selected rows or rows with errors go into a holding area so they can be dealt with at a later date
An additional detail is that multiple rows could belong to the same entity (e.g. two rows could be line items in an order, so they would have the same Order Ref).
I was thinking of having an import saga that would generate a bunch of import aggregates (e.g. OrderImportAggregate), and then when the import is submitted those would get converted into the class used across the system currently, which would hopefully become aggregates in their own right when re-architected further down the line! So the saga process would look something like:
[EntityType]FileImportUploaded - Stores the CSV
[EntityType]FileImportParsed - Generates n [EntityType]Import aggregates. [EntityType]ImportItemCreated events raised/handled
The process would call the validation routine that the current entities go through to generate a list of errors, if any, and store them against each item. [EntityType]ImportItemValidated events raised/handled
Each time a row is changed on screen, it calls a web API method with the saga and item ids to update the details and re-validate the row as per point 3.
User submits the import; the service groups entities together (based on ref, for example), they get converted into the current system entity, and their import/save routine is called. [EntityType]ImportItemCompleted event raised.
Saga completes when all aggregates are at the ImportItemCompleted state
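(For concreteness, with Order as the [EntityType], the saga's events might be sketched as plain records; all names here are hypothetical:)

    using System;
    using System.Collections.Generic;

    public record OrderFileImportUploaded(Guid ImportId, string CsvLocation);
    public record OrderFileImportParsed(Guid ImportId, IReadOnlyList<Guid> ImportItemIds);
    public record OrderImportItemCreated(Guid ImportItemId, string OrderRef, string RawCsvRow);
    public record OrderImportItemValidated(Guid ImportItemId, IReadOnlyList<string> Errors);
    public record OrderImportItemCompleted(Guid ImportItemId);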
As this was my first implementation of CQRS/Event Sourcing/DDD, I wanted to start off on the right foundation, so I was wondering if this is a desirable approach for this functionality?
I suggest that you break your domain into two separate sub-domains implemented as two separate bounded contexts: one being the Import bounded context (ImportBC) and the other being the receiving bounded context (ReceivingBC; the actual name is not known to me, please replace it accordingly).
Then, in the Import BC, you could implement things CRUD-style, having an entity for each import file and using persistence to remember the progress of the validation and import process (this entity holds a list of not-yet-imported items). After each item is validated by a human, a command could be sent to the aggregates in the ReceivingBC to test whether the aggregate is valid according to the business rules, but without committing the changes to the repository! You do this so that the human user knows whether the item is indeed valid, and to enable/disable an import button. In this way you don't duplicate the validation logic across the two bounded contexts. When the user actually presses the import button, send the import command to the aggregate in the ReceivingBC and actually commit the changes to the repository. Also, remove the import item from the import file CRUD entity.
This technique of sending commands without actually persisting to the repository is useful for the user experience in the UI (without duplicating logic inside the UI), and it is doable if you follow DDD best practices and design your aggregates to be pure, side-effect-free objects (to be Repository-agnostic: to not know of the repository's existence, and to not use it at all!).
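A minimal sketch of that "validate without committing" idea, with hypothetical names: the aggregate exposes a pure check that returns errors instead of mutating state, so the Import BC can call it per row without persisting anything.

    using System;
    using System.Collections.Generic;

    public record OrderLineImport(string OrderRef, string Sku, int Quantity);

    public class Order
    {
        // Pure and side-effect free: no repository, no events, no state change.
        public IReadOnlyList<string> ValidateImport(OrderLineImport line)
        {
            var errors = new List<string>();
            if (string.IsNullOrWhiteSpace(line.Sku)) errors.Add("SKU is required.");
            if (line.Quantity <= 0) errors.Add("Quantity must be positive.");
            return errors;
        }

        // The real import reuses the same check and only then applies the change;
        // committing to the repository happens on this path only.
        public void Import(OrderLineImport line)
        {
            if (ValidateImport(line).Count > 0)
                throw new InvalidOperationException("Import line is not valid.");
            // ...apply the state change / raise the corresponding event here.
        }
    }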
Well, first of all, you have to ask yourself why you are using CQRS. CQRS is the heavy 18-wheeler among architectures. I know of two good reasons that scream CQRS:
1) You need to support undo functionality
2) In the future, when new requirements are implemented, you want to apply those to past data too.
The part of the requirements that you are describing, however, feels very much like CRUD. (You import a set of rows, you list a set of rows, you edit those rows, and the ones marked as completed are then deleted from their input state and converted into some other kind of entity.)
If you feel there is a lot of complexity in describing the specific entities and the validation rules that apply, then DDD would be a good fit. But I would still consider scaling it down and building a simple MVC-style app to implement this (depending on what else is required of this project).
And even if this were part of a larger domain, I would suggest a microservices approach where this would be a completely standalone import application (and in that case you could still raise an ImportCompleted event and put it on a service bus, with multiple other applications listening to that event).
NOTE: CQRS is not event sourcing; CQRS is separating a command (update) stack from a query stack. It's often applied in combination with event sourcing. But having events that pop up everywhere can be a pain to maintain, especially since it's often less obvious who is raising the event and whether events interact with each other (what happens to an order if both an OrderCompleted and an OrderCanceled event are raised, possibly with timing issues over which one is handled first?).
I'm not a DDD expert, but these are my thoughts on approaching this. I wouldn't use a separate bounded context, because it feels to me that the import of domain objects can ideally live in the same bounded context as the one they are a part of. Keen to hear from experts why that would be wrong!
Parse the CSV into an aggregate representing the data import and persist it (to the staging area/tables etc.). We can load this aggregate from there in future. The parsing of the CSV file to create this aggregate could be modelled as a command, "CreateDataImportFromCsvFile" etc.
Build a UI that loads this aggregate and displays it. The aggregate can contain a list of domain objects, "customer import items", and each "customer import item" can contain an "IsSelected" property as well as the domain object being imported, i.e. the "customer" domain object itself. This means you don't duplicate validation rules, as you are using the actual domain objects you intend to import. You hydrate those objects and display them in the UI. When the user clicks the import button, you issue a command. You handle that command by looping through each selected and valid "import item" on the aggregate, calling Save() on its domain model and then marking the import item as processed. Ideally do this all within an outer transaction scope (depending on whether you want atomicity vs eventual consistency etc.). Your UI can then optionally not display processed import items, or display them in a disabled state, depending on whether it is useful for the user to see what has been processed so far vs what's remaining.
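A sketch of the flow that answer describes, with all names hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Transactions;

    public class Customer { public void Save() { /* the existing system's save routine */ } }

    public class ImportItem
    {
        public bool IsSelected { get; set; }
        public bool IsValid { get; set; }
        public bool IsProcessed { get; private set; }
        public Customer Customer { get; set; } = new();
        public void MarkProcessed() => IsProcessed = true;
    }

    public class DataImport
    {
        public List<ImportItem> Items { get; } = new();

        // Handles the "import" command: save each selected, valid item and mark
        // it processed, inside one transaction scope for the atomic variant
        // (drop the scope if eventual consistency is acceptable instead).
        public void ProcessSelected()
        {
            using var scope = new TransactionScope();
            foreach (var item in Items.Where(i => i.IsSelected && i.IsValid && !i.IsProcessed))
            {
                item.Customer.Save();
                item.MarkProcessed();
            }
            scope.Complete();
        }
    }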

What Changes Does Referencing CodeGeneration.CodeCustomization Make to the Early Bound Generated CRM Entities?

After reading this SO question, I noticed that the link in the question made a reference to Microsoft.Xrm.Client.CodeGeneration.CodeCustomization,Microsoft.Xrm.Client.CodeGeneration.
What advantages does it have over the standard code gen? According to LameCoder, it changes all the entities to inherit from Microsoft.Xrm.Client.CrmEntity rather than Microsoft.Xrm.Sdk.Entity. What changes does that make, and what other changes are created?
Here is the best site I could currently find on what it does:
CrmSvcUtil & OrganizationServiceContext enhancements such as lazy loading
Simplified Connection Management with Connection Dialog UI
Client Side caching extensions
Utility Extension functions for common tasks to speed up client development
Organization Service Message utility functions to make it easy to call common messages such as BulkDelete, Add Member to Team etc.
Objects to support the Microsoft.Xrm.Portal extensions
The only real downside I can see to inheriting from CrmEntity is that it requires the Microsoft.Xrm.Client dll to either be GAC'd on the server or IL-merged into the Entities dll.
Besides that one downside, here are the features I see it adding:
Moves INotifyPropertyChanging and INotifyPropertyChanged into the base class, making resulting code smaller
Defines additional class Attributes
System.Data.Services.Common.DataServiceKeyAttribute
System.Data.Services.IgnorePropertiesAttribute (I'm assuming this one sends less data over the wire?)
Microsoft.Xrm.Client.Metadata.EntityAttribute (I believe this is used to support LazyLoading)
Option Sets properties are changed to nullable ints
Money properties are now nullable decimals
Setting a property value to the value it already is, will not trigger a property changing/changed event
SetPrimaryIdAttributeValue results in smaller code.
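To make the option set and money changes concrete, here is an illustrative comparison of consuming code (hypothetical entity and fields; dynamic is used only to keep the sketch self-contained, not how generated code is typed):

    using Microsoft.Xrm.Sdk;

    static class CodeGenComparison
    {
        static void SetValues(dynamic account)
        {
            // Default CrmSvcUtil output (entities inherit Microsoft.Xrm.Sdk.Entity):
            account.StatusCode = new OptionSetValue(1);
            account.CreditLimit = new Money(5000m);

            // With Microsoft.Xrm.Client.CodeGeneration.CodeCustomization
            // (entities inherit Microsoft.Xrm.Client.CrmEntity):
            account.StatusCode = 1;        // option sets surface as int?
            account.CreditLimit = 5000m;   // money surfaces as decimal?
        }
    }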

Mass-Replacing Objects using Unity's label tags?

I'm currently working on an exercise for which I want to create technical design documentation.
Therefore, I need to evaluate possible solutions to a bunch of problems coming with my fictional project.
Here's a quick glance at the exercise:
The game's art & core game design are split up very harshly - basically, the core system, game mechanics and design are created to be very abstract, in order to allow them to work with a very wide variety of art settings. Also, one of the restrictions is to re-use as many assets, levels & designs as possible.
Now to my question:
I want the level designers to create levels using "template" objects (objects which have all the technical features that are required, i.e. slots for attachments, correct scale, textures etc.) and later replace these objects with the sets of assets I receive from my outsourcer.
Since I don't want to manually replace all objects whenever I get a new set of assets, this is what I wanted to do:
Each template object gets a descriptive label, and each asset delivered by the outsourcer needs to have the exact same label name as its corresponding template-counterpart stored in it as well (for example as a custom attribute, a channel, or simply in its name).
I now want to replace all templates with the related asset using a script.
This would be done for each set of assets. I would also keep several deployments of my engine, one per set, but initially, they'd all start out with the templates that need to be replaced (since there will need to be some modifications for each setting, both visually and from a game design perspective, keeping all assets in one trunk/project didn't make any sense to me).
To make this easier I'd use a "database" of some sort (probably a simple dictionary which the engine script could query, and which would be filled out beforehand by another script upon delivery of new assets?).
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective? I have only limited knowledge in this field, so I'd love to hear what you lads & ladies think about this.
Also (very important) - do you know of a better way to achieve this "replacability" of assets? Or simply have an easier way to achieve what I want to do? I appreciate any feedback! Thank you!
quick edit: This would not only be applied to 3d Objects; textures would also need to be replaced, obviously
I think you are looking for Prefabs.
Basically, prefabs implement a sort of prototype pattern.
Instead of putting a GameObject directly into the scene's hierarchy, you can make it a prefab and put into the scene a GameObject that is an instance of that prefab.
When a GameObject in the scene is linked to a prefab and the prefab is modified, the linked object is modified too.
If you have several instances of the same prefab, all instances will be updated as well.
The only strong limitation of this feature is that, as of now, nested prefabs aren't supported.
I want the level designers to create levels using "template" objects (objects which have all the technical features that are required, i.e. slots for attachments, correct scale, textures etc.) and later replace these objects with the sets of assets I receive from my outsourcer.
This is the typical use case. You have a placeholder in the scene (e.g. a Cube) that will be substituted with a model once the artists provide it.
If you had instantiated 100 plain cubes in the scene, when you needed to substitute them you would have to do it manually for every object.
If instead you have created a prefab (let's call it ModelPrefab) and the cubes in the scene are instances of that prefab, when you have the new 3D model you can simply update the prefab and all linked instances will be updated too.
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective?
If you can work without nested prefabs, there is nothing to do; it's already implemented. If you need to implement nested prefabs, it might not be so straightforward.
quick edit: This would not only be applied to 3d Objects; textures would also need to be replaced, obviously
I made the example above using models, but you can make a prefab from any GameObject, which is actually a collection of Components (have a look at Component-Based Object Management if you are interested).
EDIT
Yes, it is possible to update prefabs through script. The required functions are in the UnityEditor namespace, so they must be used from an editor extension.
You can find everything you need in the PrefabUtility class.
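As a sketch of how such a replacement script could look (hypothetical tag, prefab path, and menu name; note that FindGameObjectsWithTag only returns active objects):

    using UnityEditor;
    using UnityEngine;

    public static class TemplateReplacer
    {
        [MenuItem("Tools/Replace Crate Templates")]
        static void ReplaceCrateTemplates()
        {
            // Load the outsourced asset that corresponds to this template label.
            var prefab = AssetDatabase.LoadAssetAtPath<GameObject>("Assets/Art/Crate.prefab");

            foreach (var template in GameObject.FindGameObjectsWithTag("TemplateCrate"))
            {
                // Instantiate as a prefab instance so the link to the asset is kept.
                var replacement = (GameObject)PrefabUtility.InstantiatePrefab(prefab);
                replacement.transform.SetParent(template.transform.parent, false);
                replacement.transform.localPosition = template.transform.localPosition;
                replacement.transform.localRotation = template.transform.localRotation;
                replacement.transform.localScale = template.transform.localScale;
                Object.DestroyImmediate(template);
            }
        }
    }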
