Dialogflow agent git versioning - dialogflow-es

I'm looking for a solution that allows my team to collaborate on the development of a single Dialogflow agent, because our typical scenario is that two or more developers work on the same agent.
For other kinds of technologies we usually adopt git for source code versioning (for webhooks too, for example), applying a proper branching strategy.
We tried to use git for agent exports as well, but this seems unworkable: we run into conflicts when merging intent IDs, which appear to be generated in a way that isn't well documented.
So in this scenario we have to synchronize with each other, for consistency checks, before exporting and committing the zip to git, losing the chance to leverage the power of git.
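Purely as a point of reference, the export-and-commit step itself can be scripted against the Dialogflow ES REST API; this automates the snapshot but does not solve the merge problem. A minimal sketch, where PROJECT_ID, the token handling (e.g. from `gcloud auth print-access-token`), and the commit message are placeholder assumptions:

```typescript
// Sketch: export a Dialogflow ES agent as a zip and stage it for a git commit.
// Assumes Node 18+ (global fetch) and an OAuth access token in ACCESS_TOKEN.
import { writeFileSync } from "fs";
import { execSync } from "child_process";

const PROJECT_ID = "my-gcp-project";    // assumption: your GCP project id
const TOKEN = process.env.ACCESS_TOKEN; // assumption: token supplied by the caller
const BASE = "https://dialogflow.googleapis.com/v2";

async function exportAgent(): Promise<void> {
  // Kick off the export; with no agentUri, the finished operation carries the zip inline.
  const start = await fetch(`${BASE}/projects/${PROJECT_ID}/agent:export`, {
    method: "POST",
    headers: { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" },
    body: JSON.stringify({}),
  });
  let op = await start.json();

  // Poll the long-running operation until it completes.
  while (!op.done) {
    await new Promise((r) => setTimeout(r, 1000));
    const poll = await fetch(`${BASE}/${op.name}`, {
      headers: { Authorization: `Bearer ${TOKEN}` },
    });
    op = await poll.json();
  }

  // The response contains the agent as a base64-encoded zip.
  writeFileSync("agent.zip", Buffer.from(op.response.agentContent, "base64"));
  execSync("git add agent.zip && git commit -m 'Export Dialogflow agent'");
}

exportAgent().catch(console.error);
```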
Searching on Google, I found that we could solve the problem by adopting a framework like Narratory.
That said, my questions are:
Is there any way to accomplish this without using Narratory?
Are there alternatives to Narratory?

Related

Using Terraform as an API

I would like to use Terraform programmatically, like an API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for creating the configuration to support this? So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common (see the sketch just below)
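For the JSON option, here is a minimal sketch, assuming the Terraform CLI is on the PATH; the AWS resource names and AMI id are illustrative assumptions, not a prescribed layout:

```typescript
// Sketch: generate a Terraform configuration as JSON and apply it in one step.
// Terraform reads *.tf.json files using the same schema as HCL.
import { writeFileSync } from "fs";
import { execSync } from "child_process";

const config = {
  resource: {
    aws_eip: {
      api_ip: { vpc: true },                // step 1: reserve an EIP
    },
    aws_instance: {
      api_server: {
        ami: "ami-0123456789abcdef0",       // assumption: a valid AMI id
        instance_type: "t3.micro",          // step 2: add an instance
      },
    },
    aws_eip_association: {
      api_ip_assoc: {                       // step 3: assign the IP
        instance_id: "${aws_instance.api_server.id}",
        allocation_id: "${aws_eip.api_ip.id}",
      },
    },
  },
};

writeFileSync("main.tf.json", JSON.stringify(config, null, 2));
execSync("terraform init && terraform apply -auto-approve", { stdio: "inherit" });
```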
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, Travis CI, etc. to manage the release of your infrastructure managed by Terraform. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is to use a tool such as RunDeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege control system to only allow specified users to execute commands. Your other option could be to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate creation/teardown of your infrastructure with Terraform, the best practices are the same regardless of which tools you are using. You mentioned some good practices, such as clearly defining input/output and creating a separation of concerns, which are excellent! Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.

Using PersistentAuditEvent in JHipster for managing custom events

I notice that JHipster microservices have their own auditing, viz. PersistentAuditEvent. It seems easier to use than, say, AuditEventRepository, which only has add and some limited find methods.
I want to save an event for a task being run under the SYSTEM role and identify it by something like type:executedLongQuery.
Then, in the future, I want to check the last run of this query, decide whether we need to run it again for report generation, and log another event if it is run. It seems to me that the PersistentAuditEvent offered by JHipster is the best way to go.
I don't see a PersistentAuditEventRepository or any suitable implementation within the microservice, so documentation with an example would be very helpful. Even a clue in the right direction could help me get started.
I found the repository interface and a custom implementation in the JHipster gateway, which are not present in the microservice. It was easy to simply copy them over to the microservice and use the repository. Of course, here I am using a database in the microservice, an empty one, which still adds the migrations as well as the audit tables.

REST API with versioned data and differential endpoint: optimizing bandwidth and performance

My NodeJS project is based on SailsJS, itself using ExpressJS.
Its API will be used by mobile apps to fetch their data from it.
The tricky part is that I don't want the client apps to fetch the whole data tree every time there is a change in the database.
The client only needs to download a differential between the data it already has and the data on the server.
To achieve that, I thought of using git on the server: create a repository and save each endpoint's data as a JSON file in the repo, with each save triggering an automatic commit.
Then I could create a specific API endpoint that accepts a commit SHA as a parameter and returns a diff between that commit and git HEAD.
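A minimal sketch of such an endpoint using plain Express (with Sails this would live in a controller, but the git invocation is the same); the route path, repository location, and buffer limit are assumptions:

```typescript
// Sketch: given a commit SHA the client already has, return the diff
// between that commit and HEAD of the data repository.
import express from "express";
import { execFile } from "child_process";

const app = express();
const DATA_REPO = "/var/data-repo"; // assumption: where the JSON snapshots are committed

app.get("/diff/:sha", (req, res) => {
  // Validate the SHA so arbitrary input is never passed to git.
  if (!/^[0-9a-f]{7,40}$/i.test(req.params.sha)) {
    return res.status(400).send("invalid sha");
  }
  execFile(
    "git",
    ["diff", req.params.sha, "HEAD"],
    { cwd: DATA_REPO, maxBuffer: 10 * 1024 * 1024 },
    (err, stdout) => {
      if (err) return res.status(404).send("unknown commit");
      res.type("text/plain").send(stdout);
    }
  );
});

app.listen(3000);
```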
This post by William Benton encouraged me in this idea.
I'm now looking for any tips that could help me get this working with the language and frameworks cited above:
I'd like to see a proof of concept of this in action, but couldn't find one.
I haven't found an easy way to use git from NodeJS yet.
I'm not sure how to parse the returned diff in client apps developed with the Ionic framework, i.e. AngularJS.
Note: the API will be read-only. All DB changes will be triggered by a custom web back end used by a few users.
I used the ideas in that post for an experimental configuration-management service. That code is in Erlang and I can't offer Node-specific suggestions, but I have some general advice.
Calling out to git itself wasn't a great option at the time from any of the languages I was interested in using. Using git as a generic versioned-object store actually works surprisingly well, but using git plumbing commands is a pain (and slow, due to all of the forking) and there were (again, at the time) limitations to all of the available native git libraries.
I wound up implementing my own persistent trie data structure and put a git-like library interface atop it. The nice thing about doing this is that your diffs can be sensitive to the data format you're storing; if you call out to git, you're stuck with finding a serialization format for your data that is amenable to standard diffs. (Without a diffable format, though, you can still send back a sequence of operations to the client to replay on whatever stale objects they have.)
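To make the operation-replay idea concrete, here is a toy sketch; the Op shape mimics JSON Patch but is an assumption for illustration, not a library API, and it handles flat objects only:

```typescript
// Diff two JSON objects into JSON-Patch-style ops that a stale client can replay.
type Op =
  | { op: "add" | "replace"; path: string; value: unknown }
  | { op: "remove"; path: string };

function diff(oldObj: Record<string, unknown>, newObj: Record<string, unknown>): Op[] {
  const ops: Op[] = [];
  for (const key of Object.keys(newObj)) {
    if (!(key in oldObj)) {
      ops.push({ op: "add", path: `/${key}`, value: newObj[key] });
    } else if (JSON.stringify(oldObj[key]) !== JSON.stringify(newObj[key])) {
      ops.push({ op: "replace", path: `/${key}`, value: newObj[key] });
    }
  }
  for (const key of Object.keys(oldObj)) {
    if (!(key in newObj)) ops.push({ op: "remove", path: `/${key}` });
  }
  return ops;
}

// A client holding the old version replays these ops on its cached copy.
console.log(diff({ name: "v1", stale: true }, { name: "v2" }));
// -> [ { op: 'replace', path: '/name', value: 'v2' }, { op: 'remove', path: '/stale' } ]
```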

Development for Cloudant using local CouchDB

I'm planning on having my database stored in Cloudant.
Is it safe to use local CouchDB during development, testing and staging of our application with knowledge that everything works locally should also work on Cloudant?
Certainly. Cloudant is compatible with the Apache CouchDB API, with a few subtle distinctions, all of which are documented at http://docs.cloudant.com. Some highlights are:
we disable temporary views (they would be expensive for you at scale!)
for our distributed system, we have extended the update_seq from an integer to a string
your re-reduce code will nearly always be called, so we recommend using exclusively built-in reduce methods
we have fully integrated Lucene indexing/search
we have multi-stage mapreduce processing via "dbcopy"
I follow a very similar process. You don't need the same versions; it will actually be very different no matter how you look at it. Cloudant is very cool and has made a lot of alterations and additions to its system. So if you are looking at developing views, attachments, etc., you can develop those locally in your dev project. Once your dev project looks good, I would have the code checked into the staging/QA server, which I like to use Cloudant for as well. That's where you need to get everyone's code working together. After that is done, you can fire off a replicator to replicate your staging environment to production.
No matter how you look at it, though, or how you envision the process, you are going to want to take a close look at going from dev to QA. There are ways to go about it so that everyone can develop on their own and merge up. I personally like to use GitHub. I hope this helps you out in your tasks.
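As a hedged sketch of the "fire off a replicator" step: CouchDB and Cloudant both expose the /_replicate endpoint, so promoting staging to production can be a single POST. The hosts, database name, and credentials below are placeholders:

```typescript
// Trigger a one-off replication from a staging database to production.
async function promoteStaging(): Promise<void> {
  const res = await fetch("https://staging.example.cloudant.com/_replicate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from("user:pass").toString("base64"),
    },
    body: JSON.stringify({
      source: "https://user:pass@staging.example.cloudant.com/myapp",
      target: "https://user:pass@prod.example.cloudant.com/myapp",
      create_target: true, // create the production db if it does not exist yet
    }),
  });
  console.log(await res.json()); // { ok: true, ... } on success
}

promoteStaging().catch(console.error);
```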

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract first design to be a more useful way of thinking - what messages get passed around at my interfaces). And you can also follow the 'Build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass, all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop this code directly.
There are two tools, BizUnit and BizUnit Extensions, that give some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled, more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps, and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful: my team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned a dynamically generated ID, we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, IntelliSense should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past; I'm hoping to get an XSD out that will help with IntelliSense. There were also some snippets for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD, the specification- or behaviour-driven element, then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE-integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This gives us tests that are easier to write (using VS IntelliSense), more flexible, and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.
