I am new to the AI and NLP world and am trying to create functionality similar to datasloth.generate(). The functionality works perfectly fine for my use case; the only problem is that our organization has not approved the use of OpenAI outside of POCs, and it's also a bit expensive.
I am after similar functionality that uses open-source libraries instead of OpenAI GPT-3 (I expect some degradation in results with the alternatives).
Can you guys direct me or guide me to the right path?
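One open-source direction worth trying is a text-to-SQL seq2seq model from the Hugging Face Hub. A minimal sketch, assuming the `transformers` package and a community checkpoint (the model name and prompt format are assumptions, not a tested recommendation):

```python
# A minimal sketch of open-source text-to-SQL with the `transformers` package.
# The checkpoint below is an assumed community model; quality will likely lag
# behind GPT-3-based tools such as datasloth.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="mrm8488/t5-base-finetuned-wikiSQL",  # assumed community checkpoint
)

# This T5-style model expects a "translate English to SQL:" prefix.
question = "translate English to SQL: How many users signed up in 2021?"
result = generator(question, max_length=128)
print(result[0]["generated_text"])
```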
I am trying to implement a chatbot using linked data. In a paper, I read that I can use an SVM for intent classification, which is basically the first stage in conversational systems: the process of mapping queries to a predefined class. I wanted to know whether there are any other ways, or applications/programs, that I can use to do this. Could someone please advise me?
Thank you in advance.
Rasa is an open-source platform that can be used to easily set up a chatbot, and through it you can set up several kinds of intent classification (including with an SVM).
It also allows you to annotate example utterances and map them to intents within the tool itself.
These tasks are certainly doable on your own, but if speed and easy integration with a working chatbot are priorities, then I recommend using a platform like that.
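For comparison, intent classification with an SVM is only a few lines in scikit-learn if you want to roll it yourself; a minimal sketch with made-up training utterances and intent labels:

```python
# A minimal sketch of SVM-based intent classification with scikit-learn.
# The utterances and intent labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

utterances = [
    "hi there", "hello", "good morning",           # greet
    "what is the weather today", "will it rain",   # weather
    "book a table for two", "reserve a seat",      # booking
]
intents = ["greet", "greet", "greet",
           "weather", "weather",
           "booking", "booking"]

# TF-IDF features feeding a linear SVM: the classic intent-classification baseline.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(utterances, intents)

print(classifier.predict(["hello, can you reserve a table?"]))
```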
I'm looking to build a REST API using Node and Express, and I'd like to provide documentation with it. I don't want to craft this by hand, and it appears that there are solutions available in the form of Swagger, RAML and API Blueprint/Apiary.
What I'd really like is to have the documentation auto-generate from the API code, as is possible in .NET land with Swashbuckle or the Microsoft-provided solution, but those are made possible by strong typing and reflection.
For the JS world, it seems like the correct option is to use the Swagger/RAML/API Blueprint markup to define the API and then generate the documentation and scaffold the server from that. The former seems straightforward, but I'm less sure about the latter. What I've seen of the server code generation for all of these options seems very limited. There needs to be some way to separate the auto-generated code from the manual code so that the definition can be updated easily, and I've seen no sign of or discussion on that. It doesn't seem like an insurmountable problem (I'm much more familiar with .NET than JS, so I could easily be missing something), and there is mention of this issue, and of solutions being worked on, in a previous Stack Overflow question from over a year ago.
Can anyone tell me if I'm missing/misunderstanding anything and if any solution for the above problem exists?
The initial version of swagger-node-express did just this: you would define some metadata for the routes, models, etc., and the documentation would auto-generate from it. Given how dynamic JavaScript is, this became a bit cumbersome for many to use, as it required you to keep the metadata up to date against the models in a somewhat decoupled manner.
Fast forward, and the latest swagger-node project takes an alternative approach, which can be considered in line with "generating documentation from code" in a sense. This project (along with swagger-inflector for Java and connexion for Python) takes the approach that the Swagger specification is the DSL for the API, and the routing logic is handled by what is defined in the Swagger document. From there, you simply implement the controllers.
If you treat the Swagger specification "like code", then this is a very efficient way to go: the documentation can literally never be out of date, since it is used to construct all routes, validate all input variables, and connect the API to your business layer.
While true code generation, such as what is available from the swagger-codegen project, can be extremely effective, it does require some clever integration with your code after you initially construct the server. That consideration is completely removed from the workflow with the three projects above.
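To make the spec-as-DSL idea concrete, here is a minimal sketch using connexion (the Python project mentioned above); the abridged spec lives in a comment, and the file and function names are assumptions:

```python
# app.py -- a minimal sketch of the spec-first approach, assuming connexion 2.x
# and an openapi.yaml next to this script. The spec drives the routing; you
# only implement the controller functions it names.
#
# openapi.yaml (abridged):
#   openapi: "3.0.0"
#   info: {title: Example, version: "1.0"}
#   paths:
#     /hello/{name}:
#       get:
#         operationId: app.hello        # points at the function below
#         parameters:
#           - {name: name, in: path, required: true, schema: {type: string}}
#         responses:
#           "200": {description: greeting}
import connexion


def hello(name: str) -> dict:
    # Business logic only; routing and parameter validation come from the spec.
    return {"message": f"Hello, {name}"}


app = connexion.App(__name__, specification_dir=".")
app.add_api("openapi.yaml", validate_responses=True)

if __name__ == "__main__":
    app.run(port=8080)
```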
I hope this is helpful!
My experience with APIs and dynamic languages is that the emphasis is on verification instead of code generation.
For example, when using a compiled language, I generate artifacts from the API spec and use those to enforce correctness. Round-tripping is supported via the generation of interfaces instead of concrete classes.
With a dynamic language, the spec is used at test time to guarantee both that all of the defined API is covered by tests and that the responses conform to the spec (I tend not to validate requests, because of Postel's law, but that is possible too).
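As a concrete illustration of that test-time verification, a minimal sketch assuming `requests` and `jsonschema`, with a response schema hand-lifted from a hypothetical spec:

```python
# A minimal sketch of spec-driven verification at test time, assuming the
# `requests` and `jsonschema` packages. The schema and endpoint below are
# hypothetical; in practice the schema would be extracted from the API spec.
import requests
from jsonschema import validate

USER_SCHEMA = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
    },
}


def test_get_user_conforms_to_spec():
    resp = requests.get("http://localhost:8080/users/1")  # hypothetical endpoint
    assert resp.status_code == 200
    # Raises jsonschema.ValidationError if the response drifts from the spec.
    validate(instance=resp.json(), schema=USER_SCHEMA)
```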
I am fairly new to Azure and mobile services, and all the examples and tutorials I can find for the table and API scripts are fairly simplistic.
If I have some processes that are fairly complex and rely on pulling information from many different tables, with processing contingent on that data, should I be doing that somewhere other than the API scripts? I am new to Node.js as well, so maybe that's the problem, but I was wondering whether there is a more appropriate place for business logic, such as some bridge I need to add to my stack?
There are a lot of examples available of how to use the mssql object (which is used to query tables) and of Node in general; a quick search will reveal just about anything you need. Since you said you are new to Node.js, consider using the .NET backend instead. It is based on Entity Framework, and there are lots of Entity Framework examples out there for you too. Finally, there are some really good examples of complex logic being used in the backends in the sample code available here: http://azure.microsoft.com/en-us/develop/mobile/ios-samples/ (pick your client OS), here: http://azure.microsoft.com/blog/topics/mobile/ and here: http://blogs.msdn.com/b/azuremobile/
Let us know if you have specific questions!
Ok, here's the thing.
I have a good JS background, had my share of JS in the past, and have lots of cool bare-bones tools I take with me from project to project that act like a library.
I'm trying to figure out a way of working with CouchDB.
Now, after getting used to the luxury of cool tools that you wrote yourself and that simplify the language for you, I find it a little frustrating to write so many things in a bare-bones manner.
I'm looking for a way to load into the database context a limited, highly efficient, and generic set of tools that focus on the pure language and make working with the language much more groovy (and gosh, no, I'm not talking about jQuery or any of the even more bloated libraries out there).
If, on top of that, there were a way to add some of my own logic tools (business-logic model functions) to the execution context of the CouchDB JS engine, that would be a great and admirable power, and it would make CouchDB the new home for a JavaScripter like me.
Maybe I'm aiming too low.
I'd be satisfied with a way to attach a set of extensions even to a specific database, and I don't mind doing it for every database separately. Or, worse, adding it to every design document, so I can teach, for example, several views in the same design doc what a Person is and what a Worker is, and use their methods to retrieve data from them according to logic, in a reusably coded manner.
Can anybody point me the way?
Whatever way you can point me - I'll be very verrry grateful.
If there are ways for all of these - then great.
Trust me to know what logic belongs in which layer...
You open my possibilities - I promise to use them :D
CouchDB now supports code sharing as CommonJS modules.
http://docs.couchbase.org/couchdb-release-1.1/index.html#couchdb-release-1.1-commonjs
http://caolanmcmahon.com/posts/commonjs_modules_in_couchdb
In this way, you can share your JavaScript modules between views, lists, and shows in the same design doc (server-side).
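A minimal sketch of what that looks like in practice, uploading a design doc over HTTP with Python's `requests` (the database name and document fields are made up; modules used by map functions live under views/lib in CouchDB 1.1+):

```python
# A minimal sketch of a CommonJS module shared inside a design doc, uploaded
# with Python `requests`; assumes a local CouchDB. The database name and the
# doc fields are invented for illustration.
import requests

design_doc = {
    "_id": "_design/people",
    "views": {
        "lib": {
            # The shared module, stored as a string of JavaScript.
            "person": "exports.isWorker = function (doc) {"
                      " return doc.type === 'worker'; };",
        },
        "workers": {
            # Map functions may require() modules stored under views/lib.
            "map": "function (doc) {"
                   " var person = require('views/lib/person');"
                   " if (person.isWorker(doc)) { emit(doc._id, doc.name); }"
                   " }",
        },
    },
}

resp = requests.put("http://localhost:5984/mydb/_design/people", json=design_doc)
print(resp.status_code, resp.json())
```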
Also, you can load these modules on the browser side with this library:
https://github.com/couchapp/couchapp/blob/master/couchapp/templates/vendor/couchapp/_attachments/jquery.couch.app.js
You also might want to look at Kanso:
http://kansojs.org/
It does a really good job of making your JavaScript work seamlessly between the server and the client.
You can find some helpful tools here: https://github.com/vivekpathak/casters
The running examples and test cases may particularly help you.
I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract-first design to be a more useful way of thinking: what messages get passed around at my interfaces). And you can also follow the 'build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to be able to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass, all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop this code directly.
There are two tools, BizUnit and BizUnit Extensions, that give some ability to control the execution of BizTalk applications and to test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular, and more testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantages of the XML-based system are that it is easier to generate test steps and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful. My team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned the (dynamically generated) ID, we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past. I'm hoping to get an XSD out that will aid IntelliSense. There were some snippets as well for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD, the specification- or behavior-driven element, then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were some tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE-integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This makes tests easier to write (using VS IntelliSense), more flexible, and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.