Providing documentation with Node/JS REST APIs

I'm looking to build a REST API using Node and Express and I'd like to provide documentation with it. I don't want to craft this by hand and it appears that there are solutions available in the forms of Swagger, RAML and Api Blueprint/Apiary.
What I'd really like is to have the documentation auto-generated from the API code, as is possible in .NET land with Swashbuckle or the Microsoft-provided solution, but those are made possible by strong typing and reflection.
For the JS world, it seems the usual option is to use the Swagger/RAML/Api Blueprint markup to define the API and then generate the documentation and scaffold the server from that. The former seems straightforward, but I'm less sure about the latter. What I've seen of the server code generation for all of these options seems very limited. There needs to be some way to separate the auto-generated code from the manual code so that the definition can be updated easily, and I've seen no sign of or discussion about that. It doesn't seem like an insurmountable problem (I'm much more familiar with .NET than JS, so I could easily be missing something), and there is mention of this issue, and of solutions being worked on, in a previous Stack Overflow question from over a year ago.
Can anyone tell me if I'm missing/misunderstanding anything and if any solution for the above problem exists?

The initial version of swagger-node-express did just this--you would define some metadata for the routes, models, etc., and the documentation would auto-generate from it. Given how dynamic JavaScript is, this became a bit cumbersome for many to use, as it required you to keep the metadata up to date against the models in a somewhat decoupled manner.
Fast forward, and the latest swagger-node project takes an alternative approach, which can be considered in line with "generating documentation from code" in a sense. This project (like swagger-inflector for Java and connexion for Python) takes the approach that the swagger specification is the DSL for the API, and the routing logic is handled by what is defined in the swagger document. From there, you simply implement the controllers.
If you treat the swagger specification "like code" then this is a very efficient way to go--the documentation can literally never be out of date, since it is used to construct all routes, validate all input variables, and connect the API to your business layer.
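As a rough illustration, here is a minimal sketch of that style using the swagger-tools middleware that swagger-node builds on. The inline spec, the /hello route and the controller names are invented for the example; in a real swagger-node project the spec lives in api/swagger/swagger.yaml:

    // Minimal sketch (assumed setup): the Swagger document drives routing,
    // validation and docs; you only implement the controller functions.
    var express = require('express');
    var swaggerTools = require('swagger-tools');

    var swaggerDoc = {
      swagger: '2.0',
      info: { title: 'Example API', version: '1.0.0' },
      paths: {
        '/hello': {
          get: {
            'x-swagger-router-controller': 'hello', // resolves to ./api/controllers/hello.js
            operationId: 'sayHello',                // resolves to the exported function
            responses: { '200': { description: 'greeting' } }
          }
        }
      }
    };

    var app = express();
    swaggerTools.initializeMiddleware(swaggerDoc, function (middleware) {
      app.use(middleware.swaggerMetadata());  // attach swagger info to each request
      app.use(middleware.swaggerValidator()); // validate input against the spec
      app.use(middleware.swaggerRouter({ controllers: './api/controllers' }));
      app.use(middleware.swaggerUi());        // serve the spec and the docs UI
      app.listen(3000);
    });

The controller itself is then just an exported function, e.g. module.exports.sayHello = function (req, res) { res.json({ hello: 'world' }); } in api/controllers/hello.js.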
While true code generation, such as what is available from the swagger-codegen project, can be extremely effective, it does require some clever integration with your code after you initially construct the server. That consideration is completely removed from the workflow with the three projects above.
I hope this is helpful!

My experience with APIs and dynamic languages is that the emphasis is on verification instead of code generation.
For example, when using a compiled language, I generate artifacts from the API spec and use them to enforce correctness. Round-tripping is supported by generating interfaces instead of concrete classes.
With a dynamic language, the spec is used at test time to guarantee both that the entire defined API is covered by tests and that the responses conform to the spec (I tend not to validate requests because of Postel's law, but that is possible too).
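A minimal sketch of that test-time verification inside a mocha test, assuming a Swagger/JSON-schema style spec plus supertest and ajv (the /hello route and the file paths are invented for the example):

    // Sketch: assert that a live response conforms to the schema declared in the spec.
    var request = require('supertest');
    var Ajv = require('ajv');
    var app = require('../app');           // the Express app under test (assumed)
    var spec = require('../swagger.json'); // the API specification (assumed path)

    var ajv = new Ajv();
    var schema = spec.paths['/hello'].get.responses['200'].schema;

    it('GET /hello conforms to the spec', function (done) {
      request(app)
        .get('/hello')
        .expect(200)
        .expect(function (res) {
          // fail the test if the body does not validate against the declared schema
          if (!ajv.validate(schema, res.body)) {
            throw new Error(ajv.errorsText());
          }
        })
        .end(done);
    });

Iterating over spec.paths in the same way also lets you assert that every defined operation has at least one test, which covers the "all the defined API is test covered" half.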

Related

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit from the older one?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy, for example N-2 versions. By doing so, you end up with 3 side-by-side implementations rather than the explosion some people fear. Refactoring business logic and other components that are not specific to an HTTP API version out of controllers can help reduce code duplication.
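In Express terms, a minimal sketch of that layout might look like the following (the routes, the userService module and the response shapes are all invented for illustration; the same structure applies in Java with one controller class per version):

    // Sketch: one router per API version; version-neutral business logic lives elsewhere.
    var express = require('express');
    var userService = require('./services/user-service'); // hypothetical shared service

    var v1 = express.Router();
    v1.get('/users/:id', function (req, res) {
      userService.find(req.params.id).then(function (user) {
        res.json({ id: user.id, name: user.firstName + ' ' + user.lastName }); // v1 contract
      });
    });

    var v2 = express.Router();
    v2.get('/users/:id', function (req, res) {
      userService.find(req.params.id).then(function (user) {
        res.json({ id: user.id, name: { first: user.firstName, last: user.lastName } }); // v2 contract
      });
    });

    var app = express();
    app.use('/v1', v1);
    app.use('/v2', v2);
    app.listen(3000);

Retiring v1 is then just a matter of deleting the v1 router; nothing in v2 depends on it.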
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but not APIs). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). Using another language such as Java or C# is a facade that creates an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint in. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
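For the unidirectional case, a mapper can be as small as one function per direction. A sketch with invented field names:

    // Sketch: downgrade a v2 user model to the v1 contract (unidirectional v2->v1).
    function toV1User(v2User) {
      return {
        id: v2User.id,
        name: v2User.name.first + ' ' + v2User.name.last // v2 split the name; v1 expects one string
      };
    }

    // Sketch: the additive case going the other way - old stored records
    // simply get a default for the attribute that v2 introduced.
    function toV2User(stored) {
      return {
        id: stored.id,
        name: { first: stored.firstName, last: stored.lastName },
        status: stored.status || 'active' // attribute added in v2; default for older data
      };
    }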
It should be noted that backward compatibility is a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version of the model should be considered just that - a convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is with all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to create clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

Should I use the 'request' module for a new project?

The 'request' module has been a long-time standard for Node.js. Its maintainers have recently deprecated the library.
I am starting a new project, and looking for the best solution to do my networking. I started off using the native 'https' module, but ran into problem after problem. Using the request module seemed to be easy and work just fine. There are also many other libraries to replace the request module.
Generally speaking, you should avoid using deprecated libraries when possible. But does that rule of thumb apply here?
Is it bad to start a new project with the 'request' module? If so, what is the new standard?
I would personally not start a new project with the request() library unless it has a feature that no other library has that I absolutely need or unless I need another module that depends upon the request() module itself.
When I have the freedom to choose, I'm using got() for new projects instead. Choosing from the list of alternatives is a personal decision so you just have to evaluate the type of interface they each have and what features they have. For what I typically do with this type of library, got() seemed simple and clean, built from the ground up with promises, meets my needs and I've had no problems using it.
Axios, node-fetch and superagent have advantages in that you can use a similar interface in both node.js and in the browser. All are popular and in wide use.
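For instance, a minimal sketch with node-fetch, whose interface mirrors the browser's built-in fetch() (the URL is invented):

    // Sketch: node-fetch mirrors the browser fetch() API, so the same code
    // shape works on the server and in the client.
    const fetch = require('node-fetch'); // in the browser, fetch is built in

    fetch('https://example.com/api/user/1')
      .then(res => {
        if (!res.ok) throw new Error('HTTP ' + res.status); // fetch does not reject on 4xx/5xx
        return res.json();
      })
      .then(user => console.log(user))
      .catch(err => console.error(err));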
I tried bent, but didn't click with its programming interface.
I'd personally rather use libraries that have a stated objective to keep evolving with new developments in the language and in nodejs libraries, and to add new features over time, rather than a library that says it will not be adding new features.
I also like using a library that has promise support built in from the core, rather than added on only as a wrapper, since I do all asynchronous programming with promises now.
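To make the difference concrete, a sketch of the same GET with both interfaces (the URL is invented; got's .json() shortcut assumes got v10+):

    // Sketch: callback-style request() vs promise-based got().
    const request = require('request');
    const got = require('got');

    // request(): callbacks
    request('https://example.com/api/user/1', { json: true }, (err, res, body) => {
      if (err) return console.error(err);
      console.log(body);
    });

    // got(): promises, so it composes with async/await
    (async () => {
      try {
        const body = await got('https://example.com/api/user/1').json();
        console.log(body);
      } catch (err) {
        console.error(err);
      }
    })();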
Some other resources in examining the alternatives:
Feature comparison chart (written by the makers of got())
Migrating to got() from request
And, if you want to read about why the request() library has gone into maintenance mode, read here.
In a nutshell, it's an old architecture with tons of features glued onto the side, and because so many modules depend upon it, the maintainers can't really break its API to fix or smooth things out. And, because it's so popular, it is holding back the success of competing solutions that have designed a cleaner interface. So, the decision was made to let the alternatives that have been designed in a more modern way take the mantle going forward; request() will go into maintenance mode and continue to support the modules that depend upon it, but will not try to evolve into a more modern interface.
Request isn’t really deprecated. It’s no longer considering new features or breaking changes, but it is still being maintained. It’s safe to use for the foreseeable future, but how good an idea it is to use it is up to the developer. Not really a right or wrong answer here.
They’ve stopped new work on it because they believe the patterns it uses are out of date, and switching over to modern patterns would effectively make request an entirely new module. So, rather than invalidate thousands of blogs and SO answers by making a massive update, they’ve decided to stop, in order to make space for a new standard to emerge that makes effective use of new features. As of yet, such a standard does not exist.
If you like request, and the outdated patterns it uses are good for your purposes, then go for it. But new patterns and features exist for a reason, and they might be worth exploring.
The author of request mentions that request was written in the old style, when best practices were quite different. He tried to recreate a similar library with better practices and made bent.
I would say the problem with request was not only the underlying code but also its interface: callbacks belong to the Stone Age and make your code ugly.
My personal suggestion would be using RxJS-based solutions:
RxJS's bundled ajax library for frontend code (comes with rxjs),
RxJSx's request library for the backend (@rxjsx/request).
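A sketch of the frontend variant with RxJS's bundled ajax helper (the URL is invented; this runs in the browser, where an XHR implementation is available):

    // Sketch: a GET request as an observable with rxjs/ajax.
    import { ajax } from 'rxjs/ajax';
    import { map } from 'rxjs/operators';

    ajax.getJSON('https://example.com/api/user/1')
      .pipe(map(user => user.name)) // transform the response like any other stream
      .subscribe({
        next: name => console.log(name),
        error: err => console.error(err)
      });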

derbyjs for REST API

First of all, I have seen this question: How to best create a RESTful API in Node.js and it has pointed me towards mers, which has been a great help.
But I have also been reading a lot of good things about derbyjs and it does look really interesting.
So my questions: does it make sense to use derbyjs for creating a REST API (real-time features might be useful in the future, but I'm not 100% certain at this point)? And is it any better or worse than mers?
I am really grateful for any help.
Edit:
If anyone is interested, decided now to use sails.js: http://sailsjs.org/
The strength of Derby is that the same views (i.e. rendering templates into HTML) can be executed on the client as well as on the server. So for building a webapp, you won't have to explicitly code a REST API and then use it from the client-side JavaScript, instead you just write your views and Derby does the rest.
So if you're looking into making a REST API only (as your question states) and no HTML, there is no advantage in using Derby. It's the wrong tool for the job.
It depends on what you're looking for exactly. Derby.js is built on top of Express.js, which has excellent support for creating a REST API. This also means that anything you can do in Express, you can also do in Derby. If you want real-time features and the ability to build out a REST API, Derby.js is an excellent choice. It's also one of the reasons people recommend Derby over something like Meteor (currently Meteor does not support REST endpoints, but it hopefully will in the future, so that's also something to keep an eye on if you're in the market for a real-time framework). However, if you're not looking for a node framework with an emphasis on real-time functionality, Derby is not the right choice. I would, however, recommend looking into Express.js to build a REST API. We currently use it for that purpose and it works really well. There are also a number of libraries and packages that play nicely with Express, so if your needs change in the future, it's easy to find something that works well with it.
Anyway, I would recommend checking out some basic tutorials for how to create a REST API in Express because once you're able to successfully do that, adding some of the real-time features of Derby.js is fairly straightforward.
Basic tutorial on creating a REST API in Express.
http://coenraets.org/blog/2012/10/creating-a-rest-api-using-node-js-express-and-mongodb/
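For orientation, a minimal sketch of such an API in modern Express (the /api/items resource and response bodies are invented; the linked tutorial fills in the MongoDB persistence):

    // Sketch: skeleton of a REST API in Express; handlers are stubs.
    var express = require('express');
    var app = express();
    app.use(express.json()); // parse JSON request bodies

    app.get('/api/items', function (req, res) {
      res.json([{ id: 1, name: 'example' }]); // list the collection
    });

    app.get('/api/items/:id', function (req, res) {
      res.json({ id: req.params.id, name: 'example' }); // fetch one item
    });

    app.post('/api/items', function (req, res) {
      res.status(201).json({ id: 2 }); // create an item from req.body
    });

    app.listen(3000);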

CouchDB - share functions across views, across design documents, across databases

Ok, here's the thing.
I have a good JS background, had my share of JS in the past, and have lots of cool bare-bones tools I take with me from project to project that act like a library.
I'm trying to formulate work with CouchDB.
Now, after getting used to the luxury of cool tools that you wrote yourself and that simplify the language for you - I find it a little frustrating to write many things in a bare-bones manner.
I'm looking for a way to load into the database context a limited, highly efficient and generic set of tools that focuses on the pure language and makes working with the language much more groovy (and gosh, no, I'm not talking about jQuery or any of the even bulkier libraries out there).
If, on top of that, there were a way to add some of my own logic tools (BL model functions) to the execution context of the CouchDB JS engine - it would be a great and admirable power, and would make CouchDB the new home for a JavaScript-er like me.
Maybe I'm aiming too low.
I'd be satisfied with a way to allocate a set of extensions even for a specific database, and I don't mind doing it for every database separately. Or worse - adding it to every design document, so I can teach, for example, several views in the same design doc what a Person is and what a Worker is, and use their methods to retrieve data from them according to logic, in a reusably coded manner.
Can anybody point me the way?
Whatever way you can point me - I'll be very verrry grateful.
If there are ways for all of these - then great.
Trust me to know what logic belongs in what layer...
You open my possibilities - I promise to use them :D
CouchDB now supports code sharing as CommonJS modules.
http://docs.couchbase.org/couchdb-release-1.1/index.html#couchdb-release-1.1-commonjs
http://caolanmcmahon.com/posts/commonjs_modules_in_couchdb
In this way, you can share your JavaScript modules between views, lists, and shows in the same design doc (server-side).
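A sketch of what that looks like in a design document (shown here as a JS object; the person module and workers view are invented for the example - note that map functions can only require() modules stored under views/lib):

    // Sketch: a CommonJS module shared by views in one design doc (CouchDB 1.1+).
    {
      _id: '_design/people',
      views: {
        lib: {
          // module source is stored as a string inside the design doc
          person: "exports.isWorker = function (doc) { return doc.type === 'person' && !!doc.employer; };"
        },
        workers: {
          map: "function (doc) { var person = require('views/lib/person'); if (person.isWorker(doc)) { emit(doc.name, null); } }"
        }
      }
    }

Shows and lists can require() modules stored anywhere in the design doc; only map/reduce functions are restricted to views/lib.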
Also, you can load these modules on the browser side with this library:
https://github.com/couchapp/couchapp/blob/master/couchapp/templates/vendor/couchapp/_attachments/jquery.couch.app.js
You also might want to look at Kanso:
http://kansojs.org/
It does a really good job of making your JavaScript work seamlessly between the server and client.
You can find some helpful tools here: https://github.com/vivekpathak/casters
The running examples and test cases may particularly help you.

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract first design to be a more useful way of thinking - what messages get passed around at my interfaces). And you can also follow the 'Build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to be able to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop that code directly.
There are two tools, BizUnit and BizUnit Extensions, that give some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful. My team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned the ID (dynamically generated), we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past. I'm hoping to get an XSD out that will aid IntelliSense. There were also some snippets for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification- or behavior-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE-integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This makes for tests that are easier to write (using VS Intellisense), more flexible and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.
