Supporting multiple versions of models for different REST API versions - backwards-compatibility

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!

I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy, for example supporting N-2 versions. That way you end up with at most three side-by-side implementations, rather than the explosion some people fear. Refactoring business logic and other components that are not specific to an HTTP API version out of the controllers helps reduce code duplication.
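Since the question mentions JavaScript: here is a minimal TypeScript/Express sketch of that layout (the routes, the model, and the userService stub are all hypothetical), with the version-neutral logic factored out so the two controllers stay fully independent:

```typescript
// Hypothetical sketch: one controller (router) per API version, mounted side
// by side, with the version-neutral business logic factored out of both.
import express from "express";

interface User { id: string; firstName: string; lastName: string; }

// Shared, version-neutral service (stubbed lookup for the sketch).
const userService = {
  async find(id: string): Promise<User> {
    return { id, firstName: "Ada", lastName: "Lovelace" };
  },
};

// v1 controller: serves the old wire model (a single "name" field).
const v1 = express.Router();
v1.get("/users/:id", async (req, res) => {
  const u = await userService.find(req.params.id);
  res.json({ id: u.id, name: `${u.firstName} ${u.lastName}` });
});

// v2 controller: serves the new wire model; it copies v1's shape rather than
// inheriting from it, so either version can be sunset independently.
const v2 = express.Router();
v2.get("/users/:id", async (req, res) => {
  const u = await userService.find(req.params.id);
  res.json({ id: u.id, firstName: u.firstName, lastName: u.lastName });
});

const app = express();
app.use("/v1", v1);
app.use("/v2", v2);
app.listen(3000);
```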
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but not APIs). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). Using another language such as Java or C# is a facade that creates an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch problem. There are other practical challenges too. For example, you can't uninherit a method, which complicates the issue of sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
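As a rough illustration of the mapper idea (the models and field names below are invented), a unidirectional v2->v1 mapping, plus a lossy reverse direction that needs a documented convention, might look like:

```typescript
// Hypothetical wire models for two versions of the same resource.
interface UserV1 { id: string; name: string; }
interface UserV2 { id: string; firstName: string; lastName: string; }

// Unidirectional mapper (v2 -> v1): serve old clients from the new model.
function toV1(u: UserV2): UserV1 {
  return { id: u.id, name: `${u.firstName} ${u.lastName}` };
}

// The reverse direction is lossy, so it needs a documented convention
// (here: everything after the first space is treated as the last name).
function toV2(u: UserV1): UserV2 {
  const [firstName, ...rest] = u.name.split(" ");
  return { id: u.id, firstName, lastName: rest.join(" ") };
}
```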
It should be noted that backward compatibility is a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version of the model should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is with all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

What is the major difference when building an IoT solution with middleware versus libraries versus custom development?

Let's imagine there are many street lights, cameras for illegal parking, and assorted sensors, and we need to build a solution that integrates them all. What I found is that they use different protocols (TCP, serial) and data formats (binary, XML, text). A colleague recommended approaches like middleware or libraries, but I doubt whether they are efficient to maintain.
Middleware is a strong tool for an IoT solution; it provides the connections between the different layers. It is easy to use, but multiple adjustments may be needed to meet the middleware's requirements.
You can use a library as a joint. If a suitable library exists, you can easily connect components with minimal extra programming. You might have to use multiple libraries, and additional libraries could be needed when new, unsupported components are added.
Custom development is the traditional way. It is a time-consuming job, but if you code everything yourself, you don't need any other help.
I've heard that declarative backend software like Interactor might be another solution; you can construct the connections and implement your own logic with fewer resources.
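Whichever option you choose, the maintenance cost usually hinges on isolating the protocol and format differences behind a small adapter interface, so the rest of the system never sees TCP vs. serial or binary vs. XML. A rough TypeScript sketch of that idea (all names here are hypothetical):

```typescript
// A normalized reading, regardless of the device's native protocol/format.
interface Reading {
  deviceId: string;
  timestamp: Date;
  payload: unknown;
}

// Each transport/format pair (TCP+XML camera, serial+binary sensor, ...)
// gets its own adapter behind one common interface.
interface DeviceAdapter {
  connect(): Promise<void>;
  readings(): AsyncIterable<Reading>;
}

// The integration layer only ever sees Readings, so adding a new kind of
// device means writing one new adapter, not touching the whole system.
async function ingest(
  adapters: DeviceAdapter[],
  sink: (r: Reading) => void,
): Promise<void> {
  await Promise.all(
    adapters.map(async (adapter) => {
      await adapter.connect();
      for await (const reading of adapter.readings()) sink(reading);
    }),
  );
}
```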

Providing documentation with Node/JS REST APIs

I'm looking to build a REST API using Node and Express and I'd like to provide documentation with it. I don't want to craft this by hand and it appears that there are solutions available in the forms of Swagger, RAML and Api Blueprint/Apiary.
What I'd really like is to have the documentation auto-generated from the API code, as is possible in .NET land with Swashbuckle or the Microsoft-provided solution, but those are made possible by strong typing and reflection.
For the JS world, it seems the correct option is to use the Swagger/RAML/API Blueprint markup to define the API and then generate the documentation and scaffold the server from that. The former seems straightforward, but I'm less sure about the latter. What I've seen of the server code generation for all of these options seems very limited. There needs to be some way to separate the auto-generated code from the manual code so that the definition can be updated easily, and I've seen no sign of or discussion about that. It doesn't seem like an insurmountable problem (I'm much more familiar with .NET than JS, so I could easily be missing something), and there is mention of this issue, and of solutions being worked on, in a Stack Overflow question from over a year ago.
Can anyone tell me if I'm missing/misunderstanding anything and if any solution for the above problem exists?
The initial version of swagger-node-express did just this--you would define some metadata for the routes, models, etc., and the documentation would be auto-generated from it. Given how dynamic JavaScript is, this became a bit cumbersome for many to use, as it required you to keep the metadata up to date against the models in a somewhat decoupled manner.
Fast-forward, and the latest swagger-node project takes an alternative approach, which can be considered in line with "generating documentation from code" in a sense. This project (like swagger-inflector for Java and Connexion for Python) takes the approach that the Swagger specification is the DSL for the API, and the routing logic is handled by what is defined in the swagger document. From there, you simply implement the controllers.
If you treat the swagger specification "like code" then this is a very efficient way to go--the documentation can literally never be out of date, since it is used to construct all routes, validate all input variables, and connect the API to your business layer.
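As a sketch of what that looks like in practice (the operation name and response shape are invented, and the exact way parameters are exposed varies by framework), the controller you implement is just a plain handler that the spec's routing metadata points at:

```typescript
// Hypothetical controller in a spec-first setup: the swagger document's
// routing metadata (e.g. an operationId such as "getUserById") points at
// this function, and the framework validates the request against the spec
// before it ever reaches us. Parameter access is shown Express-style here;
// the exact mechanism differs per framework.
import { Request, Response } from "express";

export function getUserById(req: Request, res: Response): void {
  const id = req.params.id; // already validated against the spec's schema
  // Stubbed business-layer call; the response shape is also spec-defined.
  res.json({ id, name: "Ada Lovelace" });
}
```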
While true code generation, such as what is available from the swagger-codegen project, can be extremely effective, it does require some clever integration with your code after you initially construct the server. That consideration is completely removed from the workflow with the three projects above.
I hope this is helpful!
My experience with APIs and dynamic languages is that the emphasis is on verification rather than code generation.
For example, when using a compiled language I generate artifacts from the API spec and use them to enforce correctness. Round-tripping is supported by generating interfaces instead of concrete classes.
With a dynamic language, the spec is used at test time to guarantee both that the entire defined API is covered by tests and that the responses conform to the spec (I tend not to validate requests, because of Postel's law, but that is possible too).
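A minimal sketch of that verification style, assuming the schema has been extracted from the OpenAPI document and the endpoint exists (both are hypothetical here), using the ajv JSON Schema validator:

```typescript
// Sketch: validate a live response against a schema taken from the spec.
import Ajv from "ajv";

const ajv = new Ajv();

// In practice this would be extracted from the OpenAPI document; it is
// written inline here to keep the sketch self-contained.
const userResponseSchema = {
  type: "object",
  required: ["id", "name"],
  properties: { id: { type: "string" }, name: { type: "string" } },
};

async function checkResponseConformance(): Promise<void> {
  const validate = ajv.compile(userResponseSchema);
  const res = await fetch("http://localhost:3000/v1/users/42"); // assumed endpoint
  const body = await res.json();
  if (!validate(body)) {
    throw new Error(`Response violates spec: ${JSON.stringify(validate.errors)}`);
  }
}
```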

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving on:
There is a big need for NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual dates for when the transition will be finished. (Main reason.)
No need for Core Data support in a networking library, as we don't need fully functional offline data storage.
The new concept of response/request descriptors gives me headaches: they don't support different parameters in path patterns (e.g. an access-token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the object manager's features as a facade.
I. The biggest loss for me is RestKit's object mapping process.
Could you recommend standalone libraries that you use which have shown themselves to be flexible and stable?
II. As I said, I don't need fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the last OS release.
Have you used it, and what's your strategy?
Does it return cached API responses when the network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could share advice about the architecture they use in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you'd miss.
II. The answer to your caching requirements depends heavily on context. However, for my recent functional requirements, caching a view model for a particular controller's screen, and caching only the reference data returned by APIs, allowed me to keep the application logic relatively simple while giving the user some continuity. A simple error notification for connectivity issues can be handled in a cross-cutting manner.
III. One architectural thought relevant to this: make sure the APIs the app depends on provide data according to the app experience. This lets your app focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to their dependencies such as data. This has the further benefit of reducing the chattiness of the app.

ServiceStack Development Tooling?

Not sure if this is the most effective place to ask this question. Please redirect if you think it is best posted elsewhere to reach a better audience.
I am currently building some tooling in Visual Studio 2013 (using NuPattern) for a project that implements standard REST services using the ServiceStack framework. That is, the tooling helps you implement REST services that meet a set of design rules and guidelines for good REST service design (in this case, those advocated by Apigee).
Based on some simple configuration by the service developer, for each resource they wish to expose as a REST endpoint, with any number of named verbs (of type GET, PUT, POST, or DELETE), the tooling generates the following code files, with conventional names and folder structure, into the projects of your own solution in Visual Studio (all in C# at this point):
The service class, and the service interface containing each named verb.
Both the request DTO and response DTOs, containing each named field.
The validator classes for each request DTO, which validates each request DTO field.
A manager class (and interface) that handles the actual calls with data unwrapped from DTOs.
Integration Tests that verify each verb with common edge test cases, and verifies status codes, web exceptions, basic connectivity.
Unit Tests for each service and manager class, that verify parameters and common edge cases, and exception handling.
etc.
The toolkit is proving extremely useful for getting directly to the inner coding of the actual service by taking care of the SS plumbing in a consistent manner. From there it is basically up to you what you do with the data passed to you from the request DTOs.
Basically, once the service developer names the resource and chooses the REST verbs they want to support (typically any of the common ones: Get, List, Create, Update, Delete), they jump straight to implementing the actual code that does the good stuff, rather than worrying about coding up all the types around the web operations and plumbing them into the SS framework. Of course, we support nested routes and the like, so that your REST API can evolve appropriately.
The toolkit is evolving as more is learned about building REST services with ServiceStack, and as we want to add more flexibility to it.
Since so much value is being discovered with this toolkit in our specific project, I wanted to see if others in the ServiceStack community (particularly those new to it, or old hands at it) would see any value in us making it open source and letting the community evolve it with their own expertise, to help others move forward more quickly with ServiceStack.
(And, of course, selfishly give us a chance to pay it forward, out of respect for the many contributions others have selflessly made in the ServiceStack communities that have helped us move forward.)
Let us know what you think; we can post a video demonstrating the toolkit as it is now, so you can see what the developer experience currently looks like.
Video walkthrough of the VS.NET Extension
A video walkthrough of the workflow is available at:
http://www.youtube.com/watch?v=ejTyvKba_vo
Toolkit Project
The toolkit is now available here:
https://github.com/jezzsantos/servicestacktoolkit

Data Access Layer in Asp.Net

I'm afraid I'm overdoing things here.
We recently started a .NET project containing different class libraries for the DAL, services, and DTOs.
The question is about our DAL: we want a clean and easily maintained data access layer, and we are considering Entity Framework 4.1.
We're still not clear whether to opt for plain ADO.NET using the DAO/DAOImpl methodology, or for Entity Framework.
Could anyone please suggest the best approach?
It depends on how much work you want to put into creating your own customized DAL. Hand-rolling it with ADO.NET and your own implementations always gives you the most control, but it also means maintaining and optimizing it yourself and handling complex cases such as concurrency, caching, and the mapping between your BOs, the DAL, and the database.
If you want to concentrate more on business value and functionality, you might decide to go with Entity Framework (4.3 is now released, with 5.0 to come). The advantage is that you use a DAL that has been carefully tested and that already contains solutions for concurrency, caching, and mapping.
But I would strongly suggest using the Repository and Unit of Work patterns on top of it, to abstract the use of Entity Framework away from your other layers. You then have the option to later change the underlying technology completely without any impact on the other layers (for example, you could replace EF with your own ADO.NET implementation if the performance turns out not to be as good as it should be).
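A minimal, language-agnostic sketch of those two patterns (shown in TypeScript purely for brevity; in a .NET project these would be C# interfaces, and all the names are hypothetical):

```typescript
// The other layers depend only on these interfaces; whether they are backed
// by EF, by hand-rolled ADO.NET, or by an in-memory fake for tests is an
// implementation detail hidden behind them.
interface Order { id: number; total: number; }

interface OrderRepository {
  findById(id: number): Promise<Order | null>;
  add(order: Order): Promise<void>;
}

interface UnitOfWork {
  orders: OrderRepository;
  commit(): Promise<void>; // one transactional boundary for all repositories
}
```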
It depends on the type of application you need to build and on its performance requirements. Using EF can really reduce your work and give you much quicker results. It also depends on the development team's capabilities: if you only have senior developers and architects working on the project, then you will create your own DAL easily, but for beginners it is really hard to implement a good, optimized, and robust DAL.
I hope that helps!
I've been using the ADO.NET and DTO combination in my DAL for as long as I can remember, and I love the fact that I control the entire process of creating the entities and methods. However, that comes at the price of having to write classes for every entity and methods for every stored procedure. Which I don't mind, but recently I discovered PLINQO for LINQ to SQL and I'm loving it. It makes it easy to create and update classes based on your database schema while allowing a high level of customization. It's basically LINQ to SQL on steroids.
I also liked NHibernate, but I think it has a steeper learning curve than PLINQO.
I'd give PLINQO a try if I were you.
