I was wondering whether the namespaces themselves can be grouped?
Our REST server project has a highly decentralized structure (along the lines of a Redux fractal pattern) and every feature has its own namespace. This predictably has led to many namespaces, and the swagger page is getting rather full now.
If this is not achievable, I guess we can live with it, or consider emitting only the swagger json to be consumed by the official Swagger UI that we can run in a separate server. But I'd much prefer a restplus-y solution, since that represents the least amount of code friction.
The underlying OpenAPI Specification has a concept of tags. The namespace feature in Flask-RESTPlus assigns these names as tags on the path definitions, which is how you get the grouping in a Swagger UI. The specification does not offer any hierarchical grouping mechanism, so Flask-RESTPlus doesn't offer such a feature either.
You could consider a different strategy for assigning namespaces/tags to create more manageable groupings, split the API across multiple Swagger UI pages/sites, etc. Sounds like there is no way around your Swagger UI needing to render a very large number of API methods, so making it more understandable via general content structuring may be your best approach.
Related
Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy, for example supporting the latest N-2 versions. By doing so, you end up with three side-by-side implementations rather than the explosion some people fear. Refactoring business logic and other components that are not specific to an HTTP API version out of the controllers can help reduce code duplication.
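To make the "copy, then evolve" idea concrete, here is a minimal sketch using Express routers in TypeScript (the question mentions Java and JavaScript; the routes, models, and userService below are placeholders, and the same layout applies to one Java controller class per version):

```typescript
import express from 'express';

// Version-neutral business logic, factored out of the controllers (names are made up).
const userService = {
  find: (id: string) => ({ id, firstName: 'Ada', lastName: 'Lovelace' }),
};

// v1 controller: returns the original representation.
const v1 = express.Router();
v1.get('/users/:id', (req, res) => {
  const user = userService.find(req.params.id);
  res.json({ id: user.id, name: `${user.firstName} ${user.lastName}` });
});

// v2 controller: started as a copy of v1 and evolved independently (no inheritance between versions).
const v2 = express.Router();
v2.get('/users/:id', (req, res) => {
  const user = userService.find(req.params.id);
  res.json({ id: user.id, firstName: user.firstName, lastName: user.lastName });
});

// Both versions are mounted side by side; sunsetting /api/v1 later means removing one mount.
const app = express();
app.use('/api/v1', v1);
app.use('/api/v2', v2);
app.listen(3000);
```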
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but no APIs). HTTP is the API. HTTP has methods, not verbs; think of it as Http.get(). Using another language such as Java, C#, etc. is a facade that creates an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch. There are other practical challenges too. For example, you can't un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint in. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
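As an illustration, a unidirectional Mapper can be as small as the following sketch (the model shapes are invented for the example):

```typescript
// Hypothetical model shapes; v2 split the single "name" field into first/last and added "locale".
interface UserV1 { id: string; name: string }
interface UserV2 { id: string; firstName: string; lastName: string; locale: string }

// Unidirectional mapping (v2 -> v1), applied at the edge of the v1 controller.
const toV1 = (u: UserV2): UserV1 => ({ id: u.id, name: `${u.firstName} ${u.lastName}` });

// The other direction, if needed: additive attributes simply get a default value.
const toV2 = (u: UserV1): UserV2 => {
  const [firstName, ...rest] = u.name.split(' ');
  return { id: u.id, firstName, lastName: rest.join(' '), locale: 'en-US' };
};
```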
It should be noted that backward compatibility is a misnomer in HTTP; there really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version of the model should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is for all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.
The Kendo UI for jQuery library has an easy-to-configure DataSource that can be used to wrap a REST API.
Is there something similar (or planned) in Kendo UI for Angular 2? Or is the recommended way to use services that wrap around @angular/http and use RxJS Observables?
Nope, they consider Angular2 to have sufficient functionality, so your last supposition is correct.
I'm afraid that we do not plan to have a DataSource component. We believe that having a DataSource like the one in Kendo UI for jQuery does not fit in the NG2 context. The framework already provides most of the DataSource functionality, such as the ability to fetch data, change tracking, etc. There is also a plethora of libraries and patterns for working with data, so having a single component to handle all of those seems more of an unnecessary constraint.
However, we do plan to make helper functions available in order to streamline the serialization of the Grid data operation descriptors and the operation handling, providing better flexibility and integration with the rest of the NG2 ecosystem. Some functions already exist for sorting and paging with OData, as well as for generating a comparer function for in-memory processing (this is demonstrated in this sample).
https://github.com/telerik/kendo-angular/issues/45
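For reference, the helper functions mentioned above look roughly like this sketch, assuming the kendo-data-query helper package (the package name and exact API have shifted between releases):

```typescript
import { process, toODataString, State } from '@progress/kendo-data-query';

// A Grid data-operation descriptor: first page of 10 items, sorted by name.
const state: State = { skip: 0, take: 10, sort: [{ field: 'name', dir: 'asc' }] };

// Server-side processing: serialize the descriptors into an OData query string.
const url = `https://example.com/odata/Products?${toODataString(state)}`;

// In-memory processing: apply the same descriptors to data already on the client.
const result = process([{ name: 'b' }, { name: 'a' }, { name: 'c' }], state);
console.log(url, result.data, result.total);
```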
Over the past couple of years or so I have revamped most of my Notes Applications for XPages and of late made extensive use of Java objects in Repeat Controls etc.
I am now implementing jQuery DataTables, where appropriate, in an attempt to provide the same functionality as Notes views. My applications vary from a few document records to several thousand.
Most of the DataTables tutorials etc. seem to imply or recommend the use of REST services for DataTables. What is the reason for this, when I can simply drop my existing Java objects into repeat controls and then access the back-end documents via links etc.?
Sorry if this is not a coding question, but I am clearly missing something fundamental in my basic knowledge. Any advice would be appreciated.
The short version is that jQuery DataTables are driven purely by client-side JavaScript (CSJS), so a "normal" data transport such as a REST service (e.g. the xp:restService you describe) is the standard, ubiquitous approach. jQuery itself has no direct knowledge of any underlying Java objects and doesn't care what backs the service.
If you were using an xp:repeat control, you could bind it to a List or other iterable collection from a backing Java class/bean. That makes far more sense if that's how you'll present the data. The logic shift is that any time you update your xp:repeat, you must send an AJAX (XHR) request that re-renders that xp:repeat tag, whereas a jQuery update from a REST service fetches only the data response. There is some overhead to using AJAX to refresh part of the page (it literally replaces part of the existing DOM with the newly fetched HTML and parses the content), but at smaller scales it's not a huge amount.
Using a REST service means that:
your front-end implementation will be more consistent with the rest of the web development industry
your back-end logic will be segregated, (ideally) making it easier to port, migrate, or document
There's nothing wrong with implementing an xp:repeat (or friends) with backing Java on XPages, especially if you're using primarily XPages controls.
There are many ways to implement a RESTful service in XPages, and the reasoning behind going for RESTful APIs in the XPages runtime is something both I and many others have blogged about.
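For what it's worth, the REST-backed approach ends up looking something like this sketch on the client side (the service URL and column names are placeholders for whatever your xp:restService emits, and jQuery/DataTables are assumed to be loaded as page resources):

```typescript
// Assumes jQuery and DataTables are already loaded as resources on the XPage.
declare const $: any;

$('#ordersTable').DataTable({
  ajax: {
    url: '/myApp.nsf/rest.xsp/orders', // hypothetical xp:restService endpoint
    dataSrc: ''                        // the service returns a plain JSON array
  },
  columns: [
    { data: 'orderNo', title: 'Order #' },
    { data: 'customer', title: 'Customer' },
    { data: 'total', title: 'Total' }
  ]
});
```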
I'm looking to build a REST API using Node and Express and I'd like to provide documentation with it. I don't want to craft this by hand and it appears that there are solutions available in the forms of Swagger, RAML and Api Blueprint/Apiary.
What I'd really like is to have the documentation auto-generated from the API code, as is possible in .NET land with Swashbuckle or the Microsoft-provided solution, but those are made possible by strong typing and reflection.
For the JS world it seems like the correct option is to use the Swagger/RAML/API Blueprint markup to define the API and then generate the documentation and scaffold the server from that. The former seems straightforward, but I'm less sure about the latter. What I've seen of the server code generation for all of these options seems very limited. There needs to be some way to separate the auto-generated code from the manual code so that the definition can be updated easily, and I've seen no sign of or discussion about that. It doesn't seem like an insurmountable problem (I'm much more familiar with .NET than JS, so I could easily be missing something), and there is mention of this issue and solutions being worked on in a previous Stack Overflow question from over a year ago.
Can anyone tell me if I'm missing/misunderstanding anything and if any solution for the above problem exists?
The initial version of swagger-node-express did just this: you would define some metadata for the routes, models, etc., and the documentation would auto-generate from it. Given how dynamic JavaScript is, this became a bit cumbersome for many to use, as it required you to keep the metadata up to date against the models in a somewhat decoupled manner.
Fast forward, and the latest swagger-node project takes an alternative approach, which can be considered in line with "generating documentation from code" in a sense. This project (like swagger-inflector for Java and connexion for Python) takes the approach that the swagger specification is the DSL for the API, and the routing logic is handled by what is defined in the swagger document. From there, you simply implement the controllers.
If you treat the swagger specification "like code" then this is a very efficient way to go--the documentation can literally never be out of date, since it is used to construct all routes, validate all input variables, and connect the API to your business layer.
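As a rough illustration of that workflow (the path, controller name, and operationId below are invented): the swagger document points an operation at a controller via x-swagger-router-controller and operationId, and the controller module contains only the handler.

```typescript
// api/controllers/orders.ts (hypothetical) -- referenced from swagger.yaml via
//   x-swagger-router-controller: orders
//   operationId: getOrder
import { Request, Response } from 'express';

export function getOrder(req: Request, res: Response): void {
  // The swagger middleware has already routed and validated the request against the spec;
  // validated parameters are exposed on req.swagger.
  const id = (req as any).swagger.params.id.value;
  res.json({ id, status: 'shipped' }); // placeholder business logic
}
```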
While true code generation, such as what is available from the swagger-codegen project, can be extremely effective, it does require some clever integration with your code after you initially construct the server. That consideration is completely removed from the workflow with the three projects above.
I hope this is helpful!
My experience with APIs and dynamic languages is that the emphasis is on verification rather than code generation.
For example, when using a compiled language I generate artifacts from the API spec and use them to enforce correctness. Round-tripping is supported via the generation of interfaces instead of concrete classes.
With a dynamic language, the spec is used at test time to guarantee both that the whole defined API is covered by tests and that the responses conform to the spec (I tend not to validate requests, because of Postel's law, but that is possible too).
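A minimal sketch of that style of verification, assuming a Swagger/OpenAPI 2.0 document, Jest-style test globals, and the ajv JSON Schema validator (the file paths and routes are made up):

```typescript
import Ajv from 'ajv';
import fetch from 'node-fetch';
import spec from './swagger.json'; // hypothetical location of the API spec

test('GET /users/{id} response conforms to the spec', async () => {
  // Pull the declared 200-response schema straight out of the spec document.
  const schema = (spec as any).paths['/users/{id}'].get.responses['200'].schema;

  const res = await fetch('http://localhost:3000/users/42');
  const body = await res.json();

  // The same document that drives the docs (and possibly the routing) now drives the assertion.
  const ajv = new Ajv();
  expect(ajv.validate(schema, body)).toBe(true);
});
```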
I communicate with the server through JSON, which in both Node.js and ActionScript is represented as objects (serialized to and from strings).
I use those objects in my client by reading/modifying them, and I also create secondary objects (from classes) based on what came from the server.
I have two options for designing my client, and I am stuck deciding which of them is more flexible/future-proof.
1. Keep the data as it comes, create many methods to modify the objects, and keep the secondary objects somewhere separate.
2. Convert the data into instances of classes, where each class has its own group of methods instead of piling the methods in the same place.
Usually I go with 2 because OOP is delicious, but going with 1 seems much simpler in terms of the amount of code.
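For illustration, option 2 would look something like this (the Order shape is just an example):

```typescript
// Example shape of what the server sends.
interface OrderJson {
  id: string;
  items: { price: number; qty: number }[];
}

// Option 2: wrap the raw server data in a class so the behaviour lives next to the data.
class Order {
  constructor(private readonly raw: OrderJson) {}

  static fromJson(json: string): Order {
    return new Order(JSON.parse(json) as OrderJson);
  }

  get total(): number {
    return this.raw.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }

  toJson(): string {
    // Serialize back to the wire format the server expects.
    return JSON.stringify(this.raw);
  }
}

// Usage: const order = Order.fromJson(payloadFromServer); order.total;
```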
I guess my problem is that I can't figure out whether my client is basically a View (from MVC) with the server as the Controller (also from MVC), or whether my client and server are two independent/separate projects that communicate, in which case I should consider the client an MVC project in itself.
I would appreciate your 2 cents.
From your question it's not entirely clear how 1. and 2. differ, but it looks like 1. is tightly coupled while 2. has better separation of concerns.
It depends on your application. Do you need to create a client-heavy app with rich UI/UX elements, or maybe a mobile app where bandwidth is limited? If the answer is yes, then go with the second approach (2.): build your own MVC-like structure or use existing MV* libraries such as Ember, Angular, Backbone, Knockout, etc.
If you need SEO support and don't have much front-end code, then rendering on the server side is still an option. Even with this approach, an ORM/ODM like Mongoose can come in handy.
PS: JavaScript doesn't really have classes; objects inherit from other objects. You can use prototypal inheritance patterns for that.