Is there a one-to-one replacement for Kendo DataSource? - kendo-datasource

The Kendo UI for jQuery library has an easy-to-configure DataSource that can be used to wrap a REST API.
Is there something similar (existing or planned) in Kendo UI for Angular 2? Or is the recommended way to use services that wrap @angular/http and return RxJS Observables?

Nope, the Kendo team considers Angular 2 itself to provide enough of that functionality, so your last supposition is correct.
I'm afraid that we do not plan to have a DataSource component. We
believe that having a DataSource like the one in Kendo UI for jQuery
does not fit in the NG2 context. The framework already provides most
of the DataSource functionality, such as the ability to fetch data,
change tracking, etc. There is also a plethora of libraries and
patterns for working with data. Thus, having a single component to
handle all of those seems more of an unnecessary constraint.
However, we do plan to make helper functions available in order to
streamline the serialization of the Grid data operation descriptors
and the handling of those operations, providing better flexibility and
integration with the rest of the NG2 ecosystem. Some functions already
exist for sorting and paging for OData, as well as for generating a
comparer function for in-memory processing (this is demonstrated in this sample).
https://github.com/telerik/kendo-angular/issues/45
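For reference, a minimal sketch of that recommended approach - an Angular 2 service wrapping @angular/http and exposing an RxJS Observable - might look like the following. The Product interface, the endpoint URL, and the paging parameters are illustrative assumptions, not part of any Kendo API.

// A minimal sketch of a data service in place of a Kendo DataSource.
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';

export interface Product {
  id: number;
  name: string;
}

@Injectable()
export class ProductService {
  constructor(private http: Http) {}

  // Fetch a page of products; the REST API is assumed to understand
  // OData-style $skip/$top paging parameters.
  getProducts(skip = 0, take = 20): Observable<Product[]> {
    return this.http
      .get(`/api/products?$skip=${skip}&$top=${take}`)
      .map((res: Response) => res.json() as Product[]);
  }
}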

Related

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy; for example, support the last N-2 versions. By doing so, you end up with 3 side-by-side implementations rather than the explosion some people expect. Refactoring business logic and other components that are not specific to an HTTP API version out of controllers helps reduce code duplication.
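As a rough illustration of the controller-per-version idea, here is my own sketch using Express with TypeScript (not a framework the question names); the routes and response shapes are made up.

import express, { Request, Response } from 'express';

// One controller class per API version; each owns its version's contract.
class UsersControllerV1 {
  list(req: Request, res: Response): void {
    // v1 contract: a flat "name" field
    res.json([{ id: 1, name: 'Ada Lovelace' }]);
  }
}

class UsersControllerV2 {
  list(req: Request, res: Response): void {
    // v2 contract: structured name; no inheritance from v1, so either
    // version can change or be sunset independently
    res.json([{ id: 1, firstName: 'Ada', lastName: 'Lovelace' }]);
  }
}

const app = express();
const v1 = new UsersControllerV1();
const v2 = new UsersControllerV2();

app.get('/v1/users', (req, res) => v1.list(req, res));
app.get('/v2/users', (req, res) => v2.list(req, res));

app.listen(3000);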
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but no APIs). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). Using another language such as Java, C#, etc. is a facade that creates an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
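A minimal sketch of such a one-way (v2 -> v1) mapper could look like this; the model shapes are hypothetical and only serve the example.

// Hypothetical model shapes for illustration only.
interface UserV2 { id: number; firstName: string; lastName: string; }
interface UserV1 { id: number; name: string; }

// Older clients still receive the v1 contract, derived from the current model.
function toUserV1(user: UserV2): UserV1 {
  return { id: user.id, name: `${user.firstName} ${user.lastName}` };
}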
It should be noted that backward compatibility is a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease by which a new version of a model can be converted to/from an old version of the model should be considered just that - convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is for all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

Grouping namespaces

I was wondering whether the namespaces themselves can be grouped?
Our REST server project has a highly decentralized structure (along the lines of a Redux fractal pattern) and every feature has its own namespace. This predictably has led to many namespaces, and the swagger page is getting rather full now.
If this is not achievable, I guess we can live with it, or consider emitting only the Swagger JSON to be consumed by the official Swagger UI running on a separate server. But I'd much prefer a restplus-y solution, since that represents the least amount of code friction.
The underlying OpenAPI Specification has a concept of tags. The namespace feature in Flask-RESTPlus assigns these names as tags on the path definitions, and that is how you get the grouping in a Swagger UI. The specification does not offer any hierarchical grouping mechanism, so Flask-RESTPlus doesn't offer such a feature either.
You could consider a different strategy for assigning namespaces/tags to create more manageable groupings, split the API across multiple Swagger UI pages/sites, etc. Sounds like there is no way around your Swagger UI needing to render a very large number of API methods, so making it more understandable via general content structuring may be your best approach.

REST Services or Repeat Control with Java Object in a Data Table

Over the past couple of years or so I have revamped most of my Notes Applications for XPages and of late made extensive use of Java objects in Repeat Controls etc.
I am now implementing, where appropriate, jQuery DataTables in an attempt to reproduce the same functionality as Notes views. My applications vary from a few document records to several thousand.
Most of the DataTables tutorials seem to imply or recommend the use of REST services for DataTables. What is the reason for this when I can simply drop my existing Java objects into repeat controls and then access the back-end documents via links etc.?
Sorry if this is not a coding question, but I am clearly missing something fundamental in my basic knowledge. Any advice would be appreciated.
The short version is that jQuery DataTables are built purely by (CS)JS, meaning any "normal" transport of data like a REST service (such as the xp:restService you're describing) is pretty standard and ubiquitous. jQuery itself has no direct knowledge of any underlying Java objects and doesn't care what backs the service.
If you were using an xp:repeat control, you could bind it to a backing List or other iterable collection from a backing Java class / bean. This would make far more sense if that's how you'll present the data. The logic shift is that any time you update your xp:repeat, you must send an AJAX (XHR) request that re-renders the markup wrapping that xp:repeat tag, whereas a jQuery update from a REST service fetches only the data response. There is some overhead to using AJAX to refresh part of the page (which literally replaces part of the existing DOM with the newly fetched HTML and parses the content), but at smaller scales it's not a huge amount.
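For illustration, a minimal sketch of a DataTable fed by a REST service might look like the following, assuming jQuery and the DataTables plugin are already loaded on the page (e.g. as XPages resources); the endpoint URL and column names are made up.

// Declared here only so the sketch type-checks standalone; on a real page
// jQuery is provided globally.
declare const $: any;

$(() => {
  $('#ordersTable').DataTable({
    ajax: {
      url: '/myapp.nsf/orders.xsp/ordersService', // hypothetical xp:restService endpoint
      dataSrc: ''                                 // the service returns a plain JSON array
    },
    columns: [
      { data: 'customer' },
      { data: 'total' },
      { data: 'status' }
    ]
  });
});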
Using a REST service means that:
your front-end implementation will be more consistent with the majority of the rest of the web development industry
your back-end logic will be segregated, (ideally) making it easier to port, migrate, or document
There's nothing wrong with implementing an xp:repeat (or friends) with backing Java on XPages, especially if you're using primarily XPages controls.
There are many ways to implement a RESTful service in XPages, and the reasoning behind going for RESTful APIs in the XPages runtime is something both I and many others have blogged about.

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving forward:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual date for when the transition will be finished. (Main reason)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
Headaches with the new concept of response/request descriptors, as they don't support different parameters in path patterns (e.g. an access token parameter) and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss of RestKit for me is the object mapping process.
Could you recommend standalone libraries you use that have shown themselves to be flexible and stable?
II. And as I said, I need no fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the last OS release.
Did you use it and what's the strategy?
Does it return cached API responses when network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give some advice about the architecture that he or she uses in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you will miss.
II. The answer to your caching-support requirements is highly dependent on the context. However, I have found for my recent functional requirements that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, allows me to keep the application logic relatively simple whilst giving the user some continuity. A simple error notification for connectivity issues can be dealt with in a cross-cutting manner.
III. One thought on the architecture relevant to this aspect is to ensure that the APIs the app depends on provide data shaped for the app experience. This allows your app to focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to API dependencies such as data. This has the further benefit of reducing the chattiness of the app.

Choice of technical solution to handling and processing data for a Liferay Project

I am researching to start a new project based on Liferay.
It relies on a system that will require its own data model and a certain agility and flexibility in data management as well as its visualization.
These are my options:
Using Liferay Expando fields to define my own data models. I must build the whole view layer myself.
Using the Liferay ECMS, adding patches that create structures and hooks allowing me to define master-detail data models. It makes the viewing side much easier (Velocity templates), but it is perhaps the most "dirty" way.
Generating the data layer and service access with Hibernate and Spring (using Service Factory, for example).
Liferay Service Builder would be similar to the option of creating the platform with Hibernate and Spring.
CRUD generation systems such as OpenXava or XMLPortletFactory
And now my question: what is your advice? What advantages or disadvantages do you think each option would provide?
Thanks in advance.
I can't speak for the other CRUD generation systems but I can tell you about the Liferay approaches.
I would take a hybrid approach.
First, I would create the required data models as best as I can with the current requirements in Liferay Service Builder and maintain them there as much as possible. This would require that you rebuild and redeploy your plugin every time you changed the data model but would greatly enhance performance compared to all the other Liferay approaches you've mentioned. Service Builder in that regard is much more rigid and cannot be changed via GUI.
However, in the event that for some reason you cannot use Service Builder to redefine your data models and you need certain aspects of them to be changed via the GUI, you can also use Expandos to extend the models you've created with Service Builder. So, it is the best of both worlds.
As for the other option, using the ECMS would be a specialized case, and I would only take that approach if there is a particular requirement it satisfies (like integration with the ECMS).
With that said, Liferay provides you many different ways to create your application. It ultimately depends on how you're going to use your application.

Resources