Unexpected behaviour with required XML node ordering on ServiceStack web services

As I understand it, ServiceStack uses .NET's XML DataContractSerializer to serialize/deserialize XML, but this causes trouble when our web service API is consumed by non-.NET clients.
The problem is described well in this post; to summarise it briefly:
the requirement that XML nodes appear in a strict sequence is very inconvenient for some external non-.NET systems.
Is it possible to use a different serializer, or to configure the one ServiceStack uses, so that it does not expect elements in rigidly defined positions?

ServiceStack uses .NET's XML DataContract serializer under the hood. It is not customizable beyond what the underlying .NET Framework implementation offers.
I've answered how to override ServiceStack's default XML serialization with a custom XML serializer in this earlier question:
https://stackoverflow.com/a/13498725/85785
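For reference, a minimal sketch of that approach, assuming ServiceStack v3's ContentTypeFilters API (exact namespaces and delegate signatures vary by version). It swaps the DataContract-based XML format for System.Xml.Serialization.XmlSerializer, which is typically more tolerant of element ordering:

    // Sketch only: register a custom XML serializer inside AppHost.Configure().
    // Assumes ServiceStack v3-style ContentTypeFilters; adjust for your version.
    using System;
    using System.IO;
    using System.Xml.Serialization;
    using Funq;
    using ServiceStack.ServiceHost;        // IRequestContext (v3 namespace)
    using ServiceStack.WebHost.Endpoints;

    public class AppHost : AppHostBase
    {
        public AppHost() : base("Custom XML Host", typeof(AppHost).Assembly) { }

        public override void Configure(Container container)
        {
            // Re-register the XML media type with XmlSerializer instead of the
            // default DataContractSerializer.
            this.ContentTypeFilters.Register("application/xml",
                (IRequestContext requestContext, object dto, Stream outputStream) =>
                    new XmlSerializer(dto.GetType()).Serialize(outputStream, dto),
                (Type type, Stream fromStream) =>
                    new XmlSerializer(type).Deserialize(fromStream));
        }
    }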

Related

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy, for example supporting the latest N-2 versions. By doing so, you end up with three side-by-side implementations rather than the explosion some people expect. Refactoring business logic and other components that are not specific to an HTTP API version out of controllers can help reduce code duplication.
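A minimal sketch of that side-by-side layout, shown in C# (ASP.NET Core) since the rest of this page is .NET-centric; the same shape carries over to Java, e.g. one Spring controller class per version. All type and route names here are hypothetical:

    // Hypothetical sketch: one controller class per API version, each mapped to
    // its own route prefix and delegating to a shared, version-neutral service
    // so the business logic is not duplicated.
    using Microsoft.AspNetCore.Mvc;

    public record OrderV1(int Id, string Status);
    public record OrderV2(int Id, string Status, string Currency);

    public interface IOrderService
    {
        OrderV1 GetV1(int id);
        OrderV2 GetV2(int id);
    }

    [ApiController]
    [Route("api/v1/orders")]
    public class OrdersV1Controller : ControllerBase
    {
        private readonly IOrderService _orders;   // version-neutral component
        public OrdersV1Controller(IOrderService orders) => _orders = orders;

        [HttpGet("{id}")]
        public ActionResult<OrderV1> Get(int id) => _orders.GetV1(id);
    }

    [ApiController]
    [Route("api/v2/orders")]
    public class OrdersV2Controller : ControllerBase   // note: does not inherit from V1
    {
        private readonly IOrderService _orders;
        public OrdersV2Controller(IOrderService orders) => _orders = orders;

        [HttpGet("{id}")]
        public ActionResult<OrderV2> Get(int id) => _orders.GetV2(id);
    }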
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but not APIs). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). Using another language such as Java, C#, etc. is a facade that creates an impedance mismatch with HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing because you have to find the correct implementation to set a breakpoint in. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
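As an illustration of the conversion point, a hypothetical mapper using the same made-up OrderV1/OrderV2 shapes as in the sketch above; the new v2 attribute is dropped when mapping down and filled with a default when mapping up:

    // Hypothetical mapper between model versions. Converting v2 -> v1 simply
    // drops the attribute that has no v1 equivalent; converting v1 -> v2
    // backfills it with a default, the typical additive-attribute scenario.
    public record OrderV1(int Id, string Status);
    public record OrderV2(int Id, string Status, string Currency);

    public static class OrderMapper
    {
        public static OrderV1 ToV1(OrderV2 order) =>
            new OrderV1(order.Id, order.Status);                    // Currency is dropped

        public static OrderV2 ToV2(OrderV1 order) =>
            new OrderV2(order.Id, order.Status, Currency: "USD");   // default for the new field
    }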
It should be noted that backward-compatibility is a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease by which a new version of a model can be converted to/from an old version of the model should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is with clients. Striking the notion of backwards compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

Grouping namespaces

I was wondering whether the namespaces themselves can be grouped?
Our REST server project has a highly decentralized structure (along the lines of a Redux fractal pattern) and every feature has its own namespace. This predictably has led to many namespaces, and the swagger page is getting rather full now.
If this is not achievable, I guess we can live with it, or consider emitting only the swagger json to be consumed by the official Swagger UI that we can run in a separate server. But I'd much prefer a restplus-y solution, since that represents the least amount of code friction.
The underlying OpenAPI Specification has a concept of tags. The namespace feature in Flask-RESTPlus assigns these names as tags for path definitions, which is how you get the grouping in a Swagger UI. The specification does not offer any hierarchical grouping mechanism, so Flask-RESTPlus doesn't offer such a feature either.
You could consider a different strategy for assigning namespaces/tags to create more manageable groupings, split the API across multiple Swagger UI pages/sites, etc. It sounds like there is no way around your Swagger UI needing to render a very large number of API methods, so making it more understandable through general content structuring may be your best approach.

Export Acumatica data without API call

I would like to export a stock item from an Acumatica instance as a data contract, but without calling an API. I don't want to call an API, because I need to retrieve it from inside an instance, not external to the instance. I think all I really need is a way to call the contract-based code to serialize into JSON, but without using a URL. I guess I could call the API within the same instance, but it seems like it should be easier than that.
Strictly speaking, you do want to make API calls, specifically to use the 'contract-based API' without going through the 'web services API'. This goes against the design goals of the contract-based API.
Consider the following statement:
Contract-based APIs are built on an object model that the web services API provides.
Source:
https://adn.acumatica.com/blog/yes-we-have-an-api-for-that-an-introduction-to-the-acumatica-cloud-erp-apis/
The web services API provides the object model of the contract-based API. Take the web services API out of the equation and the contract-based API is missing the object model it depends on for object serialization. Practically, this means that methods dealing with contract-based serialization will require the web service object model as an input parameter.
I believe there would be several technical hurdles preventing instantiation of the web service object model without going through the TCP/IP stack. This is mainly because the original design goal of the contract-based API is to be called through web services.
This boils down to Acumatica using different object models for different contexts. The contract-based API uses the 'entity' model, while customizations use the 'DAC' model. There's also a major difference in the querying API: customizations use BQL, while the contract-based API has its own query methods.
There are obvious advantages in having a unified object model. Learning to use only one is easier than having to learn two. Using exclusively JSON is easier than mixing and converting XML and JSON. However, each model also has its disadvantages. Having different models allows the use of a model better tailored to the task at hand. Common requirements for object models are performance, human readability, memory footprint, ability to be easily machine-parsed, etc.
That said, if all you need is the object model of the contract-based API without its querying interface, you could approximate it by using BQL and serializing the DAC objects to JSON. Because almost all DAC objects have the C# Serializable attribute, it should be much easier to serialize DAC objects retrieved with BQL than to use the contract-based API to retrieve and serialize the records without going through the TCP/IP stack. It also goes in the same direction as the design goals of the API, namely that the contract-based API should be used for access through web services.
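A rough sketch of that idea, assuming code that already runs inside the instance (e.g. in a customization project) where PX.Data and the PX.Objects DACs are available; Newtonsoft.Json is used purely as an example serializer and the names/settings are illustrative:

    // Sketch: fetch a stock item DAC with BQL and serialize it to JSON in-process,
    // without going through the web services API. Illustrative only.
    using Newtonsoft.Json;
    using PX.Data;
    using PX.Objects.IN;

    public static class StockItemExport
    {
        public static string ExportAsJson(string inventoryCd)
        {
            var graph = PXGraph.CreateInstance<InventoryItemMaint>();

            // BQL query against the DAC model (first matching record).
            InventoryItem item = PXSelect<InventoryItem,
                Where<InventoryItem.inventoryCD,
                    Equal<Required<InventoryItem.inventoryCD>>>>
                .Select(graph, inventoryCd);

            // The DAC is a plain serializable object, so a general-purpose JSON
            // serializer can handle it; settings such as ignoring nulls are optional.
            return JsonConvert.SerializeObject(item, Formatting.Indented,
                new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore });
        }
    }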

Securing Web Service parameters against Denial of Service issues

We have some SOAP-based web services built with the Java-to-WSDL approach in our organization. There is now a security requirement to place limits on the request parameters being passed to service methods. Currently the maxOccurs attribute for a parameter is unbounded in the WSDL because the parameter is a collection in Java.
To resolve this, it looks like we need to make some changes in the Java source to regenerate WSDLs that are compliant with this requirement. I know there are some unofficial APIs available as replacements for JAXB, providing annotations that can be added to the Java source; this can result in the generated WSDL having maxOccurs set to a fixed, configured value. But there are some issues with using these third-party solutions due to licensing and other concerns. Also, we need to enable schema validation for the WSDL.
I would like to know if there is a solution that performs this check outside the scope of either the WSDL or the Java source. What I am looking for is a configurable solution that doesn't touch the WSDLs or the Java source. We are using IBM DataPower in our organization. Can we configure a policy or something similar in DataPower that will intercept the web service request parameters and throw a fault if the number of occurrences of any of the web service method parameters is above a configured value?
Has anyone used DataPower for a use case like this? Or is there a better way of achieving it?
I believe you can limit the maximum size of messages. This will actually be better than a WSDL limit for preventing DDoS, as it is enforced at the network layer.

Can ServiceStack use binary serializers for non-HTTP clients, e.g. Google Protocol Buffers?

As a follow-up to Does ServiceStack support binary responses?, I'm wondering whether there are injection points built (or planned) to use binary serializers such as Marc Gravell's protobuf-net for efficiency between non-HTTP clients. In fact, it might not be long before protocol buffers work in JavaScript.
Yep, ServiceStack has a pluggable custom format API, which its own built-in CSV Format and HTML Report Format are both registered with. The Northwind database tutorial's custom v-card media type shows how to register your own format/media type using this API.
Support for protobuf-net is planned for the near future. There was someone on the ServiceStack Group looking into adding support for it. In any case, I plan to catch up with protobuf-net's author soon, so I'll find out the best way to add support for it then.
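Until built-in support arrives, a rough sketch of how the same pluggable format API could be used to wire up protobuf-net yourself; the media type string and delegate shapes are assumptions based on the v3 ContentTypeFilters API:

    // Sketch only: register protobuf-net as a custom binary media type inside
    // AppHost.Configure(). The content-type string is arbitrary/illustrative.
    using System;
    using System.IO;
    using Funq;
    using ProtoBuf;
    using ServiceStack.ServiceHost;        // IRequestContext (v3 namespace)
    using ServiceStack.WebHost.Endpoints;

    public class ProtoBufAppHost : AppHostBase
    {
        public ProtoBufAppHost() : base("ProtoBuf Host", typeof(ProtoBufAppHost).Assembly) { }

        public override void Configure(Container container)
        {
            this.ContentTypeFilters.Register("application/x-protobuf",
                (IRequestContext requestContext, object dto, Stream outputStream) =>
                    Serializer.NonGeneric.Serialize(outputStream, dto),
                (Type type, Stream fromStream) =>
                    Serializer.NonGeneric.Deserialize(type, fromStream));
        }
    }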
