Can I mock a system with custom properties? - sap-cloud-sdk

We're using the Destinations service to configure connections to different kinds of systems. As part of this, we are using the "Additional Properties" section to add non-standard properties, such as my.custom.property=123.
We have successfully used the SAP Cloud SDK's MockUtil to write Spring integration tests that use the files systems.yml and credentials.yml as the source for test systems.
However, we couldn't find a way to create an entry there that would provide a test system with a custom property like my.custom.property=123.
The erp section accepts only the properties known for ERP systems, such as sapClient. The general systems section accepts only the absolute basic properties name, type, uri, and proxy. Adding an unknown property in either section results in a runtime error because the mock utils are unable to parse the unknown property into the data classes with fixed structure.
Is there another way to mock a Destination that would allow us to include non-standard properties?
For example, the DestinationAccessorMocker looks promising, as it seems to enable setting up custom implementations of the Destination interface, but we couldn't figure out how to employ it.

Found an option that works.
MockUtil mockUtil = new MockUtil();
MockDestination destination = MockDestination
.builder("my-service", URI.create("http://localhost:1234/"))
.property("my.custom.property", "123")
.build();
mockUtil.mockDestination(destination);
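To double-check the setup, the custom property can then be read back from the resolved destination. A minimal sketch, assuming the destination interface in this SDK version exposes get(String) returning an Option (adjust to the accessor your version offers):
// Sketch: verify the custom property is visible on the mocked destination.
// get("my.custom.property") returning an Option is an assumption about the SDK version in use.
Destination resolved = DestinationAccessor.getDestination("my-service");
assertThat(resolved.get("my.custom.property").getOrElse("missing")).isEqualTo("123");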
Maybe somebody can confirm that this is an intended way to do this?

Related

Customize Controller from commercewebservices in SAP Commerce Cloud

From what I understand, from SAP Commerce Cloud 2005 onward the way to customize the REST endpoints within SAP Commerce Cloud for Spartacus is to use commercewebservices (non-template) and then add your own OCC extensions with your REST endpoints.
That works fine for new endpoints, but what if I want to customize an existing controller from within commercewebservices? Since I am no longer using the template, commercewebservices itself cannot be modified. I don't see how I could, for example, customize de.hybris.platform.commercewebservices.core.v2.controller.CartsController.
Swapping out commercewebservices for your own extension generated from the template does not work either, since multiple OOTB extensions (e.g. cmsocc) depend on commercewebservices; it will therefore always be loaded and clash with our own extension derived from commercewebservices.
Customizing commercewebservices with an addOn also does not solve the problem since, as I understand it, it is not possible to add your own controller and bind it to a URL pattern already used by a controller within commercewebservices.
If you want to override an existing API endpoint (CartsController in our case), you can do so with the @RequestMappingOverride annotation.
Using this annotation, you can "shadow" the existing request mapping of the out-of-the-box controller with your custom controller in your own OCC extension.
You can find more details and an example here:
Overriding the REST API [help.sap.com]
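As a rough sketch of what that can look like in your own OCC extension (the mapping path, method signature, and DTO type here are only illustrative; check the actual CartsController source in your release for the real signatures):
// Illustrative sketch only: the real CartsController method name, parameters and return
// type depend on your commercewebservices version; the super call assumes such a method exists.
@Controller
@RequestMapping(value = "/{baseSiteId}/users/{userId}/carts")
public class CustomCartsController extends CartsController
{
    @RequestMappingOverride
    @RequestMapping(value = "/{cartId}", method = RequestMethod.GET)
    @ResponseBody
    public CartWsDTO getCart(@PathVariable final String cartId,
            @RequestParam(defaultValue = "DEFAULT") final String fields)
    {
        // Custom logic can go here before delegating to the original implementation.
        return super.getCart(cartId, fields);
    }
}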
EDIT
And let's not forget:
All of the action happens in the facades anyway, and you can also extend the API responses without overriding the controller by using the WsDTO concept plus additional converters (see Extending Data Objects [help.sap.com] for more details).
Thanks for the response.
The annotation RequestMappingOverride works fine. There is one problem with this approach; let's assume I do the following:
Introduce a new controller called MyController extending CartsController
Override a single method and annotate this method with RequestMappingOverride
When starting up the system, I now get ambiguous mappings for all mappings of CartsController that I did not override.
The reason is that I now have two controllers registered with the same mappings: CartsController and MyController, which inherits all the methods that are not overridden from CartsController. The only solution I found is to override every single method of CartsController, annotate all of them with RequestMappingOverride, and then just do a super call. That is a bit clumsy and leads to a lot of boilerplate code. I wish the annotation RequestMappingOverride would work on class level rather than only on method level.

When to use (decorate with) what and why - DefaultErpHttpDestination, DefaultHttpDestination?

Using the Java SAP Cloud SDK.
I have to use com.sap.cloud.sdk.s4hana.datamodel.odata.namespaces.outbounddeliveryv2.batch.OutboundDeliveryV2ServiceBatch.execute(HttpDestinationProperties destination) to perform some updates on an S/4 system. This execute method takes an argument of type HttpDestinationProperties.
Since I need a destination, I am using the code below to get one:
HttpDestination destination = DestinationAccessor.getDestination("MyErpSystem").asHttp();
Since HttpDestination extends HttpDestinationProperties, we can safely pass it to execute. But according to step 4 of the 'Connect to OData Service on Cloud Foundry Using SAP Cloud SDK' tutorial, the code for accessing the destination looks like this:
ErpHttpDestination destination = DestinationAccessor.getDestination("MyErpSystem").asHttp().decorate(DefaultErpHttpDestination::new);
and then they pass that destination to the execute method of the service.
My question: since the execute method takes an argument of type HttpDestinationProperties, how would I know that I have to use DefaultErpHttpDestination? The same goes for DefaultHttpDestination.
I have the following questions:
When and why should I wrap the destination in DefaultErpHttpDestination?
When and why should I wrap the destination in DefaultHttpDestination?
Why should I wrap the destination in above two wrappers at all?
This is an excellent question!
The context:
Of course you can keep using your original approach:
HttpDestination destination =
DestinationAccessor.getDestination("MyErpSystem").asHttp();
This is the recommended way for destinations targeting a generic HTTP service endpoint.
It loads the required destination properties for HTTP connections, e.g. URL, Authentication, ...
In the tutorials we are describing the integration with S/4HANA OData services:
HttpDestination destination =
DestinationAccessor.getDestination("MyErpSystem").asHttp()
.decorate(DefaultErpHttpDestination::new);
By "decorating" the HttpDestination instance with ERP properties, we enable additional S/4 related HTTP request headers: sap-client and sap-locale. With the above configuration, those values are read automatically from the destination service - if they are set.
Your questions (changed order):
"When and why should I wrap the destination in DefaultHttpDestination?"
DestinationAccessor#getDestination returns a generic Destination. In order to make sure we are dealing with HTTP (and not RFC) connections, you need to run #asHttp - as you already do. With the resulting HttpDestination instance, you can run HTTP queries like OData and REST. Depending on your use case, no additional wrapping is required.
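Concretely, such an HttpDestination can be handed straight to any generated OData request. A minimal sketch (the BusinessPartner service is used here purely as an illustration; any generated service with an execute(HttpDestinationProperties) method works the same way):
// Sketch: issue an OData query with the plain HttpDestination, no further wrapping.
// DefaultBusinessPartnerService/BusinessPartner stand in for whichever generated service you use.
HttpDestination destination = DestinationAccessor.getDestination("MyErpSystem").asHttp();
List<BusinessPartner> partners = new DefaultBusinessPartnerService()
        .getAllBusinessPartner()
        .top(10)
        .execute(destination);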
For example, if you were about to use BAPI endpoints, then you'd need to run #asRfc instead. This method will check for different destination properties to make sure all required values are set.
"When and why should I wrap the destination in DefaultErpHttpDestination?"
It is recommended to wrap the destination in DefaultErpHttpDestination only when you are dealing with S/4 service endpoints and you rely on custom values for sap-client and sap-locale. The wrapping can be done at any time during your application's runtime, as long as it happens before the #execute(HttpDestinationProperties) call.
If you do not want to wrap it a second time, then you'd need to manually manage the HTTP request headers for sap-client and sap-locale.
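A minimal sketch of that manual variant, assuming the DefaultHttpDestination builder in your SDK version offers a header(key, value) method (host and sap-client value are placeholders):
// Sketch only: set the ERP headers by hand instead of decorating the destination.
// builder(String) and header(String, String) are assumptions to verify against your SDK version;
// the language/locale header can be added the same way as sap-client.
HttpDestination manual = DefaultHttpDestination
        .builder("https://my-s4-host.example.com")
        .header("sap-client", "100")
        .build();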
"Why should I wrap the destination in above two wrappers at all?"
This is the API contract. It makes sure all required destination properties are correctly set before even invoking the actual request. The (optional) ERP-flavored wrapping of the destination instance was provided to make sure all S/4 properties are automatically considered as well.

Service Fabric IServiceRemotingRequestMessageBody Iterate Parameters

Edited: I had left the name of a custom type in place instead of the direct Service Fabric interface.
I am trying to write an interceptor capable of interrogating the parameters being passed to a remoting service. I can intercept the IServiceRemotingRequestMessage once it gets to the service and am able to extract the parameters, but ONLY if I know the position and name of the parameter at the time.
[Pseudo]
var someParam = IServiceRemotingRequestMessageBody.GetParameter(0, "request", serviceRequestInfo.RequestMessage.GetBody().GetType());
What I need is a way to simply iterate the parameters and work with them directly (currently just serialize them to a string so I can log some of the info being passed). However, the IServiceRemotingRequestMessageBody only exposes a GetParameter method that must be passed the index and the name...
I can maybe do some reflection work given the method name and the service contract but I'm hoping there is a much more straightforward way to get this directly.
Thanks for any tips,
Will
There may be an easier way using the default serialization, but the way I solved it, currently, is to replace the Service Fabric serialization providers with JSON Serialization. Then, my interceptors can work with the JSON data as necessary.
I'll assume there is a way to do something similar with the default serialization but, if so, it's not clearly documented how to work with it. If someone proposes an option I would gladly give it a try.

Is there a way to configure the generated method name for a grpc-node client?

I am hoping to use a grpc-node client to talk to a microservice built in Go using the go-micro framework. I am running into an issue where go-micro defines method names using periods (.) to separate namespaces and method names, whereas grpc-node uses slashes (/). Is there any way to configure this pattern to have these two processes talk to each other?
The gRPC over HTTP/2 protocol documentation defines that the path is constructed as follows:
Path → ":path" "/" Service-Name "/" {method name}
with this additional note
Some gRPC implementations may allow the Path format shown above to be overridden, but this functionality is strongly discouraged. gRPC does not go out of its way to break users that are using this kind of override, but we do not actively support it, and some functionality (e.g., service config support) will not work when the path is not of the form shown above.
So, the Node gRPC client is following the specification, and the alternate format used by go-micro appears to be hard coded in their code generation plugin (here). I would consider that to be a bug.
That being said, there is a viable workaround to match that method name format in the Node gRPC library. When you load a .proto file in Node, each client constructor function has a service member, which is a plain JavaScript object that describes the service. It is a map of method names to method definitions, and each method definition includes a path member. You can modify the path of each method to match the pattern that go-micro uses, then pass the resulting service object to grpc.makeGenericClientConstructor to get a new client constructor that connects to the modified service.

Is there a way to link a specific method to a Route in ServiceStack?

The Problem
I'm aware of the basic way to create a route/endpoint in ServiceStack using methods with names like "Get", "Post", "Any", etc. inside a service, but in the particular case I'm trying to work with, I have an existing service (which I can make an IService via inheritance) that cannot be retrofitted with ServiceStack attributes and currently uses DTOs for the requests and responses.
This service contains many functions that I do not want to manually mask (as this is a pass-through layer) but that otherwise already conform to ServiceStack's requirements. What I'm wondering is if there's a way to manually create these routes in a way that would work like I've mocked up here. My existing functions and DTOs already contain the information I would need to define the routes, so if this approach is possible it would only require me to enumerate them at initialization time, as opposed to generating the services layer manually.
I noticed there is an extension method on Routes.Add that takes an Expression argument, but I was not able to get that working because I believe the underlying code makes assumptions about the type of Expression generated (LambdaExpression vs MemberExpression or something like that). I also may be barking up the wrong tree if that's not the intended purpose of that function, but I cannot find documentation anywhere on how that variant is supposed to work.
Why?
I'm not sure this is necessary, but to shed some light on why I want to do this as opposed to retrofitting my existing layers: the current code is also used outside of a web service context and is consumed by other code internally. Retrofitting ServiceStack into this layer would make every place that consumes it require ServiceStack's assemblies and be aware of the web service, which is a concern I want separated from the lower code. We were previously using MVC/WCF to accomplish this goal, but we want some of the features available from ServiceStack.
The current architecture looks like this:
data -> DAL -> discrete business logic -> composition -> web service
Hopefully that makes enough sense and I'm not being obtuse. If you would like any more details about what I want to do or why I'll try to update this post as soon as possible.
Thanks!
You might use the fallback route in order to provide your own routing mechanism.
Then you get the request.Path property and route using your own mapping of path to function, which can be stored in a simple dictionary.
Anyway, if you go this route I don't see much benefit in using ServiceStack. It seems you just need an HTTP handler that routes requests to existing services.
