Representing thread pooling in Spring Integration rather than ExecutorService

Currently, code similar to the following exists in one of our applications:
@Component
public class ProcessRequestImpl {

    private ExecutorService executorService;

    public void processRequest(...) {
        // code to pre-process request
        executorService.execute(new Runnable() {
            public void run() {
                ProcessRequestImpl.this.doWork(...);
            }
        });
    }

    private void doWork(...) {
        // register in external file that request is being processed
        // call external service to handle request
    }
}
The intent of this is to create a queue of requests to the external service. The external service may take some time to process each incoming request. After it handles each one, it will update the external file to register that the specific request has been processed.
ProcessRequestImpl itself is stateless, in that all state is set in the constructor and there is no external access to that state. The processRequest() method is called by another component in the application.
If this were to be implemented in a Spring Integration application, which of the following two approaches would best be recommended:
Keep the above code as is.
Extract doWork() into a separate endpoint, configure that endpoint to receive messages on a channel, and use configuration to achieve the multi-threading in place of the executor service.
Some of the reasons we are looking at Spring Integration are as follows:
To remove the workflow logic from the code itself, so that the workflow and the chain of processing is evident on a higher level.
To simplify each class, enhancing readability and testability.
To avoid threading code if possible, and define that at a higher level of abstraction in configuration.
Given the sample code, could those goals be achieved using Spring Integration? Also, what would be an example of the DSL to achieve that?
Thanks

Something like
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(SomeGatewayInterface.class)
            .handle("someBean", "preProcess")
            .channel(MessageChannels.executor(someTaskExecutorBean()))
            .handle("someBean", "doWork")
            .get();
}
The argument passed to the gateway method becomes the payload passed to the preProcess method, which returns some object that becomes the message payload, which in turn becomes the argument passed to doWork.
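For completeness, here is a minimal sketch of the two pieces the flow above refers to; the gateway method signature and the pool settings are assumptions, not part of the original answer. IntegrationFlows.from(SomeGatewayInterface.class) generates a proxy for the interface, so all it needs is a method whose argument becomes the initial message payload:

public interface SomeGatewayInterface {

    // the argument becomes the payload handed to preProcess()
    void processRequest(Object request);
}

@Bean
public TaskExecutor someTaskExecutorBean() {
    // this pool replaces the hand-rolled ExecutorService; tune its size
    // to what the external service can absorb
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4);
    return executor;
}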

Related

How best to mock S4 endpoints to do performance tests (load test)?

This is not related to the Cloud SDK per se, but more about mocking the S4 endpoints which we usually use the Cloud SDK to query.
We want to do this for our load test, where we would not want the load test to go all the way to the S4 endpoint. We are considering using Wiremock to mock the endpoints, but the question is whether the mocking logic in Wiremock itself will contribute to the metrics we are taking in a more than insignificant way. If it does, then the metric becomes somewhat unreliable, since we want the app's performance metric, not the mock framework's.
Another approach would be to use a dedicated mock server application, so that we would not have to do any mocking in the app itself; we would just route the call to the mock server app (using a mock destination, perhaps).
My question to you guys is: have you ever encountered this use case, either yourself or from a consumer of yours? I would like to know how other teams in SAP solved this problem.
Thanks,
Sachin
In cases like yours, where the entire system (including responses from external services) should be tested, we usually recommend using Wiremock.
This is because Wiremock is rather easy to set up and works well enough for regular testing scenarios.
However, as you also pointed out, Wiremock introduces significant runtime overhead for the tested code, thus rendering performance measurements of any kind more or less useless.
Hence, you could try mocking the HttpClient using Mockito instead:
BasicHttpResponse page = new BasicHttpResponse(new BasicStatusLine(HttpVersion.HTTP_1_1, 200, "OK"));
page.setEntity(new StringEntity("hello world!"));
HttpClient httpClient = mock(HttpClient.class);
doReturn(page).when(httpClient).execute(any(HttpUriRequest.class));
This enables fine-grained control over what your application retrieves from the mocked endpoint, without introducing any actual network activity.
Using the code shown above obviously requires your application under test to actually use the mocked httpClient.
Assuming you are using the SAP Cloud SDK in your application, this can be achieved by overriding the HttpClientCache used in the HttpClientAccessor with a custom implementation that returns your mocked client, like so:
class MockedHttpClientCache implements HttpClientCache
{
    @Nonnull
    @Override
    public Try<HttpClient> tryGetHttpClient(@Nonnull final HttpDestinationProperties destination, @Nonnull final HttpClientFactory httpClientFactory) {
        return Try.success(yourMockedClient);
    }

    @Nonnull
    @Override
    public Try<HttpClient> tryGetHttpClient(@Nonnull final HttpClientFactory httpClientFactory) {
        return Try.success(yourMockedClient);
    }
}
// in your test code:
HttpClientAccessor.setHttpClientCache(new MockedHttpClientCache());
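As a hypothetical sanity check (the destination fixture and request path are made up for illustration), any request made through the SDK's accessor should now return the stubbed response without touching the network:

@Test
void requestsHitTheStub() throws Exception {
    // assumes the MockedHttpClientCache was installed in the test setup
    HttpClient client = HttpClientAccessor.getHttpClient(destination);
    HttpResponse response = client.execute(new HttpGet("/sap/opu/odata/some/path"));
    assertEquals(200, response.getStatusLine().getStatusCode());
    assertEquals("hello world!", EntityUtils.toString(response.getEntity()));
}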

Download a file using http with Spring Integration

Making the plunge into Java after a long time in .NET.
What I am looking for is an example of how to periodically download a file, read the text from it, and then take some action based on what was read, using the Spring Integration library and the annotation-based approach.
I want to pull a GTFS-formatted zip file from a transit provider. This provider produces a simple text file with a timestamp to indicate the last publishing time.
Specifically the producers of the data publish a text file at:
https://someserver.com/gfts/published.txt
This file has a simple timestamp to indicate when the last time their data file was published.
Then there is the data:
https://someserver.com/gfts/schedule.zip
I have tried to find some examples of how to go about polling the "published" file. Basically, I want to periodically download the file and check the timestamp to determine whether the schedule should be downloaded.
Most of the examples I have seen use the XML-based configuration with Spring, and I am barely holding on with the annotation-based approach. I have also seen examples of downloading a file using FTP / SFTP.
I need to use HTTP, AND I also need to include Basic Authorization (in the header).
This is as far as I have gotten. I am not sure how to go about wiring this up.
From the Spring Integration docs, this is how I am supposed to declare an outbound gateway (I think that is what I need?).
The question is: now what? I need that HttpRequestExecutingMessageHandler to save the stream (file) to a local file so I can read the contents and take some other action.
@Configuration
@EnableIntegration
public class GtfsConfiguration {

    @Bean
    public MessageChannel fileUpdateChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "fileUpdateChannel", poller = @Poller(fixedDelay = "5000"))
    public HttpRequestExecutingMessageHandler fileUpdateGateway() {
        HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler("https://someserver.com/gtfs/raw/published.txt");
        handler.setHttpMethod(HttpMethod.GET);
        handler.setExpectedResponseType(byte[].class);
        return handler;
    }
}
If you need to download such a file periodically, you need to use a "fake" Inbound Channel Adapter, for example:
@Bean
@InboundChannelAdapter(value = "fileUpdateChannel",
        poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> downloadFileSchedule() {
    return () -> new GenericMessage<>("");
}
The @ServiceActivator for the HttpRequestExecutingMessageHandler is then going to be called every second. You don't need a @Poller on the @ServiceActivator; it isn't going to do anything by itself, since your fileUpdateChannel is a DirectChannel, not a QueueChannel.
I don't think you need to save the downloaded file locally. I would even say that handler.setExpectedResponseType(String.class); is fully enough to get the file content as the reply message payload for downstream analysis.
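To illustrate that downstream step, a consumer might look like the following sketch; the channel name and the idea of remembering the last seen timestamp in a field are assumptions, not part of the original answer:

@Component
public class PublishedTimestampChecker {

    private volatile String lastSeenTimestamp;

    // receives the content of published.txt as a String payload
    @ServiceActivator(inputChannel = "fileContentChannel")
    public void checkPublished(String publishedTimestamp) {
        if (!publishedTimestamp.equals(lastSeenTimestamp)) {
            lastSeenTimestamp = publishedTimestamp;
            // the timestamp changed: trigger the schedule.zip download here
        }
    }
}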
The easiest way to configure Basic Authorization is with the Apache Commons HTTP Client:
CredentialsProvider provider = new BasicCredentialsProvider();
UsernamePasswordCredentials credentials
= new UsernamePasswordCredentials("user1", "user1Pass");
provider.setCredentials(AuthScope.ANY, credentials);
HttpClient client = HttpClientBuilder.create()
.setDefaultCredentialsProvider(provider)
.build();
and use this in an HttpComponentsClientHttpRequestFactory, which you should then inject into the mentioned HttpRequestExecutingMessageHandler via its setRequestFactory(ClientHttpRequestFactory requestFactory).
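Put together, inside fileUpdateGateway() that wiring might look like this (a sketch, assuming the client built above is in scope):

HttpComponentsClientHttpRequestFactory requestFactory =
        new HttpComponentsClientHttpRequestFactory(client);
handler.setRequestFactory(requestFactory);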

Threads in JSF?

I am new to JSF, and I need to use threads for Google Maps. I am using PrimeFaces for Google Maps, but I need to execute a thread in the background to get latitude and longitude from the database and then draw the markers on the map.
Your question is not specific to JSF, but rather to web applications in general. So, how do you perform tasks asynchronously in a Java web application? Definitely NOT by creating your own threads.
A Java web application runs in an application server (for example, JBoss). It is the responsibility of the application server to manage Java threads for you. For instance, it will use a separate thread for each web request that comes in. The application server creates a pool of threads and reuses those threads, since it is somewhat expensive to create new ones all the time. That's why you should not create your own, especially if it's done for every web request, since it will directly impact scalability.
In order to execute tasks asynchronously, you can use the EJB @Asynchronous annotation (assuming the app is running in a Java EE container like JBoss; a plain servlet container like Tomcat won't do).
import javax.ejb.Asynchronous;
import javax.ejb.Singleton;

@Singleton
public class AsyncBean {

    @Asynchronous
    public void doSomethingAsynchronously() {
        // when this EJB is injected somewhere and this method is called,
        // it returns to the caller immediately and its logic runs in the background
    }
}
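For instance, a hypothetical caller (the bean and method names are illustrative) would simply inject the EJB and invoke the method; the call returns immediately while the work runs on a container-managed thread:

@Named
@RequestScoped
public class MapBean {

    @EJB
    private AsyncBean asyncBean;

    public void loadMarkers() {
        asyncBean.doSomethingAsynchronously(); // returns right away
    }
}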
If the app is not running in a Java EE container, take a look at this answer which nicely lays out some other options for async processing in web apps.
JSF is completely unrelated to your problem. In this case, JSF acts as a mere HTML generator. Your specific problem is how to prepare data asynchronously and consume it from your web app.
You can create the thread manually when the application starts, in a class that implements the ServletContextListener interface, like this:
@WebListener // registers the listener with the container (or declare it in web.xml)
public class ApplicationListener implements ServletContextListener {

    ExecutorService executor;

    public ApplicationListener() {
        executor = Executors.newSingleThreadExecutor();
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Runnable task = new Runnable() {
            @Override
            public void run() {
                // process the data here...
            }
        };
        executor.submit(task);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdownNow();
    }
}
Improve the design above to fulfill your requirements. Take into account that creating threads in an application server should only be done if you know what you're doing.
Another implementation would be to use a separate application to do the processing (let's call it the Data Processor), which by definition runs in its own process and environment. Your web application would then communicate with this Data Processor through a cache or NoSQL store like EhCache, Infinispan, or Hazelcast.
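As a rough sketch of that approach (the map name and keys are made up for illustration), both applications could join the same Hazelcast cluster and share a distributed map:

// Data Processor side: compute coordinates and publish them
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
Map<String, double[]> markers = hz.getMap("map-markers");
markers.put("station-1", new double[] { 40.4168, -3.7038 });

// Web application side: join the same cluster and read the markers on demand
HazelcastInstance member = Hazelcast.newHazelcastInstance();
Map<String, double[]> published = member.getMap("map-markers");
double[] latLng = published.get("station-1");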

Ninject dependency injection in SharePoint Timer Job

I have successfully implemented an enterprise SharePoint solution using Ninject dependency injection and other infrastructure such as NLog logging, following an Onion architecture. With an HttpModule as the Composition Root for the injection framework, it works great for normal web requests:
public class SharePointNinjectHttpModule : IHttpModule, IDisposable
{
    private HttpApplication _httpApplication;

    public void Init(HttpApplication context)
    {
        if (context == null) throw new ArgumentNullException("context");
        _httpApplication = context;
        Ioc.Container = IocContainerFactory.CreateContainer();
    }

    public void Dispose()
    {
        if (_httpApplication == null) return;
        _httpApplication.Dispose();
        Ioc.Container.Dispose();
    }
}
The CreateContainer method loads the Ninject modules from a separate class library, and my IoC container is abstracted.
For normal web application requests I used a shared static class for the injector, called Ioc. The UI layer has an MVP pattern implementation. E.g., in the .aspx page the presenter is constructed as follows:
presenter = Ioc.Container.Get<SPPresenter>(new Ninject.Parameters.ConstructorArgument("view", this));
I'm still reliant on a Ninject reference for the parameters. Is there any way to abstract this, other than mapping a lot of methods in an interface? Can't I just pass in simple types for the arguments?
The injection itself works great; however, my difficulty comes in when using external processes such as SharePoint timer jobs. It would obviously be a terrible idea to reuse the IoC container from here, so it needs to bootstrap the dependencies itself. In addition, it needs to load the configuration from the web application pool, not the admin web application; otherwise the job would only be able to run on the application server. This way the job can run on any web server, and your SharePoint feature only has to deploy configurations etc. to the web application.
Here is the execute method of my timer job. It opens the associated web application configuration and passes it to the logging service (NLog), which reads its configuration from the external web config service. I have written code that reads a custom section in the configuration file and initializes the NLog logging infrastructure.
public override void Execute(Guid contentDbId)
{
    try
    {
        using (var ioc = IocContainerFactory.CreateContainer())
        {
            // open configuration from web application
            var configService = ioc.Get<IConfigService>(new ConstructorArgument("webApplicationName", this.WebApplication.Name));
            // get logging service and set with web application configuration
            var loggingService = ioc.Get<ILoggingService>();
            loggingService.SetConfiguration(configService);
            // reapply bindings
            ioc.Rebind<IConfigService>().ToConstant(configService);
            ioc.Rebind<ILoggingService>().ToConstant(loggingService);
            try
            {
                loggingService.Info("Test Job started.");
                // use services etc...
                var productService = ioc.Get<IProductService>();
                var products = productService.GetProducts(5);
                loggingService.Info("Got products: " + products.Count() + " Config from web application: " + configService.TestConfigSetting);
                loggingService.Info("Test Job completed.");
            }
            catch (Exception exception)
            {
                loggingService.Error(exception);
            }
        }
    }
    catch (Exception exception)
    {
        EventLog.WriteError(exception, "Exception thrown in Test Job.");
    }
}
This does not make the timer jobs robust enough, and there is a lot of boilerplate code. My question is: how do I improve on this design? It's not the most elegant; I'm looking for a way to abstract the timer job operation code and have its dependencies injected into it for each timer job. I would just like to hear your comments on whether you think this is a good approach, or whether someone has faced similar problems. Thanks
I think I've answered my own question with the presenter construction code above. When using dependency injection in a project, the injection itself is not that important; the way it changes how you write code is far more significant. I need to use a similar pattern, such as command, for my SharePoint timer job operations. I'd just like the bootstrapping to be handled better.

Fire and forget with ServiceStack's AsyncServiceBase

I have following service
public class AppService : AsyncServiceBase<EvaluateStock>
{
    public IBus Bus { get; set; }

    public override object ExecuteAsync(EvaluateStock request)
    {
        // this will block the incoming http request
        // until the task is completed

        // long computation
        // Bus.Publish(result)
        return null; // placeholder so the snippet compiles
    }
}
which gets called by different consumers in the following way:
POST
http://srv1/app/json/asynconeway/EvaluateStock
Using asynconeway, I was assuming that it would allow me to achieve fire-and-forget, as WCF does with IsOneWay. But it seems that is not the case.
Am I missing something?
AsyncServiceBase has been deprecated, as ExecuteAsync is now in ServiceBase, which is what gets called when a request is made to the /asynconeway/XXX pre-defined endpoint.
Rather than overriding ExecuteAsync, the recommended approach is to implement IMessageFactory, which is what gets called if an IMessageFactory has been registered in the AppHost IOC. If an IMessageFactory wasn't registered, then the request just gets executed synchronously; at that point, if you still wanted it non-blocking, you would override ExecuteAsync. The impl for ExecuteAsync is:
// Persists the request into the registered message queue if configured,
// otherwise calls Execute() to handle the request immediately.
//
// IAsyncService.ExecuteAsync() will be used instead of IService.Execute() for
// EndpointAttributes.AsyncOneWay requests
public virtual object ExecuteAsync(TRequest request)
{
    if (MessageFactory == null)
    {
        return Execute(request);
    }

    BeforeEachRequest(request);

    // Capture and persist this async request on this Service's 'In Queue'
    // for execution after this request has been completed
    using (var producer = MessageFactory.CreateMessageProducer())
    {
        producer.Publish(request);
    }

    return ServiceUtils.CreateResponseDto(request);
}
IMessageFactory (client) / IMessageService (server) is a part of ServiceStack's Messaging API, which allows you to publish messages for deferred execution later. See the Redis and Messaging wiki for an example of an end-to-end solution that uses the built-in Redis IMessageService. There are also InMemory and RCon IMessageServices available, and it should be easy to create your own as well.
Future Async support
There is also an async branch that has ServiceStack running on IHttpAsyncHandler, and a functional alpha build is already available for you to try: ServiceStack-v4.00-alpha.zip
With this change, ServiceStack supports Task<> as a return type on services. You only need to register the Task<> plugin. To see a full example, look at this integration test.
