I created a Spring Boot (1.4.2) REST application. One of the @RestController methods needs to invoke a 3rd-party REST API operation (RestOp1) which returns, say, between 100 and 250 records. For each of those records returned by RestOp1, within the same method, another REST operation of the same 3rd-party API (RestOp2) must be invoked. My first attempt involved using a controller-class-level ExecutorService backed by a fixed thread pool of size 100, and a Callable returning a record corresponding to the response of RestOp2:
// Executor thread pool - declared and initialized at class level
private final ExecutorService executorService = Executors.newFixedThreadPool(100);

// Get records from RestOp1
ResponseEntity<RestOp1ResObj[]> restOp1ResObjList =
        this.restTemplate.exchange(url1, HttpMethod.GET, httpEntity, RestOp1ResObj[].class);
RestOp1ResObj[] records = restOp1ResObjList.getBody();

// Instantiate a list of futures (to call RestOp2 for each record)
List<Future<RestOp2ResObj>> futureList = new ArrayList<>();

// Iterate through the records and call RestOp2 concurrently, using Callables
for (int count = 0; count < records.length; count++) {
    Future<RestOp2ResObj> future = this.executorService.submit(new Callable<RestOp2ResObj>() {
        @Override
        public RestOp2ResObj call() throws Exception {
            return restTemplate
                    .exchange(url2, HttpMethod.GET, httpEntity, RestOp2ResObj.class)
                    .getBody();
        }
    });
    futureList.add(future);
}

// Iterate the list of futures and fetch the RestOp2 response for each
// record. Build a final response and send it back to the client.
for (int count = 0; count < futureList.size(); count++) {
    RestOp2ResObj response = futureList.get(count).get();
    // use the above response to build a final response for all the records
}
The performance of the above code is abysmal, to say the least. The response time for a RestOp1 call (invoked only once) is around 2.5 seconds, and that for a RestOp2 call (invoked once per record) is about 1.5 seconds. But the overall execution time is between 20 and 30 seconds, as opposed to an expected range of 5 to 6 seconds! Am I missing something fundamental here?
Is the service you are calling fast enough to handle that many requests per second?
There is an async version of RestTemplate available, called AsyncRestTemplate. Why are you not using that?
I would probably go like this:
AsyncRestTemplate asyncRestTemplate =
        new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newFixedThreadPool(100)));

asyncRestTemplate.exchange("http://www.example.com/myurl", HttpMethod.GET,
        new HttpEntity<>("message"), String.class)
    .addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
        @Override
        public void onSuccess(ResponseEntity<String> result) {
            //TODO: Add real response handling
            System.out.println(result);
        }

        @Override
        public void onFailure(Throwable ex) {
            //TODO: Add real logging solution
            ex.printStackTrace();
        }
    });
Your question involves two parts:
multiple API calls performed asynchronously
handling timeouts (fallback)
Both parts are related, as you have to handle the timeout of each call.
You may consider using Spring Cloud (based on Spring Boot) and some of the out-of-the-box solutions based on the Netflix OSS stack.
For the first part (timeouts), a good fit is a Hystrix circuit breaker on top of a Feign client; a minimal sketch follows.
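A minimal sketch of that combination, assuming Spring Cloud Netflix is on the classpath with Hystrix support enabled; the client name, URL property, path, and fallback class below are illustrative, and RestOp2ResObj is taken from the question:

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Declarative client; Hystrix wraps each call in a circuit breaker, and the
// fallback supplies a degraded response on timeout or failure.
@FeignClient(name = "thirdPartyApi", url = "${thirdparty.base-url}", fallback = RestOp2Fallback.class)
public interface RestOp2Client {

    @RequestMapping(method = RequestMethod.GET, value = "/records/{id}")
    RestOp2ResObj getRecord(@PathVariable("id") String id);
}

@Component
class RestOp2Fallback implements RestOp2Client {
    @Override
    public RestOp2ResObj getRecord(String id) {
        return new RestOp2ResObj(); // placeholder fallback value
    }
}

The timeout itself then becomes configuration (e.g. hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds) rather than hand-rolled code.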
For the second part (multiple requests), this is an architecture issue: using raw Executors isn't a good idea, as it will not scale and carries a high maintenance cost. You may rely on Spring's asynchronous methods instead; you'll get better results and stay fully Spring-compliant. A sketch of that follows as well.
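A minimal sketch of the Spring asynchronous-method approach; it requires @EnableAsync on a configuration class, and the service, method, and field names here are illustrative:

import java.util.concurrent.CompletableFuture;

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class RestOp2Service {

    private final RestTemplate restTemplate = new RestTemplate();

    // Spring proxies this method and runs it on its own task executor; the
    // caller immediately receives a future for each record's call.
    @Async
    public CompletableFuture<RestOp2ResObj> fetchRecord(String url) {
        return CompletableFuture.completedFuture(
                restTemplate.getForObject(url, RestOp2ResObj.class));
    }
}

The controller then fires one fetchRecord per record and joins the futures, e.g. with CompletableFuture.allOf(...), leaving pool sizing and lifecycle management to Spring.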
Hope this will help.
I want to write a test case for the method below, but it contains a call to the static Transport.send method.
How can I mock this method using MockitoJUnitRunner?
public void sendEmail(final String message, final String contentType) {
    final Session session = Session.getDefaultInstance(PROPERTIES, null);
    final Multipart mpart = new MimeMultipart();
    final MimeBodyPart body = new MimeBodyPart();
    try {
        body.setContent(message, contentType);
        mpart.addBodyPart(body);
        Transport.send(createMessage(session, mpart));
        LOGGER.info("Email Notification Sent Successfully");
    } catch (MessagingException | UnsupportedEncodingException e) {
        LOGGER.error("Was not able to send mail", e);
    }
}
So:
Transport.send(createMessage(session, mpart));
that static call means: you can't "control" it using Mockito. Plain and simple. If that call just "passes" in your unit test environment, well, then you can test it, but not verify that the call really took place. Worse, if that call throws some exception in the unit test setup, then heck, what could you do?
Options:
turn to PowerMock(ito) or JMockit, as they allow you to gain control
recommended: improve your design to be easy-to-test
That last idea comes in various flavors:
For example, you could create a minimal interface EmailSenderService that offers a void send(Message message) method. Next: you create one implementation of that interface that simply invokes that static method. Now your code that actually has to send that message ... simply gets passed in an instance of EmailSenderService. And within your unit tests, you can now @Mock that interface, and you gain control over it (see the sketch after these options).
Alternatively, you simply deprecate that static method (maybe the whole class), and you design a new "real" EmailSenderService that doesn't rely on static methods.
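A minimal sketch of the first flavor. The SmtpEmailSender and Notifier classes are illustrative stand-ins, not from the original code; the point is only to show the seam:

import java.util.Properties;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.MimeMessage;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import static org.mockito.Mockito.any;
import static org.mockito.Mockito.verify;

// The seam: a minimal interface for the production code to depend on.
interface EmailSenderService {
    void send(Message message) throws MessagingException;
}

// The only place the static call remains; it is too thin to need a unit test.
class SmtpEmailSender implements EmailSenderService {
    @Override
    public void send(Message message) throws MessagingException {
        Transport.send(message);
    }
}

// Stand-in for your class containing sendEmail(...), refactored to use the seam.
class Notifier {
    private final EmailSenderService sender;

    Notifier(EmailSenderService sender) {
        this.sender = sender;
    }

    void sendEmail(String message, String contentType) throws MessagingException {
        MimeMessage mime = new MimeMessage(Session.getDefaultInstance(new Properties(), null));
        mime.setContent(message, contentType);
        sender.send(mime);
    }
}

@RunWith(MockitoJUnitRunner.class)
public class NotifierTest {

    @Mock
    private EmailSenderService emailSenderService;

    @Test
    public void sendsTheMessage() throws Exception {
        Notifier notifier = new Notifier(emailSenderService);
        notifier.sendEmail("hello", "text/plain");

        // The interaction is now verifiable, and no real mail is sent.
        verify(emailSenderService).send(any(Message.class));
    }
}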
I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };
            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            // Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); // pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Report parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");

    return reportFileInfo;
}
The problem here is that your action is synchronous, so it is trying to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler, and then your client code should work properly, as it already handles the waiting and checking.
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client would periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high-level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(Base, delegate // pass 'this' or 'Base', depending on context
{
    // your long-running code here
});
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure whether your use case with an external application will go well with an async PXLongOperation; the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know, there's no real practical drawback to doing this unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be a lost connection after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting large amounts of data from the Acumatica API, as this is also a common cause of failures that are missed in testing and occur in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />
In trying to integrate RavenDB usage with ServiceStack, I ran across the solution proposed in the answer to "using RavenDB with ServiceStack" for session management. The proposal to use the line below to dispose of the DocumentSession object once the request is complete was an attractive one.
container.Register(c => c.Resolve<IDocumentStore>().OpenSession()).ReusedWithin(ReuseScope.Request);
From what I understand of the Funq logic, I'm registering a new DocumentSession object with the IoC container that will be resolved for IDocumentSession and will only exist for the duration of the request. That seemed like a very clean approach.
However, I have since run into the following max session requests exception from RavenDB:
The maximum number of requests (30) allowed for this session has been
reached. Raven limits the number of remote calls that a session is
allowed to make as an early warning system. Sessions are expected to
be short lived, and Raven provides facilities like Load(string[] keys)
to load multiple documents at once and batch saves.
Now, unless I'm missing something, I shouldn't be hitting a request cap on a single session if each session only exists for the duration of a single request. To get around this problem, I tried the following, quite ill-advised solution to no avail:
var session = container.Resolve<IDocumentStore>().OpenSession();
session.Advanced.MaxNumberOfRequestsPerSession = 50000;
container.Register(p => session).ReusedWithin(ReuseScope.Request);
Here is a sample of how I'm using the resolved DocumentSession instance:
private readonly IDocumentSession _session;

public UsersService(IDocumentSession session)
{
    _session = session;
}

public ServiceResponse<UserProfile> Get(GetUser request)
{
    var response = new ServiceResponse<UserProfile> { Successful = true };
    try
    {
        var user = _session.Load<UserProfile>(request.UserId);
        if (user == null || user.Deleted || !user.IsActive)
        {
            throw HttpError.NotFound("User {0} was not found.".Fmt(request.UserId));
        }
        response.Data = user;
    }
    catch (Exception ex)
    {
        _logger.Error(ex.Message, ex);
        response.StackTrace = ex.StackTrace;
        response.Errors.Add(ex.Message);
        response.Successful = false;
    }
    return response;
}
As far as I can see, I'm implementing SS + RavenDB "by the book" as far as the integration point goes, but I'm still getting this max session request exception and I don't understand how. I also cannot reliably replicate the exception or the conditions under which it is being thrown, which is very unsettling.
I have a service in my ServiceStack API that handles image results by implementing IStreamWriter's WriteTo(stream). It works great.
To optimize processing, I am adding support for the in-memory cache, with a TimeSpan to expire the results. My concern is related to IDisposable. Prior to the cache implementation, I was using IDisposable to dispose of the result object and its image after returning; but with the in-memory cache, the result cannot implement IDisposable, otherwise the data will be wiped before it is re-fetched from the cache.
The question is how, or where, to implement the disposal of the cached results. Will the cache dispose of the items on expiration? If so, how do I make Dispose run only for calls from the cache manager, and not from the HTTP handler?
public class ImageResult : IDisposable, IStreamWriter, IHasOptions
{
    private readonly Image image;

    public void WriteTo(Stream responseStream)
    {
        image.Save(responseStream, imgFormat);
    }

    public void Dispose()
    {
        // if we dispose here, the image will be disposed after the first result is returned;
        // we want the image to be disposed on cache expiration
        //if (this.image != null)
        //    this.image.Dispose();
    }
}

public class ImageService : AssetService
{
    public object Get(ImageRequest request)
    {
        var cacheKey = ServiceStack.Common.UrnId.Create<ImageRequest>(request.id);
        if (Cache.Get<ImageResult>(cacheKey) == null)
        {
            Cache.Set<ImageResult>(cacheKey, GetImage(request), TimeSpan.FromMinutes(1));
        }
        return Cache.Get<ImageResult>(cacheKey);
    }
    [...]
}
From a quick look at ServiceStack's InMemoryCache, you can see there's no event or callback to hook into for cache-entry expiration. Consider using System.Runtime.Caching.MemoryCache, which gives you similar caching capabilities; specifically, you can use a change monitor for a callback on expiration and/or removal.
Another alternative: create your own cache from ServiceStack's cache source code, extended to provide you with a callback.
Once you have a callback in place, you could call Dispose() from there. But since, as you said, you don't want the ImageResult to be disposable, instead allow access to its Image property and dispose of that from the expiration callback yourself. You could wrap a class around .NET's Image to allow for unit testing (avoiding having to use a real image object in tests).
EDIT: actually... see below (*), this would create a mess.
On another note, I would make some slight changes to your Get() method. The last call to Cache.Get() is superfluous. Even though you're using an in-memory cache, you'd still want to minimize access to it, as it's potentially slower than it may seem (it needs locks to synchronize in-memory access from multiple threads):
var imageResult = Cache.Get<ImageResult>(cacheKey);
if (imageResult == null)
{
    imageResult = GetImage(request);
    Cache.Set<ImageResult>(cacheKey, imageResult, TimeSpan.FromMinutes(1));
}
return imageResult;
(*) Just realized you could have a request getting the ImageResult from the cache, and then, an instant later, before it writes anything to the target (response) stream, the entry expires and gets disposed. Nasty. Instead, let .NET handle this for you: rather than making ImageResult implement IDisposable, create a destructor in which you dispose the internal Image object. This will work with ServiceStack's in-memory cache:
~ImageResult()
{
    image.Dispose();
}
What is the difference between Gateway and Service Activator as Message Endpoints (in terms of Enterprise Integration Patterns)?
http://eaipatterns.com/
Typically, a service activator is used to invoke a local service, in such a manner that the service doesn't know it's being invoked from a messaging system.
A gateway is typically an entry or exit point for the messaging system.
The service activator calls a method on an object where the application developer provides the implementation. Spring Integration takes care of calling the method with messages from the input channel and shunting the results off to some output channel. The application-provided code can do some arbitrary work.
For the gateway the application developer provides only an interface, its implementation is provided by Spring.
An appendix to the Spring Integration documentation includes a Cafe example where Barista is a service called through a service activator, and Cafe is a gateway.
The application's main method looks up a Cafe object from the Spring application context and calls placeOrder on it, passing an Order in as an argument:
public static void main(String[] args) {
    AbstractApplicationContext context = null;
    if (args.length > 0) {
        context = new FileSystemXmlApplicationContext(args);
    }
    else {
        context = new ClassPathXmlApplicationContext("cafeDemo.xml", CafeDemo.class);
    }
    Cafe cafe = (Cafe) context.getBean("cafe");
    for (int i = 1; i <= 100; i++) {
        Order order = new Order(i);
        order.addItem(DrinkType.LATTE, 2, false);
        order.addItem(DrinkType.MOCHA, 3, true);
        cafe.placeOrder(order);
    }
}
The Cafe is an interface that the application does not provide an implementation for. Spring generates an implementation that sends the Orders passed to it down the input channel called "orders"; a sketch of what that interface looks like follows.
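As a sketch, an annotation-based declaration of that gateway might look like the following (the sample itself wires this in XML via a <gateway/> element with a service-interface attribute; the channel and type names are taken from the walkthrough above):

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway
public interface Cafe {

    // No implementation is written by the application: Spring generates a
    // proxy that wraps the Order in a message and sends it to "orders".
    @Gateway(requestChannel = "orders")
    void placeOrder(Order order);
}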
Further down the pipeline, there are two service-activators that have a reference to the Barista. The Barista is a POJO that has code for creating a Drink like this:
public Drink prepareHotDrink(OrderItem orderItem) {
    try {
        Thread.sleep(this.hotDrinkDelay);
        System.out.println(Thread.currentThread().getName()
                + " prepared hot drink #" + hotDrinkCounter.incrementAndGet()
                + " for order #" + orderItem.getOrder().getNumber()
                + ": " + orderItem);
        return new Drink(orderItem.getOrder().getNumber(),
                orderItem.getDrinkType(),
                orderItem.isIced(), orderItem.getShots());
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null;
    }
}
The Barista receives drink orders from the service-activator's input channel and has a method called on it that returns a Drink, which is then sent down the service-activator's output channel, "preparedDrinks". A sketch of that wiring follows.
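In annotation form, that service-activator wiring can be sketched like this (the sample uses XML, roughly a <service-activator input-channel="..." ref="barista" method="prepareHotDrink"/> element; the input channel name here is illustrative):

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Component;

@Component
public class Barista {

    // The framework calls this with the payload of each message arriving on
    // the input channel and publishes the returned Drink to the output
    // channel; the POJO itself contains no messaging code.
    @ServiceActivator(inputChannel = "hotDrinks", outputChannel = "preparedDrinks")
    public Drink prepareHotDrink(OrderItem orderItem) {
        // body as shown above
        return new Drink(orderItem.getOrder().getNumber(),
                orderItem.getDrinkType(),
                orderItem.isIced(), orderItem.getShots());
    }
}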
For me, a gateway is used to make an abstraction and provide a normalised API over one or more back-end services.
E.g. you have 5 providers which use different ways to interface with you (SOAP, REST, XML/HTTP, whatever), but your client wants only one way to get the data (let's say JSON/REST).
The gateway will take the JSON request from your client, convert it for the right backend using that backend's own protocol, and afterwards convert the backend response back to JSON to return to your client.
A service activator acts more as a trigger on an incoming message. Let's say your activator polls a database for incoming messages; when a condition meets the "activation" criteria, it calls the underlying service.