ServiceStack/Funq not disposing RavenDB document session after request is complete

In trying to integrate RavenDB usage with ServiceStack, I ran across the following solution proposed for session management:
A: using RavenDB with ServiceStack
The proposal to use the line below to dispose of the DocumentSession object once the request is complete was an attractive one.
container.Register(c => c.Resolve<IDocumentStore>().OpenSession()).ReusedWithin(ReuseScope.Request);
From what I understand of the Funq logic, I'm registering a new DocumentSession object with the IoC container that will be resolved for IDocumentSession and will only exist for the duration of the request. That seemed like a very clean approach.
However, I have since run into the following max session requests exception from RavenDB:
The maximum number of requests (30) allowed for this session has been
reached. Raven limits the number of remote calls that a session is
allowed to make as an early warning system. Sessions are expected to
be short lived, and Raven provides facilities like Load(string[] keys)
to load multiple documents at once and batch saves.
Now, unless I'm missing something, I shouldn't be hitting a request cap on a single session if each session only exists for the duration of a single request. To get around this problem, I tried the following (quite ill-advised) solution, to no avail:
var session = container.Resolve<IDocumentStore>().OpenSession();
session.Advanced.MaxNumberOfRequestsPerSession = 50000;
container.Register(p => session).ReusedWithin(ReuseScope.Request);
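For contrast, here is a minimal sketch (assuming Funq's usual deferred-factory semantics) of why the two registrations behave differently:
// Deferred: the lambda runs at resolve time, so each request scope
// can open its own session.
container.Register(c => c.Resolve<IDocumentStore>().OpenSession())
         .ReusedWithin(ReuseScope.Request);

// Captured: OpenSession() runs once, at registration time; the lambda
// then hands that same captured instance to every request, so the
// session's request counter accumulates across requests.
var session = container.Resolve<IDocumentStore>().OpenSession();
container.Register(p => session).ReusedWithin(ReuseScope.Request);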
Here is a sample of how I'm using the resolved DocumentSession instance:
private readonly IDocumentSession _session;

public UsersService(IDocumentSession session)
{
    _session = session;
}

public ServiceResponse<UserProfile> Get(GetUser request)
{
    var response = new ServiceResponse<UserProfile> { Successful = true };
    try
    {
        var user = _session.Load<UserProfile>(request.UserId);
        if (user == null || user.Deleted || !user.IsActive)
        {
            throw HttpError.NotFound("User {0} was not found.".Fmt(request.UserId));
        }
        response.Data = user;
    }
    catch (Exception ex)
    {
        _logger.Error(ex.Message, ex);
        response.StackTrace = ex.StackTrace;
        response.Errors.Add(ex.Message);
        response.Successful = false;
    }
    return response;
}
As far as I can see, I'm implementing SS + RavenDB "by the book" as far as the integration point goes, but I'm still getting this max session request exception and I don't understand how. I also cannot reliably replicate the exception or the conditions under which it is being thrown, which is very unsettling.


Application Insights telemetry correlation using log4net

I'm looking to have proper correlation in our distributed event logging. For our web applications this seems to be automatic; an example of correlated logs from one of our App Service APIs was shown in a screenshot (omitted here).
However, for our other (non-ASP, non-WebApp) services, where we use log4net and the App Insights appender, our logs are not correlated. I tried following the instructions here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/correlation
Even after adding unique operation_Id attributes to each operation, we're not seeing log correlation (I also tried "Operation Id"); a screenshot of a non-correlated log entry is likewise omitted.
Any help on how to achieve this using log4net would be appreciated.
Cheers!
Across services, correlation IDs are mostly propagated through headers. When Application Insights is enabled for a web application, it reads IDs/context from the incoming headers and then updates outgoing headers with the appropriate IDs/context. Within a service, the operation is tracked with an Activity object, and every telemetry item emitted is associated with this Activity, thus sharing the necessary correlation IDs.
In the case of Service Bus / Event Hub communication, propagation is also supported in recent versions (IDs/context propagate as metadata).
If a service is not web-based and Application Insights' automated correlation propagation is not working, you may need to manually get the incoming ID information from metadata (if any exists), restore or initiate an Activity, and start an AI operation with that Activity. When a telemetry item is generated in the scope of that Activity, it will get the proper IDs and will be part of the overarching trace. Accordingly, if telemetry is generated from a log4net trace that was executed in the scope of an AI operation context, that telemetry should get the right IDs.
Code sample to access correlation from headers:
public class ApplicationInsightsMiddleware : OwinMiddleware
{
    // You may create a new TelemetryConfiguration instance, reuse one you already have,
    // or fetch the instance created by the Application Insights SDK.
    private readonly TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
    private readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);

    public ApplicationInsightsMiddleware(OwinMiddleware next) : base(next) {}

    public override async Task Invoke(IOwinContext context)
    {
        // Create and start the RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = $"{context.Request.Method} {context.Request.Uri.GetLeftPart(UriPartial.Path)}"
        };

        // If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
        if (context.Request.Headers.ContainsKey("Request-Id"))
        {
            var requestId = context.Request.Headers.Get("Request-Id");
            // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
            requestTelemetry.Context.Operation.Id = GetOperationId(requestId);
            requestTelemetry.Context.Operation.ParentId = requestId;
        }

        // StartOperation is a helper method that allows correlation of
        // current operations with nested operations/telemetry
        // and initializes start time and duration on telemetry items.
        var operation = telemetryClient.StartOperation(requestTelemetry);

        // Process the request.
        try
        {
            await Next.Invoke(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            telemetryClient.TrackException(e);
            throw;
        }
        finally
        {
            // Update status code and success as appropriate.
            if (context.Response != null)
            {
                requestTelemetry.ResponseCode = context.Response.StatusCode.ToString();
                requestTelemetry.Success = context.Response.StatusCode >= 200 && context.Response.StatusCode <= 299;
            }
            else
            {
                requestTelemetry.Success = false;
            }

            // Now it's time to stop the operation (and track telemetry).
            telemetryClient.StopOperation(operation);
        }
    }

    public static string GetOperationId(string id)
    {
        // Returns the root ID from the '|' to the first '.' if any.
        int rootEnd = id.IndexOf('.');
        if (rootEnd < 0)
            rootEnd = id.Length;

        int rootStart = id[0] == '|' ? 1 : 0;
        return id.Substring(rootStart, rootEnd - rootStart);
    }
}
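To wire this middleware into the OWIN pipeline, here is a minimal sketch assuming a standard Startup class:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Register the correlation middleware first so that everything
        // downstream runs inside the started Application Insights operation.
        app.Use<ApplicationInsightsMiddleware>();
        // ... rest of the pipeline ...
    }
}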
Code sample for manual correlated operation tracking in isolation:
async Task BackgroundTask()
{
    // taskName is assumed to be defined elsewhere (e.g., a field or parameter).
    var operation = telemetryClient.StartOperation<DependencyTelemetry>(taskName);
    operation.Telemetry.Type = "Background";
    try
    {
        int progress = 0;
        while (progress < 100)
        {
            // Process the task.
            telemetryClient.TrackTrace($"done {progress++}%");
        }
        // Update status code and success as appropriate.
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Update status code and success as appropriate.
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}
Please note that the most recent versions of the Application Insights SDK are switching to the W3C correlation standard, so the header names and the expected format differ, as per the W3C Trace Context specification.
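For example, here is a minimal sketch (not the SDK's own parser) of pulling the trace ID out of a W3C traceparent header, which has the form version-traceid-parentid-flags:
public static string GetW3CTraceId(string traceparent)
{
    // e.g. "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
    //       version - trace-id (32 hex) - parent-id (16 hex) - trace-flags
    var parts = traceparent.Split('-');
    return parts.Length >= 4 ? parts[1] : null;
}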

Handling Acumatica timeout on API Invoke action

I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };

            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            // Wait for the update to complete before we attempt to retrieve the files.
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); // pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;
[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
//Report Paramenters
Dictionary<String, String> parameters = new Dictionary<String, String>();
parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;
IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
exportTrayLabelReport(adapter, "SFP");
return reportFileInfo;
}
The problem here is that your action is synchronous, so it tries to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler; then your client code should work properly, as it already handles the waiting and status checking.
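A hypothetical sketch of that change against the handler above; note (as pointed out below) that the report files are then produced inside the long-running operation rather than returned synchronously:
[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    Dictionary<string, string> parameters = new Dictionary<string, string>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    // Schedule the work and return immediately; the client's
    // GetProcessStatus loop will see InProcess until this completes.
    PXLongOperation.StartOperation(Base, delegate
    {
        ExportReport(adapter, "IN619217", parameters);
        exportTrayLabelReport(adapter, "SFP");
    });

    return adapter.Get();
}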
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client would periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high-level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this, delegate  // use Base instead of this from a graph extension
{
    // your long-running code here
});
When I encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure whether your use case with an external application will play well with an async PXLongOperation: the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know, there's no real practical drawback to doing this, unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be a lost connection after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value (the snippet below shows the 300-second default). It's not a bad idea to increase maxRequestLength too if you are requesting a large amount of data from the Acumatica API, as this is also a common cause of failures that you miss in testing but that occur in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />

ServiceStack MQ: how to populate data in RequestContext

I'm developing a JWT-based multi-tenancy system using ServiceStack. The JWT token contains shard information, and I use JwtAuthProvider to translate the JWT token to a session object, following the instructions at http://docs.servicestack.net/jwt-authprovider.
Now, I want to use ServiceStack MQ for asynchronous processing. The MQ request needs to be aware of the shard information, so I populate the request context before executing it as follows:
mqServer.RegisterHandler<EmployeeAssignedToProject>(m =>
{
    var req = new BasicRequest { Verb = HttpMethods.Post };
    var sessionKey = SessionFeature.GetSessionKey(m.GetBody().SessionId);
    var session = HostContext.TryResolve<ICacheClient>().Get<Context>(sessionKey);
    req.Items[Keywords.Session] = session;
    var response = ExecuteMessage(m, req);
    return response;
});
Here, Context is my custom session class. This technique stems from the instructions at http://docs.servicestack.net/messaging#authenticated-requests-via-mq. Since I execute the message within the context of req, I reckoned that I should then be able to resolve Context as follows:
container.AddScoped<Context>(c =>
{
    var webRequest = HostContext.TryGetCurrentRequest();
    if (webRequest != null)
    {
        return webRequest.SessionAs<Context>();
    }
    else
    {
        return HostContext.RequestContext.Items[Keywords.Session] as Context;
    }
});
However, HostContext.RequestContext.Items is always empty. So the question is: how do I populate HostContext.RequestContext.Items from within the message-handler registration code?
I've tried to dig a little into the ServiceStack code and found that ExecuteMessage(IMessage dto, IRequest req) in ServiceController doesn't seem to populate data in RequestContext. In my case, it is a bit too late to get the session inside the service instance, as the service instance depends on some DB connections whose shard info is kept in the session.
The same Request Context instance can't be resolved from the IOC. The Request Context instance is created in the MQ's RegisterHandler<T>(), where you can add custom data to the IRequest.Items property, e.g:
mqServer.RegisterHandler<EmployeeAssignedToProject>(m =>
{
    var req = new BasicRequest { Verb = HttpMethods.Post };
    req.Items[MyKey] = MyValue; // Inject custom per-request data
    //...
    var response = ExecuteMessage(m, req);
    return response;
});
This IRequest instance is available throughout the request pipeline and from base.Request in your Services. It's not available from your IOC registrations, so you will need to pass it in as an argument when calling your dependency, e.g:
public class MyServices : Service
{
    public IDependency MyDep { get; set; }

    public object Any(MyRequest request) => MyDep.Method(base.Request, request.Id);
}
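For illustration, here is a hypothetical dependency (IDependency, Method, and ShardId are made-up names) showing how the IRequest passed in gives access to the session injected in RegisterHandler:
public class MyDependency : IDependency
{
    public object Method(IRequest req, int id)
    {
        // The custom session stored in req.Items by the MQ handler is
        // carried along with this IRequest instance.
        var context = req.Items[Keywords.Session] as Context;
        var shard = context?.ShardId; // hypothetical shard property
        // ... open the DB connection for this shard and do the work ...
        return null;
    }
}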

Using Azure B2C with an MVC app gets into infinite loop resulting with Bad Request - Request Too Long Http 400 error

So I've built and published a new website that uses Azure B2C as the authentication mechanism.
What I found was that login and signup would work fine for a while. But after a period of time, say, a couple of hours after visiting the site post-deployment, on login or signup, after successful authentication, instead of being redirected back to the return URL set up in the B2C configuration, my browser would get caught in an infinite loop between the post-authentication landing page (which is protected with an authorize attribute) and the Azure B2C login page, before finally finishing with an HTTP 400 error with the message: Bad Request - Request Too Long.
I did some googling around this, and a number of posts suggest that the problem is with the cookies and that deleting them should resolve the issue. This is not the case. The only thing I have found that fixes it is restarting the application on the web server, or waiting, say, 24 hours for some kind of cache or application pool to reset. Does anyone have any idea what's going on here?
Ok, I think I may have found the answer.
It looks like there is an issue with the Microsoft.Owin library and the way it sets cookies. Writing directly to System.Web solves the problem, according to this article.
There are three suggested solutions:
Ensure the session is established prior to authentication: the conflict between System.Web and Katana cookies is per request, so it may be possible for the application to establish the session on some request prior to the authentication flow. This should be easy to do when the user first arrives, but it may be harder to guarantee later, when the session or auth cookies expire and/or need to be refreshed. (A sketch follows this list.)
Disable the SessionStateModule: if the application is not relying on session information, but the session module is still setting a cookie that causes the above conflict, then you may consider disabling the session state module.
Reconfigure the CookieAuthenticationMiddleware to write directly to System.Web's cookie collection.
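For the first option, a hypothetical way to establish the session early is to touch it in Global.asax.cs, so the ASP.NET session cookie is issued before any authentication redirect:
protected void Session_Start(object sender, EventArgs e)
{
    // Storing a value keeps the session (and its cookie) alive, so the
    // System.Web/Katana cookie conflict doesn't hit the auth flow.
    Session["__init"] = DateTime.UtcNow;
}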
I will opt for the third option, which is to override the default cookie authentication middleware, as suggested below.
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    // ...
    CookieManager = new SystemWebCookieManager()
});
public class SystemWebCookieManager : ICookieManager
{
    public string GetRequestCookie(IOwinContext context, string key)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }

        var webContext = context.Get<HttpContextBase>(typeof(HttpContextBase).FullName);
        var cookie = webContext.Request.Cookies[key];
        return cookie == null ? null : cookie.Value;
    }

    public void AppendResponseCookie(IOwinContext context, string key, string value, CookieOptions options)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        if (options == null)
        {
            throw new ArgumentNullException("options");
        }

        var webContext = context.Get<HttpContextBase>(typeof(HttpContextBase).FullName);

        bool domainHasValue = !string.IsNullOrEmpty(options.Domain);
        bool pathHasValue = !string.IsNullOrEmpty(options.Path);
        bool expiresHasValue = options.Expires.HasValue;

        var cookie = new HttpCookie(key, value);
        if (domainHasValue)
        {
            cookie.Domain = options.Domain;
        }
        if (pathHasValue)
        {
            cookie.Path = options.Path;
        }
        if (expiresHasValue)
        {
            cookie.Expires = options.Expires.Value;
        }
        if (options.Secure)
        {
            cookie.Secure = true;
        }
        if (options.HttpOnly)
        {
            cookie.HttpOnly = true;
        }

        webContext.Response.AppendCookie(cookie);
    }

    public void DeleteCookie(IOwinContext context, string key, CookieOptions options)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        if (options == null)
        {
            throw new ArgumentNullException("options");
        }

        AppendResponseCookie(
            context,
            key,
            string.Empty,
            new CookieOptions
            {
                Path = options.Path,
                Domain = options.Domain,
                Expires = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc),
            });
    }
}
I will give that a crack, and post my results back here.

Multiple REST calls timing out in Spring Boot web application

I created a Spring Boot (1.4.2) REST application. One of the @RestController methods needs to invoke a 3rd-party API REST operation (RestOp1), which returns, say, between 100 and 250 records. For each of the records returned by RestOp1, within the same method, another REST operation of the same 3rd-party API (RestOp2) must be invoked. My first attempt involved using a controller-class-level ExecutorService based on a fixed thread pool of size 100, and a Callable returning a record corresponding to the response of RestOp2:
// Executor thread pool - declared and initialized at class level
ExecutorService executor = Executors.newFixedThreadPool(100);

// Get records from RestOp1
ResponseEntity<RestOp1ResObj[]> restOp1ResObjList =
        this.restTemplate.exchange(url1, HttpMethod.GET, httpEntity, RestOp1ResObj[].class);
RestOp1ResObj[] records = restOp1ResObjList.getBody();

// Instantiate a list of futures (to call RestOp2 for each record)
List<Future<RestOp2ResObj>> futureList = new ArrayList<>();

// Iterate through the array of records and call RestOp2 concurrently, using Callables.
for (int count = 0; count < records.length; count++) {
    Future<RestOp2ResObj> future = this.executor.submit(new Callable<RestOp2ResObj>() {
        @Override
        public RestOp2ResObj call() throws Exception {
            return restTemplate
                    .exchange(url2, HttpMethod.GET, httpEntity, RestOp2ResObj.class)
                    .getBody();
        }
    });
    futureList.add(future);
}

// Iterate the list of futures and fetch the RestOp2 response for each
// record. Build a final response and send it back to the client.
for (int count = 0; count < futureList.size(); count++) {
    RestOp2ResObj response = futureList.get(count).get();
    // use the above response to build a final response for all the records.
}
The performance of the above code is abysmal, to say the least. The response time for a RestOp1 call (invoked only once) is around 2.5 seconds, and for a RestOp2 call (invoked once per record) it is about 1.5 seconds. But the overall execution time is between 20 and 30 seconds, as opposed to an expected range of 5-6 seconds! Am I missing something fundamental here?
Is the service you are calling fast enough to handle that many requests per second?
There is an async version of RestTemplate available, called AsyncRestTemplate. Why are you not using that?
I would probably go like this:
AsyncRestTemplate asyncRestTemplate =
        new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newFixedThreadPool(100)));

asyncRestTemplate.exchange("http://www.example.com/myurl", HttpMethod.GET, new HttpEntity<>("message"), String.class)
        .addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
            @Override
            public void onSuccess(ResponseEntity<String> result) {
                //TODO: Add real response handling
                System.out.println(result);
            }

            @Override
            public void onFailure(Throwable ex) {
                //TODO: Add real logging solution
                ex.printStackTrace();
            }
        });
Your question involves two parts:
making multiple API calls asynchronously
handling timeouts (fallback)
Both parts are related, as you have to handle the timeout of each call.
You may consider using Spring Cloud (based on Spring Boot) and some out-of-the-box solutions based on the OSS Netflix stack.
For the first part (timeouts), that would be a Hystrix circuit breaker on top of a Feign client.
For the second part (multiple requests), this is an architecture issue: using native Executors isn't a good idea, as it will not scale and has huge maintenance costs. You may rely on Spring Asynchronous Methods instead; you'll get better results in a fully Spring-compliant way.
Hope this will help.
