How to improve performance when multiple clients connect concurrently - IIS

I've noticed that when more than ~10 concurrent clients request my API endpoint, response time increases significantly.
I'm using IIS 7.5 to host my MVC Web API application (.NET 4.5.2).
I've created a simple API Action to check this:
public SimpleResponse<bool> Get()
{
    Thread.Sleep(50);
    return new SimpleResponse<bool>
    {
        Value = true
    };
}
And here's how my test client looks:
private static void TestClientConnections(int clients)
{
    var client = new HttpClient();
    for (int i = 0; i < clients; i++)
    {
        Task.Factory.StartNew(() =>
        {
            Stopwatch sw = new Stopwatch();
            while (true)
            {
                sw.Restart();
                var res = client.GetAsync("http://localhost:98/ping").Result;
                sw.Stop();
                Console.WriteLine("{0}ms", sw.ElapsedMilliseconds);
            }
        }, TaskCreationOptions.LongRunning);
    }
}
Console output when 10 clients are run:
62ms
63ms
62ms
61ms
63ms
64ms
63ms
...
Console output when 30 clients are run:
187ms
187ms
187ms
187ms
186ms
186ms
186ms
...
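(Notably, 187 ms is almost exactly three times the ~62 ms I get with 10 clients, as if only about ten requests were being served at a time.)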
What I tried in order to overcome this:
1) Set System.Net.ServicePointManager.DefaultConnectionLimit to a large number, in case this was a client-side issue;
2) Changed the thread limits in machine.config:
<processModel maxWorkerThreads="200" maxIoThreads="200" minWorkerThreads="150"/>
3) Increased the max connections:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="2000"/>
  </connectionManagement>
</system.net>
4) Made the API action asynchronous (see the sketch below).
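For reference, here is roughly what the asynchronous variant of the test action looked like (a sketch; SimpleResponse<T> is the wrapper type from the snippet above):
public async Task<SimpleResponse<bool>> Get()
{
    // Task.Delay instead of Thread.Sleep: the worker thread goes back
    // to the pool while the 50 ms elapse.
    await Task.Delay(50);
    return new SimpleResponse<bool> { Value = true };
}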
None of the above helped. Any ideas?
I really need to handle at least 100 clients concurrently.

Related

How to resolve external API latency when calling from an Azure-deployed application?

Info:
I have the two methods below, which are part of a Web API (not ASP.NET Core) deployed in Azure.
Method 1:
public async Task<bool> ProcessEmployee(List<Employee> EmployeeList)
{
    var tasks = new List<Task<EmployeeResponseModel>>();
    HttpClient localHttpClient = new HttpClient();
    localHttpClient.Timeout = TimeSpan.FromSeconds(100);
    foreach (var employee in EmployeeList) // having 1000 calls
    {
        tasks.Add(GetAddressResponse(employee.URL, localHttpClient));
    }
    var responses = await Task.WhenAll(tasks);
    return true; // the original snippet omitted the return value
}
Method 2:
private async Task<EmployeeResponseModel> GetAddressResponse(string url, HttpClient client)
{
    var response = new EmployeeResponseModel();
    try
    {
        using (HttpResponseMessage apiResponse = await client.GetAsync(url))
        {
            if (apiResponse.IsSuccessStatusCode)
            {
                var res = await apiResponse.Content.ReadAsStringAsync();
                response = JsonConvert.DeserializeObject<EmployeeResponseModel>(res);
            }
        }
        return response;
    }
    catch (Exception ex)
    {
        // Exceptions are swallowed; the default (empty) response is returned below.
    }
    return response;
}
If I monitor from Azure -> Diagnose and solve problems -> Web App Slow, all external API calls show latency issues.
But if I call the same external API from Postman, it is quite fast, with much lower latency.
Methods 1 and 2 are part of one Web API deployed on an Azure App Service.
getAddress is the external API; it is deployed in another environment and I don't have much information about it.
If we call the external API getAddress from method 1, we face high latency, more than 5 seconds.
If we call the external API getAddress from Postman, we receive a response in 303 ms.
I guess it results from the location of the service plan.
If the service plan's region is far away from your location, it may cause the latency. But that doesn't rule out other possibilities, so my suggestion is to debug on localhost first to rule out the code.
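Another thing worth ruling out (an assumption, not a confirmed diagnosis): Method 1 starts all 1000 requests at once, so each awaited call also measures the time it spends queued behind the others on the shared connection pool, while Postman times a single request in isolation. Throttling the fan-out, for example with a SemaphoreSlim (the limit of 50 below is an arbitrary example, and GetAddressResponseThrottled is a hypothetical wrapper around Method 2), would make the per-call timings comparable:
private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(50);

private async Task<EmployeeResponseModel> GetAddressResponseThrottled(string url, HttpClient client)
{
    // Wait for a free slot so at most 50 calls are in flight at once.
    await Throttle.WaitAsync();
    try
    {
        // Delegates to the existing GetAddressResponse (Method 2 above).
        return await GetAddressResponse(url, client);
    }
    finally
    {
        Throttle.Release();
    }
}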

.NET Core Web API causes IIS application pool to shut down

Background:
I'm building a .NET Core Web API that does practically nothing more than check whether a given URL exists and return the result. If the URL exists and is a redirect (301, 302), the API follows the redirect and returns that result as well. The Web API is called by an SPA which makes an API call for every URL in a check-request queue. So, if someone adds 500 URLs to the queue, the SPA will loop through it and send 500 calls to the API (something I could improve upon).
The problem:
My IIS application pool is being shut down on a regular basis due to high CPU usage and/or memory usage:
A worker process serving application pool 'api.domain.com(domain)(4.0)(pool)' has requested a recycle because it reached its private bytes memory limit.
The only way to get my API going again is to manually restart the application. I don't think the operations performed by the API are that demanding, but I surely must be doing something wrong here. Can somebody help me please? The code called by the SPA is:
var checkResponse = new CheckResponse();
var httpMethod = new HttpMethod(request.HttpMethod.ToUpper());
var httpRequestMessage = new HttpRequestMessage(httpMethod, request.Url);
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;
var httpResponseMessage = await httpClient.SendAsync(httpRequestMessage);
checkResponse.RequestMessage = httpResponseMessage.RequestMessage;
checkResponse.Headers = httpResponseMessage.Headers;
checkResponse.StatusCode = httpResponseMessage.StatusCode;
switch (httpResponseMessage.StatusCode)
{
    case HttpStatusCode.Ambiguous:
    case HttpStatusCode.Found:
    case HttpStatusCode.Moved:
    case HttpStatusCode.NotModified:
    case HttpStatusCode.RedirectMethod:
    case HttpStatusCode.TemporaryRedirect:
    case HttpStatusCode.UseProxy:
        var redirectRequest = new CheckRequest
        {
            Url = httpResponseMessage.Headers.Location.AbsoluteUri,
            HttpMethod = request.HttpMethod,
            CustomHeaders = request.CustomHeaders
        };
        checkResponse.RedirectResponse = await CheckUrl(redirectRequest);
        break;
}
The Action on my ApiController:
[HttpPost]
public async Task<IActionResult> Post([FromBody] CheckRequest request)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    var result = await CheckService.CheckUrl(request);
    return Ok(result);
}
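One observation on the code itself (an assumption on my part, not a confirmed cause): CheckUrl calls itself for every redirect it encounters with no depth limit, so a circular redirect chain would recurse without bound and drive CPU and memory up until the pool hits its private bytes limit. A cheap guard is a depth cap. A minimal sketch, reusing the CheckRequest/CheckResponse shapes and the httpClient field from the snippets above, with a hypothetical depth parameter and an arbitrary cap of 10 hops:
public async Task<CheckResponse> CheckUrl(CheckRequest request, int depth = 0)
{
    var checkResponse = new CheckResponse();
    var httpMethod = new HttpMethod(request.HttpMethod.ToUpper());
    var httpResponseMessage = await httpClient.SendAsync(new HttpRequestMessage(httpMethod, request.Url));

    checkResponse.RequestMessage = httpResponseMessage.RequestMessage;
    checkResponse.Headers = httpResponseMessage.Headers;
    checkResponse.StatusCode = httpResponseMessage.StatusCode;

    bool isRedirect = httpResponseMessage.Headers.Location != null
                      && (int)httpResponseMessage.StatusCode >= 300
                      && (int)httpResponseMessage.StatusCode < 400;

    // Only follow the redirect if we haven't exceeded the cap; otherwise
    // return the chain as-is instead of recursing forever.
    if (isRedirect && depth < 10)
    {
        checkResponse.RedirectResponse = await CheckUrl(new CheckRequest
        {
            Url = httpResponseMessage.Headers.Location.AbsoluteUri,
            HttpMethod = request.HttpMethod,
            CustomHeaders = request.CustomHeaders
        }, depth + 1);
    }

    return checkResponse;
}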

Retrofit2 - OkHttp ConnectionPool grows to 100+ threads. Why?

I am using Retrofit 2 in a Java service to connect to a REST API and fetch data.
The code looks like this:
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl(endPoint)
        .addConverterFactory(JacksonConverterFactory.create())
        .build();
SyncCentralLojaProxySvc svc = retrofit.create(SyncCentralLojaProxySvc.class);

LogVerCentralLojaEntity entity = syncSvc.getLogVerByCdFilial(filial);
long cd_log = (entity != null) ? entity.getCdLog() : 0;

Call<LogCentralLojaCompactoCollectionDto> call = svc.getLogCompacto(filial, cd_log);
Response<LogCentralLojaCompactoCollectionDto> response = call.execute();

// NOT_MODIFIED
if (response.code() == 304) {
    return 0;
}
if (!response.isSuccessful())
    throw new IOException(response.errorBody().string());

LogCentralLojaCompactoCollectionDto body = response.body();
It's a simple data fetch that runs synchronously (not in parallel) every few seconds.
I noticed through VisualVM that the number of OkHttp threads grows too much. The app would never use 100 operations in parallel. In fact, it only needs one.
How do I tune this? Is it natural to have so many threads?
Setting a global client with the connection pool configuration solved the issue:
ConnectionPool pool = new ConnectionPool(5, 10000, TimeUnit.MILLISECONDS);
OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl(endPoint)
        .client(client)
        .addConverterFactory(JacksonConverterFactory.create())
        .build();
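With a single shared OkHttpClient, every Retrofit instance reuses one connection pool (here capped at 5 idle connections, evicted after 10 seconds), instead of each Retrofit.Builder().build() call creating its own client with its own pool and threads.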

Subscribing to Service Fabric cluster-level events

I am trying to create a service that will update an external list of Service Endpoints for applications running in my Service Fabric cluster. (Basically I need to replicate the Azure Load Balancer in my on-premises F5 load balancer.)
During last month's Service Fabric Q&A, the team pointed me at RegisterServiceNotificationFilterAsync.
I made a stateless service using this method, and deployed it to my development cluster. I then made a new service by running the ASP.NET Core Stateless service template.
I expected that when I deployed the second service, the breakpoint would be hit in my first service, indicating that a service had been added. But no breakpoint was hit.
I have found very little in the way of examples for this kind of thing on the internet, so I am asking here hoping that someone else has done this and can tell me where I went wrong.
Here is the code for my service that is trying to catch the application changes:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var fabricClient = new FabricClient();
    long? filterId = null;
    try
    {
        var filterDescription = new ServiceNotificationFilterDescription
        {
            Name = new Uri("fabric:")
        };
        fabricClient.ServiceManager.ServiceNotificationFilterMatched += ServiceManager_ServiceNotificationFilterMatched;
        filterId = await fabricClient.ServiceManager.RegisterServiceNotificationFilterAsync(filterDescription);

        long iterations = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            ServiceEventSource.Current.ServiceMessage(this.Context, "Working-{0}", ++iterations);
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
    finally
    {
        if (filterId != null)
            await fabricClient.ServiceManager.UnregisterServiceNotificationFilterAsync(filterId.Value);
    }
}
private void ServiceManager_ServiceNotificationFilterMatched(object sender, EventArgs e)
{
    Debug.WriteLine("Change Occurred");
}
If you have any tips on how to get this going, I would love to see them.
You need to set MatchNamePrefix to true, like this:
var filterDescription = new ServiceNotificationFilterDescription
{
    Name = new Uri("fabric:"),
    MatchNamePrefix = true // match every service whose name starts with this prefix
};
Otherwise the filter only matches a service whose name equals the given URI exactly. In my application I can catch cluster-wide events when this parameter is set to true.

ServiceStack/Funq not disposing RavenDB document session after request is complete

In trying to integrate RavenDB usage with ServiceStack, I ran across the following solution proposed for session management:
A: using RavenDB with ServiceStack
The proposal to use the line below to dispose of the DocumentSession object once the request is complete was an attractive one.
container.Register(c => c.Resolve<IDocumentStore>().OpenSession()).ReusedWithin(ReuseScope.Request);
From what I understand of the Funq logic, I'm registering a new DocumentSession object with the IoC container that will be resolved for IDocumentSession and will only exist for the duration of the request. That seemed like a very clean approach.
However, I have since run into the following max session requests exception from RavenDB:
The maximum number of requests (30) allowed for this session has been reached. Raven limits the number of remote calls that a session is allowed to make as an early warning system. Sessions are expected to be short lived, and Raven provides facilities like Load(string[] keys) to load multiple documents at once and batch saves.
Now, unless I'm missing something, I shouldn't be hitting a request cap on a single session if each session only exists for the duration of a single request. To get around this problem, I tried the following, quite ill-advised solution to no avail:
var session = container.Resolve<IDocumentStore>().OpenSession();
session.Advanced.MaxNumberOfRequestsPerSession = 50000;
container.Register(p => session).ReusedWithin(ReuseScope.Request);
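(I realize part of why this is ill-advised: the session is resolved once, outside the factory lambda, so the same captured instance is handed out to every request despite ReusedWithin(ReuseScope.Request). If the cap really had to be raised, keeping OpenSession inside the lambda would at least preserve the per-request scoping; roughly:)
container.Register<IDocumentSession>(c =>
{
    // Opened inside the factory, so each request gets a fresh session.
    var session = c.Resolve<IDocumentStore>().OpenSession();
    session.Advanced.MaxNumberOfRequestsPerSession = 50000; // same cap as above
    return session;
}).ReusedWithin(ReuseScope.Request);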
Here is a sample of how I'm using the resolved DocumentSession instance:
private readonly IDocumentSession _session;

public UsersService(IDocumentSession session)
{
    _session = session;
}
public ServiceResponse<UserProfile> Get(GetUser request)
{
    var response = new ServiceResponse<UserProfile> { Successful = true };
    try
    {
        var user = _session.Load<UserProfile>(request.UserId);
        if (user == null || user.Deleted || !user.IsActive)
        {
            throw HttpError.NotFound("User {0} was not found.".Fmt(request.UserId));
        }
        response.Data = user;
    }
    catch (Exception ex)
    {
        _logger.Error(ex.Message, ex);
        response.StackTrace = ex.StackTrace;
        response.Errors.Add(ex.Message);
        response.Successful = false;
    }
    return response;
}
As far as I can see, I'm implementing ServiceStack + RavenDB "by the book" where the integration point is concerned, but I'm still getting this max session requests exception and I don't understand how. I also cannot reliably replicate the exception or the conditions under which it is thrown, which is very unsettling.
