Why does the code below not release IIS threads for new requests?
public async Task<string> Get()
{
    await Task.Delay(15000);
    return "done";
}
I'm using Windows 10 and .NET Framework 4.7, with no changes to the Default AppPool (single IIS worker process), on an i7.
Using performance counters, I can see that 6 of my 16 parallel cURL requests get queued, while the CPU is mostly idle (20% usage).
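One diagnostic sketch that might help narrow this down (the controller name is just illustrative, not part of the original code) is to log thread-pool availability from inside the action, to check whether the managed pool is actually starved while requests queue:

using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

public class DiagnosticsController : ApiController
{
    public async Task<string> Get()
    {
        // Snapshot of the managed thread pool before the await.
        ThreadPool.GetAvailableThreads(out int workerThreads, out int ioThreads);
        ThreadPool.GetMinThreads(out int minWorkerThreads, out int minIoThreads);

        await Task.Delay(15000);

        // If plenty of worker threads are available while requests still queue,
        // the limit is in IIS/ASP.NET request queuing rather than the thread pool.
        return string.Format("available worker={0}, io={1}; min worker={2}, io={3}",
            workerThreads, ioThreads, minWorkerThreads, minIoThreads);
    }
}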
Background:
I'm currently hosting an ASP.NET application in Azure with the following specs:
ASP .Net Core 2.2
Using Flurl for HTTP requests
Kestrel Webserver
Docker (Linux - mcr.microsoft.com/dotnet/core/aspnet:2.2 runtime)
Azure App Service on P2V2 tier app service plan
I have a couple of background jobs that run on the service and make a lot of outbound HTTP calls to a third-party service.
Issue:
Under a small load (approximately 1 call per 10 seconds), all requests complete in under a second with no issue. The issue I'm having is that under a heavy load, when the service can make 3-4 calls in a 10-second span, some of the requests will randomly time out and throw an exception. When I was using RestSharp the exception read "The operation has timed out". Now that I'm using Flurl, the exception reads "The call timed out".
Here's the kicker: if I run the same job from my laptop running Windows 10 / Visual Studio 2017, this problem does NOT occur. This leads me to believe I'm hitting some limit or running out of some resource in my hosted environment; it's unclear whether that is connection/socket related or thread related.
Things I've tried:
Ensured all code paths to the request use async/await to prevent blocking
Ensured Kestrel's defaults allow unlimited connections (they do by default)
Ensured Docker's default connection limits are sufficient (2000 by default, more than enough)
Configured ServicePointManager settings for connection limits
Here is the code in my startup.cs that I'm currently using to try and prevent this issue:
public class Startup
{
    public Startup(IHostingEnvironment hostingEnvironment)
    {
        ...
        // ServicePointManager setup
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = int.MaxValue;
        ServicePointManager.EnableDnsRoundRobin = true;
        ServicePointManager.ReusePort = true;

        // Set service point timeouts
        var sp = ServicePointManager.FindServicePoint(new Uri("https://placeholder.thirdparty.com"));
        sp.ConnectionLeaseTimeout = 15 * 1000; // 15 seconds
        FlurlHttp.ConfigureClient("https://placeholder.thirdparty.com", cli => cli.Settings.ConnectionLeaseTimeout = new TimeSpan(0, 0, 15));
    }
}
Has anyone else run into a similar issue to this? I'm open to any suggestions on how to best debug this situation, or possible methods to correct the issue. I'm at a complete loss after researching this for several days.
Thank you in advance.
I had similar issues. Take a look at Asp.net Core HttpClient has many TIME_WAIT or CLOSE_WAIT connections. Debugging via netstat helped identify the problem for me. As one possible solution, I suggest you use IHttpClientFactory. You can get more info from https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-2.2. It should be fairly easy to use, as described in Flurl client lifetime in ASP.Net Core 2.1 and IHttpClientFactory.
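As a rough sketch of the wiring in ASP.NET Core 2.2 (the client name, the consuming class, and the endpoint path below are placeholders, not your actual code):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// In Startup:
public void ConfigureServices(IServiceCollection services)
{
    // Registers IHttpClientFactory; handlers are pooled and recycled,
    // which avoids the socket exhaustion you can hit with ad-hoc HttpClient instances.
    services.AddHttpClient("thirdparty", client =>
    {
        client.BaseAddress = new Uri("https://placeholder.thirdparty.com");
    });

    services.AddMvc();
}

// A consuming service gets the factory injected and creates short-lived clients from it.
public class ThirdPartyCaller
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ThirdPartyCaller(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<string> GetSomethingAsync()
    {
        // Clients created from the factory are cheap; the underlying handler is shared.
        var client = _httpClientFactory.CreateClient("thirdparty");
        var response = await client.GetAsync("/some/endpoint");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}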
Hello, I am currently developing an Arquillian extension for the Moco framework (https://github.com/dreamhead/moco). Moco is used for testing RESTful services and relies on Netty for communication. Moco currently uses Netty 4.0.18.Final.
But I have found a problem when running Moco (and its Netty server) inside a container (Arquillian runs tests within the container): it starts correctly, but when the application is undeployed and the server is shut down, the following error messages are logged:
SEVERE: The web application [/ba32e781-3a18-44b3-9547-7c26787f3fe7] appears to have started a thread named [pool-2-thread-1] but has failed to stop it. This is very likely to create a memory leak.
abr 08, 2014 10:29:06 AM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ba32e781-3a18-44b3-9547-7c26787f3fe7] created a ThreadLocal with key of type [io.netty.util.internal.ThreadLocalRandom$2] (value [io.netty.util.internal.ThreadLocalRandom$2#77468cae]) and a value of type [io.netty.util.internal.ThreadLocalRandom] (value [io.netty.util.internal.ThreadLocalRandom#6cd3851]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Basically it seems that some threads have still not been closed when the server tries to shut down.
From the point of view of the Arquillian extension, Moco's start method is called when the application is deployed into the server, and Moco's stop method is called before the application is undeployed.
But let me show you the code of Moco:
public int start(final int port, ChannelHandler pipelineFactory) {
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(pipelineFactory);
    try {
        future = bootstrap.bind(port).sync();
        SocketAddress socketAddress = future.channel().localAddress();
        address = (InetSocketAddress) socketAddress;
        return address.getPort();
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
}
and the stop method looks like:
private void doStop() {
    if (future != null) {
        future.channel().close().syncUninterruptibly();
        future = null;
    }
}
So it seems that the close method returns before all of the threads have been killed, and for that reason the container warns you about possible memory leaks.
Because I have never used Netty, I was wondering if there is a way to ensure that the whole Netty runtime is shut down.
Thank you so much for your help.
I am new to Netty as well (and unfamiliar with Arquillian), but based on the examples in the Netty docs I believe you might not be shutting down the EventLoopGroups you created (bossGroup, workerGroup). From the Netty 4.0 User Guide:
Shutting down a Netty application is usually as simple as shutting down all EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.
So your doStop() method might look like:
private void doStop() {
    workerGroup.shutdownGracefully();
    bossGroup.shutdownGracefully();
}
An example in the Netty docs: Http Static File Server Example
I'm using MVC4 ApiController to upload data to Azure Blob. Here is the sample code:
public Task PostAsync(int id)
{
    return Task.Factory.StartNew(() =>
    {
        // CloudBlob.UploadFromStream(stream);
    });
}
Does this code even make sense? I think ASP.NET is already processing the request in a worker thread, so running UploadFromStream in another thread doesn't seem to make sense since it now uses two threads to run this method (I assume the original worker thread is waiting for this UploadFromStream to finish?)
So my understanding is that an async ApiController only makes sense if we are using built-in async methods such as HttpClient.GetAsync or SqlCommand.ExecuteReaderAsync. Those methods probably use I/O completion ports internally, so they can free up the thread while doing the actual work. So I should change the code to this?
public Task PostAsync(int id)
{
    // only to show it's using the proper async version of the method
    return Task.Factory.FromAsync(BeginUploadFromStream, EndUploadFromStream, ...);
}
On the other hand, if all the work in the Post method is CPU/memory intensive, then the async version PostAsync will not help request throughput. It might be better to just use the regular "public void Post(int id)" method, right?
I know it's a lot of questions. Hopefully they will clarify my understanding of async usage in ASP.NET MVC. Thanks.
Yes, most of what you say is correct. Even down to the details with completion ports and such.
Here is a tiny error:
I assume the original worker thread is waiting for this UploadFromStream to finish?
Only your task thread is running; you're using the async pipeline, after all. The framework does not wait for the task to finish, it just hooks up a continuation (just like with HttpClient.GetAsync).
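For the blob upload itself, the ideal shape is to await a genuinely asynchronous storage call rather than wrapping the synchronous one in StartNew. A minimal sketch, assuming a storage client version that exposes UploadFromStreamAsync (the connection string, container name and controller name are placeholders):

using System.IO;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class UploadController : ApiController
{
    public async Task PostAsync(int id)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage connection string>");
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("uploads");
        CloudBlockBlob blob = container.GetBlockBlobReference(id.ToString());

        using (Stream stream = await Request.Content.ReadAsStreamAsync())
        {
            // The request thread is released while the upload is in flight;
            // no extra thread-pool thread sits blocked on the I/O.
            await blob.UploadFromStreamAsync(stream);
        }
    }
}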
I have the following application:
It's a Windows console .NET 3.0 application.
I'm creating 20 workloads and assigning them to the thread pool to process.
Each thread-pool thread creates a WCF client and calls the service with a request built from the workload it was assigned.
Sometimes on production servers [12-core machines], I get the following exception:
"There was an error reflecting type 'xyz'" while invoking an operation using the WCF client. This starts appearing in all threads. After some time it suddenly disappears, then starts appearing again.
Pseudo code:
for (int i = 0; i < 20; i++)
{
    MultiThreadedProcess proc = new MultiThreadedProcess(someData[i]);
    ThreadPool.QueueUserWorkItem(proc.Callback, i);
}
In the MultiThreadedProcess class, I do something like this:
public void Callback(object index)
{
    MyServiceClient client = new MyServiceClient();
    MyServiceResponse response = client.SomeOperation(new MyServiceRequest(this.SomeData));
    client.Close();
    // Process response
}
Can anyone suggest some resolutions for this problem?
If you can turn on diagnostics, this looks to me like a serialization issue; there is a chance that certain data members/values cannot be deserialized properly for the operation call.
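The wording of that exception is what XmlSerializer throws when it cannot build a serializer for a type, so if your contract uses XmlSerializer, one way to reproduce the failure outside WCF is to construct the serializer directly. A rough sketch (MyServiceRequest/MyServiceResponse are the types from your code):

using System;
using System.Xml.Serialization;

static class SerializationProbe
{
    // Constructing the XmlSerializer is where "There was an error reflecting type ..."
    // is raised, so this surfaces the same failure without going through WCF.
    public static void Check(Type type)
    {
        var serializer = new XmlSerializer(type);
        Console.WriteLine("Serializer built OK for " + type.FullName);
    }
}

// usage:
// SerializationProbe.Check(typeof(MyServiceRequest));
// SerializationProbe.Check(typeof(MyServiceResponse));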
I have a .NET 4.5 WCF client app that uses the async/await pattern to make volumes of calls. My development machine is dual-processor with 8 GB of RAM (production will be 5 CPUs with 8 GB of RAM at Amazon AWS). The remote WCF service called by my code uses out and ref parameters on a web method that I need. My code instantiates a proxy client each time, writes any results to a public ConcurrentDictionary, and then returns null.
I ran Perfmon, watching the thread count on the system, and it goes between 28-30. It takes hours for my client to complete the volumes of calls that are made. Yes, hours. The remote service is backed by a big company, they have many servers to receive my WCF calls, so the more calls I can throw at them, the better.
I think that things are actually still happening synchronously, even though the method that makes the WCF call is decorated with "async", because the proxy method cannot be awaited. Is that true?
My code looks like this:
private async void CallMe()
{
    Console.WriteLine(DateTime.Now);
    var workTasks = this.AnotherConcurrentDict.Select(oneB => GetData(etcetcetc)).Cast<Task>().ToList();
    await Task.WhenAll(workTasks);
}
private async Task<WorkingBits> GetData(etcetcetc)
{
    var commClient = new RemoteClient();
    var cpResponse = new GetPackage();
    var responseInfo = commClient.GetData(name, password, ref cpResponse.aproperty, filterid, out cpResponse.Identifiers);
    foreach (var onething in cpResponse.Identifiers)
    {
        // add to the ConcurrentDictionary
    }
    return null; // I already wrote to the ConcurrentDictionary so no need to return anything
}
responseInfo is not awaitable because the WCF call has ref and out parameters.
I was thinking that the way to speed this up is not to put async/await in this method, but instead to create a wrapper method where I can make things async/await, but I am not sure that is the smartest/safest way to go about it.
What is a smart way to get more outbound calls to the service (expand the I/O completion thread pool, trick calls into running in the background so Task.WhenAll can complete quicker)?
Thanks for all ideas/samples/pointers. I am hitting a bottleneck somewhere.
1) Make sure you're really calling it asynchronously, rather than just blocking on the calls. Code samples would help here.
2) You may need to do this:
ServicePointManager.DefaultConnectionLimit = 100;
By default it only allows 2 simultaneous connections to the same server.
3) Make sure you dispose the proxy object after the call is complete so you're not tying up resources.
If you're doing things asynchronously the threadpool size shouldn't be a bottleneck. To get a better idea of what kind of problem you're having, you can use Interlocked.Increment and Interlocked.Decrement to track the number of pending calls and see if it's being limited somewhere.
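A minimal sketch of that counting idea (CallCounter and Track are illustrative names, not part of your code):

using System;
using System.Threading;
using System.Threading.Tasks;

static class CallCounter
{
    private static int _pending;

    // Wrap each outbound call so you can watch how many are actually in flight.
    public static async Task<T> Track<T>(Func<Task<T>> call)
    {
        int inFlight = Interlocked.Increment(ref _pending);
        Console.WriteLine("pending calls: " + inFlight);
        try
        {
            return await call();
        }
        finally
        {
            Interlocked.Decrement(ref _pending);
        }
    }
}

// usage with the GetData method from the question:
// var workTasks = this.AnotherConcurrentDict.Select(oneB => CallCounter.Track(() => GetData(etcetcetc))).Cast<Task>().ToList();

If the in-flight count plateaus at a small number (for example 2), that points at a connection limit rather than the thread pool.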
You could also substitute your real call with a call to a very simple method that you know will not have any bottlenecks, to see if the problem is in the client or server.