I'm using ServiceStack.Core to test concurrency on Windows and Ubuntu, and both max out at 6 concurrent requests. How do I configure the host to improve concurrency?
public class AppHost : AppHostBase
{
...
}
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .UseUrls("http://localhost:1337/")
        .Build();

    host.Run();
}
[Route("/test")]
public class Test { }

public object Get(Test request)
{
    System.Threading.Thread.Sleep(3000);
    return "";
}
(Screenshots: only 6 concurrent requests being processed; CPU usage.)
Note: it's not a good idea to test concurrency in a browser, since browsers have their own max-concurrency limits. Use something like wrk or ab (Apache Bench) instead.
ServiceStack doesn't have a separate concurrency model on .NET Core, nor does it spawn new threads per request; it simply uses whatever concurrency .NET Core's Kestrel server is configured with.
Previously, in ASP.NET Core 1.1, you could specify the ThreadCount when configuring Kestrel:
var host = new WebHostBuilder()
.UseKestrel(options => options.ThreadCount = 10)
This specifies the number of libuv I/O threads used to process requests, which defaults to half of Environment.ProcessorCount.
ThreadCount has since been moved and is only available if you configure Kestrel to use the Libuv transport:
WebHost.CreateDefaultBuilder(args)
    .UseLibuv(options => {
        options.ThreadCount = 10;
    })
Note that from .NET Core 2.1, Kestrel uses managed sockets as its default transport, not libuv.
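On the managed sockets transport there is no libuv ThreadCount to tune; request handlers run on the .NET ThreadPool, so if they block (as the Thread.Sleep in the service above does), the ThreadPool's minimum thread count is one knob worth checking. A minimal sketch, assuming the default transport; the numbers are arbitrary placeholders rather than recommendations:

using System;
using System.IO;
using System.Threading;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Assumption: raising the ThreadPool minimums can help when handlers
        // block synchronously, because the pool otherwise grows slowly under a
        // sudden burst of blocked requests.
        ThreadPool.SetMinThreads(workerThreads: 100, completionPortThreads: 100);

        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseUrls("http://localhost:1337/")
            .Build();

        host.Run();
    }
}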
I have exposed two APIs: /endpoint/A and /endpoint/B.
@GetMapping("/endpoint/A")
public ResponseEntity<ResponseA> controllerA() throws InterruptedException {
    ResponseA responseA = serviceA.responseClient();
    return ResponseEntity.ok().body(responseA);
}

@GetMapping("/endpoint/B")
public ResponseEntity<ResponseB> controllerB() throws InterruptedException {
    ResponseB responseB = serviceB.responseClient();
    return ResponseEntity.ok().body(responseB);
}
The service behind endpoint A internally calls /endpoint/C, and the service behind endpoint B internally calls /endpoint/D. Because the external service behind endpoint A is slow, responses from /endpoint/A take a long time, so all the request-handling threads get stuck there, which also affects /endpoint/B.
I tried to solve this with an executor service, using the following implementation:
@Bean(name = "serviceAExecutor")
public ThreadPoolTaskExecutor serviceAExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(100);
    taskExecutor.setMaxPoolSize(120);
    taskExecutor.setQueueCapacity(50);
    taskExecutor.setKeepAliveSeconds(120);
    taskExecutor.setThreadNamePrefix("serviceAExecutor");
    return taskExecutor;
}
Even after implementing this, if I receive more than 200 simultaneous requests on /endpoint/A (more than the default maximum number of threads in the Tomcat server), I get no responses from /endpoint/B, because all the threads are busy getting responses for endpoint A or are waiting in its queue.
Can someone please suggest whether there is a way to apply a bulkhead at the level of each exposed endpoint, allowing only a limited number of requests to be processed at a time and putting the rest into a queue, so that requests to the other endpoints keep working properly?
Edit: the following is the solution approach I ended up with.
@GetMapping("/endpoint/A")
public CompletableFuture<ResponseEntity<ResponseA>> controllerA() throws InterruptedException {
    return CompletableFuture.supplyAsync(() -> controllerHelperA());
}

@GetMapping("/endpoint/B")
public CompletableFuture<ResponseEntity<ResponseB>> controllerB() throws InterruptedException {
    return CompletableFuture.supplyAsync(() -> controllerHelperB());
}

private ResponseEntity<ResponseA> controllerHelperA() {
    ResponseA responseA = serviceA.responseClient();
    return ResponseEntity.ok().body(responseA);
}

private ResponseEntity<ResponseB> controllerHelperB() {
    ResponseB responseB = serviceB.responseClient();
    return ResponseEntity.ok().body(responseB);
}
Spring MVC supports the asynchronous servlet API introduced in Servlet 3.0. To make this easier, when your controller returns a Callable, CompletableFuture, or DeferredResult, the work runs on a background thread and the request-handling thread is freed for further processing.
@GetMapping("/endpoint/A")
public Callable<ResponseEntity<ResponseA>> controllerA() throws InterruptedException {
    return () -> controllerHelperA();
}

private ResponseEntity<ResponseA> controllerHelperA() {
    ResponseA responseA = serviceA.responseClient();
    return ResponseEntity.ok().body(responseA);
}
Now this will be executed in a background thread. Depending on your version of Spring Boot, and on whether you have configured your own TaskExecutor, it will either
use the SimpleAsyncTaskExecutor (which will issue a warning in your logs),
use the default ThreadPoolTaskExecutor, which is configurable through the spring.task.execution namespace, or
use your own TaskExecutor, which requires additional configuration.
If you don't have a custom TaskExecutor defined and are on a relatively recent version of Spring Boot (2.1 or up, IIRC), you can use the following properties to configure the TaskExecutor:
spring.task.execution.pool.core-size=20
spring.task.execution.pool.max-size=120
spring.task.execution.pool.queue-capacity=50
spring.task.execution.pool.keep-alive=120s
spring.task.execution.thread-name-prefix=async-web-thread
Generally this executor will be used to run Spring MVC tasks in the background as well as regular @Async tasks.
If you want to explicitly configure which TaskExecutor to use for your web processing, you can create a WebMvcConfigurer and implement the configureAsyncSupport method:
@Configuration
public class AsyncWebConfigurer implements WebMvcConfigurer {

    private final AsyncTaskExecutor taskExecutor;

    public AsyncWebConfigurer(AsyncTaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.setTaskExecutor(taskExecutor);
    }
}
You could use an @Qualifier on the constructor argument to specify which TaskExecutor you want to use.
What is the best practice for creating multiple QueueClients listening to different Service Bus queues? There is a MessagingFactory class, but Microsoft.ServiceBus.Messaging no longer seems to be available as a NuGet package (this is a .NET Core console application).
Treating QueueClient as a static object, what would be the recommended pattern for creating multiple QueueClients from a singleton host process?
Appreciate the feedback.
For .NET Core applications, you can use Microsoft.Azure.ServiceBus instead of the Microsoft.ServiceBus.Messaging NuGet package. Because it is built on .NET Standard, it can be used in both Framework and Core applications, and it offers methods and classes similar to those in Microsoft.ServiceBus.Messaging. Check here for samples.
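For reference, here is a minimal sketch of creating a client and registering a message handler with Microsoft.Azure.ServiceBus; the connection string and queue name are placeholders:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class QueueListenerSample
{
    public static QueueClient Listen(string connectionString, string queueName)
    {
        // One QueueClient per queue; the client is intended to be created once
        // and reused for the lifetime of the process.
        var queueClient = new QueueClient(connectionString, queueName);

        queueClient.RegisterMessageHandler(
            async (message, token) =>
            {
                Console.WriteLine(Encoding.UTF8.GetString(message.Body));
                // Complete the message so it is not delivered again (PeekLock mode).
                await queueClient.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args =>
            {
                Console.WriteLine(args.Exception);
                return Task.CompletedTask;
            })
            { MaxConcurrentCalls = 1, AutoComplete = false });

        return queueClient;
    }
}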
I was able to get it working, but I could not use dependency injection. Any suggestions for improving this implementation would be much appreciated.
Startup.cs
// Hosted services
services.AddSingleton<IHostedService, ServiceBusListener>();
ServiceBusListener.cs
public class ServiceBusListener : BackgroundService, IServiceBusListener
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Console.WriteLine($"ServiceBusListener is starting.");

        Dictionary<string, QueueClient> queueClients = new Dictionary<string, QueueClient>();
        foreach (var queue in _svcBusSettings.Queues)
        {
            var svcBusQueueClient = new ServiceBusQueueClient(queue.Value, queue.Key);
            queueClients.Add(queue.Key, svcBusQueueClient.QueueClient);
        }
    }
}
ServiceBusQueueClient.cs
public class ServiceBusQueueClient : IServiceBusQueueClient
{
    private IQueueClient _queueClient;

    public QueueClient QueueClient
    {
        get { return _queueClient as QueueClient; }
    }

    public ServiceBusQueueClient(string serviceBusConnection, string queueName)
    {
        _queueClient = new QueueClient(serviceBusConnection, queueName);
        RegisterOnMessageHandlerAndReceiveMessages();
    }
}
I created a .NET Core console application and uploaded it as a continuous WebJob to an Azure web app (ASP.NET Core). With the WebJob running, the web app responds very slowly to API requests (request times rise to a few seconds).
WebJob code
static void Main(string[] args)
{
    queueClient.RegisterMessageHandler(
        async (message, token) =>
        {
            // Process the message
            // Complete the message so that it is not received again.
            // This can be done only if the queueClient is opened in ReceiveMode.PeekLock mode.
            await queueClient.CompleteAsync(message.SystemProperties.LockToken);
        },
        new MessageHandlerOptions(exce => {
            return Task.CompletedTask;
        })
        { MaxConcurrentCalls = 1, AutoComplete = false });

    //Console.ReadKey();
    while (true) ;
}
Processing the message takes a few seconds.
(Screenshots: from the SCM console; request times.)
The while (true); loop will pin the CPU, so I suggest you don't do that.
Check the queue message handling example for the proper way to implement message handling: https://github.com/Azure/azure-webjobs-sdk/wiki/Queues
You will have to change your Main to:
static void Main(string[] args)
{
    JobHost host = new JobHost();
    host.RunAndBlock();
}
And then you can make a message handler in another file:
public static void ProcessQueueMessage([QueueTrigger("logqueue")] string logMessage, TextWriter logger)
{
    logger.WriteLine(logMessage);
}
The 3.0.0 preview versions of the Microsoft.Azure.WebJobs library support .NET Standard 2.0, so they can be used with .NET Core 2.0 projects.
If you want to implement it yourself, you can check the Webjobs SDK source code for an example how they do it: https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/JobHost.cs
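If you keep the hand-rolled RegisterMessageHandler approach from the question rather than moving to the WebJobs SDK, one way to keep Main alive without pinning a CPU core is to block on an event instead of spinning; a minimal sketch of just that part:

using System;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        // ... create the queueClient and call RegisterMessageHandler exactly as
        // in the question; the handler callbacks run on threads managed by the
        // Service Bus client, so Main only needs to stay alive, not spin.
        using (var exitEvent = new ManualResetEvent(false))
        {
            // Ctrl+C (or a shutdown signal) releases the wait so Main can return.
            Console.CancelKeyPress += (sender, e) => { e.Cancel = true; exitEvent.Set(); };
            exitEvent.WaitOne();
        }
    }
}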
I'm trying to get an MVC 6 app to be self-hosted for testing. I can do in-memory testing using TestServer, but for testing the integration of multiple web apps, one of which includes a middleware I don't control that connects to the other app, I need at least one of the apps to be accessible over TCP.
I have tried using WebApp.Start, but it works with an IAppBuilder rather than IApplicationBuilder, so I can't get it to work with my Startup.
Is there any way to get an MVC6 app to be self-hosted in an xUnit test, via OWIN or any other way?
UPDATE:
FWIW, based on Pinpoint's answer and some additional research, I was able to come up with the following base class that works in xUnit, at least when the tests are in the same project as the MVC project:
public class WebTestBase : IDisposable
{
    private IDisposable webHost;

    public WebTestBase()
    {
        var env = CallContextServiceLocator.Locator.ServiceProvider.GetRequiredService<IApplicationEnvironment>();
        var builder = new ConfigurationBuilder(env.ApplicationBasePath)
            .AddIniFile("hosting.ini");

        var config = builder.Build();

        webHost = new WebHostBuilder(CallContextServiceLocator.Locator.ServiceProvider, config)
            .UseEnvironment("Development")
            .UseServer("Microsoft.AspNet.Server.WebListener")
            .Build()
            .Start();
    }

    public void Dispose()
    {
        webHost.Dispose();
    }
}
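For example, a test class can derive from this base so the host is started before each test and disposed afterwards (the URL below is a placeholder that has to match whatever hosting.ini configures):

using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class SelfHostedSmokeTests : WebTestBase
{
    [Fact]
    public async Task ServerRespondsOverTcp()
    {
        // A plain HttpClient over TCP, not an in-memory TestServer client.
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync("http://localhost:5000/");
            Assert.True(response.IsSuccessStatusCode);
        }
    }
}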
Katana's WebApp static class has been replaced by WebHostBuilder, which offers a much more flexible approach: https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNet.Hosting/WebHostBuilder.cs
You've probably already used this API without realizing it, as it's the component used by the hosting block when you register a new web command in your project.json (e.g. Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:54540) and run it using dnx (e.g. dnx . web):
namespace Microsoft.AspNet.Hosting
{
    public class Program
    {
        private const string HostingIniFile = "Microsoft.AspNet.Hosting.ini";
        private const string ConfigFileKey = "config";

        private readonly IServiceProvider _serviceProvider;

        public Program(IServiceProvider serviceProvider)
        {
            _serviceProvider = serviceProvider;
        }

        public void Main(string[] args)
        {
            // Allow the location of the ini file to be specified via a --config command line arg
            var tempBuilder = new ConfigurationBuilder().AddCommandLine(args);
            var tempConfig = tempBuilder.Build();
            var configFilePath = tempConfig[ConfigFileKey] ?? HostingIniFile;

            var appBasePath = _serviceProvider.GetRequiredService<IApplicationEnvironment>().ApplicationBasePath;
            var builder = new ConfigurationBuilder(appBasePath);
            builder.AddIniFile(configFilePath, optional: true);
            builder.AddEnvironmentVariables();
            builder.AddCommandLine(args);
            var config = builder.Build();

            var host = new WebHostBuilder(_serviceProvider, config).Build();
            using (host.Start())
            {
                Console.WriteLine("Started");
                var appShutdownService = host.ApplicationServices.GetRequiredService<IApplicationShutdown>();
                Console.CancelKeyPress += (sender, eventArgs) =>
                {
                    appShutdownService.RequestShutdown();
                    // Don't terminate the process immediately, wait for the Main thread to exit gracefully.
                    eventArgs.Cancel = true;
                };
                appShutdownService.ShutdownRequested.WaitHandle.WaitOne();
            }
        }
    }
}
https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNet.Hosting/Program.cs
You can use Microsoft.AspNet.TestHost
See http://www.strathweb.com/2015/05/integration-testing-asp-net-5-asp-net-mvc-6-applications/ for details on use.
TestHost can work with your Startup using a line like
TestServer dataServer = new TestServer(TestServer.CreateBuilder().UseStartup<WebData.Startup>());
where WebData is the name of the application. The application has to be referenced in the test harness.
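Assuming the TestServer.CreateClient() helper in the TestHost package, you can then issue in-memory requests through a plain HttpClient (the route below is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNet.TestHost;
using Xunit;

public class WebDataTests
{
    [Fact]
    public async Task GetEndpointReturnsSuccess()
    {
        var dataServer = new TestServer(TestServer.CreateBuilder().UseStartup<WebData.Startup>());

        // CreateClient returns an HttpClient wired directly to the in-memory host.
        HttpClient client = dataServer.CreateClient();
        HttpResponseMessage response = await client.GetAsync("/api/values");

        response.EnsureSuccessStatusCode();
    }
}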
I started a Netty 4 NIO server with multiple business threads for handling long-running business logic, like below:
public void start(int listenPort, final ExecutorService ignore)
        throws Exception {
    ...
    bossGroup = new NioEventLoopGroup();
    ioGroup = new NioEventLoopGroup();
    businessGroup = new DefaultEventExecutorGroup(businessThreads);

    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, ioGroup).channel(NioServerSocketChannel.class)
            .childOption(ChannelOption.TCP_NODELAY,
                    Boolean.parseBoolean(System.getProperty(
                            "nfs.rpc.tcp.nodelay", "true")))
            .childOption(ChannelOption.SO_REUSEADDR,
                    Boolean.parseBoolean(System.getProperty(
                            "nfs.rpc.tcp.reuseaddress", "true")))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch)
                        throws Exception {
                    ch.pipeline().addLast("decoder",
                            new Netty4ProtocolDecoder());
                    ch.pipeline().addLast("encoder",
                            new Netty4ProtocolEncoder());
                    ch.pipeline().addLast(businessGroup, "handler",
                            new Netty4ServerHandler());
                }
            });
    b.bind(listenPort).sync();
    LOGGER.warn("Server started,listen at: " + listenPort + ", businessThreads is " + businessThreads);
}
I found that only one thread was doing the work when the server accepted one connection.
How can I bootstrap a server that uses multiple business threads for a single connection?
Thanks,
Mins
Netty will always use the same thread for a given connection; that's by design. If you would like to change this, you may be able to implement a custom EventExecutorGroup and pass it in when adding your ChannelHandler to the ChannelPipeline.
Be aware that this may result in a messed-up ordering of packets.