ThreadPoolExecutor getting shut down automatically a few hours after server start

I have created a ThreadPoolExecutor with core pool size = 10, max pool size = 50 and a work queue of 100. On my local machine everything works as expected, but on the dev server (a Linux machine) the thread pool stays active for a few hours and then gets shut down automatically.
After that, all new tasks are rejected.
The tasks we submit to this thread pool have a timeout of 25 seconds.
We have multiple other thread pools as well, but they only get shut down when we shut down the server.
private static ArrayBlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<Runnable>(100);
ThreadFactory threadFactory = getNamedThreadFactory(false, "Thread-01");
ExecutorService es = new ThreadPoolExecutor(10, 50, 3, TimeUnit.SECONDS, workQueue, threadFactory, handler);
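For reference, the handler argument above is the pool's RejectedExecutionHandler. A minimal sketch (hypothetical, not from the original post) that makes the symptom visible looks like this:
// Hypothetical sketch of the "handler" passed to the constructor above: once the
// pool has been shut down, every new submission lands here, which is why all new
// tasks were rejected.
RejectedExecutionHandler handler = (runnable, executor) ->
    System.err.println("Task rejected; pool shut down? " + executor.isShutdown());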

The reason behind this issue was that we had a ServletContextListener implementation listening for context events. When code was pushed to the DEV server, the application was hot-deployed and the application context was recreated, and at that point the listener was shutting down the pool.
public class ApplicationContextListener implements ServletContextListener {
    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        // Here the pool shutdown code was present - so we removed it
    }
}
After removing the pool shutdown call from the destroy method, it worked.
Issue resolved.
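If the pool really should follow the web application's lifecycle, an alternative (just a sketch, not what we did; the "workerPool" attribute name is illustrative) is to have the listener own the pool: create it in contextInitialized and shut it down in contextDestroyed, so every hot-deploy gets a fresh pool instead of killing one that outlives the context.
public class ApplicationContextListener implements ServletContextListener {
    private ExecutorService pool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Create the pool when the (possibly re-deployed) context comes up.
        pool = new ThreadPoolExecutor(10, 50, 3, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));
        sce.getServletContext().setAttribute("workerPool", pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Safe to shut down here, because this pool belongs to this context.
        pool.shutdown();
    }
}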

Related

Understanding asynchronous web processing

I've just finished reading up about asynchronous WebServlet processing. [This article] is a good read.
However, fundamentally I'm confused about why this method is the "next generation of web processing" and why it is in fact used at all. It seems we are avoiding better configuring our Web Application Servers (WAS) - nginx, apache, tomcat, IIS - and are instead putting the problem onto the Web Developer.
Before I dive into my reasoning, I want to briefly explain how Requests are accepted and then handled by a WAS.
NETWORK <-> OS -> QUEUE <- WEB APPLICATION SERVER (WAS) <-> WEB APPLICATION (APP)
A Web Application Server (WAS) tells the Operating System (OS) that it wants to receive Requests on a specific Port, e.g. Port 80 for HTTP.
The OS opens a Listener on the Port (if it's free) and waits for Clients to connect.
When the OS receives a Connection, it adds it to a Queue assigned to the WAS (if there is space, otherwise the Client's Connection is rejected) - the size of the Queue is defined by the WAS when it requests the Port.
The WAS monitors the Queue for Connections and when a Connection is available, accepts the Connection for processing - removing it from the Queue.
The WAS passes the Connection on to the Web Application for processing - it could also handle the process itself if programmed to.
The WAS can handle multiple Connections at the same time by using multiple Processors (normally one per CPU core), each with multiple Threads.
So this now brings me to my query. If the number of Requests the WAS can handle depends on the speed at which it can process the Queue, which comes down to the number of Processors/Threads assigned to the WAS, why do we create an async method inside our APP to offload the Request from the WAS to another Thread not belonging to the WAS, instead of just increasing the number of Threads available to the WAS?
If you consider the (not so) new Web Sockets that are popping up, when a Web Socket makes a connection to a WAS, a Thread is assigned to that Connection which is held open so Client and WAS can have continual communication. This Thread is ultimately a Thread on the WAS - meaning it is taking up Server resources - whether belonging to the WAS or independent of it (depending on APP design).
However, instead of creating an independent Thread not belonging to the WAS, why not just increase the number of Threads available to the WAS? Ultimately, the number of Threads you can have is down to the resources - MEMORY, CPU - available on the Server. Or is it a case that by offloading the Connection to a new Thread, you simply don't need to think about how many Threads to assign to the WAS (which seems dangerous because now you can use up Server resources without proper monitoring). It just seems as if a problem is being passed down to the APP - and thus the Developer - instead of being managed at the WAS.
Or am I simply misunderstanding how a Web Application Server works?
Putting it into a simple Web Application Server example: the following offloads the incoming Connection straight to a Thread. I am not limiting the number of Threads that can be created, but I am limited by the number of open Connections allowed on my MacBook. I have also noticed that if the backlog (the second argument to the ServerSocket constructor, currently 50) is set too small, I start receiving Broken Pipes and Connection Resets on the Client side.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 50)) {
            while (true) {
                new Run(listener.accept()).start();
            }
        }
    }

    static class Run extends Thread {
        private Socket socket;

        Run(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
And now using the asynchronous version, you are just passing the work from one Thread on to another Thread. You are still limited by System Resources - the allowed number of open files, connections, memory, CPU, etc.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 100)) {
            while (true) {
                new Synchronous(listener.accept()).start();
            }
        }
    }

    // assumed Synchronous but really it's a Thread from the WAS
    // so is already asynchronous when it enters this Class
    static class Synchronous extends Thread {
        private Socket socket;

        Synchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            System.out.println("Passing Socket to Asynchronous " + getName());
            new Asynchronous(this.socket).start();
        }
    }

    static class Asynchronous extends Thread {
        private Socket socket;

        Asynchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Looking at this blog post from Netflix, 'tuning-tomcat-for-a-high-throughput', it looks like Tomcat does the same as my first code example. So asynchronous processing in the Application shouldn't be necessary.
Tomcat by default has two properties that affect load: acceptCount, which defines the maximum Queue size (default: 100), and maxThreads, which defines the maximum number of simultaneous request-processing threads (default: 200). There is also maxConnections, but I'm not sure of its point when maxThreads is defined. You can read about them in the Tomcat Config documentation.
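For reference, a minimal sketch of where those attributes live, on the Connector element in Tomcat's conf/server.xml (the values simply restate the defaults mentioned above):
<!-- Sketch of a Tomcat HTTP Connector in conf/server.xml: acceptCount is the
     backlog of connections waiting to be accepted, maxThreads caps the
     request-processing threads, and maxConnections (not shown) caps how many
     connections Tomcat keeps open at once regardless of maxThreads. -->
<Connector port="8080" protocol="HTTP/1.1"
           acceptCount="100"
           maxThreads="200"
           connectionTimeout="20000" />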
Late, but maybe better than never. :)
I don't have a great answer to "why async servlets?" but I think there is another bit of information which would be helpful to you.
What you are describing for the WAS is what Tomcat used to do in its BIO connector. It was basically a thread-per-connection model. This limits the number of requests you can serve not just because of the maxThreads setting, but also because the worker thread could remain tied up waiting for additional requests on the connection if a Connection: Close wasn't sent. (See https://www.javaworld.com/article/2077995/java-concurrency/java-concurrency-asynchronous-processing-support-in-servlet-3-0.html and What is the difference between Tomcat's BIO Connector and NIO Connector?)
Switching to the NIO connector allows Tomcat to maintain thousands of connections while still keeping only a small pool of worker threads.

Troubleshooting the websocket limit in Azure, active connections

I'm in the process of troubleshooting an App Service that is using websockets.
It's running on service plan Basic which allows for 350 websockets.
This is the only app on that plan that uses websockets.
The problem is that after about 20 hours I get 503 responses saying I have reached my websocket limit.
The setup right now has 3 clients connecting to the service.
In the process of investigating websocket leakage in my app I would like to track the number of websockets in use.
Is there anywhere, from my app or in Azure portal, where I can see the number of active websocket connections?
Follow up:
I've logged the websocket connections as Amor suggested.
The HTTP part of my app is still working; I can get dynamic results from the app, which now reports which websocket connections are active and how many have been created since start.
After restarting the app service, I configured one client to reconnect indefinitely.
It worked fine until the "total websocket connections" reached 350. At that point I shut down the client.
The limit should be 350 concurrent connections, but it looks like it is 350 in total since start.
Most (at least 340) of these connections were initiated by a single client that disposed each connection before starting a new one; that client was shut down once the limit was reached.
I've been advised to upgrade from Basic to Standard, since Standard doesn't have this artificial limitation. The only way I can see that working would be if there is a bug in the websocket limit for the Basic plan.
Update 2
In parallel I've been in contact with Microsoft Developer Support, and they noticed that the sockets appear to be stuck in IIS but not in Kestrel. The cause of this is still being investigated.
Support could show me graphs of the connection usage over time which clearly showed how the limit was reached.
I'll keep this question updated in case there was some error in my code.
I suggest you define a variable to count the connections: increment it when a web socket connection is opened and decrement it when one is closed. The code below is for your reference (note that the counter needs to be static and updated atomically, since a new hub/controller instance is created per request).
Count the connections for ASP.NET SignalR.
public class MyHub : Hub
{
    // Hub instances are created per invocation, so the counter is static
    // and updated atomically to track connections across the application.
    private static int _connectionCount = 0;

    public override Task OnConnected()
    {
        Interlocked.Increment(ref _connectionCount);
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        Interlocked.Increment(ref _connectionCount);
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Interlocked.Decrement(ref _connectionCount);
        return base.OnDisconnected(stopCalled);
    }
}
Count the connections in traditional ASP.NET.
public class WSChatController : ApiController
{
    // A new controller instance is created per request, so the counter is static
    // and incremented once per accepted websocket, not once per received message.
    private static int _connectionCount = 0;

    public HttpResponseMessage Get()
    {
        if (HttpContext.Current.IsWebSocketRequest)
        {
            HttpContext.Current.AcceptWebSocketRequest(ProcessWSChat);
        }
        return new HttpResponseMessage(HttpStatusCode.SwitchingProtocols);
    }

    private async Task ProcessWSChat(AspNetWebSocketContext context)
    {
        WebSocket socket = context.WebSocket;
        Interlocked.Increment(ref _connectionCount);
        try
        {
            while (true)
            {
                ArraySegment<byte> buffer = new ArraySegment<byte>(new byte[1024]);
                WebSocketReceiveResult result = await socket.ReceiveAsync(
                    buffer, CancellationToken.None);
                if (socket.State != WebSocketState.Open)
                {
                    break;
                }
                // Process the request
            }
        }
        finally
        {
            Interlocked.Decrement(ref _connectionCount);
        }
    }
}

Run Thread in JavaFX Service

I'm confused about how to continue writing my program.
Basically, it connects to multiple serial devices and then updates the JavaFX application based on the responses from the devices (I first have to send the machine a message). So what I did was create a thread to run from the Service's thread, so that my program would not freeze and the thread could pause until the message is read (there's a delay between sending and receiving a message over the serial device).
service = new Service() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                new Thread(thread).start();
                return null;
            }
        };
    }
};
Where the thread does some loop, continuously sending and reading messages.
@Override
public synchronized void run() {
    while (serialOn && isRunning) {
        sendMessages();
    }
}

public synchronized void sendMessages() {
    sendSerial1();
    this.wait();
    sendSerial2();
    this.wait();
}

public synchronized void readMessage1() { // same readMessage2 for the sendSerial2()
    getMessage(); // updates variables that are bound to the JavaFX App
    this.notify();
}
But I think the service finishes (i.e. succeeds or fails) before it even starts my serial thread. And I want the service to continue running while the program sends and receives messages.
Let me know if you need more code, it's a little long and requires the serial devices to run, but I can include it here if it makes the question easier to understand.
Don't create a new thread in the call() method of the service's Task.
A service automatically creates a thread on which call() will be invoked. If you want control over how threads are created and used, then you can (optionally) supply an executor to the service (though in your case you probably don't need to do that, unless you don't want the service to run on a daemon thread).
From the Service javadoc:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed.
So move the code inside the run() method of your Runnable into the call() method of the Task for the Service (the Task is itself a Runnable, so having an additional Runnable is both redundant and confusing).
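A minimal sketch of that suggestion, assuming serialOn, sendSerial1(), sendSerial2() and getMessage() are the asker's existing members: the loop that used to live in the Runnable's run() now lives directly in the Task's call(). Note that any values bound to the UI should be published on the FX thread (for example via the Task's update* methods or Platform.runLater).
Service<Void> service = new Service<Void>() {
    @Override
    protected Task<Void> createTask() {
        return new Task<Void>() {
            @Override
            protected Void call() throws Exception {
                // The send/receive loop runs on the Service's own background thread.
                while (serialOn && !isCancelled()) {
                    sendSerial1();
                    getMessage(); // blocks until the reply arrives, then updates values
                    sendSerial2();
                    getMessage();
                }
                return null;
            }
        };
    }
};
service.start();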

Invoking a simple worker role

I'm trying to gain some understanding and experience in creating background processes on Azure.
I've created a simple console app and converted it to an Azure Worker Role. How do I invoke it? I tried to use the Azure Scheduler, but it looks like the scheduler can only invoke a worker role through message queues or HTTP/HTTPS.
I never thought about any type of communication, as my idea was to create a background process that does not really communicate with any other app. Do I need to convert the worker role to a web role and invoke it using the Azure Scheduler over HTTP/HTTPS?
A worker role has three lifecycle methods:
OnStart
Run
OnStop
public class WorkerRole : RoleEntryPoint
{
    ManualResetEvent CompletedEvent = new ManualResetEvent(false);

    public override void Run()
    {
        // Your background processing code
        CompletedEvent.WaitOne();
    }

    public override bool OnStart()
    {
        return base.OnStart();
    }

    public override void OnStop()
    {
        CompletedEvent.Set();
        base.OnStop();
    }
}
The moment you run/debug your converted console worker role, the first two (OnStart and Run) fire in sequence. In Run you have to keep the thread alive, either by using a while loop or a ManualResetEvent; this is where your background processing code lives.
OnStop is fired when you either release the thread from Run or something unexpected happens. This is the place to dispose your objects: close unclosed file handles, database connections, etc.

Closing Netty server cleanly

Hello, currently I am developing an Arquillian extension for the Moco framework (https://github.com/dreamhead/moco). Moco is used for testing RESTful services and relies on Netty for communication. Currently Moco uses Netty 4.0.18.Final.
But I have found a problem when running Moco (and its Netty server) inside a container (Arquillian runs tests within the container): it starts correctly, but when the application is undeployed and the server is shut down, the following error messages are logged:
SEVERE: The web application [/ba32e781-3a18-44b3-9547-7c26787f3fe7] appears to have started a thread named [pool-2-thread-1] but has failed to stop it. This is very likely to create a memory leak.
abr 08, 2014 10:29:06 AM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ba32e781-3a18-44b3-9547-7c26787f3fe7] created a ThreadLocal with key of type [io.netty.util.internal.ThreadLocalRandom$2] (value [io.netty.util.internal.ThreadLocalRandom$2@77468cae]) and a value of type [io.netty.util.internal.ThreadLocalRandom] (value [io.netty.util.internal.ThreadLocalRandom@6cd3851]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Basically it seems that some threads have not yet been closed when the server tries to shut down.
From the point of view of the Arquillian extension, when the application is deployed to the server the start method of Moco is called, and before undeploying the application the stop method of Moco is called.
But let me show you the code of Moco:
public int start(final int port, ChannelHandler pipelineFactory) {
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(pipelineFactory);
    try {
        future = bootstrap.bind(port).sync();
        SocketAddress socketAddress = future.channel().localAddress();
        address = (InetSocketAddress) socketAddress;
        return address.getPort();
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
}
and the stop method looks like:
private void doStop() {
    if (future != null) {
        future.channel().close().syncUninterruptibly();
        future = null;
    }
}
So it seems that the close method returns before all the threads are killed, and for this reason the container warns you about possible memory leaks.
Because I have never used Netty I was wondering if there is a way to ensure that the whole Netty runtime is closed.
Thank you so much for your help.
I am new to Netty as well (and unfamiliar with Arquillian), but based on the Netty Docs examples I believe you might not be shutting down the EventLoopGroups you created (bossGroup, workerGroup). From the Netty 4.0 User Guide:
Shutting down a Netty application is usually as simple as shutting down all EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.
So your doStop() method might look like:
private void doStop() {
    workerGroup.shutdownGracefully();
    bossGroup.shutdownGracefully();
}
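Since the container complained about threads that were still alive at undeploy time, it may also help to block until the groups have actually terminated. A sketch combining the original doStop() with the graceful shutdown (not from the answer above):
private void doStop() {
    // Close the server channel first, then wait for both EventLoopGroups to
    // terminate so no Netty threads survive the web application's shutdown.
    if (future != null) {
        future.channel().close().syncUninterruptibly();
        future = null;
    }
    bossGroup.shutdownGracefully().syncUninterruptibly();
    workerGroup.shutdownGracefully().syncUninterruptibly();
}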
An example in the Netty docs: Http Static File Server Example
