Understanding asynchronous web processing - multithreading

I've just finished reading up about asynchronous WebServlet processing. [This article] is a good read.
However, I'm fundamentally confused about why this method is the "next generation of web processing" and why it is used at all. It seems we are avoiding properly configuring our Web Application Servers (WAS) - nginx, Apache, Tomcat, IIS - and instead pushing the problem onto the Web Developer.
Before I dive into my reasoning, I want to briefly explain how Requests are accepted and then handled by a WAS.
NETWORK <-> OS -> QUEUE <- WEB APPLICATION SERVER (WAS) <-> WEB APPLICATION (APP)
A Web Application Server (WAS) tells the Operating System (OS) that it wants to receive Requests on a specific Port, e.g. Port 80 for HTTP.
The OS opens a Listener on the Port (if it's free) and waits for Clients to connect.
When the OS receives a Connection, it adds it to a Queue assigned to the WAS (if there is space, otherwise the Client's Connection is rejected) - the size of the Queue is defined by the WAS when it requests the Port.
The WAS monitors the Queue for Connections and when a Connection is available, accepts the Connection for processing - removing it from the Queue.
The WAS passes the Connection on to the Web Application for processing - it could also handle the process itself if programmed to.
The WAS can handle multiple Connections at the same time by using multiple Processors (normally one per CPU core), each with multiple Threads.
So this now brings me to my query. If the number of Requests the WAS can handle depends on the speed at which it can process the Queue - which comes down to the number of Processors/Threads assigned to the WAS - why do we create an async method inside our APP to offload the Request from the WAS to another Thread not belonging to the WAS, instead of just increasing the number of Threads available to the WAS?
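To make the comparison concrete, here is a rough sketch of what such an async method looks like with the Servlet 3.0 API (the servlet name and URL pattern are placeholders I made up):
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal Servlet 3.0 async sketch: the container (WAS) thread returns to
// its pool immediately; another thread completes the response later.
@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // detach the Request from the WAS thread
        ctx.start(() -> { // runs on a different thread
            try {
                ctx.getResponse().getWriter().println("done");
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                ctx.complete(); // hand the response back to the container
            }
        });
    }
}
The WAS thread that called doGet() is free to accept the next Request as soon as startAsync() returns - which, as I understand it, is the whole point being sold.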
If you consider the (not so) new Web Sockets that are popping up, when a Web Socket makes a connection to a WAS, a Thread is assigned to that Connection which is held open so Client and WAS can have continual communication. This Thread is ultimately a Thread on the WAS - meaning it is taking up Server resources - whether belonging to the WAS or independent of it (depending on APP design).
However, instead of creating an independent Thread not belonging to the WAS, why not just increase the number of Threads available to the WAS? Ultimately, the number of Threads you can have comes down to the resources - MEMORY, CPU - available on the Server. Or is it that by offloading the Connection to a new Thread, you simply don't need to think about how many Threads to assign to the WAS (which seems dangerous, because now you can use up Server resources without proper monitoring)? It just seems as if a problem is being passed down to the APP - and thus the Developer - instead of being managed at the WAS.
Or am I simply misunderstanding how a Web Application Server works?
Putting it into a simple Web Application Server example. The following offloads the incoming Connection straight to a Thread. I am not limiting the number of Threads that can be created; however, I am limited by the number of Open Connections allowed on my MacBook. I have also noticed that if the backlog (the second argument to the ServerSocket constructor, currently 50) is set too small, I start receiving Broken Pipes and Connection Resets on the Client side.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 50)) {
            while (true) {
                new Run(listener.accept()).start();
            }
        }
    }

    static class Run extends Thread {
        private Socket socket;

        Run(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
And now using Asynchronous processing, you are just passing the work from one Thread to another Thread. You are still limited by System Resources - allowed number of open files, connections, memory, CPU, etc.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 100)) {
            while (true) {
                new Synchronous(listener.accept()).start();
            }
        }
    }

    // assumed Synchronous but really it's a Thread from the WAS,
    // so it is already asynchronous when it enters this Class
    static class Synchronous extends Thread {
        private Socket socket;

        Synchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            System.out.println("Passing Socket to Asynchronous " + getName());
            new Asynchronous(this.socket).start();
        }
    }

    static class Asynchronous extends Thread {
        private Socket socket;

        Asynchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Looking at this blog from Netflix, 'tuning-tomcat-for-a-high-throughput', it looks like Tomcat does the same as my first code example. So Asynchronous processing in the Application shouldn't be necessary.
Tomcat by default has two properties that affect load: acceptCount, which defines the maximum Queue size (default: 100), and maxThreads, which defines the maximum number of simultaneous request-processing threads (default: 200). There is also maxConnections, but I'm not sure of the point of it when maxThreads is defined. You can read about them in the Tomcat Config.
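For concreteness, here is a sketch of where those properties would be set when running Tomcat embedded (I'm assuming the Tomcat 9 embedded API with tomcat-embed-core on the classpath; the values are just the defaults mentioned above):
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

// Sketch: the same acceptCount/maxThreads/maxConnections knobs from
// server.xml, configured programmatically on an embedded connector.
public class TunedTomcat {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        Connector connector = new Connector("HTTP/1.1");
        connector.setPort(8080);
        connector.setProperty("acceptCount", "100");     // the OS backlog (the Queue above)
        connector.setProperty("maxThreads", "200");      // simultaneous worker threads
        connector.setProperty("maxConnections", "8192"); // connections the connector will keep open
        tomcat.getService().addConnector(connector);
        tomcat.start();
        tomcat.getServer().await();
    }
}
As far as I can tell, with the NIO connector maxConnections can usefully exceed maxThreads because idle keep-alive connections are parked on a selector rather than each holding a worker thread.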

Late, but maybe better than never. :)
I don't have a great answer to "why async servlets?" but I think there is another bit of information which would be helpful to you.
What you are describing for the WAS is what Tomcat used to do in its BIO connector. It was basically a thread-per-connection model. This limits the number of requests you can serve not just because of the maxThreads setting, but also because the worker thread would potentially continue to be tied up waiting for additional requests on the connection if a Connection: Close wasn't sent. (See https://www.javaworld.com/article/2077995/java-concurrency/java-concurrency-asynchronous-processing-support-in-servlet-3-0.html and What is the difference between Tomcat's BIO Connector and NIO Connector?)
Switching to the NIO connector allows Tomcat to maintain thousands of connections while still maintaining only a small pool of worker threads.
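To make the NIO idea concrete, here is a minimal sketch (plain java.nio, not Tomcat's actual connector code) of one selector thread watching many connections at once:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread, many connections: the selector only wakes up when a socket
// is actually ready, so idle connections cost no worker thread.
public class NioServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {
                        client.close(); // peer closed the connection
                    }
                    // a real server would hand the bytes to a small worker pool here
                }
            }
        }
    }
}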

Related

How to kill the Apache Camel parent thread after a process completes successfully in a standalone application?

I start a Camel Main standalone application using a Unix scheduler.
It initiates the Routes, but I have Thread.sleep(time) after context.start().
The application first executes whatever is in the routes, and when the routes finish processing (stop()), the application keeps running and only finishes when the Thread.sleep time is over.
Any idea how to completely stop the standalone application after my route finishes processing?
Following is code snippet for reference:
SimpleRegistry sr = new SimpleRegistry();
sr.put("masterdata", dataSource);
CamelContext context = new DefaultCamelContext(sr);
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("timer://alertstrigtimer?period=60s&repeatCount=1")....
            from("etc").....
            from("etc").....
            from("etc").stop();
        }
    });
    context.start();
    Thread.sleep(30000);
} catch (Exception e) {
    LOGGER.warn("configure(): Exception in creating flow:", e);
}
Is there any way, within Camel or maybe in Java, to kill the thread after the Camel route stops all processing?
You have different options; here are some I would consider:
use camel-main and configure it to shut down when a certain number of exchanges are done (a sketch of this follows below)
use a route policy and shut down the CamelContext according to your own rule
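A sketch of the first option, assuming Camel 3's camel-main module (the route is cut down to just the timer from your snippet):
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

// camel-main sketch: Main shuts the JVM down once the expected number of
// exchanges has completed, instead of sleeping on the main thread.
public class MainApp {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().setDurationMaxMessages(1); // stop after 1 completed exchange
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer://alertstrigtimer?period=60s&repeatCount=1")
                    .log("processing done");
            }
        });
        main.run(); // blocks until the duration condition is met, then shuts down
    }
}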

How can a process handle multiple requests on a web server using sockets (TCP)?

I know that you utilize a port to address a process and that you have to use sockets for handling multiple requests on a web server, but how does it work? Does the process create multiple socket threads for each connection? Is threading the answer?
Overview
This is a great question, and one that will take a bit to explain fully. I will step through different parts of this topic below. I personally learned multi-threading in Java, which has quite an extensive concurrency library. Although my examples will be in Java, the concepts carry over between languages.
Is threading valid?
In short, yes, this is a perfect use case for multi-threading, although single-threaded designs are fine for simple scenarios as well. However, there exist better designs that may yield better performance and safer code. The great thing is that there are loads of examples of how to do this on the internet!
Multi-Threading
Let's investigate the sample code from this article, seen below.
public class Server
{
    public static void main(String[] args) throws IOException
    {
        // server is listening on port 5056
        ServerSocket ss = new ServerSocket(5056);

        // running infinite loop for getting
        // client request
        while (true)
        {
            Socket s = null;
            try
            {
                // socket object to receive incoming client requests
                s = ss.accept();
                System.out.println("A new client is connected : " + s);

                // obtaining input and out streams
                DataInputStream dis = new DataInputStream(s.getInputStream());
                DataOutputStream dos = new DataOutputStream(s.getOutputStream());

                System.out.println("Assigning new thread for this client");

                // create a new thread object
                Thread t = new ClientHandler(s, dis, dos);

                // Invoking the start() method
                t.start();
            }
            catch (Exception e)
            {
                if (s != null) {
                    s.close(); // only close the socket if accept() succeeded
                }
                e.printStackTrace();
            }
        }
    }
}
The Server code is actually quite basic but still does the job well. Let's step through all the logic seen here:
The Server sets up on Port 5056
The Server begins its infinite loop
The Server blocks on ss.accept() until a client request is received on port 5056
The Server does relatively minimal operations (i.e. System.out logging, setting up IO streams)
A Thread is created and assigned to this request
The Thread is started
The loop repeats
The mentality here is that the server acts as a dispatcher. Requests enter the server, and the server allocates workers (Threads) to complete the operations in parallel so that the server can wait for and assist the next incoming request.
Pros
Simple, readable code
Operations in parallel allow for increased performance with proper synchronization
Cons
The dangers of multi-threading
The creation of threads is quite cumbersome and resource-intensive, and thus should not be a frequent operation
No re-use of threads
Must manually limit threads
Thread Pool
Let's investigate the sample code from this article, seen below.
while (!isStopped()) {
    Socket clientSocket = null;
    try {
        clientSocket = this.serverSocket.accept();
    } catch (IOException e) {
        if (isStopped()) {
            System.out.println("Server Stopped.");
            break;
        }
        throw new RuntimeException("Error accepting client connection", e);
    }
    this.threadPool.execute(new WorkerRunnable(clientSocket, "Thread Pooled Server"));
}
Note, I excluded the setup because it is rather similar to the Multi-Threaded example (a sketch of the omitted setup follows after this walkthrough). Let's step through the logic in this example.
The server waits for a request to arrive on its allotted port
The server sends the request to a handler that is given to the ThreadPool to run
The ThreadPool receives the Runnable code, allocates a worker, and begins code execution in parallel
The loop repeats
The server again acts as a dispatcher; it listens for the request, receives one, and ships it to a ThreadPool. The ThreadPool abstracts the complex resource management away from the developer and executes the code efficiently. This is very similar to the multi-threaded example, but all resource management is packaged into the ThreadPool. The code is reduced further from the above example, and it is much safer for non-multi-threading professionals. Note, the WorkerRunnable is only a Runnable, not a raw Thread, whilst the ClientHandler in the Multi-Threaded example was a raw Thread.
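For completeness, here is a minimal sketch of the setup that the excerpt omits (the WorkerRunnable is replaced by a lambda, and the port number is arbitrary):
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A fixed-size pool bounds the number of worker threads, and idle
// workers are re-used between requests instead of being re-created.
public class ThreadPooledServer {
    public static void main(String[] args) throws Exception {
        ExecutorService threadPool = Executors.newFixedThreadPool(10);
        try (ServerSocket serverSocket = new ServerSocket(9000)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                threadPool.execute(() -> handle(clientSocket));
            }
        }
    }

    static void handle(Socket socket) {
        // read the request and write a response here, then close the socket
        try { socket.close(); } catch (Exception e) { e.printStackTrace(); }
    }
}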
Pros
Threads are managed and re-used by the pool
Further simplify code base
Inherits all benefits from the Multi-Threaded example
Cons
There is a learning curve to fully understanding pooling and different configurations of them
Notes
In Java, there also exists another implementation called RMI, which attempts to abstract away the network, allowing Client-Server communication to happen as though it were on one JVM, even if it spans many. Although it can also use multi-threading internally, it is an alternative approach to raw sockets.
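A minimal RMI sketch for comparison (the interface and names are made up); note that no sockets or threads appear in user code, because the RMI runtime accepts the connections and dispatches calls onto its own threads:
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: every method must declare RemoteException.
interface Clock extends Remote {
    String now() throws RemoteException;
}

public class ClockServer implements Clock {
    public String now() { return new java.util.Date().toString(); }

    public static void main(String[] args) throws Exception {
        // export the object so the RMI runtime can accept calls for it
        Clock stub = (Clock) UnicastRemoteObject.exportObject(new ClockServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("clock", stub); // clients look this name up remotely
    }
}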

Troubleshooting the websocket limit in Azure, active connections

I'm in the process of troubleshooting an App Service that is using websockets.
It's running on service plan Basic which allows for 350 websockets.
This is the only app on that plan that uses websockets.
The problem is that after about 20 hours I get 503 responses saying I reached my websocket limit.
The setup right now has 3 clients connecting to the service.
In the process of investigating websocket leakage in my app I would like to track the number of websockets in use.
Is there anywhere, from my app or in Azure portal, where I can see the number of active websocket connections?
Follow up:
I've logged the websocket connections as Amor suggested.
The HTTP part of my app is still working; I can get dynamic results from the app, which now reports which websocket connections are active and how many have been created since start.
After restarting the App Service, I configured one client to reconnect indefinitely.
It worked fine until the "total websocket connections" reached 350, at which point I shut down the client.
The limit should be 350 concurrent connections, but it looks like it is 350 in total since start.
Most (at least 340) of these connections were initiated by a single client, which disposed of each connection before starting a new one; it was shut down once the limit was reached.
I've been advised to upgrade from Basic to Standard, since Standard doesn't have this artificial limitation. The only way I can see this working would be if there is a bug in the websocket limitation for the Basic plan.
Update 2
In parallel, I've been in contact with Microsoft Developer Support, and they noticed that the sockets appear to be stuck in IIS but not in Kestrel. The cause of this is still being investigated.
Support could show me graphs of the connection usage over time which clearly showed how the limit was reached.
I'll keep this question updated in case there was some error in my code.
I suggest you define a variable to count the connections: if a web socket connection is opened, increase the count; if a web socket connection is closed, decrease it. The code below is for your reference.
Count the connections for ASP.NET SignalR.
public class MyHub : Hub
{
    // static + Interlocked so the count is shared safely across hub instances
    private static int _connectionCount = 0;

    public override Task OnConnected()
    {
        Interlocked.Increment(ref _connectionCount);
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        Interlocked.Increment(ref _connectionCount);
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Interlocked.Decrement(ref _connectionCount);
        return base.OnDisconnected(stopCalled);
    }
}
Count the connections in traditional ASP.NET.
public class WSChatController : ApiController
{
    // static + Interlocked so the count is shared safely across controller instances
    private static int _connectionCount = 0;

    public HttpResponseMessage Get()
    {
        if (HttpContext.Current.IsWebSocketRequest)
        {
            HttpContext.Current.AcceptWebSocketRequest(ProcessWSChat);
        }
        return new HttpResponseMessage(HttpStatusCode.SwitchingProtocols);
    }

    private async Task ProcessWSChat(AspNetWebSocketContext context)
    {
        WebSocket socket = context.WebSocket;
        // count the connection once, not once per received message
        Interlocked.Increment(ref _connectionCount);
        try
        {
            while (true)
            {
                ArraySegment<byte> buffer = new ArraySegment<byte>(new byte[1024]);
                WebSocketReceiveResult result = await socket.ReceiveAsync(
                    buffer, CancellationToken.None);
                if (socket.State != WebSocketState.Open)
                {
                    break;
                }
                //Process the request
            }
        }
        finally
        {
            Interlocked.Decrement(ref _connectionCount);
        }
    }
}

Qt - How to add a QTcpSocket to the pending connections of QTcpServer

I have a simple multithreading application that works like the "Threaded Fortune Server Example" on the Qt website.
When my QTcpServer receives an incoming connection, I create a QRunnable task, passing the socketDescriptor, and then submit it to a QThreadPool:
void Server::incomingConnection(qintptr socketDescriptor)
{
    Task *task = new Task(socketDescriptor, this);
    this->m_pool->start(task);
}
then in the Task method run() I create the socket:
void Task::run()
{
    QTcpSocket tcpSocket;
    if (!tcpSocket.setSocketDescriptor(socketDescriptor)) {
        emit error(tcpSocket.error());
        return;
    }
    ...
}
However, I read these notes about the incomingConnection method of QTcpServer in the Qt docs:
Note: If another socket is created in the reimplementation of this method, it needs to be added to the Pending Connections mechanism by calling addPendingConnection().
Note: If you want to handle an incoming connection as a new QTcpSocket
object in another thread you have to pass the socketDescriptor to the
other thread and create the QTcpSocket object there and use its
setSocketDescriptor() method.
So, my questions are:
How can I add the socket created from another thread to the pending connections of the Server?
Is this operation necessary?
Thanks,

Run Thread in JavaFX Service

I'm confused about how to continue writing my program.
Basically, it connects to multiple serial devices and then updates the JavaFX Application based on the responses from the devices (I first have to send the machine a message). So I created a thread to run inside the Service, so that my program would not freeze and the thread could pause until the message is read (there is a delay between sending and receiving a message over the serial device).
service = new Service() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                new Thread(thread).start();
                return null;
            }
        };
    }
};
The thread loops, continuously sending and reading messages.
@Override
public synchronized void run() {
    try {
        while (serialOn && isRunning) {
            sendMessages();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt status
    }
}

public synchronized void sendMessages() throws InterruptedException {
    sendSerial1();
    this.wait();
    sendSerial2();
    this.wait();
}

public synchronized void readMessage1() { // same as readMessage2 for sendSerial2()
    getMessage(); // updates variables that are bound to the JavaFX App
    this.notify();
}
But I think the service finishes (i.e. succeeds or fails) before it even starts my serial thread. I want the service to continue running while the program sends and receives messages.
Let me know if you need more code, it's a little long and requires the serial devices to run, but I can include it here if it makes the question easier to understand.
Don't create a new thread in the call() method of the service's Task.
A service automatically creates threads on which the call() will be invoked. If you want control over the thread creation and use, then you can (optionally) supply an executor to the service (though in your case you probably don't need to do that unless you don't want the service to be a daemon thread).
From the Service javadoc:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed.
So shift the code inside the run() method of your Runnable into the call() method of the Task for the Service (the Task is itself a Runnable, so having an additional Runnable is both redundant and confusing).
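Here is a sketch of that suggestion, reusing the fields and methods from your question (serialOn, isRunning, sendMessages()), so the Service's own worker thread runs the loop and the service stays in RUNNING until the loop exits:
// The loop lives in call() itself; no extra Thread is created.
service = new Service<String>() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                while (serialOn && isRunning && !isCancelled()) {
                    sendMessages(); // blocks via wait() until replies arrive
                }
                return null;
            }
        };
    }
};
If you need control over the thread itself (for example, to make it non-daemon), you could also call service.setExecutor(...) with your own Executor before starting the service.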
