Run Thread in JavaFX Service - multithreading

I'm confused about how to continue writing my program.
Basically, it connects to multiple serial devices and then updates the JavaFX application based on the responses from the devices (I first have to send each machine a message). So what I did was create a thread to run inside the service's task, so that my program would not freeze and the thread could pause until the response is read (there's a delay between sending and receiving a message over the serial connection).
service = new Service<String>() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                new Thread(thread).start();
                return null;
            }
        };
    }
};
Where the thread does some loop, continuously sending and reading messages.
@Override
public synchronized void run() {
    while (serialOn && isRunning) {
        sendMessages();
    }
}

public synchronized void sendMessages() throws InterruptedException {
    sendSerial1();
    this.wait();
    sendSerial2();
    this.wait();
}

public synchronized void readMessage1() { // same as readMessage2 for sendSerial2()
    getMessage(); // updates variables that are bound to the JavaFX app
    this.notify();
}
But I think the service finishes (i.e. succeeds or fails) before it even starts my serial thread, whereas I want the service to keep running while the program sends and receives messages.
Let me know if you need more code, it's a little long and requires the serial devices to run, but I can include it here if it makes the question easier to understand.

Don't create a new thread in the call() method of the service's Task.
A service automatically creates the threads on which call() will be invoked. If you want control over thread creation and use, then you can (optionally) supply an executor to the service (though in your case you probably don't need to do that, unless you don't want the service's thread to be a daemon thread).
From the Service javadoc:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed.
So move the code inside the run() method of your Runnable into the call() method of the Task for the Service (the Task is itself a FutureTask, and therefore a Runnable, so having an additional Runnable is both redundant and confusing).
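As a rough sketch of that restructuring (reusing the serialOn flag and sendMessages() helper from the question, which are the asker's names rather than JavaFX API):

```java
service = new Service<String>() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                // The Service already runs this on a background thread,
                // so the send/wait loop can live here directly.
                while (serialOn && !isCancelled()) {
                    sendMessages(); // blocks between send and read, as before
                }
                return null;
            }
        };
    }
};
```

The service now stays in the RUNNING state for as long as the loop runs, and cancelling the service (which sets isCancelled()) gives you a clean way to stop it.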

Related

How to Kill Apache Camel Parent Thread after process complete successfully in Standalone Application?

I start a Camel Main standalone application using a Unix scheduler.
It initiates the routes, but I have a Thread.sleep(time) after context.start().
The application first executes whatever is in the routes, and when the routes finish processing (stop()), the application keeps running and only exits when the Thread.sleep time is over.
Any idea how to completely stop the standalone application after my route finishes processing?
The following code snippet is for reference:
SimpleRegistry sr = new SimpleRegistry();
sr.put("masterdata", dataSource);
CamelContext context = new DefaultCamelContext(sr);
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("timer://alertstrigtimer?period=60s&repeatCount=1")....
            from("etc").....
            from("etc").....
            from("etc").stop();
        }
    });
    context.start();
    Thread.sleep(30000);
} catch (Exception e) {
    LOGGER.warn("configure(): Exception in creating flow:", e);
}
Is there any way, within Camel or in plain Java, to kill the thread after the Camel route has stopped all processing?
You have different options; here are some I would consider:
use camel-main and configure it to shut down after a certain number of exchanges are done
use a route policy and shut down the CamelContext according to your own rule
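A third, framework-agnostic option is to replace the fixed Thread.sleep with a latch that is released when the route is done. This is a pure-JDK sketch of the idea (the route itself is simulated by a plain thread here; in the real application a processor at the end of the route would call countDown()):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchShutdownDemo {
    public static void main(String[] args) throws Exception {
        // One count: released when the (simulated) route finishes.
        CountDownLatch routeFinished = new CountDownLatch(1);

        // Stand-in for the Camel route's final step.
        new Thread(() -> {
            // ... route work would happen here ...
            routeFinished.countDown();
        }).start();

        // Instead of Thread.sleep(30000): wait only as long as needed,
        // with an upper bound as a safety net.
        if (routeFinished.await(30, TimeUnit.SECONDS)) {
            System.out.println("route finished, shutting down");
        } else {
            System.out.println("timed out waiting for route");
        }
        // context.stop() would go here; then main() returns and the JVM exits.
    }
}
```

This way the application exits as soon as the route signals completion, instead of always waiting out the full sleep interval.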

Understanding asynchronous web processing

I've just finished reading up about asynchronous WebServlet processing. [This article] is a good read.
However, fundamentally I'm confused about why this method is the "next generation of web processing" and why it is used at all. It seems we are avoiding properly configuring our Web Application Servers (WAS) - nginx, Apache, Tomcat, IIS - and are instead putting the problem onto the Web Developer.
Before I dive into my reasoning, I want to briefly explain how Requests are accepted and then handled by a WAS.
NETWORK <-> OS -> QUEUE <- WEB APPLICATION SERVER (WAS) <-> WEB APPLICATION (APP)
A Web Application Server (WAS) tells the Operating System (OS) that it wants to receive Requests on a specific Port, e.g. Port 80 for HTTP.
The OS opens a Listener on the Port (if it's free) and waits for Clients to connect.
When the OS receives a Connection, it adds it to a Queue assigned to the WAS (if there is space; otherwise the Client's Connection is rejected) - the size of the Queue is defined by the WAS when it requests the Port.
The WAS monitors the Queue for Connections and when a Connection is available, accepts the Connection for processing - removing it from the Queue.
The WAS passes the Connection on to the Web Application for processing - it could also handle the process itself if programmed to.
The WAS can handle multiple Connections at the same time by using multiple Processors (normally one per CPU core), each with multiple Threads.
So this now brings me to my query. If the amount of Requests the WAS can handle depends on the speed at which it can process the Queue, which is down to the number of Processors/Threads assigned to the WAS, why do we create an async method inside our APP to offload the Request from the WAS to another Thread not belonging to the WAS instead of just increasing the number of Threads available to the WAS?
If you consider the (not so) new Web Sockets that are popping up, when a Web Socket makes a connection to a WAS, a Thread is assigned to that Connection which is held open so Client and WAS can have continual communication. This Thread is ultimately a Thread on the WAS - meaning it is taking up Server resources - whether belonging to the WAS or independent of it (depending on APP design).
However, instead of creating an independent Thread not belonging to the WAS, why not just increase the number of Threads available to the WAS? Ultimately, the number of Threads you can have is down to the resources - MEMORY, CPU - available on the Server. Or is it a case that by offloading the Connection to a new Thread, you simply don't need to think about how many Threads to assign to the WAS (which seems dangerous because now you can use up Server resources without proper monitoring). It just seems as if a problem is being passed down to the APP - and thus the Developer - instead of being managed at the WAS.
Or am I simply misunderstanding how a Web Application Server works?
Putting it into a simple Web Application Server example. The following offloads the incoming Connection straight to a Thread. I am not limiting the number of Threads that can be created, though I am limited by the number of open Connections allowed on my MacBook. I have also noticed that if the backlog (the second argument to the ServerSocket constructor, currently 50) is set too small, I start receiving Broken Pipes and Connection Resets on the Client side.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 50)) {
            while (true) {
                new Run(listener.accept()).start();
            }
        }
    }

    static class Run extends Thread {
        private Socket socket;

        Run(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
And now using asynchronous processing, you are just passing the work from one Thread to another. You are still limited by system resources - the allowed number of open files, connections, memory, CPU, etc.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 100)) {
            while (true) {
                new Synchronous(listener.accept()).start();
            }
        }
    }

    // assumed Synchronous but really it's a Thread from the WAS
    // so is already asynchronous when it enters this Class
    static class Synchronous extends Thread {
        private Socket socket;

        Synchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            System.out.println("Passing Socket to Asynchronous " + getName());
            new Asynchronous(this.socket).start();
        }
    }

    static class Asynchronous extends Thread {
        private Socket socket;

        Asynchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Looking at this Netflix blog post, 'Tuning Tomcat for a high throughput', it looks like Tomcat does the same as my first code example, so asynchronous processing in the application shouldn't be necessary.
Tomcat by default has two properties that affect load, acceptCount which defines the maximum Queue size (default: 100) and maxThreads which defines the maximum number of simultaneous request processing threads (default: 200). There is also maxConnections but I'm not sure the point of this with maxThreads defined. You can read about them at Tomcat Config
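For reference, those settings live on the Connector element in Tomcat's server.xml; the values below are just the defaults the text mentions, not a recommendation:

```xml
<Connector port="8080" protocol="HTTP/1.1"
           acceptCount="100"
           maxThreads="200"
           connectionTimeout="20000" />
```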
Late, but maybe better than never. :)
I don't have a great answer to "why async servlets?" but I think there is another bit of information which would be helpful to you.
What you are describing for the WAS is what Tomcat used to do in its BIO connector. It was basically a thread-per-connection model. This limits the number of requests you can serve not just because of the maxThreads setting, but also because the worker thread would potentially continue to be tied up waiting for additional requests on the connection if a Connection: Close wasn't sent. (See https://www.javaworld.com/article/2077995/java-concurrency/java-concurrency-asynchronous-processing-support-in-servlet-3-0.html and What is the difference between Tomcat's BIO Connector and NIO Connector?)
Switching to the NIO connector allows Tomcat to maintain thousands of connections while keeping only a small pool of worker threads.
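To make the contrast concrete, here is a minimal single-threaded NIO sketch of the same date server (this is an illustration of the selector idea, not Tomcat's actual connector code): one thread multiplexes all connections instead of dedicating a thread to each.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.Iterator;

public class NioServer {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090), 50);
        server.configureBlocking(false);

        Selector selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        // Write the date and close, as in the threaded example,
                        // but without ever spawning a per-connection thread.
                        client.write(ByteBuffer.wrap(
                                (new Date() + "\n").getBytes(StandardCharsets.UTF_8)));
                        client.close();
                    }
                }
            }
        }
    }
}
```

In a real NIO server, longer-running reads and writes would also be registered with the selector (OP_READ/OP_WRITE) so the single event loop never blocks on any one client.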

Clean shutdown Spring Integration and Spring Boot when using ThreadPoolTaskExecutor

I am using Spring Integration and Spring Boot for some development on my local machine, based on the Spring Guides. I am using Gradle to build and run the application. The following code is used to bootstrap Spring, and I can terminate the application by pressing the Enter key.
public class Application {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext ctx = new SpringApplication(Application.class).run(args);
System.out.println("Hit Enter to terminate");
System.in.read();
ctx.close();
}
}
This works fine but when I introduce a ThreadPoolTaskExecutor into the integration flow, the application never terminates. I have to use ^C to kill the application. The code I am using is as follows.
...
channel(MessageChannels.executor(myTaskExecutor()))
...

@Bean
public ThreadPoolTaskExecutor myTaskExecutor() {
    ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
    pool.setCorePoolSize(10);
    pool.setMaxPoolSize(20);
    pool.setWaitForTasksToCompleteOnShutdown(true);
    pool.setAwaitTerminationSeconds(1);
    pool.initialize();
    return pool;
}
I have:
Tried to shut down the executor (using the shutdown() method) before and after the context is closed.
Tried the above code within the onApplicationEvent(ContextClosedEvent event) method.
Temporarily commented out the code which runs in the thread, to make sure it is not holding on to the thread in any way.
Is there anything else I need to do?
ctx.close() will shut down any executor beans (by calling their destroy() method), so it is likely you have a thread "stuck" somewhere.
Take a thread dump (jstack) to see what the executor thread is doing.
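It is also worth checking whether the pool's threads are daemons: non-daemon threads keep the JVM alive until the pool is shut down. This pure-JDK sketch illustrates the idea (Spring's ThreadPoolTaskExecutor exposes the same behavior via setDaemon(true)):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

public class DaemonPoolDemo {
    public static void main(String[] args) throws Exception {
        // Thread factory that marks pool threads as daemons, so they
        // cannot keep the JVM alive after main() returns.
        ThreadFactory daemonFactory = r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        };
        ExecutorService pool = Executors.newFixedThreadPool(2, daemonFactory);
        Future<String> f = pool.submit(() -> "done");
        System.out.println(f.get());
        // With non-daemon threads, the JVM would hang here until
        // pool.shutdown() is called; daemon threads die with the JVM.
    }
}
```

Daemon threads are only appropriate when it is acceptable for in-flight tasks to be abandoned at shutdown; otherwise an explicit shutdown with a termination timeout is safer.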

Qt - How to add a QTcpSocket to the pending connections of QTcpServer

I have a simple multithreaded application that works like the "Threaded Fortune Server Example" on the Qt website.
When my QTcpServer receives an incoming connection, I create a QRunnable task, passing the socketDescriptor, and then submit it to a QThreadPool:
void Server::incomingConnection(qintptr socketDescriptor)
{
Task *task = new Task(socketDescriptor, this);
this->m_pool->start(task);
}
Then in the Task's run() method I create the socket:
void Task::run()
{
QTcpSocket tcpSocket;
if (!tcpSocket.setSocketDescriptor(socketDescriptor)) {
emit error(tcpSocket.error());
return;
}
...
}
However, I read these notes about the incomingConnection method of QTcpServer in the Qt docs:
Note: If another socket is created in the reimplementation of this method, it needs to be added to the Pending Connections mechanism by calling addPendingConnection().
Note: If you want to handle an incoming connection as a new QTcpSocket
object in another thread you have to pass the socketDescriptor to the
other thread and create the QTcpSocket object there and use its
setSocketDescriptor() method.
So, my questions are:
how can I add the socket created in another thread to the pending connections of the server?
is this operation necessary?
Thanks,

Invoking a simple worker role

I'm trying to gain some understanding and experience in creating background processes on Azure.
I've created a simple console app and converted it to an Azure worker role. How do I invoke it? I tried to use the Azure Scheduler, but it looks like the scheduler can only invoke a worker role through message queues or HTTP/HTTPS.
I never thought about any type of communication, as my idea was to create a background process that does not really communicate with any other app. Do I need to convert the worker role to a web role and invoke it with the Azure Scheduler using HTTP/HTTPS?
A worker role has three lifecycle methods:
OnStart
Run
OnStop
public class WorkerRole : RoleEntryPoint
{
    ManualResetEvent CompletedEvent = new ManualResetEvent(false);

    public override void Run()
    {
        // Your background processing code
        CompletedEvent.WaitOne();
    }

    public override bool OnStart()
    {
        return base.OnStart();
    }

    public override void OnStop()
    {
        CompletedEvent.Set();
        base.OnStop();
    }
}
The moment you run or debug your converted console worker role, the first two (OnStart and Run) fire in sequence. In Run you have to keep the thread alive, either by using a while loop or a ManualResetEvent; this is where your background processing code lives.
OnStop is fired when you either release the thread from Run or something unexpected goes wrong. This is the place to dispose of your objects: close open file handles, database connections, etc.
