Stopping an orphan Quartz scheduler on Linux

I have set up the Quartz scheduler for my Tomcat server and restarted the server several times.
The scheduler then started logging errors like the following (very often, by the way - every 30 seconds):
DEBUG org.quartz.core.QuartzSchedulerThread - batch acquisition of 0 triggers
log4j:ERROR Error occured while converting date.
java.lang.NullPointerException
at java.lang.System.arraycopy(Native Method)
at java.lang.AbstractStringBuilder.getChars(AbstractStringBuilder.java:328)
at java.lang.StringBuffer.getChars(StringBuffer.java:201)
at org.apache.log4j.helpers.ISO8601DateFormat.format(ISO8601DateFormat.java:130)
at java.text.DateFormat.format(DateFormat.java:316)
at org.apache.log4j.helpers.PatternParser$DatePatternConverter.convert(PatternParser.java:444)
at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
at org.apache.log4j.PatternLayout.format(PatternLayout.java:502)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:302)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:209)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:268)
I think I fixed that problem, but now I face another issue: the first few times I restarted Tomcat, I did not shut down the Quartz scheduler. I have now added the following code to my scheduler:
public class QuartzListener implements ServletContextListener {

    private Scheduler scheduler;

    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        if (scheduler != null) {
            try {
                scheduler.shutdown();
            } catch (SchedulerException e) {
                e.printStackTrace();
            }
        }
    }

    ...
}
This seems to stop the new Quartz instances, but the old ones keep logging the same error. So my question is: how do I stop Quartz instances that I accidentally forgot to stop gracefully via shutdown()?
I am running on Linux, but I have never seen the Quartz processes/daemons when I run ps aux; maybe I am missing something.

Related

Right way to start a worker thread with Quarkus?

I am implementing a Quarkus server. On server start, a (never-ending) background process should be started.
I know I can observe the start event with an @ApplicationScoped bean which implements:
void onStart(@Observes StartupEvent ev)
But what is the best way to start a background process? Are there restrictions?
In J2EE one should not create threads, but use a ManagedExecutorService or an EJB with an @Asynchronous-annotated method.
Is there something similar in Quarkus? I only found the scheduler annotations (which are nice, but I want to start a process only once, at the beginning).
So can I just create threads? Or just put my infinite loop in void onStart(@Observes StartupEvent ev)?
Thank you
As in EJB, you should not do such things with a raw background process. Processes that are "out of control" of the framework cause very annoying problems most of the time.
The answer is: It depends on what you want to do in that job.
If you want to execute tasks on a regular basis, you could use timers.
If you want to use it as an asynchronous worker, you can use a message queue.
Both are most easily done with the Vert.x integration in Quarkus.
Use @ConsumeEvent to create a queue, and use

@Inject
EventBus bus;

bus.send("example", "Example message");

to send messages (note that send() takes an address as its first argument; "example" here is just a placeholder that must match the consumer's address).
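On the receiving side, a consumer bean might look like this minimal sketch (the "example" address and class name are made up for illustration):

import io.quarkus.vertx.ConsumeEvent;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ExampleConsumer {

    // invoked for every message sent to the "example" address
    @ConsumeEvent("example")
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }
}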
Use @Scheduled to work on regular jobs, for example:
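A minimal sketch (the every = "10s" interval is just an illustrative value):

import io.quarkus.scheduler.Scheduled;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class PeriodicJob {

    // runs every 10 seconds; pick whatever interval you need
    @Scheduled(every = "10s")
    void runJob() {
        System.out.println("Periodic work on: " + Thread.currentThread());
    }
}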
If you need to listen permanently on some socket or file, it gets more difficult. Maybe websockets will be helpful in that case.
The easiest way to start a worker thread is using Vertx#executeBlocking, like this:
@Inject
Vertx vertx;

void foo() {
    vertx.<String>executeBlocking(promise -> {
        // This code will be executed in a worker thread
        System.out.println("Thread: " + Thread.currentThread());
        promise.complete("Done");
    }, asyncResult -> {
        System.out.println(asyncResult.result()); // Done
    });
}
If it will be a long-running task, it may be a good idea not to use the default worker thread pool but to create a dedicated one:
...

@PostConstruct
void init() {
    this.executor = vertx.createSharedWorkerExecutor("my-worker", 10);
}

void foo() {
    executor.<String>executeBlocking(promise -> {
        ...
    });
}
Another way of doing this is to use a Verticle.
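For illustration, a worker verticle might look like this sketch (plain Vert.x core API; the class name and printed message are made up):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class WorkerVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // runs on a worker thread because of setWorker(true) below
        System.out.println("Worker verticle started on: " + Thread.currentThread());
    }

    public static void main(String[] args) {
        // in Quarkus you would inject the managed Vertx instance instead
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new WorkerVerticle(), new DeploymentOptions().setWorker(true));
    }
}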

AEM cron job based on a certain time interval

I am trying to create a cron job in CQ using a time interval.
Following the link https://sling.apache.org/documentation/bundles/scheduler-service-commons-scheduler.html I could make job1 run, and it works. But I have some questions about the code.
In the code below:
Why is job1.run() invoked in the catch block? Can we not add it to the try block?
Can I replace job1.run() in the catch block with a thread started via start(), as below? And can that go in the try block, or must it be in the catch block?
Thread newThread = new Thread(job1);
newThread.start();
Here is the cron job code from the above link:
protected void activate(ComponentContext componentContext) throws Exception {
    // case 1: with the addJob() method: executes the job every minute
    String schedulingExpression = "0 * * * * ?";
    String jobName1 = "case1";
    Map<String, Serializable> config1 = new HashMap<String, Serializable>();
    boolean canRunConcurrently = true;

    final Runnable job1 = new Runnable() {
        public void run() {
            log.info("Executing job1");
        }
    };

    try {
        this.scheduler.addJob(jobName1, job1, config1, schedulingExpression, canRunConcurrently);
    } catch (Exception e) {
        job1.run();
    }
}
According to the Javadoc, addJob, addPeriodicJob and fireJobAt will throw an Exception if the job cannot be added. The docs do not suggest anything regarding the cause of such failures.
The snippet on the Apache Sling Scheduler documentation page that you quoted in your question catches and ignores these exceptions.
Looking at the implementation provided, job1 is just a regular runnable so executing the run method manually in the catch block does not affect the Scheduler at all.
What it seems to be doing is attempting to add the job and, in case of failure, silently ignoring the error and running the job manually so that it still prints "Executing job1".
There are at least two serious problems with this approach:
The code ignores the fact that something went wrong while adding the job and pretends it never happened (no logging, nothing).
It runs the job manually, giving you the impression that it has been scheduled and just ran for the first time.
As to why it's done this way? I have no idea. It's just silly, and it's certainly not something I'd like to see in actual, non-tutorial code.
The API using such a generic exception to signal failure is also quite unfortunate.
Coincidentally, Sling 7 deprecates addJob, addPeriodicJob and fireJobAt and replaces them all with schedule. The schedule method returns a boolean, so it doesn't give you any more information about what exactly happened, but it doesn't require you to use ugly try/catch blocks.
If you're unable to use the latest version of Sling, make sure to use a logger and log the exceptions. Running your jobs manually, whatever they are, probably won't make much sense but that's something you need to decide.
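For instance, the activate() method above could be rewritten against the newer API roughly like this sketch (schedule() and EXPR() come from the Sling Scheduler interface; the log calls assume the same logger as before):

protected void activate(ComponentContext componentContext) {
    final String schedulingExpression = "0 * * * * ?";
    final Runnable job1 = new Runnable() {
        public void run() {
            log.info("Executing job1");
        }
    };
    // schedule() returns false instead of throwing when the job cannot be added
    if (!scheduler.schedule(job1, scheduler.EXPR(schedulingExpression))) {
        log.error("Could not schedule job1");
    }
}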

How do I cleanly shutdown a distributed ActivePivot setup?

We have an ActivePivot cube that is a polymorphic cube (2 nodes) where 1 node is itself a horizontally distributed cube (8 nodes), running in Tomcat and using JGroups TCP for distribution. It is restarted on a daily basis, but every time it is shut down (node services are stopped in sequence), various errors show up in the logs. This is harmless, but annoying from a monitoring perspective.
Example from one day (all same node):
19:04:43.100 ERROR [Pool-LongPollin][streaming] A listener dropped (5f587379-ac67-4645-8554-2e02ed739924). The number of listeners is now 1
19:04:45.767 ERROR [Pool-LongPollin][streaming] Publishing global failure
19:05:16.313 ERROR [localhost-start][core] Failed to stop feed type MDXFEED with id A1C1D8D92CF7D867F09DCB7E65077B18.0.PT0
Example from another day (same error from multiple different nodes):
19:00:17.353 ERROR [pivot-remote-0-][distribution] A safe broadcasting task could not be performed
com.quartetfs.fwk.QuartetRuntimeException: [<node name>] Cannot run a broadcasting task with a STOPPED messenger
Does anyone know of a clean way to shut down a setup like this?
Those errors appear because on application shutdown the ActivePivotManager is aggressively stopping the distribution, without waiting for each distributed ActivePivot to be notified that other cubes have been stopped.
To smoothly stop distribution you can use the methods from the DistributionUtil class. For instance:
public class DistributionStopper {

    protected final IActivePivotManager manager;

    public DistributionStopper(IActivePivotManager manager) {
        this.manager = manager;
    }

    public void stop() {
        // Get all the schemas from the manager
        final Collection<IActivePivotSchema> schemas = manager.getSchemas().values();

        // To store all the available messengers
        final List<IDistributedMessenger<?>> availableMessengers = new LinkedList<>();

        // Find all the messengers
        for (IActivePivotSchema schema : schemas) {
            for (String pivotId : schema.getPivotIds()) {
                // Retrieve the ActivePivot matching this id
                final IMultiVersionActivePivot pivot = schema.retrieveActivePivot(pivotId);
                if (pivot instanceof IMultiVersionDistributedActivePivot) {
                    IDistributedMessenger<IActivePivotSession> messenger =
                            ((IMultiVersionDistributedActivePivot) pivot).getMessenger();
                    if (messenger != null) {
                        availableMessengers.add(messenger);
                    }
                }
            }
        }

        // Smoothly stop the messengers
        DistributionUtil.stopMessengers(availableMessengers);
    }
}
Then register this custom class as a Spring bean depending on the activePivotManager singleton bean, so that its destroy method is called before that of the manager.
@Bean(destroyMethod = "stop")
@DependsOn("activePivotManager")
public DistributionStopper distributionStopper(IActivePivotManager manager) {
    return new DistributionStopper(manager);
}

What is the proper way to shut down threads when Tomcat closes?

I am trying to shut down threads when Tomcat is being shut down.
Specifically, I am trying to shut down the log4j watchdog (for file changes), and also an executor which uses a class in my web app.
On shutdown I see exceptions in catalina.out.
For Log4J I see:
INFO: Illegal access: this web application instance has been stopped
already. Could not load org.apache.log4j.helpers.NullEnumeration.
The eventual following stack trace is caused by an error thrown for
debugging purposes as well as to attempt to terminate the thread which
caused the illegal access, and has no functional impact. Throwable
occurred: java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1587)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1546)
at org.apache.log4j.Category.getAllAppenders(Category.java:413)
at org.apache.log4j.Category.closeNestedAppenders(Category.java:226)
at org.apache.log4j.Hierarchy.shutdown(Hierarchy.java:467)
at org.apache.log4j.LogManager.shutdown(LogManager.java:267)
at com.listeners.myListener$1.run(myListener.java:232)
Exception in thread "Thread-14" java.lang.NoClassDefFoundError:
org.apache.log4j.helpers.NullEnumeration
at org.apache.log4j.Category.getAllAppenders(Category.java:413)
at org.apache.log4j.Category.closeNestedAppenders(Category.java:226)
at org.apache.log4j.Hierarchy.shutdown(Hierarchy.java:467)
at org.apache.log4j.LogManager.shutdown(LogManager.java:267)
And for the executor part:
INFO: Illegal access: this web application instance has been stopped
already. Could not load com.my.class.SomeClass. The eventual
following stack trace is caused by an error thrown for debugging
purposes as well as to attempt to terminate the thread which caused
the illegal access, and has no functional impact. Throwable occurred:
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1587)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1546)
at
Exception in thread "Thread-13" java.lang.NoClassDefFoundError:
com.my.class.SomeClass
What I am doing: in a ServletContextListener, in contextDestroyed(), I have added shutdown hooks as follows:
public void contextDestroyed(ServletContextEvent arg0) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            LogManager.shutdown();
        }
    });
}

public void contextDestroyed(ServletContextEvent arg0) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            SomeClass.updater.shutdown();
        }
    });
}
What am I doing wrong here? Why do I get exceptions?
UPDATE:
SomeClass.updater is a public static ScheduledExecutorService.
LogManager is org.apache.log4j.LogManager
UPDATE2:
After following the answer from BGR, I now directly do
public void contextDestroyed(ServletContextEvent arg0) {
    SomeClass.updater.shutdown();
}

and

public void contextDestroyed(ServletContextEvent arg0) {
    LogManager.shutdown();
}
I no longer get the exception from log4j, but I still get the following exception for SomeClass.updater, which is a public static ScheduledExecutorService:
INFO: Illegal access: this web application instance has been stopped
already. Could not load java.util.concurrent.ExecutorService. The
eventual following stack trace is caused by an error thrown for
debugging purposes as well as to attempt to terminate the thread which
caused the illegal access, and has no functional impact. Throwable
occurred: java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1587)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1546)
Why? Have the classes already been garbage collected?
I would register shutdown hooks in the init() method of the servlet rather than in contextDestroyed(), but anyway, why do you need shutdown hooks in the first place?
Can't you just call SomeClass.updater.shutdown(); directly in the contextDestroyed() method?
EDIT
contextDestroyed() of the listener is too late for the executor service. As stated in the javadoc, "All servlets and filters will have been destroyed before any ServletContextListeners are notified of context destruction."
Overriding the servlet's destroy() should be OK, as according to the javadoc, "This method gives the servlet an opportunity to clean up any resources that are being held (for example, memory, file handles, threads)..."
@Override
public void destroy() {
    myThreadExecutor.shutdown();
    super.destroy();
}
Calling
LogManager.shutdown();
in the contextDestroyed() method is the first step, but the ExecutorService does not shut down immediately. You are getting the exceptions because the ExecutorService threads are still running after the contextDestroyed() method returns. You need to shut the executor down and wait for its termination as well:

public void contextDestroyed(ServletContextEvent arg0) {
    LogManager.shutdown();
    SomeClass.updater.shutdown();
    try {
        if (!SomeClass.updater.awaitTermination(10, TimeUnit.SECONDS)) {
            SomeClass.updater.shutdownNow();
        }
    } catch (InterruptedException e) {
        SomeClass.updater.shutdownNow();
    }
}

This way the thread pool has stopped all of its threads by the time contextDestroyed() exits.

How to properly kill local threads owned by a webapp running on Tomcat when it is instructed to shut down

A backend webapp is deployed on a Tomcat 6 servlet container. In the webapp, several monitoring threads are started. The problem is with shutdown.
How do I know that the webapp is requested to shutdown?
How should I handle this in my threads?
Currently my thread is implemented as below. When Tomcat is instructed to shut down (shutdown.sh), it does complete a clean shutdown and does not hang because of this thread -- why?
class Updater extends Thread {

    volatile boolean interrupted = false;

    @Override
    public void run() {
        Integer lastUpdateLogId = CommonBeanFactory.getXXX()
                .getLastUpdateLogRecordKey(MLConstants.SMART_DB_NAME);
        List<UpdateLog> updateLogRecords;
        while (!interrupted) {
            boolean isConfigurationUpdateRequested = false;
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
                setInterrupted(true);
            }
            updateLogRecords = CommonBeanFactory.getXXX()
                    .getLastFactsUpdateLogRecords(MLConstants.XXXX, lastUpdateLogId);
            for (UpdateLog updateLog : updateLogRecords) {
                if (updateLog.getTable_name().equals(MLConstants.CONFIG_RELOAD)) {
                    isConfigurationUpdateRequested = true;
                }
                lastUpdateLogId = updateLog.getObjectKey();
            }
            if (isConfigurationUpdateRequested) {
                Configuration.getInstance().loadConfiguration();
            }
        }
    }

    public boolean getInterrupted() {
        return interrupted;
    }

    public void setInterrupted(boolean interrupted) {
        this.interrupted = interrupted;
    }
}
I guess I can't reply to answers yet. Eddie's answer is not quite correct.
I found this question because I'm trying to figure out why my webapp doesn't shut down properly; I have threads that don't get killed when I run shutdown.*. In fact, it stops some threads but ultimately just sits there in some limbo state. My class is almost exactly like this one, actually.
Typing Ctrl+C in the foreground Tomcat window (on Windows) does stop everything; however, using the init script that comes with Tomcat does not. Unfortunately, I haven't figured out why yet...
Edit: I figured it out. Most of my monitoring threads are started in a ServletContextListener, but when that context was "destroyed", the child threads weren't notified. I fixed it by simply keeping all child threads in a List and looping through it, calling Thread.interrupt() on each within the contextDestroyed() method (see the sketch below). It's almost the same as what Eddie said about the servlet destroy() method.
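A minimal sketch of that fix, assuming a listener that owns its worker threads (class and field names are illustrative, not my actual code):

import java.util.ArrayList;
import java.util.List;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MonitoringListener implements ServletContextListener {

    private final List<Thread> children = new ArrayList<Thread>();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Thread updater = new Updater();
        children.add(updater);
        updater.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // interrupt every child thread so its run() loop can exit
        for (Thread child : children) {
            child.interrupt();
        }
    }
}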
However, it's not correct that the JVM is summarily shut down when you run shutdown.{sh|bat}. It's more that the script sends a shutdown request to the Tomcat components. It's up to you to receive those shutdown messages and pass them along to your own objects.
Servlets receive a lifecycle event when instructed to shut down. You can use this event to stop your monitoring Thread. That is, when a servlet is started, its init() method is called. When it is stopped, its destroy() method is called.
Override the destroy() method in your servlet and stop the thread there.
When you call shutdown.sh the whole JVM is shut down. Because the JVM stops, all threads (no matter what their state) are forcibly stopped if still running. It's the logical equivalent of calling System.exit(0);
