Does a DOTS tasklet have access to in-memory objects residing in the XPages OSGi container?
Simple use case:
1. A value object is posted to the server via a RESTful service.
2. The service stores the value object in an in-memory concurrent queue.
3. A tasklet polls the queue every n seconds to process the value object(s).
Is this possible with DOTS, or does DOTS assume the object will be persisted to disk as a document before a tasklet can process its data?
Thanks,
-Mark
According to this, it is (almost) impossible:
An XPager's Guide to Process Server-Side Jobs on IBM® Domino®
I have used Indy most of the time in the past, but I decided to modify an existing project and use Synapse instead of Indy. I do have a small question, though: we all know that whenever we create a socket object in Indy, it runs on its own thread; it does all IO operations on that thread and doesn't get freed or shut down until the object is freed, I think.
So, pretty much, I want to mimic this in Synapse.
tl;dr:
How do I create a TTCPBlockSocket object in such a way that it runs all its IO operations on a thread which doesn't get terminated until the object is freed?
Neither library creates a thread to manage client-side socket operations. This allows you to create and use them on the application's main thread - for example, in a VCL event handler which runs an HTTP request - or to move them to a background thread (for example, to wait in the background for messages sent from the server to the client).
There is the TIdTCPServer component in Indy, which creates threads to process incoming data concurrently, but there is no multi-threaded TCP server component in the Synapse library, AFAIK.
tl;dr
There is no significant difference between the Indy and Synapse TCP client components regarding their usage with threads.
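To make the "socket owned by a dedicated IO thread" pattern concrete, here is a minimal sketch - written in Java purely to illustrate the structure, since the same idea applies in Delphi by keeping the TTCPBlockSocket inside a TThread subclass. All names here are illustrative, not from either library:

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A client socket whose IO all happens on one dedicated thread
    // that lives until close() is called.
    public class ThreadOwnedSocket implements AutoCloseable {
        private final Socket socket;
        private final BlockingQueue<Runnable> ops = new LinkedBlockingQueue<>();
        private final Thread ioThread;
        private volatile boolean closed;

        public ThreadOwnedSocket(String host, int port) throws IOException {
            socket = new Socket(host, port);
            ioThread = new Thread(() -> {
                while (!closed) {
                    try {
                        ops.take().run();      // every IO operation executes here
                    } catch (InterruptedException e) {
                        break;                 // close() interrupts the thread
                    }
                }
            }, "socket-io");
            ioThread.start();
        }

        // Queue a write; it is performed on the IO thread, never the caller's.
        public void send(byte[] data) {
            ops.add(() -> {
                try {
                    socket.getOutputStream().write(data);
                } catch (IOException e) {
                    // report or log the failure
                }
            });
        }

        @Override
        public void close() throws IOException {
            closed = true;
            ioThread.interrupt();
            socket.close();
        }
    }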
I have a site that makes the standard data-bound calls, but it also has a few CPU-intensive tasks which are run a few times per day, mainly by the admin.
These tasks involve grabbing data from the DB, running a few different time-consuming algorithms, then re-uploading the data. What would be the best method for making these calls and having them run without blocking the event loop?
I definitely want to keep the calculations on the server, so web workers wouldn't work here. Would a child process be enough, or should I have a separate thread running in the background handling all /api/admin calls?
The basic answer to this scenario in Node.js land is to use the core cluster module - https://nodejs.org/docs/latest/api/cluster.html
It is an acceptable API to:
easily launch worker node.js instances on the same machine (each instance will have its own event loop)
keep a live communication channel for short messages between instances
This way, any work done in the child instance will not block your master event loop.
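A minimal sketch of that pattern, using only the core cluster module (the counting loop stands in for your algorithms, and the message shapes are illustrative):

    // app.js - run with: node app.js
    const cluster = require('cluster');

    if (cluster.isMaster) {
      const worker = cluster.fork();        // child gets its own event loop

      // live communication channel for short messages
      worker.on('message', (msg) => {
        console.log('worker finished:', msg.result);
      });
      worker.send({ cmd: 'runHeavyTask' });
    } else {
      process.on('message', (msg) => {
        if (msg.cmd === 'runHeavyTask') {
          let sum = 0;                      // CPU-bound stand-in work
          for (let i = 0; i < 1e9; i++) sum += i;
          process.send({ result: sum });    // master's loop was never blocked
        }
      });
    }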
We are facing an issue with initializing our cache at server startup / application deployment. Initializing the cache involves:
Querying a database to get the list of items
Making an RMI call for each item
Listening to the data on a JMS queue/topic
Constructing the cache
This initialization process lives in startup code. All of this takes a lot of time, which makes deployment slow and increases server start time.
So what I proposed is to create a thread at startup and run the initialization code in it. I wrote a sample application to demonstrate this.
It involves a ServletContextListener and a filter. In the listener I create a new thread in which the heavy process runs. When it finishes, an event is fired, which the filter listens for. On receiving the event, the filter starts allowing incoming HTTP requests; until then, it redirects all clients to a default page which shows a message that the application is initializing. (A sketch of this pattern follows.)
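For concreteness, here is a minimal sketch of that listener/filter pattern, assuming the standard Servlet API; the class names and the ready-flag mechanism are illustrative, not from the original post:

    // InitListener.java - registered in web.xml or via @WebListener
    import java.util.concurrent.atomic.AtomicBoolean;
    import javax.servlet.*;

    public class InitListener implements ServletContextListener {
        public static final AtomicBoolean READY = new AtomicBoolean(false);

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            new Thread(() -> {
                heavyInit();        // query DB, RMI calls, JMS setup, build cache
                READY.set(true);    // the "event" the filter is waiting for
            }, "cache-init").start();
        }

        private void heavyInit() { /* ... */ }

        @Override
        public void contextDestroyed(ServletContextEvent sce) { }
    }

    // InitGateFilter.java - registered in web.xml or via @WebFilter("/*")
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    public class InitGateFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            if (InitListener.READY.get()) {
                chain.doFilter(req, res);    // app is initialized, let it through
            } else {
                ((HttpServletResponse) res).sendRedirect("initializing.jsp");
            }
        }

        @Override public void init(FilterConfig cfg) { }
        @Override public void destroy() { }
    }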
I presented this approach, and a few concerns were raised:
Ideally we should not create a thread ourselves, because managing the thread will be difficult.
My question is: why can't we create threads like this in web applications?
If this is not good, then what is the best approach?
If you can use managed threads, avoid unmanaged ones. The container has no control over unmanaged threads, and unmanaged threads survive redeployments if you do not terminate them properly. So you have to register unmanaged threads and terminate them somehow (which is not easy either, because you have to handle race conditions carefully).
So one solution is to use @Startup on a singleton bean, and something like this (class name illustrative):
    import javax.ejb.*;

    @Singleton
    @Startup
    public class CacheWarmup {
        @Schedule(second = "*/45", minute = "*", hour = "*")
        protected void asyncInit(final Timer timer) {
            timer.cancel(); // fire only once: cancel the timer on its first run
            // Do init here
            // Set flag that init has been completed
        }
    }
I have learned about this method here: Executing task after deployment of Java EE application
So this gives you an async managed thread, and deployment will not be delayed by @PostConstruct. Note the timer.cancel().
Looking at your actual problem: I suggest using a cache which supports "warm starts".
For example, Infinispan supports cache stores so that the cache content survives restarts. If you have a cluster, there are distributed or replicated caching modes as well.
JBoss 7 embeds Infinispan (it's an integrated service in the same JVM), but it can be operated independently as well.
Another candidate is Redis (and any other key/value store with persistence will do as well).
In general, creating unmanaged threads in a Java EE environment is a bad idea. You will lose container-managed transactions, the user context, and many other Java EE concepts in your unmanaged thread. Additionally, unmanaged threads may block the container on shutdown if your thread handling isn't appropriate.
Which Java EE version are you using? Perhaps you can use Servlet 3.0's async feature.
Or call an asynchronous EJB to do the heavy lifting at startup (from @PostConstruct). The call can then set a flag when its job is done.
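A minimal sketch of that idea, assuming EJB 3.1+; the bean names and the flag are illustrative:

    // BootstrapBean.java
    import javax.annotation.PostConstruct;
    import javax.ejb.*;

    @Singleton
    @Startup
    public class BootstrapBean {
        @EJB
        private InitService initService;

        @PostConstruct
        public void init() {
            initService.warmUpAsync();   // returns immediately; the container runs it
        }
    }

    // InitService.java (separate file)
    import java.util.concurrent.atomic.AtomicBoolean;
    import javax.ejb.*;

    @Stateless
    public class InitService {
        public static final AtomicBoolean DONE = new AtomicBoolean(false);

        @Asynchronous
        public void warmUpAsync() {
            // heavy cache initialization here (DB queries, RMI calls, ...)
            DONE.set(true);              // the flag the rest of the app can check
        }
    }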
My understanding is that in Tomcat, each request takes up one Java (and thus OS) thread.
Imagine I have an app with lots of long-running requests (e.g., a poker game with multiple players) that involves in-game chat, AJAX long-polling, etc.
Is there a way to change the Tomcat configuration/architecture for my webapp so that I'm not using a thread for each request, but can 'intercept' the request and response so they can be processed as part of a queue?
I think you're right that Tomcat likes to handle each request in its own thread, which can become problematic with many concurrent requests. I have the following suggestions:
Configure the maxThreads and acceptCount attributes of the Connector elements in server.xml. This way you cap the number of threads that can be spawned; once that limit is reached, requests get queued, and the acceptCount attribute sets the size of that queue. This is the simplest to implement, but not a good long-term solution. (See the server.xml sketch after this list.)
Configure multiple Connector elements in server.xml and make them share a thread pool by adding an Executor element in server.xml. You may even want to point Tomcat to your own implementation of the Executor interface.
If you want finer-grained control over how requests are serviced, consider implementing your own connector. The protocol attribute of the Connector element in server.xml should point to your new connector. I have done this to add a custom SSL connector, and it works great.
Could you reduce this problem to the general requirement of making Tomcat more scalable in terms of the number of requests/connections? The generic solution to that would be configuring a load balancer to handle multiple instances of Tomcat.
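For the first two suggestions, a minimal server.xml sketch (all values are illustrative, not tuned recommendations):

    <!-- A shared pool: connectors reference it instead of spawning their own threads -->
    <Executor name="sharedPool" namePrefix="catalina-exec-"
              maxThreads="200" minSpareThreads="10"/>

    <!-- Thread count is capped via the executor; acceptCount bounds the wait queue -->
    <Connector port="8080" protocol="HTTP/1.1"
               executor="sharedPool"
               acceptCount="100"
               connectionTimeout="20000"/>

    <!-- A second connector sharing the same pool -->
    <Connector port="8443" protocol="HTTP/1.1"
               executor="sharedPool"
               SSLEnabled="true" scheme="https" secure="true"/>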
I have a Java servlet that acts as a facade to other webservices deployed on the same Tomcat instance. My wrapper servlet creates N more threads, each of which invokes a webservice, collates the responses, and sends them back to the client. The webservices are all deployed on the same Tomcat instance as different applications.
I am seeing thread blocking on this facade wrapper service after a few hours of deployment, which brings down the Tomcat instance. All blocked threads are endpoints of this facade webservice (like http://domain/appContext/facadeService).
Is there a way to control such thread-blocking, due to starvation of available threads that actually do the processing? What are the best practices to prevent such deadlocks?
The common solution to this problem is to use the Executor framework. You need to express each web service call as a Callable and pass it to the executor, either on its own or as a Collection<Callable> (see the Javadoc for the complete list of options).
You have two choices for controlling the time. The first is to use the timeout parameters of the appropriate ExecutorService methods (such as invokeAll), where you specify the maximum web service timeout. The other option is to take the result (which is expressed as a Future<T>) and use .get(long, TimeUnit) to specify the maximum amount of time you are willing to wait for it.
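A minimal sketch of that pattern (the pool size, invokeWebService, and the timeouts are all illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class FacadeCalls {
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        public List<String> callBackends(List<String> urls) throws InterruptedException {
            List<Callable<String>> calls = new ArrayList<>();
            for (String url : urls) {
                calls.add(() -> invokeWebService(url));   // each call wrapped as a Callable
            }

            // Bounds the whole batch: tasks still running after 5s are cancelled.
            List<Future<String>> futures = pool.invokeAll(calls, 5, TimeUnit.SECONDS);

            List<String> responses = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    responses.add(f.get());               // done or cancelled at this point
                } catch (CancellationException | ExecutionException e) {
                    responses.add(null);                  // timed out or failed; leave a gap
                }
            }
            return responses;
        }

        private String invokeWebService(String url) {
            return "";                                    // stand-in for the real HTTP call
        }
    }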