I have a requirement to execute a parent task which may or may not have a child task. Each parent and child task should run in a thread. If something goes wrong in the parent or child execution, the transactions of both the parent and the child task must be rolled back. I am using Hibernate 4.
If I got it right, the parent and the child task will run in different threads.
In my opinion, that's a very bad idea and not worth considering.
While it may be possible using JTA transactions, it's clearly not the case with Hibernate's transaction management delegating to the underlying JDBC connection (you have one connection per session and MUST NOT share a Hibernate session between threads).
Using JTA you would have to handle connection retrieval and transactions yourself, so you could not take advantage of connection pooling and container-managed transactions (Spring or Java EE ones). It may be overcomplicated for roughly no performance improvement, as sharing the database connection between two threads will probably just move the bottleneck one level down.
See: how to share one transaction between multiple threads
Per the OP's expectation, here is pseudo code for Hibernate 4 standalone session management with JDBC transactions (I personally advise going with a container (Java EE or Spring) and JTA container-managed transactions).
In hibernate.cfg.xml
<property name="hibernate.current_session_context_class">thread</property>
SessionFactory:
Configuration configuration = new Configuration();
configuration.configure("hibernate.cfg.xml");
StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
SessionFactory sessionFactory = configuration.buildSessionFactory(builder.build());
The session factory should be exposed as a singleton (whichever way you choose, you must have only one instance for the whole app).
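For example, a minimal holder sketch (the HibernateUtil class name is illustrative, not part of the original answer):

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {

    private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

    private HibernateUtil() {
    }

    private static SessionFactory buildSessionFactory() {
        // Same bootstrap code as above, run exactly once
        Configuration configuration = new Configuration();
        configuration.configure("hibernate.cfg.xml");
        StandardServiceRegistryBuilder builder =
                new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
        return configuration.buildSessionFactory(builder.build());
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}

The parent task then demarcates the transaction around both its own work and the child task's: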
public void executeParentTask() {
    try {
        sessionFactory.getCurrentSession().beginTransaction();
        sessionFactory.getCurrentSession().persist(someEntity);
        myChildTask.execute();
        sessionFactory.getCurrentSession().getTransaction().commit();
    }
    catch (RuntimeException e) {
        sessionFactory.getCurrentSession().getTransaction().rollback();
        throw e; // or display error message
    }
}
getCurrentSession() will return the session bound to the current thread. If you manage the thread execution yourself, you should create the session at the beginning of the thread's execution and close it at the end.
The child task will retrieve the same session as the parent one using sessionFactory.getCurrentSession().
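A minimal sketch of such a child task (class and entity names are illustrative); it must execute in the same thread as the parent for getCurrentSession() to return the same session:

import org.hibernate.SessionFactory;

public class MyChildTask {

    private final SessionFactory sessionFactory;
    private final Object someChildEntity; // placeholder for whatever the child persists

    public MyChildTask(SessionFactory sessionFactory, Object someChildEntity) {
        this.sessionFactory = sessionFactory;
        this.someChildEntity = someChildEntity;
    }

    public void execute() {
        // Same session and transaction as the parent: no begin/commit/rollback here,
        // the parent owns the transaction demarcation.
        sessionFactory.getCurrentSession().persist(someChildEntity);
    }
}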
See https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch03.html#configuration-sessionfactory
http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html_single/#transactions-demarcation-nonmanaged
You may find this interesting too : How to configure and get session in Hibernate 4.3.4.Final?
I want my REST controller POST endpoint to only allow one thread to execute the method; every other thread shall get 429 until the first thread is finished.
@ResponseStatus(code = HttpStatus.CREATED)
@PostMapping(value = "/myApp", consumes = "application/json", produces = "application/json")
public Execution execute(@RequestBody ParameterDTO StartDateParameter)
{
    if (StartDateParameter.getStartDate() == null) {
        throw new ResponseStatusException(HttpStatus.BAD_REQUEST);
    } else {
        if (Executer.isProcessAlive()) {
            throw new ResponseStatusException(HttpStatus.TOO_MANY_REQUESTS);
        } else {
            return Executer.execute(StartDateParameter);
        }
    }
}
When I send multithreaded requests, every request gets 201. So I think the requests get in before the isProcessAlive() check is evaluated. How can I change it to only process the first request and "block" every other one?
The lifecycle of a controller in Spring is managed by the container and, by default, it is a singleton, which means there is one instance of the bean created at startup and multiple threads can use it. The only way to make it single threaded is to use a synchronized block or to handle the request call through an executor service, but that defeats the entire purpose of using the Spring framework.
Spring provides other means to make your code thread safe. You can use the @Scope annotation to override the default scope. Since you are using a RestController, you could use the "request" scope (@Scope("request")), which creates a new instance to process every HTTP request. Doing it this way will ensure that only one thread is accessing your controller code at any given time.
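For illustration, here is roughly where that scope annotation would go (Execution, Executer and ParameterDTO are the question's own types; the controller class name is made up):

import org.springframework.context.annotation.Scope;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

@RestController
@Scope("request") // a new controller instance is created for every HTTP request
public class MyAppController {

    @ResponseStatus(code = HttpStatus.CREATED)
    @PostMapping(value = "/myApp", consumes = "application/json", produces = "application/json")
    public Execution execute(@RequestBody ParameterDTO startDateParameter) {
        if (startDateParameter.getStartDate() == null) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST);
        }
        if (Executer.isProcessAlive()) {
            throw new ResponseStatusException(HttpStatus.TOO_MANY_REQUESTS);
        }
        return Executer.execute(startDateParameter);
    }
}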
I have a Quarkus application with an async endpoint that creates an entity with default properties, starts a new thread within the request method to execute a long running job, and then returns the entity as a response for the client to track.
@POST
@Transactional
public Response startJob(@NonNull JsonObject request) {
    // create my entity
    JobsRecord job = new JobsRecord();
    // set default properties
    job.setName(request.getString("name"));
    // make persistent
    jobsRepository.persist(job);
    // start the long running job on a different thread
    Executor.execute(() -> longRunning(job));
    return Response.accepted().entity(job).build();
}
Additionally, the long running job will make updates to the entity as it runs and so it must also be transactional. However, the database entity just doesn't get updated.
These are the issues I am facing:
I get the following warnings:
ARJUNA012094: Commit of action id 0:ffffc0a80065:f2db:5ef4e1c7:0 invoked while multiple threads active within it.
ARJUNA012107: CheckedAction::check - atomic action 0:ffffc0a80065:f2db:5ef4e1c7:0 commiting with 2 threads active!
Seems like something that should be avoided.
I tried using @Transactional(value = TxType.REQUIRES_NEW) to no avail.
I tried using the API approach instead of the @Transactional approach on longRunning, as mentioned in the guide, as follows:
@Inject UserTransaction transaction;
...
try {
    transaction.begin();
    jobsRecord.setStatus("Complete");
    jobsRecord.setCompletedOn(new Timestamp(System.currentTimeMillis()));
    transaction.commit();
} catch (Exception e) {
    e.printStackTrace();
    transaction.rollback();
}
but then I get the errors: ARJUNA016051: thread is already associated with a transaction! and ARJUNA016079: Transaction rollback status is:ActionStatus.COMMITTED
I tried both the declarative and API-based methods again, this time with context propagation enabled, but still no luck.
Finally, based on the third approach, I thought that keeping @Transactional on the HTTP request handler and leaving longRunning as is, without declarative or API-based transactions, would work. However, the database still does not get updated.
Clearly I am misunderstanding how JTA and context propagation works (among other things).
Is there a way (or even a design pattern) that allows me to update database entities asynchronously in a quarkus web application? Also why wouldn't any of the approaches I took have any effect?
Using quarkus 1.4.1.Final with ext: [agroal, cdi, flyway, hibernate-orm, hibernate-orm-panache, hibernate-validator, kubernetes-client, mutiny, narayana-jta, rest-client, resteasy, resteasy-jackson, resteasy-mutiny, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui]
You should return an async type from your JAX-RS resource method; the transaction context will then be available when the async stage executes. There is some relevant documentation in the Quarkus guide on context propagation.
I would start by looking at one of the reactive examples, such as the getting started quickstart. Try annotating each resource endpoint with @Transactional and the async code will run with a transaction context.
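A rough sketch of that shape, reusing the entities from the question (the ManagedExecutor injection comes from smallrye-context-propagation; the path, the repository type name and the overall wiring are assumptions, not a verified solution):

import java.util.concurrent.CompletionStage;
import javax.inject.Inject;
import javax.json.JsonObject;
import javax.transaction.Transactional;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.context.ManagedExecutor;

@Path("/jobs") // path is illustrative
public class JobsResource {

    @Inject
    ManagedExecutor executor; // context-propagating executor

    @Inject
    JobsRepository jobsRepository; // the repository from the question (type name assumed)

    @POST
    @Transactional
    public CompletionStage<Response> startJob(JsonObject request) {
        JobsRecord job = new JobsRecord();
        job.setName(request.getString("name"));
        jobsRepository.persist(job);

        // Per the answer above, context propagation should make the transaction
        // context available when this stage executes.
        return executor.supplyAsync(() -> {
            longRunning(job);
            return Response.accepted().entity(job).build();
        });
    }

    void longRunning(JobsRecord job) {
        // update the entity here
    }
}

Note that, unlike the original endpoint, this completes the HTTP response only once the async stage finishes; if the job must keep running after the 202 is returned, a different design is needed.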
I have a Spring Boot application that uses Spring Data for Cassandra. One of the requirements is that the application will start even if the Cassandra Cluster is unavailable. The application logs the situation and all its endpoints will not work properly, but the application does not shut down. It should keep retrying to connect to the cluster during this time. When the cluster is available, the application should start to operate normally.
If I am able to connect during the application start and the cluster becomes unavailable after that, the cassandra java driver is capable of managing the retries.
How can I manage the retries during application start and still use Cassandra Repositories from Spring Data?
Thanks.
It is possible to start a Spring Boot application when Apache Cassandra is not available, but you need to define the Session and CassandraTemplate beans on your own with @Lazy. The beans are provided out of the box by CassandraAutoConfiguration but are initialized eagerly (the default behavior), which creates a Session. The Session requires a connection to Cassandra, which will prevent startup if it's not initialized lazily.
The following code will initialize the resources lazily:
@Configuration
public class MyCassandraConfiguration {

    @Bean
    @Lazy
    public CassandraTemplate cassandraTemplate(@Lazy Session session, CassandraConverter converter) throws Exception {
        return new CassandraTemplate(session, converter);
    }

    @Bean
    @Lazy
    public Session session(CassandraConverter converter, Cluster cluster,
            CassandraProperties cassandraProperties) throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(cassandraProperties.getKeyspaceName());
        session.setSchemaAction(SchemaAction.NONE);
        return session.getObject();
    }
}
One of the requirements is that the application will start even if the Cassandra Cluster is unavailable
I think you should read this section from the Java driver docs: http://datastax.github.io/java-driver/manual/#cluster-initialization
The Cluster object does not connect automatically unless some calls are executed.
Since you're using Spring Data Cassandra (which I do not recommend, since it has fewer features than the plain Mapper Module of the Java driver ...) I don't know whether the Cluster or Session objects are exposed directly to users ...
For retry, you can put the cluster.init() call in a try/catch block; if the cluster is still unavailable you'll catch a NoHostAvailableException, according to the docs. Upon the exception, you can schedule a retry of cluster.init() for later.
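A rough sketch of that retry idea, assuming the DataStax Java driver 3.x Cluster API (the class name and the retry interval are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ClusterInitRetrier {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void initWithRetry(Cluster cluster) {
        try {
            cluster.init(); // connects to the contact points and fetches cluster metadata
        } catch (NoHostAvailableException e) {
            // Cluster still unavailable: log and try again later
            scheduler.schedule(() -> initWithRetry(cluster), 30, TimeUnit.SECONDS);
        }
    }
}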
I am trying to solve this problem. I have a WCF service. A client can call a web method from this service which only "fires" another method (which only writes data to the database) in another thread.
Code is here:
//this method will write data to database
public void WriteToDb()
{
}
//this web method will call only mehod WriteToDb() in another thread
public void SomeWebMethod()
{
new Task(WriteToDb).Start();
}
The problem is that 5 clients can call the web method at the same time. This causes the WriteToDb method to be called 5 times in 5 threads.
In all 5 cases the WriteToDb method will use the same data.
My aim is to achieve this behavior: 5 clients call the web method SomeWebMethod, and WriteToDb runs in 5 threads.
But I would like the first thread to execute, then the second thread, etc., and the 5th thread at the end.
I don't want to run the WriteToDb method in 5 threads at the same time.
So maybe I can use lock.
private object locker = new object();

//this method will write data to database
public void WriteToDb()
{
    lock (locker)
    {
        //write to DB
    }
}
I am not sure, because the .NET assembly is hosted in an app domain and the app domain is hosted in a Windows process. I would like to avoid deadlocks.
What happens if I have a machine with 6 CPUs? Should I use a Mutex instead of lock?
Thank you for your help...
I'm not particularly sure what you are writing to the DB, and to be frank your question is only loosely coupled to WCF; try reading CLR via C# on multithreading, etc.
Also, regarding WCF, you can set up how your service object is created upon requests, i.e. per call, per session, or singleton, and for the latter specify whether its methods will be queued or called on the object concurrently.
So depending on the architecture you choose, you can either rely on WCF's ability to host a single object with the logic you described, or go the way you tried.
Links
http://msdn.microsoft.com/en-us/magazine/cc163590.aspx
http://msdn.microsoft.com/en-us/library/ms731193.aspx
A lock is fine here, but you should make your locker object static so the same object instance is used in the lock every time.
It does not matter how many cores you have - if you hold the lock on an object then any other threads that attempt to acquire the lock will wait until the lock is released.
A deadlock can only occur if you are acquiring multiple locks in different orders in different threads.
I suggest you read Joe Albahari's excellent free ebook
I have a problem regarding Hibernate and lazy loading.
Background:
I have a Spring MVC web app, I use Hibernate for my persistence layer. I'm using OpenSessionInViewFilter to enable me to lazy load entities in my view layer. And I'm extending the HibernateDaoSupport classes and using HibernateTemplate to save/load objects. Everything has been working quite well. Up until now.
The Problem:
I have a task which can be started via a web request. When the request is routed to a controller, the controller will create a new Runnable for this task and start the thread to run the task. So the original thread will return and the Hibernate session which was put in ThreadLocal (by OpenSessionInViewFilter) is not available to the new thread for the Task. So when the task does some database stuff I get the infamous LazyInitializationException.
Can any one suggest the best way I can make a Hibernate session available to the Task?
Thanks for reading.
Make your Runnable a Spring bean and add the @Transactional annotation over run(). Be warned though that this asynchronous task won't run in the same transaction as your web request.
And please don't start a new thread yourself; use a thread pool/executor.
Here is a working example of how to use the Hibernate session inside a Runnable:
@Service
@Transactional
public class ScheduleService {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private ThreadPoolTaskScheduler scheduler;

    public void doSomething() {
        ScheduledFuture sf = scheduler.schedule(new Runnable() {
            @Override
            public void run() {
                SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(scheduler);
                final Session session = sessionFactory.openSession();
                // Now you can use the session
            }
        }, new CronTrigger("25 8 * * * *"));
    }
}
SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext() takes a reference to any Spring managed bean, so the scheduler itself is fine. Any other Spring managed bean would work as well.
Do I understand correctly that you want to perform some action in a completely dedicated background thread? In that case, I recommend not relying on Hibernate's OpenSessionInViewFilter and its session handling for that thread at all, because, as you correctly noted, it runs in a decoupled thread and therefore cannot use the session (and the entities loaded) of the original thread (i.e., the one that dealt with the initial HttpRequest). I think it would be wise to open and close the session yourself within that thread.
Otherwise, you might question why you are running that operation in a separate thread at all. Maybe it is sufficient to run the operation normally and present the user with some 'loading' screen in the meantime?
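As an illustration of the "open and close the session yourself" approach mentioned above, a minimal sketch (names are illustrative; the SessionFactory would typically be injected or passed in):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BackgroundTask implements Runnable {

    private final SessionFactory sessionFactory;

    public BackgroundTask(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public void run() {
        Session session = sessionFactory.openSession(); // a fresh session for this thread
        Transaction tx = session.beginTransaction();
        try {
            // ... do the database work of the task here ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close(); // always release the underlying connection
        }
    }
}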