Spring Cache with Redis - How to gracefully handle or even skip Caching in case of Connection Failure to Redis

I've enabled Caching in my Spring app and I use Redis to serve the purpose.
However, whenever a connection failure occurs, the app stops working, whereas I think it would be better to skip caching and continue with the normal execution flow.
So, does anyone have an idea how to do this gracefully in Spring?
Here is the exception I got.
Caused by: org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool

As of Spring Framework 4.1, there is a CacheErrorHandler that you can implement to handle such exceptions. Refer to the javadoc for more details.
You can register it by having your @Configuration class extend CachingConfigurerSupport (see errorHandler()).
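For illustration, here is a minimal sketch of such a configuration (the class and logger names are placeholders): the handler just logs each failure and returns, so a failed cache read behaves like a cache miss and the app falls through to the normal execution flow.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.Cache;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig extends CachingConfigurerSupport {

    private static final Logger log = LoggerFactory.getLogger(CacheConfig.class);

    @Override
    public CacheErrorHandler errorHandler() {
        return new CacheErrorHandler() {
            @Override
            public void handleCacheGetError(RuntimeException e, Cache cache, Object key) {
                // swallowing the exception makes a failed GET look like a cache miss
                log.warn("Cache get failed for key {}: {}", key, e.getMessage());
            }

            @Override
            public void handleCachePutError(RuntimeException e, Cache cache, Object key, Object value) {
                log.warn("Cache put failed for key {}: {}", key, e.getMessage());
            }

            @Override
            public void handleCacheEvictError(RuntimeException e, Cache cache, Object key) {
                log.warn("Cache evict failed for key {}: {}", key, e.getMessage());
            }

            @Override
            public void handleCacheClearError(RuntimeException e, Cache cache) {
                log.warn("Cache clear failed: {}", e.getMessage());
            }
        };
    }
}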

You can use CacheErrorHandler as suggested by Stephane Nicoll, but you should also make sure to set transactionAware to false on your RedisCacheManager in your Redis cache config. This makes the cache operation execute (and fail) immediately, so the error is caught by the CacheErrorHandler, instead of being deferred to the end of the surrounding transaction, where it would bypass the CacheErrorHandler entirely. The configuration looks like this:
@Bean
public RedisCacheManager redisCacheManager(LettuceConnectionFactory lettuceConnectionFactory) {
    JdkSerializationRedisSerializer redisSerializer = new JdkSerializationRedisSerializer(getClass().getClassLoader());
    // RedisCacheConfiguration is immutable, so configure everything in one chain;
    // defaultCacheConfig() already enables key prefixing
    RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofHours(redisDataTTL))
            .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(redisSerializer));
    RedisCacheManager redisCacheManager = RedisCacheManager.RedisCacheManagerBuilder.fromConnectionFactory(lettuceConnectionFactory)
            .cacheDefaults(redisCacheConfiguration)
            .build();
    redisCacheManager.setTransactionAware(false);
    return redisCacheManager;
}

Similar to what Stephane has mentioned, I have done it by consuming the error in a try/catch block and adding a fallback mechanism: if Redis is not up, or the data is not present, I fetch the data from the DB (and if Redis is back up later, I add the same data to Redis to maintain consistency).
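A minimal sketch of that fallback idea (redisTemplate, productRepository, cacheKey, log, and the Product type are all hypothetical fields and helpers of the surrounding service):

// try Redis first; on connection failure, fall back to the database
public Product getProduct(String id) {
    try {
        Product cached = redisTemplate.opsForValue().get(cacheKey(id));
        if (cached != null) {
            return cached;
        }
    } catch (RedisConnectionFailureException e) {
        log.warn("Redis unavailable, falling back to DB: {}", e.getMessage());
    }
    Product fromDb = productRepository.findById(id);
    try {
        // best-effort repopulation to keep the cache consistent once Redis is back
        redisTemplate.opsForValue().set(cacheKey(id), fromDb);
    } catch (RedisConnectionFailureException ignored) {
        // Redis still down; serve from the DB and try again on the next call
    }
    return fromDb;
}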

Related

Efficient OkHttp configuration for Multithreaded environment

What is the best configuration I can use to set up the OkHttp3 client correctly in a multi-threaded environment? I have two main questions:
Connection pool - how do we define the number of available connections in the pool? Can it be scaled at runtime? The number of concurrent users will be very high, and I need to make sure users aren't waiting a long time for a connection to become available from the pool.
I read that OkHttp might end up doing multiple retries in case of failures or timeouts. Is it possible to enable this only for GETs and not POSTs while using just one OkHttp client?
Also, is there anything else I should be considering?
Here is my starting code for the client.
private static final int timeout = 15000;
private static final OkHttpClient okClient = new OkHttpClient()
        .newBuilder()
        .connectTimeout(timeout, TimeUnit.MILLISECONDS)
        .readTimeout(timeout, TimeUnit.MILLISECONDS)
        .writeTimeout(timeout, TimeUnit.MILLISECONDS)
        .retryOnConnectionFailure(false)
        .addInterceptor(new HttpLoggingInterceptor().setLevel(HttpLoggingInterceptor.Level.BASIC))
        .build();
You can configure the connection pool and then pass it into the client builder.
https://square.github.io/okhttp/3.x/okhttp/okhttp3/ConnectionPool.html
See Connection Pool - OkHttp for an example.
For the second question, you can disable automatic retries and handle them in your application code instead, using retryOnConnectionFailure(false) as you show above.
To apply this differently for GETs and POSTs, customise one client from the other, like the following:
val postClient = client.newBuilder().retryOnConnectionFailure(false).build()
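In Java, a sketch of the same idea might look like this: one shared ConnectionPool (the pool sizes are illustrative, not recommendations), a base client used for idempotent GETs, and a derived client for POSTs with retries disabled. Clients created via newBuilder() share the pool and dispatcher of the original.

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public final class HttpClients {
    // illustrative sizing: keep up to 50 idle connections alive for 5 minutes
    private static final ConnectionPool pool = new ConnectionPool(50, 5, TimeUnit.MINUTES);

    // base client for GETs: connection-failure retries left enabled
    static final OkHttpClient getClient = new OkHttpClient.Builder()
            .connectionPool(pool)
            .connectTimeout(15, TimeUnit.SECONDS)
            .readTimeout(15, TimeUnit.SECONDS)
            .writeTimeout(15, TimeUnit.SECONDS)
            .build();

    // derived client for POSTs: shares the pool and dispatcher, never retries
    static final OkHttpClient postClient = getClient.newBuilder()
            .retryOnConnectionFailure(false)
            .build();
}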

EventHubConsumerClient Apache Qpid memory leak?

I am reading events from an Azure EventHub cluster synchronously via the receiveFromPartition method on the EventHubConsumerClient class.
I create the client once like so:
EventHubConsumerClient eventHubConsumerClient = new EventHubClientBuilder()
        .connectionString(eventHubConnectionString)
        .consumerGroup(consumerGroup)
        .buildConsumerClient();
I then just use a ScheduledExecutorService to retrieve events every 1.5s via:
IterableStream<PartitionEvent> receivedEvents = eventHubConsumerClient.receiveFromPartition(
        partitionId, 1, eventPosition);
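For context, the polling described above looks roughly like this (process(...) is a hypothetical handler; the other names are from the snippets above):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    // pull at most one event from the partition on each tick
    IterableStream<PartitionEvent> receivedEvents =
            eventHubConsumerClient.receiveFromPartition(partitionId, 1, eventPosition);
    receivedEvents.forEach(event -> process(event));
}, 0, 1500, TimeUnit.MILLISECONDS);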
The equivalent logic in V3 of the SDK worked fine (using PartitionReceivers), but now I am seeing OOMs in my JVM.
Running a profiler against a local version of the logic, I see the majority of the heap (90%, mainly in the old generation) being taken up by byte[]s referenced by org.apache.qpid.proton.codec.CompositeReadableBuffer. This pattern is not present when I profile the V3 logic.
What could be causing a leak of the AMQP messages here? Do I need to interact with the SDK further, for example close a connection that I'm not aware of after each call?
Any advice would be much appreciated, thanks!
It turned out to be a bug, solved here: https://github.com/Azure/azure-sdk-for-java/issues/13775

Quarkus Transactions on different thread

I have a Quarkus application with an async endpoint that creates an entity with default properties, starts a new thread within the request method to execute a long-running job, and then returns the entity as a response for the client to track.
@POST
@Transactional
public Response startJob(@NonNull JsonObject request) {
    // create my entity
    JobsRecord job = new JobsRecord();
    // set default properties
    job.setName(request.getString("name"));
    // make persistent
    jobsRepository.persist(job);
    // start the long running job on a different thread
    Executor.execute(() -> longRunning(job));
    return Response.accepted().entity(job).build();
}
Additionally, the long-running job makes updates to the entity as it runs, so it must also be transactional. However, the database entity just doesn't get updated.
These are the issues I am facing:
I get the following warnings:
ARJUNA012094: Commit of action id 0:ffffc0a80065:f2db:5ef4e1c7:0 invoked while multiple threads active within it.
ARJUNA012107: CheckedAction::check - atomic action 0:ffffc0a80065:f2db:5ef4e1c7:0 commiting with 2 threads active!
Seems like something that should be avoided.
I tried using @Transactional(value = TxType.REQUIRES_NEW) to no avail.
I tried using the API approach instead of the @Transactional approach on longRunning, as mentioned in the guide, as follows:
@Inject
UserTransaction transaction;

// ...

try {
    transaction.begin();
    jobsRecord.setStatus("Complete");
    jobsRecord.setCompletedOn(new Timestamp(System.currentTimeMillis()));
    transaction.commit();
} catch (Exception e) {
    e.printStackTrace();
    transaction.rollback();
}
but then I get the errors: ARJUNA016051: thread is already associated with a transaction! and ARJUNA016079: Transaction rollback status is:ActionStatus.COMMITTED
I tried both the declarative and API-based methods again, this time with context propagation enabled, but still no luck.
Finally, based on the third approach, I thought keeping the @Transactional on the HTTP request handler and leaving longRunning as-is, without declarative or API-based transaction approaches, would work. However, the database still does not get updated.
Clearly I am misunderstanding how JTA and context propagation work (among other things).
Is there a way (or even a design pattern) that allows me to update database entities asynchronously in a Quarkus web application? Also, why didn't any of the approaches I took have an effect?
Using Quarkus 1.4.1.Final with extensions: [agroal, cdi, flyway, hibernate-orm, hibernate-orm-panache, hibernate-validator, kubernetes-client, mutiny, narayana-jta, rest-client, resteasy, resteasy-jackson, resteasy-mutiny, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui]
You should return an async type from your JAX-RS resource method; the transaction context will then be available when the async stage executes. There is some relevant documentation in the Quarkus guide on context propagation.
I would start by looking at one of the reactive examples, such as the getting-started quickstart. Try annotating each resource endpoint with @Transactional, and the async code will run with a transaction context, as the sketch below shows.
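Here is a minimal sketch of that shape, assuming the smallrye-context-propagation extension already in the OP's list. A ManagedExecutor propagates the CDI and transaction context to the async stage; JobsRecord and JobsRepository are the (hypothetical) types from the question, and the request body is simplified to a plain string:

import java.util.concurrent.CompletionStage;
import javax.inject.Inject;
import javax.transaction.Transactional;
import javax.ws.rs.POST;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.context.ManagedExecutor;

public class JobsResource {

    @Inject
    ManagedExecutor managedExecutor; // propagates transaction + CDI context to async stages

    @Inject
    JobsRepository jobsRepository;   // hypothetical repository from the question

    @POST
    @Transactional
    public CompletionStage<Response> startJob(String name) {
        JobsRecord job = new JobsRecord();
        job.setName(name);
        jobsRepository.persist(job);
        // the async stage runs with the propagated transaction context, so the
        // updates made by longRunning(job) are part of a managed transaction
        return managedExecutor.supplyAsync(() -> {
            longRunning(job);
            return Response.accepted().entity(job).build();
        });
    }

    private void longRunning(JobsRecord job) {
        // ... long-running work that updates the entity ...
        job.setStatus("Complete");
    }
}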

Azure SQL MultipleActiveResultSets

For some reason Azure SQL does not seem to be picking up the MultipleActiveResultSets=True in the connection string of my .NET Core app.
My app's connection string looks like this:
Server=tcp:xxx.database.windows.net,1433;Initial Catalog=xxx;User ID=xxx;Password=xxxxx;MultipleActiveResultSets=True;Encrypt=True;
I still keep getting this error
A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.
All my code has awaits when using async methods, so I have no idea what else to do, because this code works against a local SQL Server.
I am using DI with my DbContext and adding the service like this
services.AddDbContext<Models.Database.DBContext>();
Any help would be greatly appreciated.
UPDATE
The problem was in my Startup.cs and I missed it: OnTokenValidated called a method which used the DbContext, but I never awaited the result. I didn't see any errors from the IDE, which would normally warn you about not awaiting an async method. I updated the call with an await and made the handler async (OnTokenValidated = async context => ...). All fine now.
This:
A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.
is an EF error, not a SQL Server error, so MultipleActiveResultSets is irrelevant.
All my code has awaits when using async methods . . . I am using DI with my DbContext
So probably your DI is allowing a DbContext instance to be shared between requests.

Transaction management and Multithreading in Hibernate 4

I have a requirement to execute a parent task which may or may not have child tasks. Each parent and child task should run in a thread. If something goes wrong in the parent or child execution, the transactions of both the parent and the child task must be rolled back. I am using Hibernate 4.
If I understand correctly, the parent and the child task will run in different threads.
In my opinion that is a very bad idea and not worth considering.
While it may be possible using JTA transactions, it is clearly not possible with Hibernate's transaction management, which delegates to the underlying JDBC connection (you have one connection per session and MUST NOT share a Hibernate session between threads).
Using JTA you would have to handle connection retrieval and transactions yourself, and so could not take advantage of connection pooling and container-managed transactions (Spring or Java EE ones). It would be overcomplicated for practically no performance improvement, as sharing the database connection between two threads will most likely just move the bottleneck one level down.
See how to share one transaction between multi threads
Given the OP's expectations, here is pseudo-code for standalone Hibernate 4 session management with a JDBC transaction (I personally advise going with a container (Java EE or Spring) and JTA container-managed transactions).
In hibernate.cfg.xml
<property name="hibernate.current_session_context_class">thread</property>
SessionFactory:
Configuration configuration = new Configuration();
configuration.configure("hibernate.cfg.xml");
StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
SessionFactory sessionFactory = configuration.buildSessionFactory(builder.build());
The session factory should be exposed as a singleton (whichever way you choose, you must have only one instance for the whole app).
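For example, a minimal holder along these lines (the class name is arbitrary):

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;

public final class HibernateUtil {
    // built once at class loading; the whole app shares this single instance
    private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

    private HibernateUtil() {
    }

    private static SessionFactory buildSessionFactory() {
        Configuration configuration = new Configuration();
        configuration.configure("hibernate.cfg.xml");
        StandardServiceRegistryBuilder builder =
                new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
        return configuration.buildSessionFactory(builder.build());
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}

With that in place, the parent task looks like this: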
public void executeParentTask() {
    try {
        sessionFactory.getCurrentSession().beginTransaction();
        sessionFactory.getCurrentSession().persist(someEntity);
        myChildTask.execute();
        sessionFactory.getCurrentSession().getTransaction().commit();
    } catch (RuntimeException e) {
        sessionFactory.getCurrentSession().getTransaction().rollback();
        throw e; // or display error message
    }
}
getCurrentSession() will return the session bound to the current thread. If you manage the thread execution yourself, you should create the session at the beginning of the thread's execution and close it at the end.
The child task will retrieve the same session as the parent one by calling sessionFactory.getCurrentSession().
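To illustrate (task and entity names are hypothetical): because the child executes on the same thread, getCurrentSession() hands it the session, and therefore the transaction, that the parent already opened:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class MyChildTask {
    private final SessionFactory sessionFactory = HibernateUtil.getSessionFactory();

    public void execute() {
        // same thread as the parent, so this is the session the parent opened;
        // this work joins the parent's transaction and commits or rolls back with it
        Session session = sessionFactory.getCurrentSession();
        session.persist(new ChildEntity());
    }
}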
See https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch03.html#configuration-sessionfactory
http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html_single/#transactions-demarcation-nonmanaged
You may find this interesting too : How to configure and get session in Hibernate 4.3.4.Final?
