How to add properties via Spring TestExecutionListener - spring-test

I created a TestExecutionListener to run a Docker container for integration testing. I would like to be able to inject properties (e.g. the container port) that can be used by @ConfigurationProperties-annotated beans.
I thought I might be able to call TestPropertySourceUtils.addInlinedPropertiesToEnvironment(...), but by the time I can obtain an application context object in the listener, the context has already fully loaded, so I never get a chance to inject any properties.
Is there any way to do this from a TestExecutionListener?
My workaround is to set the properties via System.setProperty(...), but since system properties are JVM-global, that obviously rules out running tests concurrently.
I'm running Spring Boot 2.1.1.RELEASE.
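For reference, the System.setProperty workaround described above looks roughly like this; the listener name, property key, and container-starting helper are all made up for illustration:

```java
// Sketch of the workaround: set JVM-global system properties before the test
// ApplicationContext is created. Spring Boot's Environment includes system
// properties automatically, so @ConfigurationProperties beans will see them.
import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;

public class DockerContainerListener extends AbstractTestExecutionListener {

    @Override
    public void beforeTestClass(TestContext testContext) {
        int port = startContainerSomehow();   // hypothetical helper
        // JVM-global, hence no concurrency between test runs needing
        // different values for the same key.
        System.setProperty("it.container.port", String.valueOf(port));
    }

    @Override
    public int getOrder() {
        // Run early, before the dependency-injection listener triggers
        // context loading.
        return 1000;
    }

    private int startContainerSomehow() {
        return 49153; // placeholder
    }
}
```

The key constraint is ordering: the property must be set before the context is first loaded, which happens lazily when the dependency-injection listener prepares the test instance.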

Related

No transaction behaviour needed in Spring Data

I am trying to figure out how to configure a method not to run within a transaction using Spring. I have read that Spring Data repositories activate transactional behaviour in their methods by default. I don't want this transaction because I make many "save" calls to a repository and each of them is independent of the others. I think creating a transaction for each call to a repository method can slow down the code and hurt the performance of the app. So:
Is this possible, or does every service or DAO method have to run within a transaction?
If it does, why?
If it is possible, how do I configure a method not to run within a transaction? Just by removing the Spring transactional annotation?
Thanks
Spring service beans are not transactional by default. You can add @Transactional at a class or method level to make them transactional. Here are a few links explaining in detail how transactions in Spring work:
What is the difference between defining @Transactional on class vs method
Spring - @Transactional - What happens in background?
https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html#tx-decl-explained
It is also discussed in the thread below:
Is Spring @Service transactional?
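A minimal sketch of the two declaration styles covered in the links above, including one way to opt a single method out of a class-level transaction; the service and method names are invented for illustration:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional  // class level: every public method runs in a transaction
public class OrderService {

    public void placeOrder() {
        // Runs inside a transaction inherited from the class-level annotation.
    }

    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public void auditLog() {
        // Method level overrides class level: NOT_SUPPORTED suspends any
        // active transaction, so this method runs non-transactionally.
    }
}
```

Note that these annotations only take effect when the method is invoked through the Spring proxy, not via self-invocation from another method of the same bean.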

Hazelcast and the need for custom serializers; works when creating the server but not when connecting to existing

We are using Hazelcast to store stuff in distributed maps. We are having a problem with remote servers and I need some feedback on what we can do to resolve the issue.
We create the server - WORKS
We create a new server (Hazelcast.newHazelcastInstance) inside our application's JVM. The Hazelcast Config object we pass in has a bunch of custom serializers defined for all the types we are going to put in the maps. Our objects are a mixture of Protobufs, plain Java objects, and a combination of the two. The server starts, we can put objects in the map and get objects back out later. We recently decided to start running Hazelcast in its own dedicated server, so we tried the scenario below.
Server already exists externally, we connect as a client - DOESN'T WORK
Rather than creating our Hazelcast instance we connect to a remote instance that is already running. We pass in a config with all the same serializers we used before. We successfully connect to Hazelcast and we can put stuff in the map (works as far as I can tell) but we don't get anything back out. No events get fired letting our listeners know objects were added to a map.
I want to be able to connect to a Hazelcast instance that is already running outside of our JVM. It is not working for our use case and I am not sure how it is supposed to work.
Does the JVM running Hazelcast externally need in its class loader all of the class types we might put into the map? It seems like that might be where the problem is but wouldn't that make it very limiting to use Hazelcast?
How do you typically manage those class loader issues?
Assuming the above is true, is there a way to tell Hazelcast we will serialize the objects ourselves before even putting them in the map? Basically we would give Hazelcast an ID and a byte array, and that is all we would expect back in return. If so, that would avoid the entire class loader issue I think we are running into. We do not need to be able to search on objects based on their fields. We just need to know when objects come and go and what their IDs are.
@Jonathan, when using a client-server architecture, unless you use queries or other operations that require data to be deserialized on the cluster, members don't need to know anything about serialization. They just store the already-serialized data and serve it. If the listeners you mentioned are on the client app, it should be working fine.
Hazelcast has a feature called User Code Deployment (https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#member-user-code-deployment-beta), but it's mainly for user classes. Serialization-related config should be present on the members, or you should add it later and do a rolling restart.
If you can share some of the exceptions, setup, etc., I can give more specific answers as well.
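The "ID plus byte array" idea from the question can be sketched as follows: if the client serializes values itself and stores plain byte[], the members only ever handle opaque blobs and never need the value classes on their classpath. The map name and serialization helper below are illustrative, not part of any Hazelcast API:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class PreSerializedMapExample {
    public static void main(String[] args) {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        // Members see only opaque byte arrays; no custom serializers or
        // user classes are required on the server side.
        IMap<String, byte[]> blobs = client.getMap("blobs");

        byte[] payload = serializeSomehow("hello");  // hypothetical helper,
        blobs.put("id-42", payload);                 // e.g. protobuf toByteArray()

        byte[] back = blobs.get("id-42");            // deserialize on the client
        client.shutdown();
    }

    private static byte[] serializeSomehow(String s) {
        return s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```

The trade-off, as the question anticipates, is that the cluster cannot run queries or entry processors over such values, since it cannot deserialize them.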

How to get which hazelcast instance is running on the node

I am running many instances of Hazelcast with different group names (i.e. different clusters) on different nodes. Now I want to write a program that runs on a given node, determines which HazelcastInstance is running on that node, and accesses its config. I don't want this program to create a new Hazelcast instance. How can this be done?
It depends.
You can always look up the HazelcastInstance(s) using Hazelcast.getHazelcastInstanceByName if you know the name, or get them all using Hazelcast.getAllHazelcastInstances.
In some cases you want to get the HazelcastInstance after deserialization (e.g. when you send a task to an HZ instance using an IExecutorService). In that case you can implement the HazelcastInstanceAware interface to have the instance injected.
So it depends a bit on your setup.
You can load the config object using HazelcastInstance.getConfig. The instance doesn't know whether the config was built from an XML file or programmatically.
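The two lookups mentioned above can be sketched like this; note that these static methods only find instances created in the same JVM as the calling code, and the instance name below is made up:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class FindLocalInstances {
    public static void main(String[] args) {
        // Enumerate every instance created in THIS JVM; nothing new is started.
        for (HazelcastInstance hz : Hazelcast.getAllHazelcastInstances()) {
            Config config = hz.getConfig();
            System.out.println(hz.getName() + " -> group "
                    + config.getGroupConfig().getName());
        }

        // Or, if the instance name is already known:
        HazelcastInstance named =
                Hazelcast.getHazelcastInstanceByName("my-instance");
    }
}
```

If the Hazelcast instance lives in a different JVM on the same node, these lookups won't see it; you would need a client connection instead.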

Hazelcast, Glassfish 3.1.2.2 and CDI/Weld serialization class-not-found error?

I am testing Hazelcast 3.1.3 and its HTTP Session Clustering/WM. My target application is a JSF 2.1/PrimeFaces app and it makes heavy use of CDI.
It has some javax.enterprise.context.SessionScoped beans in it, among many other things.
I have written a simple WAR matching this, and it uses a very simple SessionScoped bean. I have configured HC/WM following the HC directions here: http://www.hazelcast.org/docs/latest/manual/html-single/#HttpSessionClustering
Note: I am not running an embedded HC; rather, I configured WM to be a client to an already running HC 'server' instance. So far I have my single GF instance and the HC server running on the same box for this test.
This sort of works, in that WM/HC connects and creates sessions and such. The HC server sees and accepts the WM client connections.
However, once more interesting stuff (interactions with SessionScoped objects in the web app) starts to happen, HC/WM starts throwing ClassNotFoundExceptions, in particular for org.jboss.weld.context.conversation.ConversationIdGenerator.
I think this is because CDI in GF 3.1.2.2 is provided by a Weld OSGi bundle, which gets loaded by a lower-level class loader that is 'closer' to the session manager within GF. However, when the WM/HC filter (loaded by the WAR class loader) visits the CDI/Weld-proxied or wrapped session object to serialize it, it cannot see the Weld classes (I have verified that ConversationIdGenerator is serializable).
Does anybody have any ideas on how to work around this issue?
I suppose delivering Weld inside my WAR might work, or making Weld available in the common class loader might work, but both are sub-optimal.
Hmm... will this be an endemic problem whenever CDI is provided as a service by an app container but session clustering is provided as an application-level facet? (Or will this sort of issue happen in WildFly and others too?)

multiple requests accessing the same web method

It is a very basic question: when two requests access the same method of a singleton object, how is that handled? Does Tomcat (or any other container) create a ThreadLocal instance for every object?
I am assuming that by "web method" you mean a method in your code that a servlet container like Tomcat's Catalina maps an HTTP request to. Tomcat services each request in its own thread, and those threads all end up invoking the method on the sole instance of the singleton object; no per-thread copy of the object is created. The maxThreads attribute in server.xml sets a limit on how many such threads are spawned at a time.
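The model described above can be sketched in plain Java: one shared instance, many worker threads calling the same method, and shared state on the instance that must be thread-safe. The class names and thread counts are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SingletonRequestDemo {

    // Stand-in for the lone servlet/singleton: one shared instance,
    // called concurrently from many worker threads.
    static class MyService {
        private final AtomicInteger hits = new AtomicInteger();

        int handleRequest() {
            // Instance state is shared across all request threads, so it
            // needs synchronization (here, an AtomicInteger).
            return hits.incrementAndGet();
        }

        int totalHits() {
            return hits.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MyService service = new MyService();                     // one instance only
        ExecutorService pool = Executors.newFixedThreadPool(8);  // like maxThreads
        for (int i = 0; i < 100; i++) {
            pool.submit(service::handleRequest);                 // 100 "requests"
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(service.totalHits());                 // prints 100
    }
}
```

If the service instead kept per-request state in plain instance fields without synchronization, concurrent requests would corrupt it, which is why servlet code is expected to be stateless or properly guarded.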