In Grails we have the following config in DataSource.groovy:

hibernate {
    flush.mode = "commit"
}
which prints "COMMIT" when we log it in a transactional context:

println "session=${sessionFactory.currentSession.flushMode}"

but when we create a new thread, the same line prints "AUTO".
The new thread does seem to pick up the other Hibernate settings (i.e. database, username and session factory), but its currentSession doesn't honour the flush.mode setting.
Can anyone advise?
Are you using the Quartz plugin?
Quartz changes the flush mode:
https://fisheye.codehaus.org/browse/~raw,r=41198/grails-plugins/grails-quartz/tags/LATEST_RELEASE/src/java/org/codehaus/groovy/grails/plugins/quartz/listeners/SessionBinderJobListener.java
public void jobToBeExecuted(JobExecutionContext context) {
    Session session = SessionFactoryUtils.getSession(sessionFactory, true);
    session.setFlushMode(FlushMode.AUTO);
    TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));
    if (LOG.isDebugEnabled()) LOG.debug("Hibernate Session is bounded to Job thread");
}
The workaround is to change the flush mode in the Job:
def sessionFactory  // injected by Grails
.
.
.
// look up the session the listener bound to this thread (don't create a new one)
def session = SessionFactoryUtils.getSession(sessionFactory, false)
session?.setFlushMode(FlushMode.COMMIT)
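Put together, the workaround inside a job might look like this - a sketch in plain Java, assuming the Hibernate 3-era Spring SessionFactoryUtils that the plugin itself uses, with a hypothetical MyJob class:

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.orm.hibernate3.SessionFactoryUtils;

public class MyJob implements Job {

    private SessionFactory sessionFactory; // injected by Spring/Grails

    public void execute(JobExecutionContext context) throws JobExecutionException {
        // the listener above has already bound a session with FlushMode.AUTO;
        // fetch that session (don't create a new one) and switch it back to COMMIT
        Session session = SessionFactoryUtils.getSession(sessionFactory, false);
        if (session != null) {
            session.setFlushMode(FlushMode.COMMIT);
        }
        // ... actual job work ...
    }
}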
I am using hazelcast-3.11.2 with SubZero-0.9 as the global serializer. I am trying to configure Spring Session using this example. When I have more than one node in the cluster, I get the following exception when trying to get the session id:
2019-03-20 15:01:59.088 ERROR 13635 --- [ration.thread-3] c.h.m.i.operation.EntryBackupOperation : [x.x.x.x]:5701 [hazelcast-group] [3.11.2] null

java.lang.NullPointerException: null
    at com.hazelcast.map.AbstractEntryProcessor$EntryBackupProcessorImpl.processBackup(AbstractEntryProcessor.java:83)
    at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:314)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:181)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:166)
    at com.hazelcast.map.impl.operation.EntryBackupOperation.run(EntryBackupOperation.java:60)
    at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:158)
    at com.hazelcast.spi.Operation.call(Operation.java:170)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:208)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:413)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
My instance config looks like this:
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        SubZero.useAsGlobalSerializer(config);
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
                .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
                .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
                .addMapAttributeConfig(attributeConfig)
                .addMapIndexConfig(new MapIndexConfig(
                        HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        return Hazelcast.newHazelcastInstance(config);
    }
}
Removing SubZero from the configuration removes the exception, so it looks like a SubZero issue. I also use this instance as my cache provider and for the Hibernate second-level cache, so I cannot get rid of SubZero.
My thoughts were:

1) Have two different clusters: one for the cache, another for sessions. This doesn't work for me, since I do not know how to configure Spring Session to use a specific Hazelcast instance (by passing an instance name, the bean itself, etc.).

2) Specify which classes should be used with SubZero (see the sketch below) - but since I have plenty, and new classes are going to be added, this is not the best idea.
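For reference, the second option would look something like this - a minimal sketch, assuming SubZero's useForClasses helper and a hypothetical payload class standing in for my real ones:

import com.hazelcast.config.Config;
import info.jerrinot.subzero.SubZero;

public class SubZeroPerClassConfig { // hypothetical holder class

    public static Config build() {
        Config config = new Config();
        // register SubZero for the listed classes only, instead of as the global serializer
        SubZero.useForClasses(config, MySessionPayload.class); // MySessionPayload is hypothetical
        return config;
    }
}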
I would appreciate any help.
I am using the Spring Batch StoredProcedureItemReader to retrieve a result set and insert it into another database using the JpaItemWriter.
Below is my configuration.
@Bean
public JdbcCursorItemReader jdbcCursorItemReader() {
    JdbcCursorItemReader jdbcCursorItemReader = new JdbcCursorItemReader();
    jdbcCursorItemReader.setSql("call myProcedure");
    jdbcCursorItemReader.setRowMapper(new MyRowMapper());
    jdbcCursorItemReader.setDataSource(myDataSource);
    jdbcCursorItemReader.setFetchSize(50);
    jdbcCursorItemReader.setVerifyCursorPosition(false);
    jdbcCursorItemReader.setSaveState(false);
    return jdbcCursorItemReader;
}
@Bean
public Step step() {
    threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
    threadPoolTaskExecutor.setCorePoolSize(50);
    threadPoolTaskExecutor.setMaxPoolSize(100);
    threadPoolTaskExecutor.setThreadNamePrefix("My-TaskExecutor ");
    threadPoolTaskExecutor.setWaitForTasksToCompleteOnShutdown(Boolean.TRUE);
    threadPoolTaskExecutor.initialize();
    return stepBuilderFactory.get("myJob")
            .transactionManager(secondaryTransactionManager)
            .chunk(50)
            .reader(jdbcCursorItemReader())
            .writer(myJpaItemWriter())
            .taskExecutor(threadPoolTaskExecutor)
            .throttleLimit(100)
            .build();
}
The code works fine without multithreading, i.e. without the ThreadPoolTaskExecutor. However, when using it I encounter the error below.
Caused by: java.sql.SQLDataException: Current position is after the last row

could not execute statement [n/a]
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint
I have also tried using the JdbcCursorItemReader; even then I am facing the same error. Any suggestions on how to make this work?
The JdbcCursorItemReader is not thread-safe because it is based on a ResultSet, which is not thread-safe. The StoredProcedureItemReader is also based on a ResultSet, so it is not thread-safe either. See https://stackoverflow.com/a/53964556/5019386
Try to use the JdbcPagingItemReader, which is thread-safe, or if you really have to use the StoredProcedureItemReader, make it thread-safe by wrapping it in a SynchronizedItemStreamReader, as sketched below.
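For illustration, the wrapping could look like this - a sketch reusing the names from the question (myDataSource, MyRowMapper) plus a hypothetical MyItem type:

import org.springframework.batch.item.database.StoredProcedureItemReader;
import org.springframework.batch.item.support.SynchronizedItemStreamReader;

@Bean
public SynchronizedItemStreamReader<MyItem> synchronizedReader() {
    // the non-thread-safe reader becomes the delegate...
    StoredProcedureItemReader<MyItem> delegate = new StoredProcedureItemReader<>();
    delegate.setDataSource(myDataSource);
    delegate.setProcedureName("myProcedure");
    delegate.setRowMapper(new MyRowMapper());
    delegate.setSaveState(false);

    // ...and the wrapper serializes all read() calls across the step's threads
    SynchronizedItemStreamReader<MyItem> reader = new SynchronizedItemStreamReader<>();
    reader.setDelegate(delegate);
    return reader;
}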
Hope this helps.
I'm implementing a health listener by extending the SparkListener class.
@Component
class ClusterHealthListener extends SparkListener with Logging {
  val appRunning = new AtomicBoolean(false)
  val executorCount = new AtomicInteger(0)

  override def onApplicationStart(applicationStart: SparkListenerApplicationStart) = {
    logger.info("Application Start called .. ")
    this.appRunning.set(true)
    logger.info(s"[appRunning = ${appRunning.get}]")
  }

  override def onExecutorAdded(executorAdded: SparkListenerExecutorAdded) = {
    logger.info("Executor add called .. ")
    this.executorCount.incrementAndGet()
    logger.info(s"[executorCount = ${executorCount.get}]")
  }
}
appRunning and executorCount are two variables declared in the ClusterHealthListener class. The ClusterHealthReporterThread will only read their values.
@Component
class ClusterHealthReporterThread @Autowired() (healthListener: ClusterHealthListener) extends Logging {
  new Thread {
    override def run(): Unit = {
      while (true) {
        Thread.sleep(10 * 1000)
        logger.info("Checking range health")
        logger.info(s"[appRunning = ${healthListener.appRunning.get}] [executorCount=${healthListener.executorCount.get}]")
      }
    }
  }.start()
}
The ClusterHealthReporterThread always reports the initialized values, regardless of the changes made to the variables by the main thread. What am I doing wrong? Is this because I inject healthListener into ClusterHealthReporterThread?
Update
I played around a bit and it looks like it has something to do with the way I register the Spark listener.
If I add the Spark listener like this:
val sparkContext = SparkContext.getOrCreate(sparkConf)
sparkContext.addSparkListener(healthListener)
the parent thread will always show appRunning as 'false' but shows the executor count correctly. The child thread (the health reporter) will also show proper executor counts, but appRunning always reports 'false', like in the main thread.
Then I stumbled across Why is SparkListenerApplicationStart never fired? and tried setting the listener at the Spark config level:
.set("spark.extraListeners", "HealthListener class path")
If I do this, the main thread will report 'true' for appRunning and the correct executor counts, but the child thread will always report 'false' and an executor count of 0.
I can't immediately see what's wrong here; you might have found an interesting edge case.
I think @m4gic's comment might be correct: the logging library is perhaps caching that interpolated string? It looks like you are using https://github.com/lightbend/scala-logging, which claims that this interpolation "has no effect on behavior", so maybe not. Could you follow his suggestion to retry without using that feature and report back?
A second possibility: I wonder if there is only one ClusterHealthListener in the system. Perhaps the autowiring is causing a second instance to be created? Can you log the object ids of the ClusterHealthListener reference in both locations and verify that they are the same object?
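For example (shown in Java for brevity; System.identityHashCode works the same from Scala), logging something like this from both places would confirm whether the two references point at the same object:

// log this in both ClusterHealthListener and ClusterHealthReporterThread;
// differing numbers would mean autowiring created two instances
logger.info("healthListener identity = " + System.identityHashCode(healthListener));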
If neither of those suggestions fix this, are you able to post a working example that I can play with?
I've compiled a Docker image of GridGain Pro and run it.
With Java I do the following...
Create the following @Configuration file:
@Configuration
@EnableCaching
public class CustomConfiguration extends CachingConfigurerSupport {

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return (target, method, params) -> {
            StringBuilder sb = new StringBuilder();
            sb.append(target.getClass().getName());
            sb.append(method.getName());
            for (Object obj : params) {
                sb.append("|");
                sb.append(obj.toString());
            }
            return sb.toString();
        };
    }

    @Bean("cacheManager")
    public SpringCacheManager cacheManager(IgniteConfiguration igniteConfiguration) {
        try {
            SpringCacheManager springCacheManager = new SpringCacheManager();
            springCacheManager.setIgniteInstanceName("ignite");
            springCacheManager.setConfiguration(igniteConfiguration);
            springCacheManager.setDynamicCacheConfiguration(new CacheConfiguration<>().setCacheMode(CacheMode.REPLICATED));
            return springCacheManager;
        } catch (Exception ex) {
            // note: swallowing the exception and returning null hides startup failures
        }
        return null;
    }

    @Bean
    @Profile("!dev")
    IgniteConfiguration igniteConfiguration() {
        GridGainConfiguration gridGainConfiguration = new GridGainConfiguration();
        gridGainConfiguration.setRollingUpdatesEnabled(true);
        IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setPluginConfigurations(gridGainConfiguration)
                .setClientMode(true)
                .setPeerClassLoadingEnabled(false)
                .setIgniteInstanceName("MyIgnite");
        DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
        DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
        dataRegionConfiguration.setInitialSize(20 * 1024 * 1024);
        dataRegionConfiguration.setMaxSize(40 * 1024 * 1024);
        dataRegionConfiguration.setMetricsEnabled(true);
        dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
        igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);
        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder tcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder();
        tcpDiscoveryVmIpFinder.setAddresses(Arrays.asList("192.168.99.100:47500..47502"));
        tcpDiscoverySpi.setIpFinder(tcpDiscoveryVmIpFinder);
        igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
        return igniteConfiguration;
    }
}
Start Spring and get the following error:
2018-04-18 12:27:29.277 WARN 12588 --- [ main] .GridEntDiscoveryNodeValidationProcessor : GridGain node cannot be in one cluster with Ignite node [locNodeAddrs=[server/0:0:0:0:0:0:0:1, server/10.29.96.164, server/127.0.0.1, /192.168.56.1, /192.168.99.1], rmtNodeAddrs=[172.17.0.1/0:0:0:0:0:0:0:1%lo, 192.168.99.100/10.0.2.15, 10.0.2.15/127.0.0.1, /172.17.0.1, /192.168.99.100]]
2018-04-18 12:27:29.283 ERROR 12588 --- [ main] o.a.i.internal.IgniteKernal%MyIgnite : Got exception while starting (will rollback startup routine).
I'm trying to use GridGain as a replacement for Redis, with the @Cacheable annotation.
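For context, the usage I'm aiming for is the standard Spring caching pattern - a minimal sketch with a hypothetical service class:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class QuoteService { // hypothetical, not from the configuration above

    // the first call computes the value; subsequent calls with the same key
    // should be served from the Ignite/GridGain-backed "quotes" cache
    @Cacheable("quotes")
    public String quoteFor(String symbol) {
        return "quote-for-" + symbol; // stands in for an expensive lookup
    }
}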
Does anyone have a working GridGain example?
What is causing the error above?
G.
1) Okay, it seems the issue was not providing H2 as a dependency.
2) Also, using GridGain Professional instead of GridGain Enterprise.
G.
"GridGain node cannot be in one cluster with Ignite node" is pretty self-explanatory.
Either you forgot to stop some local Apache Ignite node from earlier experiments.
Or you deliberately tried to make GridGain join an Ignite cluster.
Or better yet, there is an instance of Apache Ignite running somewhere in your local network, and you have set up multicast discovery or some other kind of too-broad discovery, so they're seeing each other.
Maybe the gridgain-core.x.x.x.jar is missing from one of the nodes' classpaths. Check and add it if necessary.
I'm trying to use the JDT SearchEngine to find references to a given object, but I'm getting a NullPointerException when invoking the search method of org.eclipse.jdt.core.search.SearchEngine.
Following is the error trace:
java.lang.NullPointerException
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.findMatches(BasicSearchEngine.java:214)
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.search(BasicSearchEngine.java:515)
    at org.eclipse.jdt.core.search.SearchEngine.search(SearchEngine.java:582)
And the following is the method I'm using to perform the search:
private static void search(String elementName) { // elementName -> a method name
    try {
        SearchPattern pattern = SearchPattern.createPattern(elementName, IJavaSearchConstants.METHOD,
                IJavaSearchConstants.REFERENCES, SearchPattern.R_PATTERN_MATCH);
        IJavaSearchScope scope = SearchEngine.createWorkspaceScope();
        SearchRequestor requestor = new SearchRequestor() {
            @Override
            public void acceptSearchMatch(SearchMatch match) {
                System.out.println("Element - " + match.getElement());
            }
        };
        SearchEngine searchEngine = new SearchEngine();
        SearchParticipant[] searchParticipants = new SearchParticipant[] {
                SearchEngine.getDefaultSearchParticipant() };
        searchEngine.search(pattern, searchParticipants, scope, requestor, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Referring to the "Variables" window of a debugger snapshot (not reproduced here) for the values of the arguments passed to searchEngine.search(), I think the issue is the value of "scope": SearchEngine.createWorkspaceScope() doesn't seem to return the expected values in this case.
NOTE: This is part of a program which runs as a stand-alone Java program (not an Eclipse plug-in) and uses the JDT APIs to parse given source code (via the JDT AST).
Isn't it possible to use the JDT SearchEngine in such a case (a non-plug-in program), or is this issue due to some other reason?
Really appreciate your answer on this.
No. You cannot use the search engine without opening a workspace. The reason is that the SearchEngine relies on the Eclipse filesystem abstraction (IResource, IFile, IFolder, etc.), which is only available when the workspace is open.
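For completeness, inside a running Eclipse (plug-in) environment with an open workspace, the scope would typically be built from real Java projects, along these lines - a sketch assuming a workspace project named "myProject":

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.jdt.core.IJavaElement;
import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.search.IJavaSearchScope;
import org.eclipse.jdt.core.search.SearchEngine;

// works only in a running Eclipse/plug-in environment where the workspace is open
private static IJavaSearchScope projectScope() {
    IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject("myProject"); // hypothetical name
    IJavaProject javaProject = JavaCore.create(project);
    return SearchEngine.createJavaSearchScope(new IJavaElement[] { javaProject });
}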