perf4J with MDC - log4j

Does anyone know if perf4J supports log4j's MDC? All my log statements are appended with MDC values; however, the perf4J log statements don't show the MDC value.
Please see below; I expect MDCMappedValue to be shown at the end of the [TimingLogger] log statements as well.
18:35:48,038 INFO [LoginAction] Logging in user kermit into application - MDCMappedValue
18:35:48,749 INFO [PostAuthenticationHandler] doPostAuthenticate() started - MDCMappedValue
18:36:03,653 INFO [PostAuthenticationHandler] Profile Loaded for kermit - MDCMappedValue
18:36:08,224 INFO [TimingLogger] start[1300905347914] time[20310] tag[HTTP.Success] message[/csa/login.seam] -
18:36:09,142 INFO [TimingLogger] start[1300905368240] time[902] tag[HTTP.Success] message[/csa/home.seam] -

My test seems to produce the expected results. Notice that I use the Log4JStopWatch, not the LoggingStopWatch:
package test;

import org.apache.log4j.Logger;
import org.apache.log4j.MDC;
import org.perf4j.StopWatch;
import org.perf4j.log4j.Log4JStopWatch;

public class Perf4jMdcTest {

    private Logger _ = Logger.getLogger(Perf4jMdcTest.class);

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new Thread() {
                @Override
                public void run() {
                    MDC.put("id", getName());
                    Perf4jMdcTest perf4jMdcTest = new Perf4jMdcTest();
                    perf4jMdcTest.test1();
                    perf4jMdcTest.test2();
                    MDC.clear();
                }
            }.start();
        }
    }

    private void test1() {
        _.info("test1");
        StopWatch stopWatch = new Log4JStopWatch();
        stopWatch.start("a");
        try { Thread.sleep(300); }
        catch (InterruptedException e) { }
        stopWatch.stop();
    }

    private void test2() {
        _.info("test2");
        StopWatch stopWatch = new Log4JStopWatch();
        stopWatch.start("b");
        try { Thread.sleep(600); }
        catch (InterruptedException e) { }
        stopWatch.stop();
    }
}
My log4j.properties is as follows:
log4j.rootLogger=debug, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the date, level, MDC value for key "id", and the message.
log4j.appender.stdout.layout.ConversionPattern=%d [%-5p] MDC:%X{id} - %m%n
And the output is:
2012-03-26 20:37:43,049 [INFO ] MDC:Thread-1 - test1
2012-03-26 20:37:43,050 [INFO ] MDC:Thread-3 - test1
2012-03-26 20:37:43,049 [INFO ] MDC:Thread-2 - test1
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-2 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-2 - test2
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-1 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,354 [INFO ] MDC:Thread-1 - test2
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-3 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,354 [INFO ] MDC:Thread-3 - test2
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-2 - start[1332787063354] time[600] tag[b]
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-1 - start[1332787063354] time[601] tag[b]
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-3 - start[1332787063354] time[601] tag[b]
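The likely cause of the missing value in the original output is therefore the layout rather than perf4j itself: perf4j emits its timing statements through a logger named org.perf4j.TimingLogger by default, so the MDC value appears only if the pattern of whichever appender serves that logger includes %X. A minimal log4j.properties sketch (the appender name perf4jAppender is invented for illustration):
# Route perf4j's timing logger to an appender whose pattern prints the MDC key "id".
log4j.logger.org.perf4j.TimingLogger=INFO, perf4jAppender
log4j.additivity.org.perf4j.TimingLogger=false
log4j.appender.perf4jAppender=org.apache.log4j.ConsoleAppender
log4j.appender.perf4jAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.perf4jAppender.layout.ConversionPattern=%d [%-5p] MDC:%X{id} - %m%n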

Related

JHipster Integration Tests with LocalStack testcontainers

I have a JHipster (7.9.3) application with Kafka and Postgres TestContainers used in the integration tests. I want to integrate my application with S3 storage. For this purpose, I want to write some integration tests using the LocalStack testcontainer.
I have created a new annotation:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface EmbeddedS3 {
}
and added the localstack dependency to the project:
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <scope>test</scope>
</dependency>
created a LocalStackTestContainer as:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.output.Slf4jLogConsumer;
import org.testcontainers.utility.DockerImageName;

public class LocalStackTestContainer implements InitializingBean, DisposableBean {

    private LocalStackContainer localStackContainer;
    private static final Logger log = LoggerFactory.getLogger(LocalStackTestContainer.class);

    @Override
    public void destroy() {
        if (null != localStackContainer) {
            localStackContainer.close();
        }
    }

    @Override
    public void afterPropertiesSet() {
        if (null == localStackContainer) {
            localStackContainer =
                new LocalStackContainer(DockerImageName.parse("localstack/localstack:1.2.0"))
                    .withServices(LocalStackContainer.Service.S3)
                    .withLogConsumer(new Slf4jLogConsumer(log))
                    .withReuse(true);
        }
        if (!localStackContainer.isRunning()) {
            localStackContainer.start();
        }
    }

    public LocalStackContainer getLocalStackContainer() {
        return localStackContainer;
    }
}
and adjusted TestContainersSpringContextCustomizerFactory.createContextCustomizer method with:
EmbeddedS3 s3LocalStackAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedS3.class);
if (null != s3LocalStackAnnotation) {
    log.debug("detected the EmbeddedS3 annotation on class {}", testClass.getName());
    log.info("Warming up the localstack S3");
    if (null == localStackTestContainer) {
        localStackTestContainer = beanFactory.createBean(LocalStackTestContainer.class);
        beanFactory.registerSingleton(LocalStackTestContainer.class.getName(), localStackTestContainer);
        // ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(LocalStackTestContainer.class.getName(), localStackTestContainer);
    }
}
Added the @EmbeddedS3 annotation to the @IntegrationTest annotation as:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@SpringBootTest(classes = {AgentMarlinApp.class, AsyncSyncConfiguration.class, TestSecurityConfiguration.class, TestLocalStackConfiguration.class})
@EmbeddedKafka
@EmbeddedSQL
@EmbeddedS3
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
@ActiveProfiles({"testdev", "it-test"})
public @interface IntegrationTest {
    // 5s is the spring default https://github.com/spring-projects/spring-framework/blob/29185a3d28fa5e9c1b4821ffe519ef6f56b51962/spring-test/src/main/java/org/springframework/test/web/reactive/server/DefaultWebTestClient.java#L106
    String DEFAULT_TIMEOUT = "PT5S";
    String DEFAULT_ENTITY_TIMEOUT = "PT5S";
}
To initialize the AmazonS3 client I have a @TestConfiguration:
@Bean
public AmazonS3 amazonS3(LocalStackTestContainer localStackTestContainer) {
    LocalStackContainer localStack = localStackTestContainer.getLocalStackContainer();
    return AmazonS3ClientBuilder
        .standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                localStack.getEndpointOverride(LocalStackContainer.Service.S3).toString(),
                localStack.getRegion()
            )
        )
        .withCredentials(
            new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(localStack.getAccessKey(), localStack.getSecretKey())
            )
        )
        .build();
}
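For reference, a minimal sketch of the surrounding class; the name TestLocalStackConfiguration matches the class referenced from @IntegrationTest above, but the exact wrapper shown here is an assumption:
@TestConfiguration
public class TestLocalStackConfiguration {

    @Bean
    public AmazonS3 amazonS3(LocalStackTestContainer localStackTestContainer) {
        // ... body as shown above
    }
}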
I have 2 integration test classes (ending with *IT). When the first class's tests are executed, I see that the testcontainers are started:
2022-10-30 14:34:42.031 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Starting container: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.032 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Trying to start container: testcontainers/ryuk:0.3.3 (attempt 1/1)
2022-10-30 14:34:42.033 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Starting container: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.033 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Creating container for image: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.371 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Container testcontainers/ryuk:0.3.3 is starting: 5505472cec1608db3383ebeeee99a8d02b48331a53f5d53613a0a53c1cd51986
2022-10-30 14:34:43.271 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Container testcontainers/ryuk:0.3.3 started in PT1.2706326S
2022-10-30 14:34:43.282 INFO 27208 --- [ main] o.t.utility.RyukResourceReaper : Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
2022-10-30 14:34:43.283 INFO 27208 --- [ main] org.testcontainers.DockerClientFactory : Checking the system...
2022-10-30 14:34:43.283 INFO 27208 --- [ main] org.testcontainers.DockerClientFactory : ✔︎ Docker server version should be at least 1.6.0
2022-10-30 14:34:43.284 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : HOSTNAME_EXTERNAL environment variable set to localhost (to match host-routable address for container)
2022-10-30 14:34:43.284 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Starting container: localstack/localstack:1.2.0
2022-10-30 14:34:43.285 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Trying to start container: localstack/localstack:1.2.0 (attempt 1/1)
2022-10-30 14:34:43.285 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Starting container: localstack/localstack:1.2.0
2022-10-30 14:34:43.285 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Creating container for image: localstack/localstack:1.2.0
2022-10-30 14:34:44.356 WARN 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at C:\Users\artjo\.testcontainers.properties
2022-10-30 14:34:44.581 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Container localstack/localstack:1.2.0 is starting: d09c4e105058444699a29338b85d7294efec29941857e581daf634d391395869
2022-10-30 14:34:48.321 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Container localstack/localstack:1.2.0 started in PT5.0370436S
2022-10-30 14:34:48.321 INFO 27208 --- [ main] ContainersSpringContextCustomizerFactory : Warming up the sql database
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Starting container: postgres:14.5
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Trying to start container: postgres:14.5 (attempt 1/1)
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Starting container: postgres:14.5
2022-10-30 14:34:48.327 INFO 27208 --- [ main] 🐳 [postgres:14.5] : Creating container for image: postgres:14.5
2022-10-30 14:34:48.328 WARN 27208 --- [ main] 🐳 [postgres:14.5] : Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at C:\Users\artjo\.testcontainers.properties
2022-10-30 14:34:48.415 INFO 27208 --- [ main] 🐳 [postgres:14.5] : Container postgres:14.5 is starting: 5e4fcf583b44345651aad9d5f939bc913d62eafe16a73017379c9e43f2028ff4
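As the warnings above indicate, withReuse(true) is ignored unless reuse is enabled in the user-level Testcontainers properties file:
# content of C:\Users\artjo\.testcontainers.properties (~/.testcontainers.properties on Unix)
testcontainers.reuse.enable=true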
But when the second IT class is started, the LocalStackTestContainer bean is not available anymore.
How do I need to configure this so that the localstack container bean remains available for the whole test execution?
######################## UPDATE 01.11.2022 ########################
The other testcontainers are configured the same way (no changes from the auto-generated code):
public class TestContainersSpringContextCustomizerFactory implements ContextCustomizerFactory {

    private Logger log = LoggerFactory.getLogger(TestContainersSpringContextCustomizerFactory.class);

    private static LocalStackTestContainer localStackTestContainer;
    private static KafkaTestContainer kafkaBean;
    private static SqlTestContainer devTestContainer;
    private static SqlTestContainer prodTestContainer;

    @Override
    public ContextCustomizer createContextCustomizer(Class<?> testClass, List<ContextConfigurationAttributes> configAttributes) {
        return (context, mergedConfig) -> {
            ConfigurableListableBeanFactory beanFactory = context.getBeanFactory();
            TestPropertyValues testValues = TestPropertyValues.empty();
            // EmbeddedS3 s3LocalStackAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedS3.class);
            // if (null != s3LocalStackAnnotation) {
            //     log.debug("detected the EmbeddedS3 annotation on class {}", testClass.getName());
            //     log.info("Warming up the localstack S3");
            //
            //     if (null == localStackTestContainer) {
            //         localStackTestContainer = beanFactory.createBean(LocalStackTestContainer.class);
            //         beanFactory.registerSingleton(LocalStackTestContainer.class.getName(), localStackTestContainer);
            //         // ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(LocalStackTestContainer.class.getName(), localStackTestContainer);
            //     }
            // }
            EmbeddedSQL sqlAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedSQL.class);
            if (null != sqlAnnotation) {
                log.debug("detected the EmbeddedSQL annotation on class {}", testClass.getName());
                log.info("Warming up the sql database");
                if (
                    Arrays
                        .asList(context.getEnvironment().getActiveProfiles())
                        .contains("test" + JHipsterConstants.SPRING_PROFILE_DEVELOPMENT)
                ) {
                    if (null == devTestContainer) {
                        try {
                            Class<? extends SqlTestContainer> containerClass = (Class<? extends SqlTestContainer>) Class.forName(
                                this.getClass().getPackageName() + ".PostgreSqlTestContainer"
                            );
                            devTestContainer = beanFactory.createBean(containerClass);
                            beanFactory.registerSingleton(containerClass.getName(), devTestContainer);
                            // ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(containerClass.getName(), devTestContainer);
                        } catch (ClassNotFoundException e) {
                            throw new RuntimeException(e);
                        }
                    }
                    testValues =
                        testValues.and(
                            "spring.r2dbc.url=" + devTestContainer.getTestContainer().getJdbcUrl().replace("jdbc", "r2dbc") + ""
                        );
                    testValues = testValues.and("spring.r2dbc.username=" + devTestContainer.getTestContainer().getUsername());
                    testValues = testValues.and("spring.r2dbc.password=" + devTestContainer.getTestContainer().getPassword());
                    testValues = testValues.and("spring.liquibase.url=" + devTestContainer.getTestContainer().getJdbcUrl() + "");
                }
                if (
                    Arrays
                        .asList(context.getEnvironment().getActiveProfiles())
                        .contains("test" + JHipsterConstants.SPRING_PROFILE_PRODUCTION)
                ) {
                    if (null == prodTestContainer) {
                        try {
                            Class<? extends SqlTestContainer> containerClass = (Class<? extends SqlTestContainer>) Class.forName(
                                this.getClass().getPackageName() + ".PostgreSqlTestContainer"
                            );
                            prodTestContainer = beanFactory.createBean(containerClass);
                            beanFactory.registerSingleton(containerClass.getName(), prodTestContainer);
                            // ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(containerClass.getName(), prodTestContainer);
                        } catch (ClassNotFoundException e) {
                            throw new RuntimeException(e);
                        }
                    }
                    testValues =
                        testValues.and(
                            "spring.r2dbc.url=" + prodTestContainer.getTestContainer().getJdbcUrl().replace("jdbc", "r2dbc") + ""
                        );
                    testValues = testValues.and("spring.r2dbc.username=" + prodTestContainer.getTestContainer().getUsername());
                    testValues = testValues.and("spring.r2dbc.password=" + prodTestContainer.getTestContainer().getPassword());
                    testValues = testValues.and("spring.liquibase.url=" + prodTestContainer.getTestContainer().getJdbcUrl() + "");
                }
            }
            EmbeddedKafka kafkaAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedKafka.class);
            if (null != kafkaAnnotation) {
                log.debug("detected the EmbeddedKafka annotation on class {}", testClass.getName());
                log.info("Warming up the kafka broker");
                if (null == kafkaBean) {
                    kafkaBean = beanFactory.createBean(KafkaTestContainer.class);
                    beanFactory.registerSingleton(KafkaTestContainer.class.getName(), kafkaBean);
                    // ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(KafkaTestContainer.class.getName(), kafkaBean);
                }
                testValues =
                    testValues.and(
                        "spring.cloud.stream.kafka.binder.brokers=" +
                        kafkaBean.getKafkaContainer().getHost() +
                        ':' +
                        kafkaBean.getKafkaContainer().getMappedPort(KafkaContainer.KAFKA_PORT)
                    );
            }
            testValues.applyTo(context);
        };
    }
}
######################## UPDATE 02.11.2022 ########################
A reproducible example project can be found here: GITHUB

exception when verifying no-arg method

@EnableScheduling
@Configuration
@Slf4j
public class ScheduledTaskDemo {

    @Scheduled(fixedRate = 6, timeUnit = TimeUnit.SECONDS)
    public void task1() {
        log.info("task1 ====== " + Thread.currentThread().getName());
    }

    // using a cron expression; here it fires every 10 seconds
    @Scheduled(cron = "*/10 * * * * *")
    public void task2() {
        log.info("task2 ====== " + Thread.currentThread().getName());
    }
}
@SpringBootTest
public class TSLSpringApplicationTest {

    @SpyBean
    ScheduledTaskDemo scheduledTaskDemo;

    @Test
    public void testScheduledTaskDemo() {
        await().atMost(Duration.TEN_SECONDS).untilAsserted(() -> {
            // Why does this seem to count the task2 executions as well? TODO
            verify(scheduledTaskDemo, atLeast(5)).task1();
        });
    }
}
Why are the task2 method executions included in the count? Thanks!
The error message is as follows:
2022-07-31 16:09:37,493 - [spring] [d07fb4fbc687a9fc/d07fb4fbc687a9fc] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task1(19) : task1 ====== scheduling-1
2022-07-31 16:09:40,009 - [spring] [5ade105b48f4e54f/5ade105b48f4e54f] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task2(25) : task2 ====== scheduling-1
2022-07-31 16:09:43,505 - [spring] [74699fb14f368213/74699fb14f368213] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task1(19) : task1 ====== scheduling-1
org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a lambda expression in com.tsl.TSLSpringApplicationTest
scheduledTaskDemo.task1();
Wanted at least 5 times:
-> at com.tsl.TSLSpringApplicationTest.lambda$testScheduledTaskDemo$0(TSLSpringApplicationTest.java:34)
But was 3 times:
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
within 10 seconds.

Spring IntegrationFlow CompositeFileListFilter Not Working

I have two filters: regexFilter and lastModified.
return IntegrationFlows.from(Sftp.inboundAdapter(inboundSftp)
        .localDirectory(this.getlocalDirectory(config.getId()))
        .deleteRemoteFiles(true)
        .autoCreateLocalDirectory(true)
        .regexFilter(config.getRegexFilter())
        .filter(new LastModifiedLsEntryFileListFilter())
        .remoteDirectory(config.getInboundDirectory()),
    e -> e.poller(Pollers.fixedDelay(60_000)
        .errorChannel(MessageHeaders.ERROR_CHANNEL).errorHandler((ex) -> {
        })))
From googling I understand that I have to use a CompositeFileListFilter for the regex, so I changed my code to
.filter(new CompositeFileListFilter().addFilter(new RegexPatternFileListFilter(config.getRegexFilter())))
It compiles, but at runtime it throws an error and the channel is stopped. The same error occurs for
.filter(ftpPersistantFilter(config.getRegexFilter()))
.
.
.
public CompositeFileListFilter ftpPersistantFilter(String regexFilter) {
    CompositeFileListFilter filters = new CompositeFileListFilter();
    filters.addFilter(new FtpRegexPatternFileListFilter(regexFilter));
    return filters;
}
I just want to filter on the basis of the file name. There are 2 flows for the same remote folder; both poll on the same cron but should pick up only their relevant files.
EDIT
Adding LastModifiedLsEntryFileListFilter. It is working fine, but I am adding it here upon request.
public class LastModifiedLsEntryFileListFilter implements FileListFilter<LsEntry> {

    private final Logger log = LoggerFactory.getLogger(LastModifiedLsEntryFileListFilter.class);

    private static final long DEFAULT_AGE = 60;
    private volatile long age = DEFAULT_AGE;
    private volatile Map<String, Long> sizeMap = new HashMap<String, Long>();

    public long getAge() {
        return this.age;
    }

    public void setAge(long age) {
        setAge(age, TimeUnit.SECONDS);
    }

    public void setAge(long age, TimeUnit unit) {
        this.age = unit.toSeconds(age);
    }

    @Override
    public List<LsEntry> filterFiles(LsEntry[] files) {
        List<LsEntry> list = new ArrayList<LsEntry>();
        long now = System.currentTimeMillis() / 1000;
        for (LsEntry file : files) {
            if (file.getAttrs().isDir()) {
                continue;
            }
            String fileName = file.getFilename();
            Long currentSize = file.getAttrs().getSize();
            Long oldSize = sizeMap.get(fileName);
            if (oldSize == null || currentSize.longValue() != oldSize.longValue()) {
                // putting size in map, will verify in next iteration of scheduler
                sizeMap.put(fileName, currentSize);
                log.info("[{}] old size [{}] increased to [{}]...", file.getFilename(), oldSize, currentSize);
                continue;
            }
            int lastModifiedTime = file.getAttrs().getMTime();
            if (lastModifiedTime + this.age <= now) {
                list.add(file);
                sizeMap.remove(fileName);
            } else {
                log.info("File [{}] is still being uploaded...", file.getFilename());
            }
        }
        return list;
    }
}
PS: When testing the regex filter, I removed LastModifiedLsEntryFileListFilter just for simplicity. So my final flow is
return IntegrationFlows.from(Sftp.inboundAdapter(inboundSftp)
        .localDirectory(this.getlocalDirectory(config.getId()))
        .deleteRemoteFiles(true)
        .autoCreateLocalDirectory(true)
        .filter(new CompositeFileListFilter().addFilter(new RegexPatternFileListFilter(config.getRegexFilter())))
        //.filter(new LastModifiedLsEntryFileListFilter())
        .remoteDirectory(config.getInboundDirectory()),
    e -> e.poller(Pollers.fixedDelay(60_000)
        .errorChannel(MessageHeaders.ERROR_CHANNEL).errorHandler((ex) -> {
            try {
                this.destroy(String.valueOf(config.getId()));
                configurationService.removeConfigurationChannelById(config.getId());
                // logging here
            } catch (Exception ex1) {
            }
        })))
    .publishSubscribeChannel(s -> s
        .subscribe(f -> {
            f.handle(Sftp.outboundAdapter(outboundSftp)
                .useTemporaryFileName(false)
                .autoCreateDirectory(true)
                .remoteDirectory(config.getOutboundDirectory()), c -> c.advice(startup.deleteFileAdvice()));
        })
        .subscribe(f -> {
            if (doArchive) {
                f.handle(Sftp.outboundAdapter(inboundSftp)
                    .useTemporaryFileName(false)
                    .autoCreateDirectory(true)
                    .remoteDirectory(config.getInboundArchiveDirectory()));
            } else {
                f.handle(m -> {
                });
            }
        })
        .subscribe(f -> f
            .handle(m -> {
                // I am handling exception here
            })
        ))
    .get();
and here are the exceptions:
2020-01-27 21:36:55,731 INFO o.s.i.c.PublishSubscribeChannel - Channel 'application.2.subFlow#0.channel#0' has 0 subscriber(s).
2020-01-27 21:36:55,731 INFO o.s.i.e.EventDrivenConsumer - stopped 2.subFlow#2.org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2020-01-27 21:36:55,731 INFO o.s.i.c.DirectChannel - Channel 'application.2.subFlow#2.channel#0' has 0 subscriber(s).
2020-01-27 21:36:55,731 INFO o.s.i.e.EventDrivenConsumer - stopped 2.subFlow#2.org.springframework.integration.config.ConsumerEndpointFactoryBean#1
EDIT
After passing the regex to LastModifiedLsEntryFileListFilter and handling it there, it works for me. When I use any other regex filter inside CompositeFileListFilter, it throws an error.
.filter(new CompositeFileListFilter().addFilter(new LastModifiedLsEntryFileListFilter(config.getRegexFilter())))
Please show your final flow. I don't see that you use LastModifiedLsEntryFileListFilter in your CompositeFileListFilter... You definitely can't use regexFilter() and filter() together; the last one wins. To avoid confusion, we suggest using a single filter() and composing all the filters with a CompositeFileListFilter or ChainFileListFilter.
Also, what is the error you are mentioning, please?
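For illustration, a minimal sketch of such a composition for this SFTP flow (ChainFileListFilter and SftpRegexPatternFileListFilter are standard Spring Integration classes; the method name sftpCompositeFilter is invented). The filter's generic type must match the remote entry type, which for an SFTP inbound adapter is ChannelSftp.LsEntry; the File-based RegexPatternFileListFilter and the FTPFile-based FtpRegexPatternFileListFilter do not match that type, which is a plausible source of the runtime error:
import com.jcraft.jsch.ChannelSftp.LsEntry;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.sftp.filters.SftpRegexPatternFileListFilter;

public ChainFileListFilter<LsEntry> sftpCompositeFilter(String regex) {
    // Chain: an entry must pass the regex filter first, then the age/size check.
    ChainFileListFilter<LsEntry> chain = new ChainFileListFilter<>();
    chain.addFilter(new SftpRegexPatternFileListFilter(regex));
    chain.addFilter(new LastModifiedLsEntryFileListFilter());
    return chain;
}

// usage in the flow definition, replacing both regexFilter(...) and filter(...):
// .filter(sftpCompositeFilter(config.getRegexFilter()))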

log4j2 isThreadContextMapInheritable property usage

I am trying to log events of a Java application to separate log files based on a key set in the ThreadContext. But my key is not reaching the child thread (created on a MouseEvent) even after setting "log4j2.isThreadContextMapInheritable" to "true" in the system properties. Can someone please help me resolve this?
My main method:
public class Application {

    static {
        System.setProperty("log4j2.isThreadContextMapInheritable", "true");
    }

    private final static Logger LOGGER = LogManager.getLogger(Application.class);

    public static void main(String[] args) throws Exception {
        ThreadContext.put("cfg", "RLS");
        LOGGER.info("New window opening!!!" + ThreadContext.get("cfg"));
        newWindow();
    }

    private static void newWindow() throws Exception {
        ButtonFrame buttonFrame = new ButtonFrame("Button Demo");
        buttonFrame.setSize(350, 275);
        buttonFrame.setVisible(true);
    }
}
ButtonFrame class:
public class ButtonFrame extends JFrame {

    private final static Logger LOGGER = LogManager.getLogger(NewWindow.class);

    JButton bChange;
    JFrame frame = new JFrame("Our JButton listener example");

    public ButtonFrame(String title) {
        super(title);
        setLayout(new FlowLayout());
        bChange = new JButton("Click Me!");
        bChange.addMouseListener(new MouseListener() {
            @Override
            public void mouseClicked(MouseEvent e) {
                try {
                    LOGGER.info("Mouse clicked!!!" + ThreadContext.get("cfg"));
                    JDialog d = new JDialog(frame, "HI", true);
                    d.setLocationRelativeTo(frame);
                    d.setVisible(true);
                } catch (Exception e1) {
                    e1.printStackTrace();
                }
            }

            @Override
            public void mousePressed(MouseEvent e) {}

            @Override
            public void mouseReleased(MouseEvent e) {}

            @Override
            public void mouseEntered(MouseEvent e) {}

            @Override
            public void mouseExited(MouseEvent e) {}
        });
        add(bChange);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}
log4j2.properties file:
appenders = rls,otr,routing
appender.rls.type = RollingFile
appender.rls.name = RollingFile_Rls
appender.rls.fileName = D:\\RLS\\rls_%d{MMdd}.log
appender.rls.filePattern = D:\\RLS\\rls_%d{MMdd}.log
appender.rls.layout.type = PatternLayout
appender.rls.layout.pattern = %d{ABSOLUTE} %level{length=1} %markerSimpleName [%C{1}:%L] %m%n
appender.rls.policies.type = Policies
appender.rls.policies.time.type = TimeBasedTriggeringPolicy
appender.rls.policies.time.interval = 1
appender.rls.policies.time.modulate = true
appender.rls.policies.size.type = SizeBasedTriggeringPolicy
appender.rls.policies.size.size = 100MB
appender.rls.strategy.type = DefaultRolloverStrategy
appender.rls.strategy.max = 5
appender.otr.type = RollingFile
appender.otr.name = RollingFile_Otr
appender.otr.fileName = D:\\RLS\\otr_%d{MMdd}.log
appender.otr.filePattern = D:\\RLS\\otr_%d{MMdd}.log
appender.otr.layout.type = PatternLayout
appender.otr.layout.pattern = %d{ABSOLUTE} %level{length=1} %markerSimpleName [%C{1}:%L] %m%n
appender.otr.policies.type = Policies
appender.otr.policies.time.type = TimeBasedTriggeringPolicy
appender.otr.policies.time.interval = 1
appender.otr.policies.time.modulate = true
appender.otr.policies.size.type = SizeBasedTriggeringPolicy
appender.otr.policies.size.size = 100MB
appender.otr.strategy.type = DefaultRolloverStrategy
appender.otr.strategy.max = 5
appender.routing.type = Routing
appender.routing.name = Route_Finder
appender.routing.routes.type = Routes
appender.routing.routes.pattern = $${ctx:cfg}
appender.routing.routes.route1.type = Route
appender.routing.routes.route1.ref = RollingFile_Rls
appender.routing.routes.route1.key = RLS
appender.routing.routes.route2.type = Route
appender.routing.routes.route2.ref = RollingFile_Otr
appender.routing.routes.route2.key = $${ctx:cfg}
loggers = rls,otr
logger.rls.name = logging
logger.rls.level = info
logger.rls.additivity = false
logger.rls.appenderRefs=rls
logger.rls.appenderRef.rls.ref = Route_Finder
logger.otr.name = other
logger.otr.level = info
logger.otr.additivity = false
logger.otr.appenderRefs=otr
logger.otr.appenderRef.otr.ref = Route_Finder
rootLogger.level = trace
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = stdout
You can put a log4j2.component.properties file on the classpath to control various aspects of Log4j 2 behavior.
For example, the content of log4j2.component.properties:
# https://logging.apache.org/log4j/2.x/manual/configuration.html#SystemProperties
# If true use an InheritableThreadLocal to implement the ThreadContext map.
# Otherwise, use a plain ThreadLocal.
# (Maybe ignored if a custom ThreadContext map is specified.)
# Default is false
# Modern 2.10+
log4j2.isThreadContextMapInheritable=true
# Legacy for pre-2.10
isThreadContextMapInheritable=true
This has priority over system properties, but it can be overridden by the environment variable LOG4J_IS_THREAD_CONTEXT_MAP_INHERITABLE, as described in the documentation.
Adding OP's comment as answer
The ThreadContext map can be configured to use an InheritableThreadLocal by setting the system property isThreadContextMapInheritable to true.
Set the system property as -DisThreadContextMapInheritable=true when starting the application, or in application code using the following line: System.setProperty("isThreadContextMapInheritable", "true");
https://logging.apache.org/log4j/2.x/manual/thread-context.html
https://logging.apache.org/log4j/2.x/manual/configuration.html#SystemProperties
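A minimal sketch to sanity-check the setting (assuming the property is set before any Log4j class is initialized):
import org.apache.logging.log4j.ThreadContext;

public class InheritanceCheck {

    public static void main(String[] args) throws InterruptedException {
        // Must run before the ThreadContext map is first initialized.
        System.setProperty("log4j2.isThreadContextMapInheritable", "true");
        ThreadContext.put("cfg", "RLS");
        // Prints cfg=RLS with the inheritable map; cfg=null with the default one.
        Thread child = new Thread(() -> System.out.println("child sees cfg=" + ThreadContext.get("cfg")));
        child.start();
        child.join();
    }
}
One caveat: an InheritableThreadLocal copies values at thread creation time, so a thread that already exists when ThreadContext.put() runs will not see the value. In a Swing application, the AWT event dispatch thread may have been started before the put, which could explain the behavior observed in the question.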

how to check the number of entries on local member

My prime member:
public static void main(String[] args) throws InterruptedException {
    Config config = new Config();
    config.setProperty(GroupProperty.ENABLE_JMX, "true");
    config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
    config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");
    config.getSerializationConfig().addPortableFactory(1, new MyPortableFactory());
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");
    // TODO generate rule map data; more than 100,000 entries
    generateRuleMapData(ruleMap);
    logger.info("generate rule finished!");
    // TODO rule map index
    // health check
    PartitionService partitionService = hz.getPartitionService();
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    while (true) {
        logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
            partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
            mapStatistics.getOwnedEntryCount());
        Thread.sleep(1000);
    }
}
Logs:
2016-06-28 13:53:05,048 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:06,049 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:07,050 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
My slave member:
public static void main(String[] args) throws InterruptedException {
    Config config = new Config();
    config.setProperty(GroupProperty.ENABLE_JMX, "true");
    config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
    config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");
    PartitionService partitionService = hz.getPartitionService();
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    while (true) {
        logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
            partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
            mapStatistics.getOwnedEntryCount());
        Thread.sleep(1000);
    }
}
Logs:
2016-06-28 14:05:53,543 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:54,556 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:55,563 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:56,578 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
My question is: why is the number of entries owned on the prime member not changed after the cluster adds one slave member?
You should get the statistics each second; getLocalMapStats() returns a snapshot taken at call time, so fetch it inside the loop:
while (true) {
    LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
    logger.info(
        "isClusterSafe:{},isLocalMemberSafe:{},rulemap.size:{}, number of entries owned on this node = {}",
        partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(), ruleMap.size(),
        mapStatistics.getOwnedEntryCount());
    Thread.sleep(1000);
}
Another option is to make use of localKeySet(), which returns the locally owned set of keys:
ruleMap.localKeySet().size()
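For instance, inside the same polling loop as above:
while (true) {
    // localKeySet() returns only the keys whose partitions this member owns,
    // so its size reflects ownership changes as members join or leave.
    logger.info("number of keys owned on this node = {}", ruleMap.localKeySet().size());
    Thread.sleep(1000);
}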
