I have a JHipster (7.9.3) application with Kafka and Postgres Testcontainers used in the integration tests. I want to integrate my application with S3 storage. For this purpose, I want to write some integration tests using a LocalStack Testcontainer.
I have created a new annotation:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface EmbeddedS3 {
}
and added the localstack dependency to the project:
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>localstack</artifactId>
<scope>test</scope>
</dependency>
created a LocalStackTestContainer as:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.output.Slf4jLogConsumer;
import org.testcontainers.utility.DockerImageName;
public class LocalStackTestContainer implements InitializingBean, DisposableBean {
private LocalStackContainer localStackContainer;
private static final Logger log = LoggerFactory.getLogger(LocalStackTestContainer.class);
@Override
public void destroy() {
if (null != localStackContainer) {
localStackContainer.close();
}
}
@Override
public void afterPropertiesSet() {
if (null == localStackContainer) {
localStackContainer =
new LocalStackContainer(DockerImageName.parse("localstack/localstack:1.2.0"))
.withServices(LocalStackContainer.Service.S3)
.withLogConsumer(new Slf4jLogConsumer(log))
.withReuse(true)
;
}
if (!localStackContainer.isRunning()) {
localStackContainer.start();
}
}
public LocalStackContainer getLocalStackContainer() {
return localStackContainer;
}
}
and adjusted the TestContainersSpringContextCustomizerFactory.createContextCustomizer method with:
EmbeddedS3 s3LocalStackAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedS3.class);
if (null != s3LocalStackAnnotation) {
log.debug("detected the EmbeddedS3 annotation on class {}", testClass.getName());
log.info("Warming up the localstack S3");
if (null == localStackTestContainer) {
localStackTestContainer = beanFactory.createBean(LocalStackTestContainer.class);
beanFactory.registerSingleton(LocalStackTestContainer.class.getName(), localStackTestContainer);
// ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(LocalStackTestContainer.class.getName(), localStackTestContainer);
}
}
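For symmetry with the Kafka and SQL containers shown further down, the same block could also expose the LocalStack endpoint to the Spring environment via testValues. This is only a sketch: the app.s3.* property names are placeholders for whatever the production configuration actually reads.
// hypothetical property names; point them at whatever your S3 client configuration expects
testValues = testValues.and(
    "app.s3.endpoint=" +
    localStackTestContainer.getLocalStackContainer().getEndpointOverride(LocalStackContainer.Service.S3)
);
testValues = testValues.and("app.s3.region=" + localStackTestContainer.getLocalStackContainer().getRegion());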
Added the @EmbeddedS3 annotation to the @IntegrationTest annotation as:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@SpringBootTest(classes = {AgentMarlinApp.class, AsyncSyncConfiguration.class, TestSecurityConfiguration.class, TestLocalStackConfiguration.class})
@EmbeddedKafka
@EmbeddedSQL
@EmbeddedS3
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
@ActiveProfiles({"testdev", "it-test"})
public @interface IntegrationTest {
// 5s is the spring default https://github.com/spring-projects/spring-framework/blob/29185a3d28fa5e9c1b4821ffe519ef6f56b51962/spring-test/src/main/java/org/springframework/test/web/reactive/server/DefaultWebTestClient.java#L106
String DEFAULT_TIMEOUT = "PT5S";
String DEFAULT_ENTITY_TIMEOUT = "PT5S";
}
To initialize the AmazonS3 client I have this @TestConfiguration:
@Bean
public AmazonS3 amazonS3(LocalStackTestContainer localStackTestContainer) {
LocalStackContainer localStack = localStackTestContainer.getLocalStackContainer();
return AmazonS3ClientBuilder
.standard()
.withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
localStack.getEndpointOverride(LocalStackContainer.Service.S3).toString(),
localStack.getRegion()
)
)
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials(localStack.getAccessKey(), localStack.getSecretKey())
)
)
.build();
}
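The IT classes then just inject that client and run a simple round-trip against LocalStack, roughly like this sketch (bucket name, key and assertion are placeholders, not the real test code):
import static org.assertj.core.api.Assertions.assertThat;

import com.amazonaws.services.s3.AmazonS3;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;

@IntegrationTest
class S3RoundTripIT {

    @Autowired
    private AmazonS3 amazonS3;

    @Test
    void uploadsAndReadsBackAnObject() {
        // hypothetical bucket/key, only to illustrate talking to the LocalStack S3 endpoint
        amazonS3.createBucket("test-bucket");
        amazonS3.putObject("test-bucket", "hello.txt", "hello");
        assertThat(amazonS3.getObjectAsString("test-bucket", "hello.txt")).isEqualTo("hello");
    }
}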
I have 2 integration test classes (ending with *IT). When the first class's tests are executed, I see that the Testcontainers are started:
2022-10-30 14:34:42.031 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Starting container: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.032 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Trying to start container: testcontainers/ryuk:0.3.3 (attempt 1/1)
2022-10-30 14:34:42.033 DEBUG 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Starting container: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.033 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Creating container for image: testcontainers/ryuk:0.3.3
2022-10-30 14:34:42.371 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Container testcontainers/ryuk:0.3.3 is starting: 5505472cec1608db3383ebeeee99a8d02b48331a53f5d53613a0a53c1cd51986
2022-10-30 14:34:43.271 INFO 27208 --- [ main] 🐳 [testcontainers/ryuk:0.3.3] : Container testcontainers/ryuk:0.3.3 started in PT1.2706326S
2022-10-30 14:34:43.282 INFO 27208 --- [ main] o.t.utility.RyukResourceReaper : Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
2022-10-30 14:34:43.283 INFO 27208 --- [ main] org.testcontainers.DockerClientFactory : Checking the system...
2022-10-30 14:34:43.283 INFO 27208 --- [ main] org.testcontainers.DockerClientFactory : ✔︎ Docker server version should be at least 1.6.0
2022-10-30 14:34:43.284 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : HOSTNAME_EXTERNAL environment variable set to localhost (to match host-routable address for container)
2022-10-30 14:34:43.284 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Starting container: localstack/localstack:1.2.0
2022-10-30 14:34:43.285 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Trying to start container: localstack/localstack:1.2.0 (attempt 1/1)
2022-10-30 14:34:43.285 DEBUG 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Starting container: localstack/localstack:1.2.0
2022-10-30 14:34:43.285 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Creating container for image: localstack/localstack:1.2.0
2022-10-30 14:34:44.356 WARN 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at C:\Users\artjo\.testcontainers.properties
2022-10-30 14:34:44.581 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Container localstack/localstack:1.2.0 is starting: d09c4e105058444699a29338b85d7294efec29941857e581daf634d391395869
2022-10-30 14:34:48.321 INFO 27208 --- [ main] 🐳 [localstack/localstack:1.2.0] : Container localstack/localstack:1.2.0 started in PT5.0370436S
2022-10-30 14:34:48.321 INFO 27208 --- [ main] ContainersSpringContextCustomizerFactory : Warming up the sql database
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Starting container: postgres:14.5
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Trying to start container: postgres:14.5 (attempt 1/1)
2022-10-30 14:34:48.327 DEBUG 27208 --- [ main] 🐳 [postgres:14.5] : Starting container: postgres:14.5
2022-10-30 14:34:48.327 INFO 27208 --- [ main] 🐳 [postgres:14.5] : Creating container for image: postgres:14.5
2022-10-30 14:34:48.328 WARN 27208 --- [ main] 🐳 [postgres:14.5] : Reuse was requested but the environment does not support the reuse of containers
To enable reuse of containers, you must set 'testcontainers.reuse.enable=true' in a file located at C:\Users\artjo\.testcontainers.properties
2022-10-30 14:34:48.415 INFO 27208 --- [ main] 🐳 [postgres:14.5] : Container postgres:14.5 is starting: 5e4fcf583b44345651aad9d5f939bc913d62eafe16a73017379c9e43f2028ff4
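(Side note on the reuse warning above: withReuse(true) only takes effect if reuse is also opted in on the host, via the file the warning points to. The entire content of that ~/.testcontainers.properties entry is a single line:)
testcontainers.reuse.enable=true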
But when the second IT test class is started, the LocalStackTestContainer bean is not available anymore.
How do I need to configure this so that the LocalStack container bean remains available for the whole test execution?
######################## UPDATE 01.11.2022 ########################
Other testcontainers are configured the same way (no changes from the auto-generated code):
public class TestContainersSpringContextCustomizerFactory implements ContextCustomizerFactory {
private Logger log = LoggerFactory.getLogger(TestContainersSpringContextCustomizerFactory.class);
private static LocalStackTestContainer localStackTestContainer;
private static KafkaTestContainer kafkaBean;
private static SqlTestContainer devTestContainer;
private static SqlTestContainer prodTestContainer;
@Override
public ContextCustomizer createContextCustomizer(Class<?> testClass, List<ContextConfigurationAttributes> configAttributes) {
return (context, mergedConfig) -> {
ConfigurableListableBeanFactory beanFactory = context.getBeanFactory();
TestPropertyValues testValues = TestPropertyValues.empty();
// EmbeddedS3 s3LocalStackAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedS3.class);
// if (null != s3LocalStackAnnotation) {
// log.debug("detected the EmbeddedS3 annotation on class {}", testClass.getName());
// log.info("Warming up the localstack S3");
//
// if (null == localStackTestContainer) {
// localStackTestContainer = beanFactory.createBean(LocalStackTestContainer.class);
// beanFactory.registerSingleton(LocalStackTestContainer.class.getName(), localStackTestContainer);
//// ((DefaultListableBeanFactory) beanFactory).registerDisposableBean(LocalStackTestContainer.class.getName(), localStackTestContainer);
// }
// }
EmbeddedSQL sqlAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedSQL.class);
if (null != sqlAnnotation) {
log.debug("detected the EmbeddedSQL annotation on class {}", testClass.getName());
log.info("Warming up the sql database");
if (
Arrays
.asList(context.getEnvironment().getActiveProfiles())
.contains("test" + JHipsterConstants.SPRING_PROFILE_DEVELOPMENT)
) {
if (null == devTestContainer) {
try {
Class<? extends SqlTestContainer> containerClass = (Class<? extends SqlTestContainer>) Class.forName(
this.getClass().getPackageName() + ".PostgreSqlTestContainer"
);
devTestContainer = beanFactory.createBean(containerClass);
beanFactory.registerSingleton(containerClass.getName(), devTestContainer);
// ((DefaultListableBeanFactory)beanFactory).registerDisposableBean(containerClass.getName(), devTestContainer);
} catch (ClassNotFoundException e) {
throw new RuntimeException(e);
}
}
testValues =
testValues.and(
"spring.r2dbc.url=" + devTestContainer.getTestContainer().getJdbcUrl().replace("jdbc", "r2dbc") + ""
);
testValues = testValues.and("spring.r2dbc.username=" + devTestContainer.getTestContainer().getUsername());
testValues = testValues.and("spring.r2dbc.password=" + devTestContainer.getTestContainer().getPassword());
testValues = testValues.and("spring.liquibase.url=" + devTestContainer.getTestContainer().getJdbcUrl() + "");
}
if (
Arrays
.asList(context.getEnvironment().getActiveProfiles())
.contains("test" + JHipsterConstants.SPRING_PROFILE_PRODUCTION)
) {
if (null == prodTestContainer) {
try {
Class<? extends SqlTestContainer> containerClass = (Class<? extends SqlTestContainer>) Class.forName(
this.getClass().getPackageName() + ".PostgreSqlTestContainer"
);
prodTestContainer = beanFactory.createBean(containerClass);
beanFactory.registerSingleton(containerClass.getName(), prodTestContainer);
// ((DefaultListableBeanFactory)beanFactory).registerDisposableBean(containerClass.getName(), prodTestContainer);
} catch (ClassNotFoundException e) {
throw new RuntimeException(e);
}
}
testValues =
testValues.and(
"spring.r2dbc.url=" + prodTestContainer.getTestContainer().getJdbcUrl().replace("jdbc", "r2dbc") + ""
);
testValues = testValues.and("spring.r2dbc.username=" + prodTestContainer.getTestContainer().getUsername());
testValues = testValues.and("spring.r2dbc.password=" + prodTestContainer.getTestContainer().getPassword());
testValues = testValues.and("spring.liquibase.url=" + prodTestContainer.getTestContainer().getJdbcUrl() + "");
}
}
EmbeddedKafka kafkaAnnotation = AnnotatedElementUtils.findMergedAnnotation(testClass, EmbeddedKafka.class);
if (null != kafkaAnnotation) {
log.debug("detected the EmbeddedKafka annotation on class {}", testClass.getName());
log.info("Warming up the kafka broker");
if (null == kafkaBean) {
kafkaBean = beanFactory.createBean(KafkaTestContainer.class);
beanFactory.registerSingleton(KafkaTestContainer.class.getName(), kafkaBean);
// ((DefaultListableBeanFactory)beanFactory).registerDisposableBean(KafkaTestContainer.class.getName(), kafkaBean);
}
testValues =
testValues.and(
"spring.cloud.stream.kafka.binder.brokers=" +
kafkaBean.getKafkaContainer().getHost() +
':' +
kafkaBean.getKafkaContainer().getMappedPort(KafkaContainer.KAFKA_PORT)
);
}
testValues.applyTo(context);
};
}
}
######################## UPDATE 02.11.2022 ########################
A reproducible example project can be found here: GITHUB
@EnableScheduling
@Configuration
@Slf4j
public class ScheduledTaskDemo {
@Scheduled(fixedRate = 6, timeUnit = TimeUnit.SECONDS)
public void task1() {
log.info("task1 ====== " + Thread.currentThread().getName());
}
// cron expression; here it fires every 10 seconds
@Scheduled(cron = "*/10 * * * * *")
public void task2() {
log.info("task2 ====== " + Thread.currentThread().getName());
}
}
@SpringBootTest
public class TSLSpringApplicationTest {
@SpyBean
ScheduledTaskDemo scheduledTaskDemo;
@Test
public void testScheduledTaskDemo() {
await().atMost(Duration.TEN_SECONDS).untilAsserted(() -> {
// why are task2's invocations counted here as well? TODO
verify(scheduledTaskDemo, atLeast(5)).task1();
});
}
}
Why are the task2 method invocations also counted? Thanks!
The error message is as follows:
2022-07-31 16:09:37,493 - [spring] [d07fb4fbc687a9fc/d07fb4fbc687a9fc] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task1(19) : task1 ====== scheduling-1
2022-07-31 16:09:40,009 - [spring] [5ade105b48f4e54f/5ade105b48f4e54f] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task2(25) : task2 ====== scheduling-1
2022-07-31 16:09:43,505 - [spring] [74699fb14f368213/74699fb14f368213] [] INFO 11260 --- [scheduling-1] com.tsl.scheduledTask.ScheduledTaskDemo#task1(19) : task1 ====== scheduling-1
org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a lambda expression in com.tsl.TSLSpringApplicationTest
scheduledTaskDemo.task1();
Wanted at least 5 times:
-> at com.tsl.TSLSpringApplicationTest.lambda$testScheduledTaskDemo$0(TSLSpringApplicationTest.java:34)
But was 3 times:
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
-> at com.tsl.scheduledTask.ScheduledTaskDemo$$FastClassBySpringCGLIB$$a0d49f8b.invoke()
within 10 seconds.
I have a DSL (shown below) that ends with "log", so the JSON produced by the JDBC source should be logged, but it is not.
The Supplier reads a database queue and produces a JSON array from the rows.
If I turn on logging, the SybaseSupplierConfiguration.this.logger.debug("Json: {}", json); statement is outputted.
Why is it not flowing to "log"?
So far I have tried:
Downgraded Spring Boot to 2.2.9 (was using 2.3.2)
Fixed the return result of jsonSupplier (to a JSON string)
Disabled Prometheus / Grafana
Explicitly configured the poller: spring.cloud.stream.poller.fixed-delay=10
Used the RabbitMQ binder and Docker image
Offered some booze to the Spring Cloud Data Flow god.
None worked.
The Docker setup:
export DATAFLOW_VERSION=2.6.0
export SKIPPER_VERSION=2.5.0
docker-compose -f ./docker-compose.yml -f ./docker-compose-prometheus.yml up -d
The pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>company-cloud-dataflow-apps</artifactId>
<groupId>br.com.company.cloud.dataflow.apps</groupId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>jdbc-sybase-supplier</artifactId>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-test-support</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud.stream.app</groupId>
<artifactId>app-starters-micrometer-common</artifactId>
<version>${app-starters-micrometer-common.version}</version>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer.prometheus</groupId>
<artifactId>prometheus-rsocket-spring</artifactId>
<version>${prometheus-rsocket.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
<groupId>net.sourceforge.jtds</groupId>
<artifactId>jtds</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-integration</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-jdbc</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jdk8</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.module</groupId>
<artifactId>jackson-module-parameter-names</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.module</groupId>
<artifactId>jackson-module-jaxb-annotations</artifactId>
<version>${jackson.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
The configuration:
....
spring.cloud.stream.function.bindings.jsonSupplier-out-0=output
spring.cloud.function.definition=jsonSupplier
The implementation:
@SpringBootApplication
@EnableConfigurationProperties(SybaseSupplierProperties.class)
public class SybaseSupplierConfiguration {
private final DataSource dataSource;
private final SybaseSupplierProperties properties;
private final ObjectMapper objectMapper;
private final JdbcTemplate jdbcTemplate;
private final Logger logger = LoggerFactory.getLogger(getClass());
@Autowired
public SybaseSupplierConfiguration(DataSource dataSource,
JdbcTemplate jdbcTemplate,
SybaseSupplierProperties properties) {
this.dataSource = dataSource;
this.jdbcTemplate = jdbcTemplate;
this.properties = properties;
objectMapper = new ObjectMapper().registerModule(new ParameterNamesModule())
.registerModule(new Jdk8Module())
.registerModule(new JavaTimeModule())
.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
}
public static void main(String[] args) {
SpringApplication.run(SybaseSupplierConfiguration.class, args);
}
@Value
static class IntControle {
long cdIntControle;
String deTabela;
}
@Bean
public MessageSource<Object> jdbcMessageSource() {
String query = "select cd_int_controle, de_tabela from int_controle rowlock readpast " +
"where id_status = 0 order by cd_int_controle";
JdbcPollingChannelAdapter adapter =
new JdbcPollingChannelAdapter(dataSource, query) {
@Override
protected Object doReceive() {
Object object = super.doReceive();
if (object == null) {
return null;
}
@SuppressWarnings("unchecked")
List<IntControle> ints = (List<IntControle>) object;
try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
try (JsonGenerator jen = objectMapper.createGenerator(out)) {
jen.writeStartArray();
for (IntControle itm : ints) {
String qry = String.format("select * from vw_integ_%s where cd_int_controle = %d",
itm.getDeTabela(), itm.getCdIntControle());
List<Map<String, Object>> maps = jdbcTemplate.queryForList(qry);
for (Map<String, Object> l : maps) {
jen.writeStartObject();
for (Map.Entry<String, Object> entry : l.entrySet()) {
String k = entry.getKey();
Object v = entry.getValue();
jen.writeFieldName(k);
if (v == null) {
jen.writeNull();
} else {
// if a specific item is needed, see:
// https://stackoverflow.com/questions/6514876/most-efficient-conversion-of-resultset-to-json
jen.writeObject(v);
}
}
jen.writeEndObject();
}
}
jen.writeEndArray();
}
String json = out.toString();
SybaseSupplierConfiguration.this.logger.debug("Json: {}", json);
return json;
} catch (IOException e) {
throw new IllegalArgumentException("Erro ao converter json", e);
}
}
};
adapter.setMaxRows(properties.getPollSize());
adapter.setUpdatePerRow(true);
adapter.setRowMapper((RowMapper<IntControle>) (rs, i) -> new IntControle(rs.getLong(1), rs.getString(2)));
adapter.setUpdateSql("update int_controle set id_status = 1 where cd_int_controle = :cdIntControle");
return adapter;
}
@Bean
public Supplier<Message<?>> jsonSupplier() {
return jdbcMessageSource()::receive;
}
}
The shell setup:
app register --name jdbc-postgresql-sink --type sink --uri maven://br.com.company.cloud.dataflow.apps:jdbc-postgresql-sink:1.0.0-SNAPSHOT --force
app register --name jdbc-sybase-supplier --type source --uri maven://br.com.company.cloud.dataflow.apps:jdbc-sybase-supplier:1.0.0-SNAPSHOT --force
stream create --name sybase_to_pgsql --definition "jdbc-sybase-supplier | log "
stream deploy --name sybase_to_pgsql
The log:
....
2020-08-02 00:40:18.644 INFO 81 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 0 endpoint(s) beneath base path '/actuator'
2020-08-02 00:40:18.793 INFO 81 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2020-08-02 00:40:18.793 INFO 81 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 1 subscriber(s).
2020-08-02 00:40:18.793 INFO 81 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started bean '_org.springframework.integration.errorLogger'
2020-08-02 00:40:18.793 INFO 81 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {router} as a subscriber to the 'jsonSupplier_integrationflow.channel#0' channel
2020-08-02 00:40:18.793 INFO 81 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.jsonSupplier_integrationflow.channel#0' has 1 subscriber(s).
2020-08-02 00:40:18.794 INFO 81 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started bean 'jsonSupplier_integrationflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#0'
2020-08-02 00:40:18.795 INFO 81 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Creating binder: kafka
2020-08-02 00:40:19.235 INFO 81 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Caching the binder: kafka
2020-08-02 00:40:19.235 INFO 81 --- [ main] o.s.c.s.binder.DefaultBinderFactory : Retrieving cached binder: kafka
2020-08-02 00:40:19.362 INFO 81 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Using kafka topic for outbound: sybase_to_pgsql.jdbc-sybase-supplier
2020-08-02 00:40:19.364 INFO 81 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [PLAINTEXT://kafka-broker:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-08-02 00:40:19.572 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.0
2020-08-02 00:40:19.572 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 66563e712b0b9f84
2020-08-02 00:40:19.572 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1596328819571
2020-08-02 00:40:20.403 INFO 81 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [PLAINTEXT://kafka-broker:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2020-08-02 00:40:20.477 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.0
2020-08-02 00:40:20.477 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 66563e712b0b9f84
2020-08-02 00:40:20.477 INFO 81 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1596328820477
2020-08-02 00:40:20.573 INFO 81 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: um9lJtXTQUmURh9cwOkqxA
2020-08-02 00:40:20.574 INFO 81 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
2020-08-02 00:40:20.622 INFO 81 --- [ main] o.s.c.s.m.DirectWithAttributesChannel : Channel 'application.output' has 1 subscriber(s).
2020-08-02 00:40:20.625 INFO 81 --- [ main] o.s.i.e.SourcePollingChannelAdapter : started bean 'jsonSupplier_integrationflow.org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean#0'
2020-08-02 00:40:20.654 INFO 81 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 20031 (http) with context path ''
2020-08-02 00:40:20.674 INFO 81 --- [ main] b.c.c.d.a.s.SybaseSupplierConfiguration : Started SybaseSupplierConfiguration in 12.982 seconds (JVM running for 14.55)
2020-08-02 00:40:21.160 INFO 81 --- [ask-scheduler-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [PLAINTEXT://kafka-broker:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id = producer-2
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2020-08-02 00:40:21.189 INFO 81 --- [ask-scheduler-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.0
2020-08-02 00:40:21.189 INFO 81 --- [ask-scheduler-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 66563e712b0b9f84
2020-08-02 00:40:21.189 INFO 81 --- [ask-scheduler-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1596328821189
2020-08-02 00:40:21.271 INFO 81 --- [ad | producer-2] org.apache.kafka.clients.Metadata : [Producer clientId=producer-2] Cluster ID: um9lJtXTQUmURh9cwOkqxA
If you are using function-based applications in SCDF, you will have to supply an extra configuration when deploying streams. Please have a look at the recipe that walks through the function-based deployment scenario.
Specifically, look at the application-specific function bindings and the property override for the time-source and the log-sink applications.
app.time-source.spring.cloud.stream.function.bindings.timeSupplier-out-0=output
app.log-sink.spring.cloud.stream.function.bindings.logConsumer-in-0=input
The input/output channel bindings require an explicit mapping to the function binding that you have in your custom source. You will have to override the custom source's function binding to the output channel, and everything should come together then.
In v2.6, we are attempting to automate this explicit binding in SCDF, so there will be one less thing to configure in the future.
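For the stream above (the custom jdbc-sybase-supplier source piped into the stock log sink), that override would be supplied at deploy time, roughly along these lines; the exact property name follows the pattern shown for time-source, so treat it as a sketch:
stream deploy --name sybase_to_pgsql --properties "app.jdbc-sybase-supplier.spring.cloud.stream.function.bindings.jsonSupplier-out-0=output"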
I have 3 flows defined in a file: one polls TIF files and sends them to a channel. The channel is linked to another flow that converts and copies each file to a PDF in the same location. Then the third flow FTPs the PDF file. An advice is linked to the FTP flow, where both the TIF and PDF files are to be deleted after the success expression:
@Bean
public IntegrationFlow rtwInflow() {
return IntegrationFlows
.from(rtwTifFileSharePoller()
, e -> e.poller(Pollers.fixedDelay(15000)))
.channel(tifToPdfConverterChannel())
.get();
}
@Bean
public IntegrationFlow rtwTransformFlow() {
return IntegrationFlows
.from(tifToPdfConverterChannel())
.transform(pdfTransfomer)
.log()
.get();
}
@Bean
public IntegrationFlow rtwFtpFlow() {
return IntegrationFlows
.from(rtwPdfFileSharePoller()
, e -> e.poller(Pollers.fixedDelay(15000)))
.handle(ftpOutboundHandler(), out -> out.advice(after()))
.get();
}
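The two pollers referenced above (rtwTifFileSharePoller() and rtwPdfFileSharePoller()) are omitted here; for context, a file-reading source of that shape would look roughly like the sketch below. The directory and pattern are assumptions, not the real beans:
import java.io.File;
import org.springframework.integration.file.FileReadingMessageSource;
import org.springframework.integration.file.filters.SimplePatternFileListFilter;

@Bean
public FileReadingMessageSource rtwTifFileSharePoller() {
    // scans the shared folder for *.tif files; rtwPdfFileSharePoller() would do the same for *.pdf
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File(rtwSharedPath));
    source.setFilter(new SimplePatternFileListFilter("*.tif"));
    return source;
}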
The advice looks something like:
@Bean
public ExpressionEvaluatingRequestHandlerAdvice after() {
logger.debug("Evaluating expression advice. ");
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setOnFailureExpressionString("#root");
advice.setOnSuccessExpressionString("#root");
advice.setSuccessChannel(rtwSourceDeletionChannel());
advice.setFailureChannel(rtwFtpFailureHandleChannel());
advice.setPropagateEvaluationFailures(true);
return advice;
}
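The success and failure channels referenced by the advice are not shown either; a minimal sketch, assuming they are plain DirectChannels:
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.MessageChannel;

@Bean
public MessageChannel rtwSourceDeletionChannel() {
    return new DirectChannel();
}

@Bean
public MessageChannel rtwFtpFailureHandleChannel() {
    return new DirectChannel();
}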
The flow upon successful ftp of pdf file, diverts to rtwSourceDeletionChannel(), who does the following:
@Bean
@SuppressWarnings("unchecked")
public IntegrationFlow rtwSourceDeleteAfterFtpFlow() {
return IntegrationFlows
.from(this.rtwSourceDeletionChannel())
.handle(msg -> {
logger.info("Deleting files at source and transformed objects. ");
Message<File> requestedMsg = (Message<File>) msg.getPayload();
String fileName = (String) requestedMsg.getHeaders().get(FileHeaders.FILENAME);
String fileNameWithoutExtn = fileName.substring(0, fileName.lastIndexOf("."));
logger.info("payload: " + msg.getPayload());
logger.info("fileNameWithoutExtn: " + fileNameWithoutExtn);
// delete both pdf and tif files.
File tifFile = new File(rtwSharedPath + File.separator + fileNameWithoutExtn + ".tif");
File pdfFile = new File(rtwSharedPath + File.separator + fileNameWithoutExtn + ".pdf");
while (!tifFile.isDirectory() && tifFile.exists()) {
logger.info("Tif Delete status: " + tifFile.delete());
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
}
}
while (!pdfFile.isDirectory() && pdfFile.exists()) {
logger.info("PDF Delete status: " + pdfFile.delete());
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
}
}
})
.get();
}
I am getting the output below, where the TIF file is locked. Using Files.delete() gave me an exception saying the file is in use by another process.
2019-02-14 21:06:48.882 INFO 972 --- [ask-scheduler-1] nsfomer$$EnhancerBySpringCGLIB$$c667e8e1 : transformed path: \\localhost\atala-capture-upload\45937.pdf
2019-02-14 21:06:48.898 INFO 972 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : GenericMessage [payload=145937.pdf, headers={file_originalFile=\\localhost\atala-capture-upload\145937.tif, id=077ad304-efe5-7af5-ed07-17f909f9b0e1, file_name=145937.tif, file_relativePath=145937.tif, timestamp=1550178408898}]
2019-02-14 21:06:53.765 INFO 972 --- [ask-scheduler-2] TWFlows$$EnhancerBySpringCGLIB$$4905989f : Tif Delete status: false
2019-02-14 21:06:58.774 INFO 972 --- [ask-scheduler-2] TWFlows$$EnhancerBySpringCGLIB$$4905989f : Tif Delete status: false
2019-02-14 21:07:03.782 INFO 972 --- [ask-scheduler-2] TWFlows$$EnhancerBySpringCGLIB$$4905989f : Tif Delete status: false
2019-02-14 21:07:08.791 INFO 972 --- [ask-scheduler-2] TWFlows$$EnhancerBySpringCGLIB$$4905989f : Tif Delete status: false
2019-02-14 21:07:13.800 INFO 972 --- [ask-scheduler-2] TWFlows$$EnhancerBySpringCGLIB$$4905989f : Tif Delete status: false
Please help me understand why I am facing this problem. Also, there is no leak in the pdfTransformer: I have tested that code and was able to acquire and close a FileInputStream on both the TIF and PDF files.
Also, please guide me if the solution needs to be improved by design.
Thanks in advance....
Well, this seems weird. According to another question on Stack Overflow, adding System.gc() before the delete fixed the problem!
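In the delete loop above that amounts to something like the sketch below (a workaround only, not a guaranteed fix; it simply nudges the JVM to release the file handle before the delete is retried):
while (!tifFile.isDirectory() && tifFile.exists()) {
    // workaround from the linked answer: request a GC before retrying the delete
    System.gc();
    logger.info("Tif Delete status: " + tifFile.delete());
}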
My prime member:
public static void main(String[] args) throws InterruptedException {
Config config = new Config();
config.setProperty(GroupProperty.ENABLE_JMX, "true");
config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");
config.getSerializationConfig().addPortableFactory(1, new MyPortableFactory());
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");
// TODO generate rule map data ; more than 100,000 entries
generateRuleMapData(ruleMap);
logger.info("generate rule finised!");
// TODO rule map index
// health check
PartitionService partitionService = hz.getPartitionService();
LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
while (true) {
logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
mapStatistics.getOwnedEntryCount());
Thread.sleep(1000);
}
}
Logs:
2016-06-28 13:53:05,048 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:06,049 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
2016-06-28 13:53:07,050 INFO [main] b.PrimeMember (PrimeMember.java:41) - isClusterSafe:true,isLocalMemberSafe:true,number of entries owned on this node = 997465
My slave member:
public static void main(String[] args) throws InterruptedException {
Config config = new Config();
config.setProperty(GroupProperty.ENABLE_JMX, "true");
config.setProperty(GroupProperty.BACKPRESSURE_ENABLED, "true");
config.setProperty(GroupProperty.SLOW_OPERATION_DETECTOR_ENABLED, "true");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
IMap<Integer, Rule> ruleMap = hz.getMap("ruleMap");
PartitionService partitionService = hz.getPartitionService();
LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
while (true) {
logger.info("isClusterSafe:{},isLocalMemberSafe:{},number of entries owned on this node = {}",
partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(),
mapStatistics.getOwnedEntryCount());
Thread.sleep(1000);
}
}
Logs:
2016-06-28 14:05:53,543 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:54,556 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:55,563 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
2016-06-28 14:05:56,578 INFO [main] b.SlaveMember (SlaveMember.java:31) - isClusterSafe:false,isLocalMemberSafe:false,number of entries owned on this node = 412441
My question is: why is the number of entries owned by the prime member not changing after the cluster adds one slave member?
You should re-read the statistics each second, i.e., call getLocalMapStats() inside the loop:
while (true) {
LocalMapStats mapStatistics = ruleMap.getLocalMapStats();
logger.info(
"isClusterSafe:{},isLocalMemberSafe:{},rulemap.size:{}, number of entries owned on this node = {}",
partitionService.isClusterSafe(), partitionService.isLocalMemberSafe(), ruleMap.size(),
mapStatistics.getOwnedEntryCount());
Thread.sleep(1000);
}
Another option is to make use of localKeySet, which returns the locally owned set of keys:
IMap::localKeySet.size()
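A minimal sketch of that variant, reusing the ruleMap and polling loop from the question:
while (true) {
    // localKeySet() contains only the keys whose partitions are owned by this member
    logger.info("locally owned keys on this node = {}", ruleMap.localKeySet().size());
    Thread.sleep(1000);
}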
Does anyone know if Perf4J has support for the Log4j MDC? All my log statements are appended with MDC values; however, the Perf4J log statements don't show the MDC value.
Please see below: I expect MDCMappedValue to be shown at the end of the [TimingLogger] log statements as well.
18:35:48,038 INFO [LoginAction] Logging in user kermit into application - MDCMappedValue
18:35:48,749 INFO [PostAuthenticationHandler] doPostAuthenticate() started - MDCMappedValue
18:36:03,653 INFO [PostAuthenticationHandler] Profile Loaded for kermit - MDCMappedValue
18:36:08,224 INFO [TimingLogger] start[1300905347914] time[20310] tag[HTTP.Success] message[/csa/login.seam] -
18:36:09,142 INFO [TimingLogger] start[1300905368240] time[902] tag[HTTP.Success] message[/csa/home.seam] -
My test seems to produce the expected results. Notice that I use the Log4JStopWatch, not the LoggingStopWatch:
package test;
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;
import org.perf4j.StopWatch;
import org.perf4j.log4j.Log4JStopWatch;
public class Perf4jMdcTest {
private Logger _ = Logger.getLogger(Perf4jMdcTest.class);
public static void main(String[] args) {
for (int i = 0; i < 3; i++) {
new Thread() {
@Override
public void run() {
MDC.put("id", getName());
Perf4jMdcTest perf4jMdcTest = new Perf4jMdcTest();
perf4jMdcTest.test1();
perf4jMdcTest.test2();
MDC.clear();
}
}.start();
}
}
private void test1() {
_.info("test1");
StopWatch stopWatch = new Log4JStopWatch();
stopWatch.start("a");
try { Thread.sleep(300); }
catch (InterruptedException e) { }
stopWatch.stop();
}
private void test2() {
_.info("test2");
StopWatch stopWatch = new Log4JStopWatch();
stopWatch.start("b");
try { Thread.sleep(600); }
catch (InterruptedException e) { }
stopWatch.stop();
}
}
My log4j.properties is as follows:
log4j.rootLogger=debug, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%d [%-5p] MDC:%X{id} - %m%n
And the output is:
2012-03-26 20:37:43,049 [INFO ] MDC:Thread-1 - test1
2012-03-26 20:37:43,050 [INFO ] MDC:Thread-3 - test1
2012-03-26 20:37:43,049 [INFO ] MDC:Thread-2 - test1
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-2 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-2 - test2
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-1 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,354 [INFO ] MDC:Thread-1 - test2
2012-03-26 20:37:43,353 [INFO ] MDC:Thread-3 - start[1332787063053] time[300] tag[a]
2012-03-26 20:37:43,354 [INFO ] MDC:Thread-3 - test2
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-2 - start[1332787063354] time[600] tag[b]
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-1 - start[1332787063354] time[601] tag[b]
2012-03-26 20:37:43,955 [INFO ] MDC:Thread-3 - start[1332787063354] time[601] tag[b]