How to create second, separate log4j2 logger? - log4j
I am trying to create a completely separate second Context/Configuration/Logger - not a logger within an existing config/context.
Log messages are going to STDOUT.
Current code
ConfigurationBuilder<BuiltConfiguration> _configurationBuilder = ConfigurationBuilderFactory.newConfigurationBuilder();
_configurationBuilder.setConfigurationName("SMDR_DEBUG_" + pName);
LoggerContext _loggerContext = new LoggerContext("SMDR_DEBUG_" + pName);
_configurationBuilder.setLoggerContext(_loggerContext);
_configurationBuilder.setStatusLevel(Level.TRACE);
// Create the appender
AppenderComponentBuilder log4jFileAppenderBuilder = _configurationBuilder
        .newAppender(pName + "_SmdrDailyRollingFileAppender", "RollingFile");
log4jFileAppenderBuilder.addAttribute("filename", pLogFilename);
log4jFileAppenderBuilder.addAttribute("filePattern", pLogFilenamePattern);
// Setup roll-over
ComponentBuilder triggeringPolicy = _configurationBuilder.newComponent("Policies")
        .addComponent(_configurationBuilder.newComponent("TimeBasedTriggeringPolicy")
                .addAttribute("interval", "1"));
log4jFileAppenderBuilder.addComponent(triggeringPolicy);
// Configure the PatternLayout
LayoutComponentBuilder layoutComponentBuilder = _configurationBuilder.newLayout("PatternLayout")
        .addAttribute("pattern", DEBUG_PATTERN_LAYOUT_STRING);
log4jFileAppenderBuilder.add(layoutComponentBuilder);
// Add it back into configuration
_configurationBuilder.add(log4jFileAppenderBuilder);
// https://logging.apache.org/log4j/2.x/manual/customconfig.html
RootLoggerComponentBuilder loggerBuilder = _configurationBuilder.newRootLogger(Level.DEBUG);
loggerBuilder.add(_configurationBuilder.newAppenderRef(pName + "_SmdrDailyRollingFileAppender"));
loggerBuilder.addAttribute("additivity", false);
_configurationBuilder.add(loggerBuilder);
LoggerContext _lc = Configurator.initialize(_configurationBuilder.build());
System.out.println("***** SRJ SRJ SMDR context from initialize is " + _lc);
Logger _g = _loggerContext.getRootLogger();
System.out.println("***** SRJ SRJ SMDR rootlogger from context is " + _g);
_g.error("***** SRJ SRJ ROOT LOGGER IN SMDR_DEBUG.txt");
Logger _gg = _loggerContext.getLogger(pName);
System.out.println("***** SRJ SRJ SMDR logger "+pName+" from context is " + _gg);
_gg.error("***** SRJ SRJ "+pName+" LOGGER IN SMDR_DEBUG.txt");
The .error() calls above go to STDOUT. Note that I have tried using reconfigure() instead of initialize(), but that messes up my original configuration.
The loggers look wrong when I print them: the name and the context appear correct, but they are at ERROR level, and the output goes to stdout instead of to the appender.
***** SRJ SRJ SMDR rootlogger from context is :ERROR in SMDR_DEBUG_Global
16:23:59.989 [main] ERROR - ***** SRJ SRJ ROOT LOGGER IN SMDR_DEBUG.txt <-- should be in log file
***** SRJ SRJ SMDR logger Global from context is Global:ERROR in SMDR_DEBUG_Global
16:23:59.990 [main] ERROR Global - ***** SRJ SRJ Global LOGGER IN SMDR_DEBUG.txt <-- should be in log file
XML generated from builder:
<?xml version="1.0" ?>
<Configuration name="SMDR_DEBUG_Global" status="TRACE">
  <Appenders>
    <RollingFile name="Global_SmdrDailyRollingFileAppender" filename="ps/debug/SMDR_DEBUG.txt"
                 filePattern="ps/debug/SMDR_DEBUG_%d{yyyyMMdd}.txt.gz">
      <Policies>
        <TimeBasedTriggeringPolicy interval="1"/>
      </Policies>
      <PatternLayout pattern="%d{MM.DD.yy-HH:mm:ss} %m%n"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="DEBUG" additivity="false">
      <AppenderRef ref="Global_SmdrDailyRollingFileAppender"/>
    </Root>
  </Loggers>
</Configuration>
Trace from builder:
2022-05-19 16:23:59,921 main DEBUG PluginManager 'Converter' found 45 plugins
2022-05-19 16:23:59,922 main DEBUG Starting OutputStreamManager SYSTEM_OUT.false.false-3
2022-05-19 16:23:59,940 main INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
2022-05-19 16:23:59,941 main DEBUG Apache Log4j Core 2.17.1 initializing configuration org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration#3a3e78f
2022-05-19 16:23:59,942 main DEBUG Installed 1 script engine
2022-05-19 16:23:59,963 Thread Context Data Task DEBUG Initializing Thread Context Data Service Providers
2022-05-19 16:23:59,964 Thread Context Data Task DEBUG Thread Context Data Service Provider initialization complete
2022-05-19 16:23:59,969 main DEBUG Oracle Nashorn version: 1.8.0_252, language: ECMAScript, threading: Not Thread Safe, compile: true, names: [nashorn, Nashorn, js, JS, JavaScript, javascript, ECMAScript, ecmascript], factory class: jdk.nashorn.api.scripting.NashornScriptEngineFactory
2022-05-19 16:23:59,969 main DEBUG PluginManager 'Core' found 127 plugins
2022-05-19 16:23:59,969 main DEBUG PluginManager 'Level' found 0 plugins
2022-05-19 16:23:59,970 main DEBUG PluginManager 'Lookup' found 16 plugins
2022-05-19 16:23:59,970 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2022-05-19 16:23:59,971 main DEBUG createAppenderRef(ref="Global_SmdrDailyRollingFileAppender", level="null", Filter=null)
2022-05-19 16:23:59,971 main DEBUG Building Plugin[name=root, class=org.apache.logging.log4j.core.config.LoggerConfig$RootLogger].
2022-05-19 16:23:59,972 main DEBUG createLogger(additivity="false", level="DEBUG", includeLocation="null", ={Global_SmdrDailyRollingFileAppender}, ={}, Configuration(SMDR_DEBUG_Global), Filter=null)
2022-05-19 16:23:59,972 main DEBUG Building Plugin[name=loggers, class=org.apache.logging.log4j.core.config.LoggersPlugin].
2022-05-19 16:23:59,973 main DEBUG createLoggers(={root})
2022-05-19 16:23:59,973 main DEBUG Building Plugin[name=TimeBasedTriggeringPolicy, class=org.apache.logging.log4j.core.appender.rolling.TimeBasedTriggeringPolicy].
2022-05-19 16:23:59,975 main DEBUG TimeBasedTriggeringPolicy$Builder(interval="1", modulate="null", maxRandomDelay="null")
2022-05-19 16:23:59,975 main DEBUG Building Plugin[name=Policies, class=org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy].
2022-05-19 16:23:59,975 main DEBUG createPolicy(={TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=false)})
2022-05-19 16:23:59,975 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2022-05-19 16:23:59,976 main DEBUG PatternLayout$Builder(pattern="%d{MM.DD.yy-HH:mm:ss} %m%n", PatternSelector=null, Configuration(SMDR_DEBUG_Global), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2022-05-19 16:23:59,976 main DEBUG PluginManager 'Converter' found 45 plugins
2022-05-19 16:23:59,982 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.RollingFileAppender].
2022-05-19 16:23:59,983 main DEBUG RollingFileAppender$Builder(fileName="ps/debug/SMDR_DEBUG.txt", filePattern="ps/debug/SMDR_DEBUG_%d{yyyyMMdd}.txt.gz", append="null", locking="null", Policies(CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=false)])), Strategy=null, advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{MM.DD.yy-HH:mm:ss} %m%n), name="Global_SmdrDailyRollingFileAppender", Configuration(SMDR_DEBUG_Global), Filter=null, ={})
2022-05-19 16:23:59,984 main TRACE New file 'ps/debug/SMDR_DEBUG.txt' created = true
2022-05-19 16:23:59,984 main DEBUG Returning file creation time for /opt/SecureLogix/ETM/ps/debug/SMDR_DEBUG.txt
2022-05-19 16:23:59,984 main DEBUG Starting RollingFileManager ps/debug/SMDR_DEBUG.txt
2022-05-19 16:23:59,985 main DEBUG PluginManager 'FileConverter' found 2 plugins
2022-05-19 16:23:59,985 main DEBUG Setting prev file time to 2022-05-19T16:23:59.000+0100
2022-05-19 16:23:59,985 main DEBUG Initializing triggering policy CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=false)])
2022-05-19 16:23:59,986 main DEBUG Initializing triggering policy TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=false)
2022-05-19 16:23:59,987 main TRACE PatternProcessor.getNextTime returning 2022/05/20-00:00:00.000, nextFileTime=2022/05/19-00:00:00.000, prevFileTime=1970/01/01-01:00:00.000, current=2022/05/19-16:23:59.986, freq=DAILY
2022-05-19 16:23:59,988 main TRACE PatternProcessor.getNextTime returning 2022/05/20-00:00:00.000, nextFileTime=2022/05/19-00:00:00.000, prevFileTime=2022/05/19-00:00:00.000, current=2022/05/19-16:23:59.988, freq=DAILY
2022-05-19 16:23:59,988 main DEBUG Building Plugin[name=appenders, class=org.apache.logging.log4j.core.config.AppendersPlugin].
2022-05-19 16:23:59,988 main DEBUG createAppenders(={Global_SmdrDailyRollingFileAppender})
2022-05-19 16:23:59,989 main DEBUG Configuration org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration#3a3e78f initialized
Your problem comes from this line:
LoggerContext _lc = Configurator.initialize(_configurationBuilder.build());
The Configurator.initialize method retrieves the logger context associated with the calling code (see ContextSelector) and applies the given configuration to it. That is not the same context as _loggerContext, which remains uninitialized and falls back to the default configuration (ERROR messages on the console).
To configure _loggerContext use:
_loggerContext.setConfiguration(_configurationBuilder.build());
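Putting it together - a minimal, self-contained sketch of the fix. The appender name, pattern, class name and file path here are illustrative, not the asker's exact values:

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;

public class SeparateContextDemo {

    static LoggerContext buildSeparateContext(String name, String logFile) {
        ConfigurationBuilder<BuiltConfiguration> builder =
                ConfigurationBuilderFactory.newConfigurationBuilder();
        builder.setConfigurationName(name);

        // A single file appender attached to the root logger of this configuration.
        builder.add(builder.newAppender("LogFile", "File")
                .addAttribute("fileName", logFile)
                .add(builder.newLayout("PatternLayout")
                        .addAttribute("pattern", "%d %m%n")));
        builder.add(builder.newRootLogger(Level.DEBUG)
                .add(builder.newAppenderRef("LogFile")));

        LoggerContext context = new LoggerContext(name);
        // Key point: configure OUR context directly instead of letting
        // Configurator.initialize() select (and configure) the default one.
        context.setConfiguration(builder.build());
        return context;
    }

    public static void main(String[] args) {
        LoggerContext ctx = buildSeparateContext("SMDR_DEBUG_Global", "target/smdr-debug.log");
        Logger root = ctx.getRootLogger();
        root.error("this line should land in the file appender, not on stdout");
        ctx.stop();
    }
}
```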
Related
How to consume 2 Azure Event Hubs in Spring Cloud Stream
I want to consume messages from the following 2 connection strings:

Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request
Endpoint=sb://region2.servicebus.windows.net/;SharedAccessKeyName=def;SharedAccessKey=456;EntityPath=my-request

It is very simple with the Java API:

EventHubConsumerAsyncClient client = new EventHubClientBuilder()
        .connectionString("Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request")
        .buildAsyncConsumerClient();

However, how do I make this work in a YAML file using Spring Cloud Stream (equivalent to the Java code above)? I have tried all the tutorials found online and none of them works.

spring:
  cloud:
    stream:
      function:
        definition: consumeRegion1;consumeRegion2
      bindings:
        consumeRegion1-in-0:
          destination: my-request
          binder: eventhub1
        consumeRegion2-in-0:
          destination: my-request
          binder: eventhub2
      binders:
        eventhub1:
          type: eventhub
          default-candidate: false
          environment:
            spring:
              cloud:
                azure:
                  eventhub:
                    connection-string: Endpoint=sb://region1.servicebus.windows.net/;SharedAccessKeyName=abc;SharedAccessKey=123;EntityPath=my-request
        eventhub2:
          type: eventhub
          default-candidate: false
          environment:
            spring:
              cloud:
                azure:
                  eventhub:
                    connection-string: Endpoint=sb://region2.servicebus.windows.net/;SharedAccessKeyName=def;SharedAccessKey=456;EntityPath=my-request

@Bean
public Consumer<Message<String>> consumeRegion1() {
    return message -> System.out.printf(message.getPayload());
}

@Bean
public Consumer<Message<String>> consumeRegion2() {
    return message -> System.out.printf(message.getPayload());
}

<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>azure-spring-cloud-stream-binder-eventhubs</artifactId>
    <version>2.5.0</version>
</dependency>

Error log:

2021-10-14 21:12:26.760  INFO 1 --- [main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.integration.config.IntegrationManagementConfiguration' of type [org.springframework.integration.config.IntegrationManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:26.882  INFO 1 --- [main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationChannelResolver' of type [org.springframework.integration.support.channel.BeanFactoryChannelResolver] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:26.884  INFO 1 --- [main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationDisposableAutoCreatedBeans' of type [org.springframework.integration.config.annotation.Disposables] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-10-14 21:12:29.587  WARN 1 --- [main] a.s.c.a.e.AzureEventHubAutoConfiguration : Can't construct the EventHubConnectionStringProvider, namespace: null, connectionString: null
2021-10-14 21:12:29.611  INFO 1 --- [main] a.s.c.a.e.AzureEventHubAutoConfiguration : No event hub connection string provided.
2021-10-14 21:12:30.290  INFO 1 --- [main] c.a.s.i.eventhub.impl.EventHubTemplate : Started EventHubTemplate with properties: {checkpointConfig=CheckpointConfig{checkpointMode=RECORD, checkpointCount=0, checkpointInterval=null}, startPosition=LATEST}
2021-10-14 21:12:32.934  INFO 1 --- [main] c.f.c.c.BeanFactoryAwareFunctionRegistry : Can't determine default function definition. Please use 'spring.cloud.function.definition' property to explicitly define it.
As the log says, use the spring.cloud.function.definition property. Refer to the Spring Cloud Stream docs.
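For reference, this is the shape of the property the log message names (function names taken from the question's own configuration). Note that it lives under spring.cloud.function, not under spring.cloud.stream.function as in the question's YAML:

```yaml
spring:
  cloud:
    function:
      definition: consumeRegion1;consumeRegion2
```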
Metrics with Prometheus issue using JHipster 6.0.1 (The elements [jhipster.metrics.prometheus.enabled] were left unbound.)
I'm getting an error running my JHipster application with Prometheus configuration for metrics. I use the configuration from the official website: https://www.jhipster.tech/monitoring/

In my application-dev.yml I have:

metrics:
  prometheus:
    enabled: true

And my class for auth is:

@Configuration
@Order(1)
@ConditionalOnProperty(prefix = "jhipster", name = "metrics.prometheus.enabled")
public class BasicAuthConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/management/prometheus/**")
            .authorizeRequests()
            .anyRequest().hasAuthority(AuthoritiesConstants.ADMIN)
            .and()
            .httpBasic().realmName("jhipster")
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and().csrf().disable();
    }
}

2019-06-25 12:22:52.693  INFO 13260 --- [restartedMain] com.ex.App : The following profiles are active: dev,swagger
2019-06-25 12:22:55.170  WARN 13260 --- [restartedMain] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'undertowServletWebServerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/ServletWebServerFactoryConfiguration$EmbeddedUndertow.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webConfigurer' defined in file [/home/eclipse-workspace/back_docker/target/classes/com/ex/config/WebConfigurer.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'io.github.jhipster.config.JHipsterProperties': Could not bind properties to 'JHipsterProperties' : prefix=jhipster, ignoreInvalidFields=false, ignoreUnknownFields=false; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'jhipster' to io.github.jhipster.config.JHipsterProperties
2019-06-25 12:22:55.188 ERROR 13260 --- [restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :

***************************
APPLICATION FAILED TO START
***************************

Description:

Binding to target [Bindable@7585af55 type = io.github.jhipster.config.JHipsterProperties, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(ignoreInvalidFields=false, ignoreUnknownFields=false, value=jhipster, prefix=jhipster)]] failed:

    Property: jhipster.metrics.prometheus.enabled
    Value: true
    Origin: class path resource [config/application-dev.yml]:128:22
    Reason: The elements [jhipster.metrics.prometheus.enabled] were left unbound.

Action:

Update your application's configuration

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.679 s
[INFO] Finished at: 2019-06-25T12:22:55+02:00
[INFO] ------------------------------------------------------------------------
I changed my JHipster project from microservice application to microservice gateway and it solved this issue.
Log4j2: How to define multiple loggers
I have defined multiple loggers as seen below (com.xyz and org.xyz). Log4j2 ignores the first logger definition and loads only the second one. In the example, org.xyz is not loaded.

{
  "configuration": {
    "name": "Default",
    "appenders": {
      "Console": {
        "name": "Console-Appender",
        "target": "SYSTEM_OUT",
        "PatternLayout": {"pattern": "[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n"}
      }
    },
    "loggers": {
      "logger": {
        "name": "org.xyz",
        "level": "info",
        "appender-ref": [{"ref": "Console-Appender", "level": "info"}]
      },
      "logger": {
        "name": "com.xyz",
        "level": "debug",
        "appender-ref": [{"ref": "Console-Appender", "level": "debug"}]
      },
      "root": {
        "level": "warn",
        "appender-ref": {"ref": "Console-Appender", "level": "warn"}
      }
    }
  }
}

Find below the log4j2 debug messages:

DEBUG StatusLogger Processing node for object loggers
DEBUG StatusLogger Processing node for object logger
DEBUG StatusLogger Node name is of type STRING
DEBUG StatusLogger Node level is of type STRING
DEBUG StatusLogger Node additivity is of type STRING
DEBUG StatusLogger Processing node for array appender-ref
DEBUG StatusLogger Processing appender-ref[0]
DEBUG StatusLogger Returning logger with parent loggers of type logger:class org.apache.logging.log4j.core.config.LoggerConfig
DEBUG StatusLogger Processing node for object root
DEBUG StatusLogger Node level is of type STRING
DEBUG StatusLogger Processing node for object appender-ref
DEBUG StatusLogger Node ref is of type STRING
DEBUG StatusLogger Node level is of type STRING
DEBUG StatusLogger Returning appender-ref with parent root of type appender-ref:class org.apache.logging.log4j.core.config.AppenderRef
DEBUG StatusLogger Returning root with parent loggers of type root:class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger
DEBUG StatusLogger Returning loggers with parent root of type loggers:class org.apache.logging.log4j.core.config.LoggersPlugin
DEBUG StatusLogger Completed parsing configuration
DEBUG StatusLogger Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
DEBUG StatusLogger createLogger(additivity="false", level="DEBUG", name="com.xyz", includeLocation="null", ={Console-Appender}, ={}, Configuration(Default), Filter=null)
DEBUG StatusLogger Building Plugin[name=appender-ref, class=org.apache.logging.log4j.core.config.AppenderRef].
DEBUG StatusLogger createAppenderRef(ref="Console-Appender", level="WARN", Filter=null)
DEBUG StatusLogger Building Plugin[name=root, class=org.apache.logging.log4j.core.config.LoggerConfig$RootLogger].
DEBUG StatusLogger createLogger(additivity="null", level="WARN", includeLocation="null", ={Console-Appender}, ={}, Configuration(Default), Filter=null)
DEBUG StatusLogger Building Plugin[name=loggers, class=org.apache.logging.log4j.core.config.LoggersPlugin].
DEBUG StatusLogger createLoggers(={com.xyz, root})

Is my configuration correct?
To define multiple loggers in a log4j2 JSON configuration file, declare "logger" as an array. (A JSON object cannot hold duplicate keys, so in your version the second "logger" entry simply replaces the first.) With a logger array, your configuration file becomes:

{
  "configuration": {
    "name": "Default",
    "appenders": {
      "Console": {
        "name": "Console-Appender",
        "target": "SYSTEM_OUT",
        "PatternLayout": {"pattern": "[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n"}
      }
    },
    "loggers": {
      "logger": [
        {
          "name": "org.xyz",
          "level": "info",
          "appender-ref": [{"ref": "Console-Appender", "level": "info"}]
        },
        {
          "name": "com.xyz",
          "level": "debug",
          "appender-ref": [{"ref": "Console-Appender", "level": "debug"}]
        }
      ],
      "root": {
        "level": "warn",
        "appender-ref": {"ref": "Console-Appender", "level": "warn"}
      }
    }
  }
}
Suppress OutputRedirector text from Apache Spark log
In my Spark application I am logging like this:

Logger log = spark.log();
log.info("**************************************************");
log.info("Pi is roughly " + 4.0 * count / n);
log.info("**************************************************");

However, my logs currently look like this:

Apr 24, 2017 10:54:38 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: **************************************************
Apr 24, 2017 10:54:38 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: Pi is roughly 3.138936
Apr 24, 2017 10:54:38 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: **************************************************

I would like to clean this up so it looks like this:

INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: **************************************************
INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: Pi is roughly 3.138936
INFO: [CJC]17/04/24 22:54:38 INFO SparkSession: **************************************************

In this case the log4j.properties file doesn't seem to help me, as it seems all logging goes through the OutputRedirector class. Is there a way around this?
tl;dr - To suppress the header "Apr 24, 2017 10:54:38 PM org.apache.spark.launcher.OutputRedirector redirect", add this to your code:

System.setProperty("java.util.logging.SimpleFormatter.format", "%5$s%6$s%n");

Running Spark in client mode against a standalone Spark cluster uses 2 logging frameworks:

- java.util.logging, used by org.apache.spark.launcher.SparkLauncher
- the embedded driver's logging framework, bound to slf4j. In my case it was log4j.

To control the format of the embedded driver's logging, use the following system properties as mentioned in an earlier answer:

spark.driver.extraJavaOptions=-Dlog4j.configuration=file:<path>/log4j.xml
spark.executor.extraJavaOptions=-Dlog4j.configuration=file:<path>/log4j.xml

To suppress the "Apr 24, 2017 10:54:38 PM org.apache.spark.launcher.OutputRedirector redirect" header, redirect the embedded driver logs to a logger:

sparkLauncher.redirectToLog(createLogger("spark-pi"));

private java.util.logging.Logger createLogger(String appName) throws IOException {
    final java.util.logging.Logger logger = getRootLogger();
    final FileHandler handler = new FileHandler("./" + appName + "-%u-%g.log", 10_000_000, 5, true);
    handler.setFormatter(new SimpleFormatter());
    logger.addHandler(handler);
    logger.setLevel(Level.INFO);
    return logger;
}

private java.util.logging.Logger getRootLogger() {
    final java.util.logging.Logger logger =
            java.util.logging.Logger.getLogger(java.util.logging.Logger.GLOBAL_LOGGER_NAME);
    Arrays.stream(logger.getHandlers()).forEach(logger::removeHandler);
    // Without this the logging will go to the console and to a file.
    logger.setUseParentHandlers(false);
    return logger;
}

Finally:

System.setProperty("java.util.logging.SimpleFormatter.format", "%5$s%6$s%n");

This changes SimpleFormatter's default logging pattern. See the JDK docs.
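The SimpleFormatter property trick can be demonstrated with plain java.util.logging, no Spark required. The sketch below (class and logger names are illustrative) captures a log record through a SimpleFormatter and shows that the "%5$s%6$s%n" pattern strips the date/class/method header, leaving only the message:

```java
import java.io.ByteArrayOutputStream;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class FormatterDemo {

    // Logs one message through a SimpleFormatter and returns exactly what it wrote.
    static String captureFormatted(String message) {
        // Must be set before the SimpleFormatter is constructed;
        // %5$s is the message, %6$s the throwable (empty if none).
        System.setProperty("java.util.logging.SimpleFormatter.format", "%5$s%6$s%n");

        Logger logger = Logger.getLogger("FormatterDemo");
        logger.setUseParentHandlers(false); // don't also echo to the console handler

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        StreamHandler handler = new StreamHandler(out, new SimpleFormatter());
        logger.addHandler(handler);

        logger.info(message);
        handler.flush();
        logger.removeHandler(handler);
        return out.toString();
    }

    public static void main(String[] args) {
        // The two-line "date + class/method" header is gone; only the message remains.
        System.out.print(captureFormatted("Pi is roughly 3.138936"));
    }
}
```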
File Tail Inbound Channel Adapter stderr and stdout
I am trying to tail a file using Spring Integration, and it is working with the code below, but I have two questions.

@Configuration
public class RootConfiguration {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        pollerMetadata.setTrigger(new PeriodicTrigger(10));
        return pollerMetadata;
    }

    @Bean
    public MessageChannel input() {
        return new QueueChannel(50);
    }

    @Bean
    public FileTailInboundChannelAdapterFactoryBean tailInboundChannelAdapterParser() {
        FileTailInboundChannelAdapterFactoryBean x = new FileTailInboundChannelAdapterFactoryBean();
        x.setAutoStartup(true);
        x.setOutputChannel(input());
        x.setTaskExecutor(taskExecutor());
        x.setNativeOptions("-F -n +0");
        x.setFile(new File("/home/shahbour/Desktop/file.txt"));
        return x;
    }

    @Bean
    @ServiceActivator(inputChannel = "input")
    public LoggingHandler loggingHandler() {
        return new LoggingHandler("info");
    }

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(4);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}

Per the log below, I have 4 threads used for tailing the file. Do I need all of them, or can I disable some? Why is there a thread for "Monitoring process java.lang.UNIXProcess@b37e761", one for "Reading stderr" and one for "Reading stdout"? I am asking because I am going to run the program on a VoIP switch and I want to use the minimal resources possible.

2016-12-10 13:22:55.666  INFO 14862 --- [ taskExecutor-1] t.OSDelegatingFileTailingMessageProducer : Starting tail process
2016-12-10 13:22:55.665  INFO 14862 --- [           main] t.OSDelegatingFileTailingMessageProducer : started tailInboundChannelAdapterParser
2016-12-10 13:22:55.682  INFO 14862 --- [           main] o.s.i.endpoint.PollingConsumer           : started rootConfiguration.loggingHandler.serviceActivator
2016-12-10 13:22:55.682  INFO 14862 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 2147483647
2016-12-10 13:22:55.701  INFO 14862 --- [           main] c.t.SonusbrokerApplication               : Started SonusbrokerApplication in 3.84 seconds (JVM running for 4.687)
2016-12-10 13:22:55.703 DEBUG 14862 --- [ taskExecutor-2] t.OSDelegatingFileTailingMessageProducer : Monitoring process java.lang.UNIXProcess@b37e761
2016-12-10 13:22:55.711 DEBUG 14862 --- [ taskExecutor-3] t.OSDelegatingFileTailingMessageProducer : Reading stderr
2016-12-10 13:22:55.711 DEBUG 14862 --- [ taskExecutor-4] t.OSDelegatingFileTailingMessageProducer : Reading stdout

My second question: is it possible to start reading the file from the beginning and then continue to tail? I was thinking of using native options for this: -n 1000.

Note: the real code will monitor a folder for new files as they are created and then start the tail process.
The process monitor is needed to waitFor() the process - it doesn't use any resources except a bit of memory. The stdout reader is needed to actually handle the data produced by the tail command. The starting thread (in your case taskExecutor-1) exits after it has done its work of starting the other threads. There is not currently an option to disable the stderr reader, but it would be easy for us to add one so there are only 2 threads at runtime. Feel free to open a JIRA 'improvement' issue and, of course, contributions are welcome.