log4j - DailyRollingFileAppender does not roll the file hourly

I have the following simple test class that uses DailyRollingFileAppender to roll the log file every hour. The problem I am facing is that it doesn't roll over to a new log file every hour, even though I have set the date pattern to '.'yyyy-MM-dd-HH. Any idea what I did wrong in the code?
import org.apache.log4j.DailyRollingFileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class Test {
    static Logger logger = Logger.getLogger(Test.class);

    public static void main(String[] args) throws Exception {
        String pattern = "%-20d{dd MMM yyyy HH:mm:ss} [%-5p] - %m%n";
        PatternLayout patternLayout = new PatternLayout(pattern);
        // Create the appender with an hourly date pattern.
        DailyRollingFileAppender myAppender =
                new DailyRollingFileAppender(patternLayout, "TestOrig.log", "'.'yyyy-MM-dd-HH");
        // Add the appender and set the level.
        logger.addAppender(myAppender);
        logger.setLevel(Level.DEBUG);
        // Write some messages.
        logger.debug("Successful");
        logger.info("Failed");
        logger.warn("Failed");
        logger.error("Successful");
        logger.fatal("Failed");
        while (true) {
            Thread.sleep(1000);
        }
    }
}
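As an aside, log4j interprets the DatePattern with SimpleDateFormat and appends the formatted period to the base file name when it rolls, so you can preview what the rolled file names will look like. A small sketch (the class name, file name, and timestamp are illustrative, not from the question):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class RollSuffix {
    // The suffix log4j appends to the base file name when the period rolls over.
    static String rolledName(String baseName, String datePattern, Calendar when) {
        SimpleDateFormat sdf = new SimpleDateFormat(datePattern);
        sdf.setTimeZone(when.getTimeZone());
        return baseName + sdf.format(when.getTime());
    }

    public static void main(String[] args) {
        Calendar c = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        c.set(2024, Calendar.JANUARY, 5, 13, 0, 0);
        // Prints TestOrig.log.2024-01-05-13 for the hourly pattern above.
        System.out.println(rolledName("TestOrig.log", "'.'yyyy-MM-dd-HH", c));
    }
}
```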

Use @Singleton and @Schedule to create an EJB cron-like schedule with the timer service.
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import org.apache.log4j.Logger;

@Singleton
public class Cron {
    static Logger logger = Logger.getLogger(Cron.class);

    @Schedule(second = "0", minute = "0", hour = "0", dayOfWeek = "*", persistent = false)
    public void rollLogs() {
        logger.info("midnight");
    }
}

I don't see any error here. I could see it creating files when I tried with a per-minute pattern:
DailyRollingFileAppender myAppender = new DailyRollingFileAppender(patternLayout, "TestOrig.log", "'.'yyyy-MM-dd-HH-mm");
Did you see any error on the console?
One possible cause: you are running the same program multiple times without ending the previously started instance, and that results in a file-access conflict.

Mike, you are correct in your comment above. You will not get a new file unless there is logging activity during that period. If you need to force the issue, you'll have to start a thread with a Runnable that posts a line to the log after the start of each new hour.
The goal is to make one post to your log every 59.5 minutes, starting from minute 1.
Basic knowledge of how to use Runnable and Thread is required for this solution. I am assuming you are running a standalone application, not a managed server environment.
Create a class that implements Runnable.
Override the run() method with a while loop on a boolean variable (isAlive) that your app sets to false when it shuts down.
Inside the loop, call the info("Logger Text") method of a static Logger logger = Logger.getLogger(YourClassName.class); with a loop wait time of 60 minutes.
Pass the Runnable into a new Thread() object at application start-up.
Make one info() post to your log at startup.
Start the thread object when you start your app.
The run() method of the Runnable can be:
public void run() {
    // isAlive is a shared (ideally static volatile) boolean you declared
    // earlier as true; your app should set it to false when it decides to exit.
    while (isAlive) {
        try {
            logger.info("Rollover Log Text");
            Thread.sleep(1000 * 60 * 60); // 60 minutes
        } catch (InterruptedException ignore) {
            // interrupted on shutdown; the loop re-checks isAlive
        }
    }
}
Remember to set isAlive to true before you start your Thread, set it to false on shutdown or on an error/exception close, and call the interrupt() method of your thread after setting it to false. This should log once an hour.
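Note that a fixed 60-minute sleep drifts relative to wall-clock hours, so the heartbeat can land just before a boundary and leave an hour without any event. One way to anchor it is to recompute the delay to just past the next hour boundary on each iteration; a minimal sketch using java.time (the class and method names here are mine, not from the answer above):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class NextHour {
    // Milliseconds from 'now' until one second past the next hour boundary,
    // so the heartbeat log line always falls inside the new hour.
    static long millisUntilNextHour(LocalDateTime now) {
        LocalDateTime next = now.truncatedTo(ChronoUnit.HOURS).plusHours(1).plusSeconds(1);
        return Duration.between(now, next).toMillis();
    }

    public static void main(String[] args) {
        // At 10:59:30, the next tick should fire at 11:00:01, i.e. in 31 seconds.
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 10, 59, 30);
        System.out.println(millisUntilNextHour(now)); // 31000
    }
}
```

In the Runnable above you would replace the fixed `Thread.sleep(1000 * 60 * 60)` with `Thread.sleep(NextHour.millisUntilNextHour(LocalDateTime.now()))`.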

Related

How to do something when a Liferay module stops

I am making a cron-job-like loop that does something on a new thread.
When the module stops, this thread keeps running, so when I deploy an updated module I'm afraid it will create a duplicate thread doing the same task.
@Component(immediate = true, service = ExportImportLifecycleListener.class)
public class StaticUtils extends Utils {
    private StaticUtils() {}

    private static class SingletonHelper {
        private static final StaticUtils INSTANCE = new StaticUtils();
    }

    public static StaticUtils getInstance() {
        return SingletonHelper.INSTANCE;
    }
}
public class Utils extends BaseExportImportLifecycleListener {
    public Utils() {
        startTask();
    }

    protected Boolean CRON_START = true;

    private void startTask() {
        new Thread(new Runnable() {
            public void run() {
                while (CRON_START) {
                    System.out.println("test naon bae lah ");
                }
            }
        }).start();
    }

    @Deactivate
    protected void deactivate() {
        CRON_START = false;
        System.out.println("cron stop lah woooooooooooooooooy");
    }
}
I'm using Liferay 7.
I have tasks populated from the database, so this thread checks whether there is a task it must execute and, if one exists, runs it.
I'm quite new to OSGi and Liferay. I've tried to use the scheduler and failed, and also the ExportImportLifecycleListener, but I don't really get it yet.
Think again: do you really need something running in the background all the time, or do you just need some asynchronous processing in the background when triggered? It might be better to start a background task as a one-off that terminates automatically.
Liferay provides an internal MessageBus that you can use to listen to events and implement background processing without the need for a custom thread.
You're in the OSGi world, so you can use @Activate, @Modified and @Deactivate (from org.osgi.service.component.annotations) or a org.osgi.framework.BundleActivator.
But in general, it's preferable not to start your own thread.
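If a custom thread really is unavoidable, two details in the question's code deserve attention: the `while (CRON_START)` loop busy-spins a CPU core, and the plain `CRON_START` field is not guaranteed to be visible to the worker thread. A hedged sketch of a stoppable worker (class and method names are mine) that sleeps between iterations and can be shut down cleanly from @Deactivate:

```java
public class StoppableWorker {
    private volatile boolean running = true; // volatile so the worker thread sees the update
    private final Thread thread = new Thread(this::loop, "cron-worker");

    private void loop() {
        while (running) {
            // Do one unit of work here (e.g. poll the task table), then wait
            // instead of busy-spinning.
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // interrupted by stop(): exit promptly
                return;
            }
        }
    }

    public void start() {                             // call from @Activate
        thread.start();
    }

    public void stop() throws InterruptedException {  // call from @Deactivate
        running = false;
        thread.interrupt(); // wake the thread if it is sleeping
        thread.join(5000);  // give it a moment to finish
    }

    public boolean isAlive() {
        return thread.isAlive();
    }
}
```

With this shape, redeploying the module calls stop() in @Deactivate, so the old thread terminates before the updated module starts a new one.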

Serilog Seq Sink not always capturing events

I'm running Seq on an Azure instance (Windows Server 2012 R2 Datacenter) and logging with Serilog from a console application running on my local workstation. I have 3 sinks configured: File, Console and Seq. I'm also running on dnxcore50 (just in case you were thinking my setup wasn't dodgy enough). All my events show up in the console and the file 100% of the time. Seq captures events in only about 1 run in 5, or fewer; that is, it will either capture all the events for a run or none of them. I am using the free single-user license to evaluate Seq, and I haven't found anything to suggest there are any limitations that would cause this behavior.
I've set up SelfLog.Out on the loggers, but that logs nothing at all, other than my test line which I added to make sure the self logging could at least write to the specified file.
public abstract class SerilogBaseLogger<T> : SerilogTarget
{
    protected SerilogBaseLogger(SerilogOptions options, string loggerType)
    {
        LoggerConfiguration = new LoggerConfiguration();
        Options = options;
        LevelSwitch.MinimumLevel = options.LogLevel.ToSerilogLevel();
        var file = File.CreateText(Path.Combine(options.LogFilePath, string.Format("SelfLog - {0}.txt", loggerType)));
        file.AutoFlush = true;
        SelfLog.Out = file;
        SelfLog.WriteLine("Testing self log.");
    }
    // ... snip ....
}
public class SerilogSeqTarget<T> : SerilogBaseLogger<T>
{
    public string Server => Options.Seq.Server;

    public SerilogSeqTarget(SerilogOptions options)
        : base(options, string.Format("SerilogSeqTarget[{0}]", typeof(T)))
    {
        LoggerConfiguration
            .WriteTo
            .Seq(Server);
        InitializeLogger();
    }

    public override string ToString()
    {
        return string.Format("SerilogSeqTarget<{0}>", typeof(T).Name);
    }
}
public class SerilogLoggerFactory<TType> : LoggerFactory<SerilogTarget>
{
    // .... snip ....
    public override IMyLoggerInterface GetLogger()
    {
        var myLogger = new SerilogDefaultLogger()
            .AddTarget(SerilogTargetFactory<TType>.GetFileTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetConsoleTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetSeqTarget(Options));
        myLogger.Info("Logger initialized with {@options} and targets: {targets}", Options, ((SerilogDefaultLogger)myLogger).Targets);
        return myLogger;
    }
}
public class SerilogTargetFactory<TType>
{
    // ... snip ...
    public static SerilogTarget GetSeqTarget(SerilogOptions options)
    {
        return !string.IsNullOrEmpty(options.Seq.Server)
            ? new SerilogSeqTarget<TType>(options)
            : null;
    }
}
Any suggestions? Is this just a side-effect of being on the bleeding edge, working with pre-release everything (although in that case I'd expect things to fail somewhat consistently)?
When targeting dnxcore50/CoreCLR, Serilog can't hook into AppDomain shutdown to guarantee that any buffered messages are always flushed (AppDomain doesn't exist in that profile :-)).
There are a couple of options:
Dispose the loggers (especially the Seq one) on shutdown:
(logger as IDisposable)?.Dispose();
Use the static Log class and call its CloseAndFlush() method on shutdown:
Log.CloseAndFlush();
The latter - using the static Log class instead of the various individual ILogger instances - is probably the quickest and easiest to get going with, but it has exactly the same effect as disposing the loggers so either approach should do it.

"Manually" rolling a log4net RollingFileAppender log file?

Rather than roll a log after a date/time or specified maximum size, I want to be able to call a method like "ResetLog" that copies my "log.txt" to "log.txt.1" and then clears log.txt.
I've tried to implement that by doing something like this with a FileAppender, rather than a RollingFileAppender:
var appenders = log4net.LogManager.GetRepository().GetAppenders();
foreach (var appender in appenders) {
var fa = appender as log4net.Appender.FileAppender;
if (fa != null) {
string logfile = fa.File;
fa.Close();
string new_file_path = CreateNextLogFile(logfile);
fa.File = new_file_path;
fa.ActivateOptions();
}
}
The file is closed and CreateNextLogFile() renames it. I then create a new log file and set the FileAppender to use it. However, I thought that ActivateOptions would go ahead and reconfigure the FileAppender with the desired settings. I've looked over the log4net documentation and don't see any other public methods that allow me to reopen the FileAppender after closing it. Can anyone recommend a way to implement the rollover? It would be nice if the RollingFileAppender had something like this, but I didn't see anything useful in its documentation, either.
If we look at the RollingFileAppender we can see that the mechanism for rolling over consists of closing the file, renaming existing files (optional) and opening it again:
// log4net.Appender.RollingFileAppender
protected void RollOverSize()
{
    base.CloseFile();
    // debug info removed
    this.RollOverRenameFiles(this.File);
    if (!this.m_staticLogFileName && this.m_countDirection >= 0)
    {
        this.m_curSizeRollBackups++;
    }
    this.SafeOpenFile(this.m_baseFileName, false);
}
Unfortunately the CloseFile/SafeOpenFile methods are protected, which means you cannot access them from the outside (not easily). So your best bet is to write an appender inheriting from RollingFileAppender and override the virtual AdjustFileBeforeAppend, which is called before any logging event is added to the appender.
There you can decide under what conditions the roll should occur, if any. One idea is to create a static event that your custom rolling appender subscribes to: when the event is triggered, the appender makes a note of it (rollBeforeNextAppend = true;), and as soon as you log the next entry the appender will roll.
public class CustomRollingAppender : RollingFileAppender
{
    public CustomRollingAppender()
    {
        MyStaticCommandCenter.RollEvent += Roll;
    }

    public void Roll()
    {
        rollBeforeNextAppend = true;
    }

    public bool rollBeforeNextAppend { get; set; }

    protected override void AdjustFileBeforeAppend()
    {
        if (rollBeforeNextAppend)
        {
            CloseFile();
            RollOverRenameFiles(File);
            SafeOpenFile(File, false);
            rollBeforeNextAppend = false;
        }
    }
}

WindowsEventLogs not logged on Azure

I have an Azure WebRole with the following code:
public override bool OnStart()
{
    setDiagnostics();
    TestClass test = new TestClass();
    return base.OnStart();
}

private void setDiagnostics()
{
    string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
    DeploymentDiagnosticManager deploymentDiagnosticManager = new DeploymentDiagnosticManager(cloudStorageAccount, RoleEnvironment.DeploymentId);
    RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = cloudStorageAccount.CreateRoleInstanceDiagnosticManager(
        RoleEnvironment.DeploymentId,
        RoleEnvironment.CurrentRoleInstance.Role.Name,
        RoleEnvironment.CurrentRoleInstance.Id);
    DiagnosticMonitorConfiguration diagConfig = roleInstanceDiagnosticManager.GetCurrentConfiguration();
    if (diagConfig == null)
        diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
    diagConfig.WindowsEventLog.DataSources.Add("Application!*");
    diagConfig.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
    roleInstanceDiagnosticManager.SetCurrentConfiguration(diagConfig);
    DiagnosticMonitor.Start(wadConnectionString, diagConfig);
}
In the constructor of my TestClass is the following code:
EventLog.WriteEntry("TestClass", "Before the try", EventLogEntryType.Information);
try
{
    EventLog.WriteEntry("TestClass", "In the try", EventLogEntryType.Information);
    Int32.Parse("abc");
}
catch (Exception ex)
{
    EventLog.WriteEntry("TestClass", ex.Message, EventLogEntryType.Error);
}
For some reason this code works well if I run it in debug mode with a breakpoint on the OnStart method and step through the code with F11. However, I do not see any EventLog entries in my WADWindowsEventLogsTable if I remove all breakpoints and just run it. So this seems like a timing issue to me... Does anyone know why my code is behaving this way?
Thanks in advance!
The problem was the EventLog.WriteEntry() method. I used "TestClass" as the EventLog source; however, I never created this source with a startup task, and due to insufficient privileges it failed to log my entries.
So the solution: create your own source with a startup task, or use trace messages.
Try moving your TestClass creation from RoleEntryPoint.OnStart() to RoleEntryPoint.Run().
public override void Run() {
    TestClass test = new TestClass();
    base.Run();
}
When it is deployed to Azure, does the role actually start up, or does it just cycle between initializing and busy?
If it does start up, the problem is to do with time. The items are transferred from the instance to Azure Storage on a timed basis:
diagConfig.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
If the instance restarts before it hits that timer, nothing gets transferred to your table.
You'll need to make sure that you keep the application running for more than 1 minute.
You'll also need to verify that the following setting is turned off (it might cause the logs to end up in a different place than you would expect; more information: http://michaelcollier.wordpress.com/2012/04/02/where-is-my-windows-azure-diagnostics-data/):

Seeking C# threading clarification

I'm new to threading; in fact I'm not even trying to multithread the Windows Forms app I'm working on, but all of my searches on this issue lead me to the topic of multithreading. When debugging in Visual Studio 2010 Express, it seems to "jump around", to use the term I've seen others use to describe the same problem. When I let it run, sometimes it runs as expected; other times it just seems to keep running and gets hung up.
In trying to hone my question, I think I need to figure out:
If the timer class calls a method on a different thread, and there is no obvious danger of unpredictable instance values or state corruption in the executing code (there are no conditional checks of instance variables, etc.), why would the method called by the timer appear to behave unpredictably? To me it seems that the code should run synchronously, and if a different thread is used for part of the process, so be it. I can't see where there is an opportunity for thread corruption.
When the program starts, it prompts for the timer to be set to run a data download process. After the procedure runs, the timer is set again to a default time at the end of the procedure. Consistently, the initial timer setting works and fires as expected, running the data download process... it's that data download method; somewhere within it things go awry. The last line of code is what sets the timer again, but I can't tell whether it gets hit while debugging (jumping around)...
I've added the relevant code below, and I stepped into every procedure in my code from the beginning; they all show current thread id 10, up to and including the timer firing off and stopping at a breakpoint at the very next line to execute, which is the data download process. The current thread at that point: 14. I've built the solution before running/debugging it, by the way. Any ideas?
public partial class frmTradingAppMain : Form
{
    private TradingAppDataRunManager drm;

    private void frmTradingAppMain_Shown(object sender, EventArgs e)
    {
        drm = new TradingAppDataRunManager();
        drm.StatusChanged += new DataRunManager.DRMStatusChangeHandler(UpdateFormData);
        drm.InitializeOrScheduleDataRun();
    }

    private void UpdateFormData()
    {
        this.Invoke(new DataRunManager.DRMStatusChangeHandler(UpdateFormDataImpl));
    }

    private void UpdateFormDataImpl()
    {
        lblDataDwnLoadManagerStatus.Text = Convert.ToString(drm.Status);
        if (drm.Status == DataRunManager.DRMStatus.Inactive)
        {
            lblNextScheduledDataDownloadDate.Text = "Date not set.";
            lblNextScheduledDataDownloadTime.Text = "Time not set.";
        }
        else
        {
            lblNextScheduledDataDownloadDate.Text = drm.DateTimeOfNextScheduledDataRun.ToShortDateString();
            lblNextScheduledDataDownloadTime.Text = drm.DateTimeOfNextScheduledDataRun.ToShortTimeString();
        }
    }
}

public abstract class DataRunManager
{
    protected DataRunTimer dataRuntimer;
    public delegate void DRMStatusChangeHandler();
    public event DRMStatusChangeHandler StatusChanged;
    public DRMStatusChangeHandler statusChanged;

    public void InitializeOrScheduleDataRun()
    {
        if (DataRunIsAvailable() && UserWouldLikeToPerformDataRun())
            RunMainDataProcedure(null);
        else
            ScheduleDataRun();
    }

    public void RunMainDataProcedure(object state)
    {
        start = DateTime.Now;
        Status = DRMStatus.Running;
        StatusChanged();
        GetDataCollections();
        foreach (DataCollection dcl in dataCollectionList)
        {
            dcl.RunDataCollection();
            dcl.WriteCollectionToDatabase();
        }
        PerformDBServerSideProcs();
        stop = DateTime.Now;
        WriteDataRunStartStopTimesToDB(start, stop);
        SetDataRunTimer(DateTimeOfNextAvailableDR());
    }

    public void ScheduleDataRun()
    {
        FrmSetTimer frmSetTimer = new FrmSetTimer(DateTimeOfNextAvailableDataRun);
        DateTime currentScheduledTimeOfNextDataRun = DateTimeOfNextScheduledDataRun;
        DRMStatus currentStatus = Status;
        try
        {
            frmSetTimer.ShowDialog();
            DateTimeOfNextScheduledDataRun = (DateTime)frmSetTimer.Tag;
            SetDataRunTimer(DateTimeOfNextScheduledDataRun);
        }
        catch
        {
            Status = currentStatus;
            DateTimeOfNextScheduledDataRun = currentScheduledTimeOfNextDataRun;
        }
    }
}

public class DataRunTimer
{
    System.Threading.Timer timer;

    public DataRunTimer() { }

    public void SetNextDataRunTime(TimerCallback timerCallback, DateTime timeToSet)
    {
        if (timer == null)
            timer = new System.Threading.Timer(timerCallback);
        TimeSpan delayTime = new TimeSpan(timeToSet.Day - DateTime.Now.Day, timeToSet.Hour - DateTime.Now.Hour,
            timeToSet.Minute - DateTime.Now.Minute, timeToSet.Second - DateTime.Now.Second);
        TimeSpan intervalTime = new TimeSpan(0, 0, 10);
        timer.Change(delayTime, intervalTime);
    }

    public void DataRunTimerCancel()
    {
        if (timer != null)
            timer.Dispose();
    }
}
