Custom Logback Appender - Prepending file header and making it rollover - layout

The functionality that I need is writing a header line at the beginning of the configured log file. The log file should, in addition, get rolled over based on a time pattern (I'm talking logback 1.0.7).
So, I'm thinking of writing an Appender - although I'm not sure whether what I actually need is a custom Layout instead.
1) Appender
Per logback's documentation, the right approach is to extend AppenderSkeleton, but then how would I combine this with the RollingFileAppender (to make the file roll over)?
On the other hand, if I extend RollingFileAppender, what method do I override to just decorate the existing functionality? How do I tell it to write that particular String only at the beginning of the file?
2) Layout
Analogously, the approach seems to be extending LayoutBase, and providing an implementation for doLayout(ILoggingEvent event).
But again, I don't know how to just decorate the behaviour - just adding a new line in the file, rather than disrupting its functionality (because I still want the rest of the logs to show up properly).
The getFileHeader() in LayoutBase looks promising, but how do I use it? Is it even intended to be overridden by custom layouts? (probably yes, since it's part of the Layout interface, but then how?)
Thank you!

Here I am answering my own question, just in case someone else comes across the same problem.
This is how I eventually did it (I don't know, however, whether it's the orthodox way):
Instead of extending AppenderSkeleton, I extended RollingFileAppender (to keep the rollover functionality), and overrode its openFile() method. In here I could manipulate the log file and write the header in it, after letting it do whatever it needed to do by default. Like this:
@Override
public void openFile(String fileName) throws IOException {
    super.openFile(fileName);
    File activeFile = new File(getFile());
    if (activeFile.exists() && activeFile.isFile() && activeFile.length() == 0) {
        // "header" is the field injected from logback.xml; writeStringToFile is Apache Commons IO
        FileUtils.writeStringToFile(activeFile, header);
    }
}
I configured the header in logback.xml, as simple as this: <header> value </header>. This injects it in the header field of my new appender.
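For reference, such a configuration could look roughly like the sketch below. The appender class name, file names and rolling policy are placeholders; only the <header> element mirrors what I described above, and logback will only inject it if the appender exposes a matching setHeader(String) setter.

<appender name="FILE" class="com.example.HeaderRollingFileAppender">
    <file>logs/app.log</file>
    <header>value</header>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>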
Seems to work without problems, but please do post if you know a better way!

Your solution has a problem: it loses the first log line of each new file. I think this is because you write the header while the file is already open by logback.
I've found another solution that does not have this problem:
@Override
public void openFile(String fileName) throws IOException {
    super.openFile(fileName);
    File activeFile = new File(getFile());
    if (activeFile.exists() && activeFile.isFile() && activeFile.length() == 0) {
        lock.lock();
        try {
            // Write the header through the appender's own output stream
            // instead of reopening the file behind logback's back.
            new PrintWriter(new OutputStreamWriter(getOutputStream(), StandardCharsets.UTF_8), true)
                    .println("your header");
        } finally {
            lock.unlock();
        }
    }
}

Related

MapUtils with Logger

I am using MapUtils.verbosePrint(System.out, "", map) to dump the contents of a map in Java. They (management) do not like us using System.out.println().
We are using log4j. They made the logger into a variable "l" so we can say something like l.debug("This is going to the logfile in debug mode").
I would like to get the output buffer(s) from l so I could pass it into verbosePrint() instead of System.out. I looked at all the methods and members of the logger and did things like getAppenders() and tried all those elements but I could not find anything that helped.
Has anyone else done this? I know the logger may write to > 1 output.
You can use Log4j IOStreams to create PrintStreams that will send everything to a logger. This is mostly useful to log debug output from legacy APIs like JDBC or Java Mail that do not have a proper logging system. I wouldn't advise it in other cases, since your messages might be merged or split into several log messages.
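For completeness, this is roughly what that looks like (a sketch; it assumes the log4j-iostreams artifact is on the classpath and that l is a Log4j 2 Logger):

// Uses org.apache.logging.log4j.io.IoBuilder and org.apache.logging.log4j.Level
PrintStream debugStream = IoBuilder.forLogger(l)
        .setLevel(Level.DEBUG)
        .buildPrintStream();
// Everything printed to this stream is emitted as DEBUG events on l
MapUtils.verbosePrint(debugStream, "", map);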
I would rather use one of these approaches:
simply log the map using Logger#debug(Object). This will lazily create an ObjectMessage (only if debug is enabled), which is usually formatted using the map's toString() method. Some layouts might format it differently (like the JSON Template Layout).
eagerly create a MapMessage or StringMapMessage:
if (l.isDebugEnabled()) {
l.debug(new MapMessage(map));
}
This gives you more formatting options. For example the layout pattern %m{JSON} will format your message as JSON.
if you are set on the format provided by MapUtils#verbosePrint, you can extend ObjectMessage and override its getFormattedMessage() and formatTo() methods.
@Override
public String getFormattedMessage() {
    final ByteArrayOutputStream os = new ByteArrayOutputStream();
    // getParameters()[0] is the map this message wraps (the object passed to the constructor)
    MapUtils.verbosePrint(new PrintStream(os), "", (Map<?, ?>) getParameters()[0]);
    return new String(os.toByteArray());
}

Azure Logic App, Can't get data from Create File Function

So I've noticed a strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem.
The problem is that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first create the files under a temporary name, let the Logic App finish writing them, and then rename them.
This solution sounded quite simple before we started actually applying it to the Logic App.
If we use the File System connector's Rename File action with the "Name" parameter from the on-prem Create File action, we get:
{
"statusCode": 404,
"message": "Resource not found"
}
We get a 404 saying the resource is not found, which complicates a lot of things. I've checked the privileges on the account, so that should not be the issue.
We have also tried listing all files in the folder, creating a foreach, and then adding a rule and the Rename File action. That works, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File works when it sits in a foreach loop and we extract the file names as a list from the root folder or a normal folder.
But why does it not work when just using the Rename File action on its own? Is this perhaps a bug in the Logic App Rename File function?
So after discussing this with Microsoft Azure support, they have confirmed that there is a bug in the "Create File" function.
It looks like all the data and information is lost during that function; the support technicians do not know why it happens, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has taken the case further so that the Azure developers can look into it, because it is not just the "Name" tag that is lost from Create a File (all the valuable outputs are actually lost).
So first we initialize a variable and then set it in two steps before we create the file:
The name is set with a temp name and a GUID (see the expression sketch after these steps).
The next step is creating the file with the temp name produced by the "Set Variable Temp FileName" step.
On the Rename File function we use the path where we store the temp file and add \"FILENAME",
and we add the "New Name" which we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use "Rename File" after creating the file with a temp name and then change it to the desired name.
But since Create a File does not send or pass on any information at all from this list, we have to initialize variables to make it work.
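For illustration, the GUID-based temp name can be built with a workflow expression along these lines (a sketch; the prefix and extension are assumptions, concat() and guid() are standard workflow expression functions):

concat('temp-', guid(), '.tmp')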
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has managed to finish creating them, this workaround worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure On Prem Gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), and thus the confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so they'd be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses the FileSystemWatcher to monitor for new or renamed files. When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the On Prem Gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted (so the consumer can't remove the file, buying time for the On Prem Gateway to perform its post-write read and report success). Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // A read lock with FileShare.ReadWrite lets the gateway keep writing,
            // but prevents the consumer from deleting the file until the stream is disposed.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}

How to get feature file name/path in cucumber step implementation java file

In cucumber framework, is there a way I can get the currently executing feature file's name or even better it's folder path in the step definition file?
My project is implemented in Java and I'm using IntelliJ IDEA. I've already tried using a @Before hook, which helps me fetch the Scenario instance, but I can't find a way to get the feature file info.
The only solution I could come up with is mentioning the feature file's name in the feature file title, and then in the @Before hook getting this title using scenario.getId().split(";")[0].
I then parse this title to fetch the feature file name and store it in a variable, which I can later use in the @After hook to pass it to the Custom Formatter that parses my feature file and saves its data in the database.
Long story short: you are not supposed to. Ask yourself what you are really trying to achieve, why you need the path in the first place. Is it because of some external file? Do you really need an external file, or can the content be sufficiently represented in your feature files? If you really need an external file, why not have it as a resource? And so on.
Not supposed to?
The reason you would want it is for traceability and explainability.
Very helpful for debugging, too.
Especially when you have more than 20 gherkin files (with up to 200 steps) and more than 20 step definition files.
I put one of these at the top of every Java step definition file:
@Before
public void printScenarioName(Scenario scenario) {
    this.scenario = scenario;
    this.featureName = CukeUtils.getFeatureName(scenario);
    String result = "@Before:\n*************Setting Feature: " + this.featureName +
            "\n*************Setting Scenario: " + this.scenario.getName();
    log.info(result);
}
where in CukeUtils I have defined:
public static String getFeatureName(Scenario scenario) {
    String featureName = "";
    System.out.println("scenario.getId(): " + scenario.getId());
    // Usually the scenario Id is a doctored version of the lines following
    // the Feature: and the Scenario: keywords.
    // E.g.: scenario.getId(): a-long-(20-minute)-non-invasive-smoke-test-that-
    // comfirms-that-i-can-login-to-area51-via-the-nasa-portal;as-a-superuser-i-
    // must-be-able-to-login-to-area51-via-the-nasa-portal-so-that-i-can-access-
    // all-the-secret-files
    String rawFeatureName = scenario.getId().split(";")[0]
            .replace("-i-", "-I-").replace("-", " ");
    featureName = featureName + rawFeatureName.substring(0, 1).toUpperCase() +
            rawFeatureName.substring(1).replace("nasa", "NASA");
    return featureName;
}
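As a side note, newer Cucumber-JVM versions expose the feature file location directly on the Scenario object, which avoids the id parsing above. A minimal sketch (based on the io.cucumber.java API; verify against the Cucumber version you are on):

@Before
public void rememberFeaturePath(Scenario scenario) {
    // Typically something like classpath:features/login.feature or a file: URI
    java.net.URI featureUri = scenario.getUri();
    log.info("Running feature: " + featureUri);
}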

Need a working example of configuring log4j RollingFileAppender via properties

I am using log4j for logging, and a property file for configuration. Currently, my log files are too big (3.5 GB is too large for a log file), so I think I need to use RollingFileAppender - but when I do so, the log file continues to grow overly large. I believe I have just misconfigured it; does anyone have a working example of configuring RollingFileAppender?
For the record, my current configuration looks like this:
log4j.appender.MAIN_LOG.File=${catalina.base}/logs/webtop.log
log4j.appender.MAIN_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.MAIN_LOG.layout=com.mycompany.util.log.Log4JSimpleLayout
log4j.appender.MAIN_LOG.DatePattern='.'yyyy-MM-dd
log4j.appender.MAIN_LOG.MaxFileSize=10MB
log4j.appender.MAIN_LOG.MaxBackupIndex=99
log4j.appender.MAIN_LOG.append=true
log4j.rootCategory=ALL, MAIN_LOG
An alternative to RollingFileAppender would also be a fine solution.
I believe I have just misconfigured it; does anyone have a working example of configuring RollingFileAppender?
This seems to work fine for me @mcherm. See below.
Are you sure that you are using the log4j.properties that you think you are? Try changing the .File to another path to see if log output goes to the new file. What version of log4j are you using? I'm running 1.2.15.
Hope this helps.
I created the following test program:
package com.j256.ormlite;
import org.apache.log4j.Logger;
public class Foo {
private static Logger logger = Logger.getLogger(Foo.class);
public static void main(String[] args) {
for (int x = 0; x < 10000000; x++) {
logger.error("goodness this shouldn't be happening to us right here!!!!");
}
}
}
My log4j.properties file holds:
log4j.appender.MAIN_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.MAIN_LOG.File=${catalina.base}/logs/webtop.log
log4j.appender.MAIN_LOG.layout=com.j256.ormlite.Log4JSimpleLayout
log4j.appender.MAIN_LOG.MaxFileSize=10MB
log4j.appender.MAIN_LOG.MaxBackupIndex=5
log4j.appender.MAIN_LOG.append=true
log4j.rootCategory=ALL, MAIN_LOG
Notice that I removed the DatePattern which wasn't valid for my RollingFileAppender. My layout is:
package com.j256.ormlite;

import org.apache.log4j.spi.LoggingEvent;

public class Log4JSimpleLayout extends org.apache.log4j.Layout {

    @Override
    public String format(LoggingEvent event) {
        return "log message = " + event.getMessage().toString() + "\n";
    }

    @Override
    public boolean ignoresThrowable() {
        return true;
    }

    public void activateOptions() {
    }
}
Running with -Dcatalina.base=/tmp/ I get files in /tmp/logs/ which go up to index #5 and are 10MB in size. If I tune the MaxFileSize or the MaxBackupIndex, it adjusts appropriately.
Your issue might be with the fact that you are specifying a DatePattern.
The DatePattern is meant to be used with the DailyRollingFileAppender to specify the date that the log file should roll.
I don't believe it can be used in conjunction with the MaxFileSize and MaxBackupIndex attributes.
Log4j lets you roll files based on file size or date but not both.
When we need log files to be rolled on a daily basis, we should be using DailyRollingFileAppender instead of RollingFileAppender.
You do not need to specify the MaxFileSize limit; the DatePattern alone is enough for rolling files based on frequency.
I have tried the below configuration in log4j.properties file for rolling log files every minute.
log4j.appender.infoAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.infoAppender.Threshold=INFO
log4j.appender.infoAppender.DatePattern='.'yyyy-MM-dd HH-mm
log4j.appender.infoAppender.File=C:/logs/info.log
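That snippet only defines the appender; to actually get output you also need a layout and to attach the appender to a logger, roughly like this (the pattern layout here is just an assumption):

log4j.appender.infoAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.infoAppender.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.rootLogger=INFO, infoAppender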
Start by setting the -Dlog4j.debug JVM parameter. That prints out a few useful lines of debug information showing which config file it's found and is using, etc. That should give you some clues to what's going wrong.
See http://logging.apache.org/log4j/1.2/manual.html

Using log4net as a logging mechanism for SSIS?

Does anyone know if it is possible to do logging in SSIS (SQL Server Integration Services) via log4net? If so, any pointers and pitfalls to be aware of? How's the deployment story?
I know the best solution to my problem is to not use SSIS. The reality is that as much as I hate this POS technology, the company I work with encourages the use of these apps instead of writing code. Meh.
So to answer my own question: it is possible. I'm not sure how our deployment story will be since this will be done in a few weeks from now.
I pretty much took the information from these sources and made it work. This one explains how to make referencing assemblies work with SSIS, click here. TL;DR version: place the assembly in the GAC and also copy the DLL to the folder of your targeted framework. In my case, C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727.
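In concrete terms, the GAC step boils down to something like the following commands (a sketch; the exact paths depend on your environment and SDK version):

gacutil /i log4net.dll
copy log4net.dll C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\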
To programmatically configure log4net I ended up using this link as reference. This is what my logger configuration code looks like for creating a file with the timestamp in its name:
using System;
using log4net;
using log4net.Config;
using log4net.Layout;
using log4net.Appender;

public class whatever
{
    private ILog logger;

    public void InitLogger()
    {
        PatternLayout layout = new PatternLayout("%date [%level] - %message%newline");
        FileAppender fileAppenderTrace = new FileAppender();
        fileAppenderTrace.Layout = layout;
        fileAppenderTrace.AppendToFile = false;

        // Insert current date and time into the file name
        String dateTimeStr = DateTime.Now.ToString("yyyyddMM_hhmm");
        fileAppenderTrace.File = string.Format("c:\\{0}{1}", dateTimeStr.Trim(), ".log");

        // Configure filter to accept log messages of any level.
        log4net.Filter.LevelMatchFilter traceFilter = new log4net.Filter.LevelMatchFilter();
        traceFilter.LevelToMatch = log4net.Core.Level.All;
        fileAppenderTrace.ClearFilters();
        fileAppenderTrace.AddFilter(traceFilter);

        fileAppenderTrace.ImmediateFlush = true;
        fileAppenderTrace.ActivateOptions();

        // Attach appender into the hierarchy
        log4net.Repository.Hierarchy.Logger root =
            ((log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository()).Root;
        root.AddAppender(fileAppenderTrace);
        root.Repository.Configured = true;

        logger = log4net.LogManager.GetLogger("root");
    }
}
Hopefully this might help someone in the future or at least serve as a reference if I ever need to do this again.
Sorry, you didn't dig deep enough. There are 5 different destinations that you can log to, 7 columns you can choose to include or not include in your logging, and between 18 and 50 different events that you can capture logging on. You appear to have chosen the default logging and dismissed it because it didn't work for you out of the box.
Check these two blogs for more information on what can be done with SSIS logging:
http://consultingblogs.emc.com/jamiethomson/archive/2005/06/11/SSIS_3A00_-Custom-Logging-Using-Event-Handlers.aspx
http://www.sqlservercentral.com/blogs/michael_coles/archive/2007/10/09/3012.aspx
