Using log4net as a logging mechanism for SSIS? - log4net

Does anyone know if it is possible to do logging in SSIS (SQL Server Integration Services) via log4net? If so, any pointers and pitfalls to be aware of? How's the deployment story?
I know the best solution to my problem is to not use SSIS. The reality is that as much as I hate this POS technology, the company I work with encourages the use of these apps instead of writing code. Meh.

So, to answer my own question: it is possible. I'm not sure how our deployment story will look yet, since deployment is still a few weeks away.
I pretty much took the information from these sources and made it work. The first explains how to get referenced assemblies working with SSIS (linked here). TL;DR: put the DLL in the GAC and also copy it into the folder of your targeted framework; in my case, C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727. To configure log4net programmatically, I used this link as a reference.
This is what my logger configuration code looks like for creating a file with a timestamp in its name:
using System;
using log4net;
using log4net.Config;
using log4net.Layout;
using log4net.Appender;

public class whatever
{
    private ILog logger;

    public void InitLogger()
    {
        PatternLayout layout = new PatternLayout("%date [%level] - %message%newline");

        FileAppender fileAppenderTrace = new FileAppender();
        fileAppenderTrace.Layout = layout;
        fileAppenderTrace.AppendToFile = false;

        // Insert the current date and time into the file name.
        string dateTimeStr = DateTime.Now.ToString("yyyyddMM_hhmm");
        fileAppenderTrace.File = string.Format("c:\\{0}{1}", dateTimeStr.Trim(), ".log");

        // Configure a filter to accept log messages of any level.
        log4net.Filter.LevelMatchFilter traceFilter = new log4net.Filter.LevelMatchFilter();
        traceFilter.LevelToMatch = log4net.Core.Level.All;
        fileAppenderTrace.ClearFilters();
        fileAppenderTrace.AddFilter(traceFilter);

        fileAppenderTrace.ImmediateFlush = true;
        fileAppenderTrace.ActivateOptions();

        // Attach the appender to the root of the logger hierarchy.
        log4net.Repository.Hierarchy.Logger root = ((log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository()).Root;
        root.AddAppender(fileAppenderTrace);
        root.Repository.Configured = true;

        logger = log4net.LogManager.GetLogger("root");
    }
}
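For what it's worth, this is roughly how it gets used from the Script Task's entry point (just a sketch; the logger name "MyPackage" is a placeholder):
// Inside the Script Task's Main() (sketch):
var bootstrap = new whatever();
bootstrap.InitLogger();

// Any logger obtained after InitLogger() inherits the appender attached to the root logger.
var log = log4net.LogManager.GetLogger("MyPackage");
log.Info("Package started");
log.Error("Something went wrong in the data flow");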
Hopefully this might help someone in the future or at least serve as a reference if I ever need to do this again.

Sorry, you didn't dig deep enough. There are 5 different destinations you can log to, 7 columns you can choose to include or exclude in your logging, and anywhere from 18 to 50 different events you can capture logging on. You appear to have chosen the default logging and dismissed it because it didn't work for you out of the box.
Check these two blogs for more information on what can be done with SSIS logging:
http://consultingblogs.emc.com/jamiethomson/archive/2005/06/11/SSIS_3A00_-Custom-Logging-Using-Event-Handlers.aspx
http://www.sqlservercentral.com/blogs/michael_coles/archive/2007/10/09/3012.aspx

Related

Azure Logic App, can't get data from the Create File function

So I've noticed some strange behavior which I would like to share, to see if anyone has run into a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem later on.
The problem is that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because that system takes the file before the Logic App can finish writing its full content.
The system receiving the files only reads .xml files, so we thought we would first name the files as tmp, let the Logic App create them, and then rename them.
This solution sounded quite simple before we actually started applying it to the Logic App.
If we take the File System connector's Rename File action and use the "Name" parameter from the on-prem Create File action, we get:
{
"statusCode": 404,
"message": "Resource not found"
}
We get a 404 message saying the resource is not found, which complicates things a lot. I've checked the privileges on the account, and that should not be the issue.
What we have also tried is listing all the files in the folder, creating a foreach, and then adding a condition plus the Rename File action. That makes it work, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File works when it's inside a foreach loop and we extract the file names from a listing of the root (or a normal) folder.
But why does it not work when just using the Rename File action on its own? Is this perhaps a bug in the Logic App's Rename File function?
So after discussing with Microsoft Azure support, they have actually confirmed that there is a bug with the "Create File" action.
It looks like all the data and information is lost by that action; the support technicians do not know why that happens, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it's not just the "name" tag that is lost from Create a File; all of its valuable outputs are actually lost.
So first we initialize a variable and then set the file name in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name produced in the "Set Variable Temp FileName" action.
In the Rename File action we use the path where the temp file is stored and append \"FILENAME",
and add the "New Name" we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use "Rename File" after creating the file with a temp name and then change it to the desired name.
But since Create a File does not send or pass any information at all from this list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has finished creating them and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure On Prem Gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), and hence the confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so it would be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses the FileSystemWatcher to monitor for new or renamed files. When it detects a match it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the On Prem Gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted (so the consumer can't remove the file, buying time for the On Prem Gateway to perform its post-write read and report success). Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // Open a read lock that still allows writers (FileShare.ReadWrite), so the
            // On Prem Gateway can finish writing, but the file cannot be deleted meanwhile.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}
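For completeness, hosting it then comes down to something like this (the path, filter and lock time shown here are placeholders; in our case they come from the configuration file mentioned above):
// In the Windows service's OnStart (values are placeholders):
var interceptor = new Interceptor(@"\\fileserver\outbound", "*.xml", lockTimeInSeconds: 30);
interceptor.Start();

// ...and in OnStop:
interceptor.Dispose();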

CodedUI - Best way to create and use a UIObject repository (that requires minimum effort when the UI changes)

I started working with CodedUI a few months ago to automate a desktop (WPF) application.
I'm just looking for the best way to create a framework for my application.
As I have seen with other automation tools, I feel the heart of any UI-based automation framework is the way its object repository is created, i.e. how well the UI objects are defined. A clean and well-defined object repository always proves very helpful when it comes to updating your tests.
I am trying to discover the best way to store my UI objects so that, in case of any UI changes in my application, I have to put in minimum effort to update my automation tests.
Also, if an object changes in the application, updating it in only one place should solve the problem.
This can be any kind of change, such as:
-> a change in just a property (this I feel would be very easy to update in the automation test; the best and easiest way, I feel, is to simply update the .uitest file (the XML file) if possible),
-> a change in hierarchy or position,
-> an entirely new object being added.
For the 2nd and 3rd kinds of change, updating scripts becomes a difficult job, especially if the UI object is referred to in many places, in many test methods or modules.
Also, I have generally seen that in test methods, variable declarations are made to create references to the UIMap objects, and those variables are then used in the test method code.
So in this case, if the UI of my application changes, I will have to update the variable declaration in each of the test methods. I want to reduce this effort to changing the declaration in only one place. Of course, I cannot have all the code inside only one test method. One way that came to my mind is this:
Can't I simply have one common place for all these variable declarations? We can give a unique and understandable name to each UI object, e.g. the declarations would look like:
UITabPage UITabPage = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage;
WpfRow UIRow = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow;
WpfText UIEquipmentTagText = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UITagCell.UIEquipmentTagText;
WpfCheckBox UIEquipmentCheckBox = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UICheckBoxCell.UICheckBox;
....
....
and use these variables wherever required. That way, even when something changes, there will be only one place where you need to update these objects.
But for this, these variables must be made static (something like the sketch below). What could be the problem with making these object variables static?
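For example, I am imagining something like this rough sketch (the class name is just a placeholder; the member chains reuse the declarations above):
public static class UIObjectRepository
{
    private static readonly UIMap Map = new UIMap();

    // Each UI object is exposed from exactly one place, so a UI change means one update here.
    public static UITabPage EquipmentTabPage
    {
        get { return Map.UISimWindow.UISelectEquipmentTabList.UITabPage; }
    }

    public static WpfCheckBox EquipmentCheckBox
    {
        get { return Map.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UICheckBoxCell.UICheckBox; }
    }
}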
Please provide your suggestions on this topic. Maybe what I am thinking is not possible or practical; I just want to choose the best approach to start with, before I go too far with the automation scripts and realize later that my approach wasn't a good one.
Thanks in Advance,
Shruti
Look into using descriptive programming instead of using the UIMaps.
Make a static class with generic helper functions to assist. Here are some examples of how to set it up.
For example:
public WinWindow parentwin(string ParentControlName)
{
    var parentwin = new WinWindow();
    parentwin.SearchProperties.Add("ControlName", ParentControlName);
    return parentwin;
}

public WinWindow childwin(string ChildWinControlName, string ParentControlName)
{
    var childwin = new WinWindow(parentwin(ParentControlName));
    childwin.SearchProperties.Add("ControlName", ChildWinControlName);
    return childwin;
}

public WinButton button(string ButtonName, string ChildWinControlName, string ParentControlName)
{
    var parent = childwin(ChildWinControlName, ParentControlName);
    var button = new WinButton(parent);
    button.SearchProperties.Add("Name", ButtonName);
    return button;
}

public void ClickButton(string ButtonName, string ChildWinControlName, string ParentControlName)
{
    var target = button(ButtonName, ChildWinControlName, ParentControlName);
    Mouse.Click(target);
}

public void ChangeFocus(WinWindow NewFocus)
{
    NewFocus.SetFocus();
}

public void ChangeFocus(WinWindow NewFocusChild, string c)
{
    // Note: the second parameter is currently unused.
    NewFocusChild.SetFocus();
}

ChangeFocus(childwin("WelcomeForm", "MainForm"));
ClickButton("&OK", "WelcomeForm", "MainForm");

How to create predictable logging locations with sbt-native-packager

I am using sbt-native-packager with the experimental Java Server archetype. I am trying to identify a conventional way to access my log files, and I'm wondering if anyone knows of a common approach here. Since I am using the Java Server archetype, I am getting a symlink /var/log/$app -> install_dir/$app/log, but it feels a little dirty and less portable to just have log4j open /var/log/$app/error.log directly.
[Update]
I ended up creating an object with run time path information:
import java.io.File
import scala.util.matching.Regex

object MakaraPaths {
  def getLogPath = new File(getJarPath, "../logs").getPath
  def getConfigPath = new File(getJarPath, "../conf").getPath

  def getJarPath = {
    val regex = new Regex("(/.+/)*")
    val jarPath = Makara.getClass.getProtectionDomain.getCodeSource.getLocation.getPath
    (regex findAllIn jarPath).mkString("")
  }
}
In my main method, I established a system property based on the new MakaraPaths object:
System.setProperty("logPath", MakaraPaths.getLogPath)
I also used this for my config file:
val config = ConfigFactory.parseFile(new File(MakaraPaths.getConfigPath, "application.conf"))
Ultimately, to load the log file, I used a System Property lookup:
<RollingFile name="fileAppender" fileName="${sys:logPath}/server.log" filePattern="${sys:logPath}/server_%d{yyMMdd}.log">
This gets me most of the way to where I needed to be. It's not completely portable, but it does technically support my use case (deploying to Ubuntu).
You could use a relative path in the log4j configuration. Just write logs to logs/filename.log.
During installation the symlink install_dir/$app/logs -> /var/log/$app will be created, and all logs will end up in /var/log/$app/filename.log.

Usage of log4net to Always Log a Value

I have a select few places in my application where I'd like to always log values. I could simply use Log.Info() and leave it at that, but I'd prefer a solution that can't be disabled by an accidental change to the level configuration. In this case, as long as log4net itself is not disabled, I want these log statements to fire.
What's the best approach?
From what I've read so far, it looks like one option is to create a custom level with a value above Emergency, but I don't know whether that's a brutally awful hack with side effects I'm not seeing or a legitimate option. I couldn't find any clear guidance in the documentation.
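For reference, here is roughly what that custom-level idea would look like; this is only a sketch (it assumes an existing ILog instance named log), and I haven't verified whether it misbehaves elsewhere:
// Sketch: a custom level with a value above the built-in Emergency level.
var alwaysLevel = new log4net.Core.Level(200000, "ALWAYS");

// Optionally register it so it can be referenced by name in configuration.
log4net.LogManager.GetRepository().LevelMap.Add(alwaysLevel);

// ILog has no method for custom levels, so log through the underlying ILogger.
log.Logger.Log(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType, alwaysLevel, "value that should always be logged", null);
Note that even a very high custom level is still suppressed if a logger's level is set to OFF, since OFF sits above every other level.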
I am not a log4net expert, but something like this might do what you want:
This code will get a named logger from the LogManager and will programmatically set its level to "ALL". If you retrieve this logger later in your code, it will always log, even if the log level is set to OFF in the config file.
To test, set the root log level to "OFF" in the config file, then use the code below:
log4net.ILog log = log4net.LogManager.GetLogger("abc");
log.Info("this won't log because root logger is OFF");
//Reset the level
log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)log.Logger;
l.Level = l.Hierarchy.LevelMap["ALL"];
//Try to log again
log.Info("this will log because we just reset abc's level to ALL");
I tested it and it does seem to work.
I found this information here and here.

Code Contracts and Auto Generated Files

When I enabled code contracts on my WPF control project I ran into a problem with an auto generated file which was created at compile time (XamlNamespace.GeneratedInternalTypeHelper). Note, the generated file is called GeneratedInternalTypeHelper.g.cs and is not the same as the GeneratedInternalTypeHelper.g.i.cs which there are several obsolete blog posts about.
I'm not exactly sure what its purpose is, but I am assuming it is important for some internal reflection to resolve XAML. The problem is that it does not have code contracts, nor is the code contract system smart enough to recognize it as an auto generated file. This leads to a bunch of errors from the static checker.
I tried searching for a solution to this problem, but it seems like nobody is developing WPF controls and using code contracts. I did come across an interesting attribute, ContractVerificationAttribute, which takes a boolean value to set whether the assembly or class is to be verified. This allows you to decorate a class as not verified. Sadly the GeneratedInternalTypeHelper is regenerated with every compile, so it is not possible to exclude just this one class. The inverse scenario is possible though, decorate the assembly as not verified and then opt in for every class.
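For reference, that opt-out/opt-in arrangement looks roughly like this (the class name is a placeholder):
// In AssemblyInfo.cs: turn static verification off for the whole assembly.
[assembly: System.Diagnostics.Contracts.ContractVerification(false)]

// Each class you do want the static checker to examine opts back in explicitly.
[System.Diagnostics.Contracts.ContractVerification(true)]
public class MyCustomControl
{
    // ...
}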
To mitigate this obvious hack, I wanted to create a test like the following that at least verifies that my own exposed classes opt in to contract verification:
[Fact]
public void AllAssemblyTypesAreDecoratedWithContractVerificationTrue()
{
    var assembly = typeof(someType).Assembly;
    var exposedTypes = assembly.GetTypes()
        .Where(t => !string.IsNullOrWhiteSpace(t.Namespace)
                    && t.Namespace.StartsWith("MyNamespace")
                    && !t.Name.StartsWith("<>"));

    // A type counts as "not contract verified" when it lacks the attribute
    // or carries it with Value == false.
    var areAnyNotContractVerified = exposedTypes.Any(t =>
    {
        var verificationAttribute = t.GetCustomAttributes(typeof(ContractVerificationAttribute), true)
                                     .OfType<ContractVerificationAttribute>();
        return !verificationAttribute.Any() || !verificationAttribute.First().Value;
    });

    Assert.False(areAnyNotContractVerified);
}
As you can see, it takes all classes in the controls assembly and finds the ones in the company namespace that are not auto-generated anonymous types (<>WeirdClassName).
(I also need to exclude Resources and settings, but I hope you get the idea).
I'm not loving the solution since there are ways of avoiding contract verification, but currently it's the best I can come up with. If anyone has a better solution, please let me know.
So you can treat this class exactly like you would treat any other "3rd party" class or library. I'm sure certain assumptions will hold at the points where you interact with this generated class, so at those interaction points decorate your own code with Contract.Assume(result != null) or similar.
var result = new GennedClass().GetSomeValue();
Contract.Assume(result != null);
What this does is translate into an assertion that is checked at run time, but it allows the static analyzer to reason about the rest of the code that you do control.
