Azure Logic App, can't get data from the Create File function - azure

So I've noticed a strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-premises solution where we pick up a file or an HTTP event request, map it to an outgoing XML schema (XSD), and then create the file on-premises.
The problem is that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would let the Logic App create the files under a temporary name and then rename them to .xml afterwards.
This solution sounded quite simple before we started actually applying it to the Logic App.
If we take the File System connector's Rename File action and use the “Name” parameter from the on-premises Create File action, we get:
{
  "statusCode": 404,
  "message": "Resource not found"
}
We get a 404 saying the resource is not found, which complicates things a lot. I've checked the privileges on the account, so that should not be the issue.
What we have also tried is listing all the files in the folder, creating a foreach, and then adding a condition and the Rename File action. That works, but with that solution the Logic App does not cope well with receiving a lot of files at once.
So Rename File does work when it is inside a foreach loop and we extract the file names from a listing of the root folder or a normal folder.
But why does it not work when we just use the Rename File action on its own? Is this perhaps a bug in the Logic App Rename File action?

After discussing this with Microsoft Azure support, they have confirmed that there is a bug in the “Create File” action.
It looks like all the data and information is lost during that action; the support technicians do not know why this happens, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it is not just the “Name” tag that is lost from Create File; all of its output values are lost.
So first we initialize a variable and then set the variable in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name set in the “Set Variable Temp FileName” step.
In the Rename File action we use the path where we store the temp file and append \”FILENAME”,
and add the “New Name” that we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use “Rename File” after creating the file with a temp name and then change it to the desired name.
But since Create File does not send or pass on any information at all from this list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the back-end system reads the files before the Logic App has finished creating them and you need a workaround, this worked well for me.
Hope it helps!

We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure on-premises data gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), hence the confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so they'd be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses the FileSystemWatcher to monitor for new or renamed files. When it detects a match it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the gateway's write action to complete before obtaining its own lock), but while our service holds its lock the file can't be deleted (so the consumer can't remove the file, buying time for the gateway to perform its post-write read and report success). Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file-watch and locking logic is below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                | NotifyFilters.LastWrite
                | NotifyFilters.FileName
                | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        // Take a read lock on the new/renamed file and hold it for the configured period
        // so the consumer cannot delete it before the gateway's post-write check completes.
        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}
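For reference, here is a minimal hosting sketch showing how an Interceptor might be created and started; the path, filter, and lock time below are illustrative placeholders, not values from our actual configuration:

using System;

class Program
{
    static void Main()
    {
        // One Interceptor per watched folder; Start() begins raising events.
        using (var interceptor = new Interceptor(@"\\fileserver\exports", "*.xml", lockTimeInSeconds: 30))
        {
            interceptor.Start();
            Console.WriteLine("Watching for new or renamed files. Press Enter to stop.");
            Console.ReadLine();
            interceptor.Stop();
        }
    }
}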


Custom Module is being used by batch classes and cannot be removed

I would like to remove my custom module from the Kofax Administration module, but I can't, because I get the following error:
Using the module multiple times increases the number of batch classes listed there, but there is only one batch class, so this can't be right.
I removed the module from the batch class queue, stopped all background services, and have no forms app running. The only way to remove this module is to export the batch class, delete it in the Administration module, delete the custom module, and reimport the batch class.
Maybe I don't exit the application properly?
My session management:
public void LoginToRuntimeSession()
{
    login = new Login();
    login.EnableSecurityBoost = true;
    login.Login();
    login.ApplicationName = Resources.CUSTOM_MODULE_ID;
    login.Version = "1.0";
    login.ValidateUser($"{Resources.CUSTOM_MODULE_ID}.exe", false);
    session = login.RuntimeSession;
}

public void Logout()
{
    session.Dispose();
    login.Logout();
}
I get a new active batch with this code
public IBatch GetNextBatch()
{
    return session.NextBatchGet(login.ProcessID);
}
and this is how I process the batch after polling for new ones
public void ProcessBatch(IBatch batch)
{
    // ... IACDataElement stuff
    batch.BatchClose(KfxDbState.KfxDbBatchReady, KfxDbQueue.KfxDbQueueNext, 0, "");
}
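For context, here is a simplified sketch of how these methods might be driven together in the module's processing loop (illustrative only, not the exact code I run):

public void Run()
{
    LoginToRuntimeSession();
    try
    {
        IBatch batch;
        // Assumes NextBatchGet returns null when no batch is ready; adjust to your polling strategy.
        while ((batch = GetNextBatch()) != null)
        {
            ProcessBatch(batch);
        }
    }
    finally
    {
        Logout();
    }
}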
Any ideas how to fix this "bug"? Please let me know if you need more information!
The message you are seeing is only referring to the configuration in the Administration module. Therefore it is not related to what your module actually does when it is running or closing (no problem in your code can cause this).
If you are using Kofax Capture 11, previously published versions of the batch class remain in the system, so these probably still count as references to the module. If you go to the Publish dialog window, you can click the "Versions..." button to see and delete older versions. Try to remove your module again after you have deleted all the older versions that were still using it.
Additionally, you can look through the batch class properties to make sure that this module isn't set in one of the other settings, such as the module to start foldering on the Foldering tab, or the module to start Partial Batch Export on the Advanced tab.
If neither of those suggestions work, then you may want to open a case with Kofax Technical Support. One thing that either they or you can do is open the admin.xml file in the exported batch class cab file and see where your module ID is found. That will give context for finding out what is still referencing the module.

How to get FileInfo objects from the Orchard Media folder?

I'm trying to create a custom ImageFilter that requires me to temporarily write the image to disk, because I'm using a third party library that only takes FileInfo objects as parameters. I was hoping I could use IStorageProvider to easily write and get the file but I can't seem to find a way to either convert an IStorageFile to FileInfo or get the full path to the Media folder of the current tenant to retrieve the file myself.
public class CustomFilter : IImageFilterProvider
{
    public void ApplyFilter(FilterContext context)
    {
        if (context.Media.CanSeek)
        {
            context.Media.Seek(0, SeekOrigin.Begin);
        }

        // Save temporary image
        var fileName = context.FilePath.Split(new char[] { '\\' }, StringSplitOptions.RemoveEmptyEntries).LastOrDefault();
        if (!string.IsNullOrEmpty(fileName))
        {
            var tempFilePath = string.Format("tmp/tmp_{0}", fileName);
            _storageProvider.TrySaveStream(tempFilePath, context.Media);
            IStorageFile temp = _storageProvider.GetFile(tempFilePath);
            FileInfo tempFile = ???

            // Do all kinds of things with the temporary file
            // Convert back to Stream and pass along
            context.Media = tempFile.OpenRead();
        }
    }
}
FileSystemStorageProvider does a ton of heavy lifting to construct paths to the Media folder so it's a shame that they aren't publicly accessible. I would prefer not to have to copy all of that initialization code. Is there an easy way to directly access files in the Media folder?
I'm not using multitenancy, so forgive me if this is inaccurate, but this is the method I use for retrieving the full storage path and then selecting FileInfo objects from that:
_storagePath = HostingEnvironment.IsHosted
    ? HostingEnvironment.MapPath("~/Media/") ?? ""
    : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

files = Directory.GetFiles(_storagePath, "*", SearchOption.AllDirectories)
    .AsEnumerable()
    .Select(f => new FileInfo(f));
You can, of course, filter down the list of files using either Path.Combine with subfolder names, or a Where clause on that GetFiles call.
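For example (the subfolder name and extension here are just placeholders):

// Restrict to a subfolder by building the path up front...
var subfolderPath = Path.Combine(_storagePath, "SomeSubfolder");
var subfolderFiles = Directory.GetFiles(subfolderPath, "*", SearchOption.AllDirectories)
    .Select(f => new FileInfo(f));

// ...or filter the full listing with a Where clause.
var pngFiles = Directory.GetFiles(_storagePath, "*", SearchOption.AllDirectories)
    .Select(f => new FileInfo(f))
    .Where(fi => fi.Extension.Equals(".png", StringComparison.OrdinalIgnoreCase));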
This is pretty much exactly what FileSystemStorageProvider uses, but I haven't had need of the other calls it makes outside of figuring out what _storagePath should be.
In short, yes, you will likely have to re-implement whatever private functions of FileSystemStorageProvider you need for the task. But you may not need all of them.
I was struggling with a similar issue too, and I can say that the IStorageProvider abstraction is pretty restrictive.
You can see this when viewing the code of FileSystemStorageFile. The class already uses FileInfo to return data, but the FileInfo itself isn't exposed, and other code is built on top of this. Therefore you would basically have to reimplement everything from scratch (your own implementation of IStorageProvider). The easiest option is to simply call
FileInfo fileInfo = new FileInfo(tempFilePath);
but this would break setups where a non-file-system storage provider is used, such as AzureBlobStorageProvider.
The proper way would be to get your hands dirty, extend the storage provider interfaces, and update all the code based on them. But as far as I can remember, the issue here is that you then need to update the Azure implementation as well, and things get really messy. For that reason I abandoned this approach on my project.

Azure functions structure from a Git repository

I have a C# class that does an address lookup. I want to expose this as an Azure function. I've been going through the documentation but can't see how I can/if it's possible to do the following:
I have a Git repository in Team Services that contains a class library of my AddressLookup. Can my Function reference this project?
If I look at the folder structure of the site, I can see it has copied over all the source files from the Git repository. Can I get it to build the solution, or does it literally just pull in all the files?
Where in the solution do I put the function? Do I create a solution folder of the name of the function and place the relevant files in there?
My AddressLookup class returns an object that is defined in the class library. Will the function be able to use and return this?
Thanks
Alex
Follow-up to Q1: Are you trying to set up CI? For continuous integration with Azure Functions, you may reference the following:
https://azure.microsoft.com/en-us/documentation/articles/functions-continuous-deployment/#setting-up-continuous-deployment
http://flgmwt.github.io/azure/azure-functions/c-sharp/2016/04/04/azure-fns-with-ci.html
--Update 10/17--
Specific to Team Services, here are the steps:
Make sure that your VSTS account is linked to your Azure Subscription. Follow the instructions in this article.
Navigate to the Functions Portal for your Function App and click on Function app Settings -> Configure continuous integration.
In the Deployments blade, click on Setup and configure your Deployment source information (see sample snapshot below). Click on the OK button. Wait for the sync to succeed. Close the Deployments blade.
Give it a minute and refresh your Functions Portal session. You should now see the function added to your Function site. The snapshot below is my AddressLookup function that was synced from my Team Services project named MyFirstProject.
Note the disclaimer message above the Code editor. If you hook up CI for your Function, you will not be able to edit it in the Functions Portal. Since this particular example requires a request body, you will need to test it using Postman.
--End of update 10/17--
Answer to Q2:
Here's a good documentation describing the folder structure of Azure Functions:
https://azure.microsoft.com/en-us/documentation/articles/functions-reference/
I also recommend the follow-up documentation specific to C# development for Azure Functions:
https://azure.microsoft.com/en-us/documentation/articles/functions-reference-csharp/
Answer to Q3 & Q4: I will attempt to answer these by providing a sample implementation. I don't have any context on the implementation of your AddressLookup library, however, in the interest of providing an example, I am going to take a wild leap and assume that it is a library that will perform some Geocoding operations. Assuming again that you want to use this library in an HTTP-triggered Function, you may begin by first generating the AddressLookup.dll and then uploading it to the bin folder inside your Function. You may then reference that DLL from your Function script.
For instance, using this article as a reference, I generated a AddressLookup.dll library in Visual Studio that has the following implementation. This DLL will serve as a proxy for your AddressLookup library so that I can demonstrate how we can use it in a Function.
using System;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net;
using System.Web;
using System.Xml.Linq;

namespace AddressLookup
{
    public class GeoLocation
    {
        public double Longitude { get; set; }
        public double Latitude { get; set; }
    }

    public class GeoCoder
    {
        private const string geoCodeLookupUrlPattern =
            "https://maps.googleapis.com/maps/api/geocode/xml?address={0}&key={1}";
        private const string addressLookupUrlPattern =
            "https://maps.googleapis.com/maps/api/geocode/xml?latlng={0},{1}&key={2}";
        private string _apiKey = null;

        public GeoCoder(string apiKey)
        {
            if (string.IsNullOrEmpty(apiKey))
            {
                throw new ArgumentNullException("apiKey");
            }
            _apiKey = apiKey;
        }

        public GeoLocation GetGeoLocation(string address)
        {
            GeoLocation loc = null;
            string encodedAddress = HttpUtility.UrlEncode(address);
            string url = string.Format(geoCodeLookupUrlPattern, encodedAddress, _apiKey);
            WebRequest request = WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    if (stream != null)
                    {
                        XDocument document = XDocument.Load(new StreamReader(stream));
                        XElement longitudeElement = document.Descendants("lng").FirstOrDefault();
                        XElement latitudeElement = document.Descendants("lat").FirstOrDefault();
                        if (longitudeElement != null && latitudeElement != null)
                        {
                            loc = new GeoLocation
                            {
                                Longitude = Double.Parse(longitudeElement.Value, CultureInfo.InvariantCulture),
                                Latitude = Double.Parse(latitudeElement.Value, CultureInfo.InvariantCulture)
                            };
                        }
                    }
                }
            }
            return loc;
        }

        public string GetAddress(GeoLocation loc)
        {
            string address = null;
            string url = string.Format(addressLookupUrlPattern, loc.Latitude, loc.Longitude, _apiKey);
            WebRequest request = WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    if (stream != null)
                    {
                        XDocument document = XDocument.Load(new StreamReader(stream));
                        XElement element = document.Descendants("formatted_address").FirstOrDefault();
                        if (element != null)
                        {
                            address = element.Value;
                        }
                    }
                }
            }
            return address;
        }
    }
}
Now, let's create an HTTP-Triggered Function by performing the following steps:
1. Go to the Functions Portal and create a Function using the HTTP Trigger - C# template.
2. Fill in the name (e.g., AddressLookup) and authorization level (e.g., Anonymous). You should now see a Function named AddressLookup created with some pre-populated code.
3. On the left pane, click on the Function app settings button.
4. Optional: Click on Configure app settings. Under the "App settings" section, add a value for the key GoogleMapsAPIKey with your API key, then click on the Save button. Note: If you skip this step, you will need to hard-code the key in your function code later.
5. Next, use the Kudu console to upload your DLL. Click on the Go to Kudu button. This will launch a new browser window with a cmd console. Type the following to navigate to your Function directory:
cd site\wwwroot\AddressLookup
6. Create a bin folder by typing mkdir bin at the command prompt.
7. Double-click on the bin folder and upload (see "Add files") the AddressLookup.dll into the folder. When you are done, you should see something similar to the snapshot below.
8. Go back to the Functions Portal. In your Function's editor, at the bottom of the Code section, click on View Files. You should now see the newly created bin folder.
9. Replace the contents of the pre-populated Function script with the following code:
#r "AddressLookup.dll"
using System;
using AddressLookup;
using System.Net;
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");
// Reading environment variable from App Settings, replace with hardcoded value if not using App settings
string apiKey = System.Environment.GetEnvironmentVariable("GoogleMapsAPIKey", EnvironmentVariableTarget.Process);
// Get request body
dynamic data = await req.Content.ReadAsAsync<object>();
string address = data?.address;
string name = data?.name;
GeoCoder geoCoder = new GeoCoder(apiKey);
GeoLocation loc = geoCoder.GetGeoLocation(address);
string formattedAddress = geoCoder.GetAddress(loc);
HttpResponseMessage message = null;
if (name == null)
{
message = req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name in the request body");
}
else
{
var msg = $"Hello {name}. Lon: '{loc.Longitude}', Lat: '{loc.Latitude}', Formatted address: '{formattedAddress}'";
message = req.CreateResponse(HttpStatusCode.OK, msg);
}
return message;
}
Click on the Save button.
In the "Run" section, supply the following request body,
{
    "name": "Azure",
    "address": "One Microsoft Way Redmond WA 98052"
}
Click on the Run button.
You should see some log entries similar to the following,
2016-10-15T03:54:31.538 C# HTTP trigger function processed a request. RequestUri=https://myfunction.azurewebsites.net/api/addresslookup
2016-10-15T03:54:31.773 Function completed (Success, Id=e4308c0f-a615-4d43-8b16-3a6afc017f73)
and the following HTTP response message,
"Hello Azure. Lon: '-122.1283833', Lat: '47.6393225', Formatted address: '1 Microsoft Way, Redmond, WA 98052, USA'"
Since this is a HTTP-triggered Function, you may also test your Function using Postman. See snapshot below,
If you upload your own DLL in step 5 and edit the Function code to call your library, the Function should work just as well.
Is the assembly you want to reference frozen, or do you want to see updates? If you don't want to see updates, see the answer by Ling Toh.
But if you want to see updates when the assembly updates:
Link your function to some form of continuous delivery. The official documentation explains how to do this. At the moment, you seem to need a separate Git repository or VSTS project (or whatever) to do this easily. (It is possible to edit the deployment process in Kudu, but I would avoid that if you possibly can.)
The Functions project should contain only the function code itself. So it should contain something like the following:
global.json
host.json
packages.config
Web.config
function1/
function1/run.csx
function1/project.json
function1/function.json
Where you should replace function1 by the name of your function.
Once you've configured this to push to your functions host, you're most of the way to where you want to be. Next, add a function1/run subdirectory where you place yourAssembly.dll. This should be automatically copied here on a successful build of the assembly project. I don't have experience of VSTS to know exactly the best way to do this, so you might need to ask another question.
You've now placed the assembly in the right place. You now refer to it by adding the assembly reference line to the top of your run.csx:
#r "yourAssembly.dll"
Note that all of the assembly references have to come before everything else at the top of run.csx. So you can put it after other references, but it has to be before anything else, including #load specifications.
Note that for custom assemblies you include the .dll extension and quote the file name, whereas for framework assemblies that aren't referenced by default, you don't include the extension.
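For example, the top of a run.csx might then look something like this (the assembly and file names are placeholders):

#r "yourAssembly.dll"   // custom assembly uploaded alongside the function: keep the .dll extension and quotes
#r "System.Data"        // framework assembly not referenced by default: no .dll extension

#load "helpers.csx"     // any #load directives come after the #r lines

using System;
using YourAssembly;     // namespace exposed by the custom assembly (placeholder)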
In an ideal world, this will be enough.
But Functions doesn't currently trigger a rebuild when only the assembly is updated; it rebuilds only if project.json, run.csx, or function.json is updated. So as part of this process, I look at the end of run.csx and add or remove a blank line. The change needs to be meaningless (so that you're not altering anything important), but enough for your version control tool to think that the file has changed. Clearly, this step is not necessary if you've also made other changes to the file.
If you're using Git, you'd now commit and push the changes up. Functions will see that the function has changed and recompile it. You should see this in the log pane of the function itself.
Note that if you have two functions using the same dependent assembly, you need to copy it into both function folders; this can't be shared.

How does Database.SetInitializer actually work? (EF code-first create database and apply migrations using several connection strings)

I am trying to write a method to create a database and run migrations on it, given the connection string.
I need the multiple connections because I record an audit log in a separate database.
I get the connection strings out of app.config using code like
ConfigurationManager.ConnectionStrings["Master"].ConnectionString;
The code works with the first connection string defined in my app.config but not others, which leads me to think that somehow it is getting the connection string from app.config in some manner I don't know.
My code to create the database if it does not exist is
private static Context MyCreateContext(string ConnectionString)
{
    // put the connection string where the factory method can get it
    AppDomain.CurrentDomain.SetData("ConnectionString", ConnectionString);

    var factory = new ContextFactory();

    // I know I need this line - but I can't see how what follows actually uses it
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());

    var context = factory.Create();
    context.Database.CreateIfNotExists();
    return context;
}
The code in the Migrations.Configuration is
public sealed class Configuration : DbMigrationsConfiguration<DataLayer.Context>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }
}
The context factory code is
public class ContextFactory : IDbContextFactory<Context>
{
    public Context Create()
    {
        var s = (string)AppDomain.CurrentDomain.GetData("ConnectionString");
        return new Context(s);
    }
}
Thus I am setting the connection string before creating the context.
Where am I going wrong, given that the connection strings are all the same except for the database name, and the migration code runs with one connection string but doesn't run with the others?
I wonder if my problem has to do with understanding how Database.SetInitializer actually works. I am guessing it involves reflection or generics. How do I make the call to SetInitializer tie to my actual context?
I have tried the following code but the migrations do not run
private static Context MyCreateContext(string ConnectionString)
{
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());
    var context = new Context(ConnectionString);
    context.Database.CreateIfNotExists();
    return context;
}
This question appears to be related
UPDATE:
I can get the migrations working if I refer to the connection string using
public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
I was also able to get migrations working when using different instances of the context if I created a ContextFactory class and passed the connection to it by referencing a global. (See my answer to the related question linked above.)
Now I am wondering why it has to be so hard.
I'm not sure exactly what problems you are facing, but let me try.
The easiest way to provide the connection - and be sure it works - is as follows:
1) Use your 'DbContext' class name - and define a connection in the app.config (or web.config). That's easiest, you should have a connection there that matches your context class name,
2) If you put it into the DbContext via constructor - then be consistent and use that one. I'd also suggest to 'read' from config connections - and again name it 'the same' as your context class (use the connection 'name', not the actual string),
3) if none is present - EF/CF makes the 'default' one - based on your provider - and your context's class name - which usually isn't what you want,
You shouldn't customize with initializers for that reason - initializers should be agnostic and serve a different purpose. Set up the connection in the .config, or directly on your DbContext.
Also check this Entity Framework Code First - How do I tell my app to NOW use the production database once development is complete instead of creating a local db?
Always check 'where your data' goes - before doing anything.
For how the initializer actually works - check this other post of mine, I made a thorough example
How to create initializer to create and migrate mysql database?
Notes: (from the comments)
Connection shouldn't be very dynamic - config is the right place for it to be, unless you have a good reason.
Constructor should work fine too.
Database.CreateIfNotExists doesn't work well together with the 'migration' initializer. You can just use the MigrateDatabaseToLatestVersion initializer. Don't 'mix' them.
Or - put something like public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
To point to a connection, just use its 'name' and pass that into the constructor.
Or use something like ConfigurationManager.ConnectionStrings["CommentsContext"].ConnectionString
Regarding entertaining 'multiple databases' with migrations (local and remote from one app) - not exactly related - but this link - Migration not working as I wish... Asp.net EntityFramework
Update:
(further discussion here - Is adding a class that inherits from something a violation of the solid principles if it changes the behavior of code?)
It is getting interesting here. I did actually manage to reproduce the problems you're facing. Here is a short breakdown of what I think is happening:
First, this worked 'happily':
Database.SetInitializer(new CreateAndMigrateDatabaseInitializer<MyContext, MyProject.Migrations.Configuration>());

for (var flip = false; true; flip = !flip)
{
    using (var db = new MyContext(flip ? "Name=MyContext" : "Name=OtherContext"))
    {
        // insert some records...
        db.SaveChanges();
    }
}
(I used custom initializer from my other post, which controls migration/creation 'manually')
That worked fine w/o an Initializer. Once I switched that on, I ran into some curious problems.
I deleted the databases (two, one for each connection). I expected it to either not work, or to create one database and then the other on the next pass (like it did without migrations, with just the 'Create' initializer).
What happened, to my surprise, is that it actually created both databases on the first pass!
Then, being a curious person :), I put breakpoints on the MyContext ctor and debugged through the migrator/initializer, again starting with empty/no databases.
It created the first instance on my call within the flip loop. Then, on the first access to the 'model', it invoked the initializer. The migrator took over (there being no databases yet). During migrator.Update() it actually constructs a MyContext (I'm guessing via the generic parameter in Configuration) and calls the 'default' empty ctor. That one had the 'other' connection/name by default, and so it creates the other database as well.
So I think this explains what you're experiencing, and why you had to create the 'Factory' to support the context creation. That seems to be the only way, along with setting some AppDomain-wide 'connection string' (which you did well, actually) that isn't 'overridden' by the default ctor call.
The solution I see is that you just need to run everything through the factory and 'flip' connections in there (no need for a static connection, as long as your factory is a singleton).
You can supply a configuration in the MigrateDatabaseToLatestVersion constructor.
If you set the initializer in the DbContext you can also pass a 'true' to use the current connection string.
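For example, a minimal sketch of that approach (EF6; type names borrowed from the question, and 'connectionString' stands for whichever connection you want to target - check the exact overloads available in your EF version):

// 'useSuggestedContext: true' tells the initializer to run migrations against the
// connection of the context instance that triggers initialization, rather than
// constructing a context through its default constructor.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>(useSuggestedContext: true));

using (var context = new Context(connectionString))
{
    // Forces the initializer (and therefore the migrations) to run now, against 'connectionString'.
    context.Database.Initialize(force: false);
}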

Using log4net as a logging mechanism for SSIS?

Does anyone know if it is possible to do logging in SSIS (SQL Server Integration Services) via log4net? If so, any pointers and pitfalls to be aware of? How's the deployment story?
I know the best solution to my problem is to not use SSIS. The reality is that as much as I hate this POS technology, the company I work with encourages the use of these apps instead of writing code. Meh.
So to answer my own question: it is possible. I'm not sure how our deployment story will be since this will be done in a few weeks from now.
I pretty much took the information from these sources and made it work. This one explains how to make referencing assemblies work with SSIS, click here. TL;DR version: place it in the GAC and also copy the DLL to the folder of your targeted framework; in my case, C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727. To programmatically configure log4net I ended up using this link as a reference.
This is how my logger configuration code looks for creating a log file with a timestamp in its name:
using System;
using log4net;
using log4net.Config;
using log4net.Layout;
using log4net.Appender;

public class whatever
{
    private ILog logger;

    public void InitLogger()
    {
        PatternLayout layout = new PatternLayout("%date [%level] - %message%newline");
        FileAppender fileAppenderTrace = new FileAppender();
        fileAppenderTrace.Layout = layout;
        fileAppenderTrace.AppendToFile = false;

        // Insert current date and time into the file name
        String dateTimeStr = DateTime.Now.ToString("yyyyddMM_hhmm");
        fileAppenderTrace.File = string.Format("c:\\{0}{1}", dateTimeStr.Trim(), ".log");

        // Configure filter to accept log messages of any level.
        log4net.Filter.LevelMatchFilter traceFilter = new log4net.Filter.LevelMatchFilter();
        traceFilter.LevelToMatch = log4net.Core.Level.All;
        fileAppenderTrace.ClearFilters();
        fileAppenderTrace.AddFilter(traceFilter);

        fileAppenderTrace.ImmediateFlush = true;
        fileAppenderTrace.ActivateOptions();

        // Attach appender into hierarchy
        log4net.Repository.Hierarchy.Logger root = ((log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository()).Root;
        root.AddAppender(fileAppenderTrace);
        root.Repository.Configured = true;

        logger = log4net.LogManager.GetLogger("root");
    }
}
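For what it's worth, this is roughly how it can be called from an SSIS script task. The entry-point class below is a hypothetical sketch; it also assumes you add a small public wrapper such as public void Info(string message) { logger.Info(message); } to the class above, which the original snippet does not show (the logger field is private):

public class ScriptMainSketch
{
    public void Main()
    {
        var logHost = new whatever();
        logHost.InitLogger();                  // builds the timestamped FileAppender configured above
        logHost.Info("SSIS package started");  // uses the hypothetical Info() wrapper described above
        // ... rest of the script task work ...
    }
}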
Hopefully this might help someone in the future or at least serve as a reference if I ever need to do this again.
Sorry, you didn't dig deep enough. There are 5 different destinations that you can log to, 7 columns you can choose to include or exclude in your logging, and between 18 and 50 different events that you can capture logging on. You appear to have chosen the default logging and dismissed it because it didn't work for you out of the box.
Check these two blogs for more information on what can be done with SSIS logging:
http://consultingblogs.emc.com/jamiethomson/archive/2005/06/11/SSIS_3A00_-Custom-Logging-Using-Event-Handlers.aspx
http://www.sqlservercentral.com/blogs/michael_coles/archive/2007/10/09/3012.aspx
