I'm trying to create a custom ImageFilter that requires me to temporarily write the image to disk, because I'm using a third party library that only takes FileInfo objects as parameters. I was hoping I could use IStorageProvider to easily write and get the file but I can't seem to find a way to either convert an IStorageFile to FileInfo or get the full path to the Media folder of the current tenant to retrieve the file myself.
public class CustomFilter : IImageFilterProvider
{
    public void ApplyFilter(FilterContext context)
    {
        if (context.Media.CanSeek)
        {
            context.Media.Seek(0, SeekOrigin.Begin);
        }

        // Save temporary image
        var fileName = context.FilePath.Split(new char[] { '\\' }, StringSplitOptions.RemoveEmptyEntries).LastOrDefault();
        if (!string.IsNullOrEmpty(fileName))
        {
            var tempFilePath = string.Format("tmp/tmp_{0}", fileName);
            _storageProvider.TrySaveStream(tempFilePath, context.Media);
            IStorageFile temp = _storageProvider.GetFile(tempFilePath);
            FileInfo tempFile = ???

            // Do all kinds of things with the temporary file
            // Convert back to Stream and pass along
            context.Media = tempFile.OpenRead();
        }
    }
}
FileSystemStorageProvider does a ton of heavy lifting to construct paths to the Media folder, so it's a shame those helpers aren't publicly accessible. I would prefer not to have to copy all of that initialization code. Is there an easy way to directly access files in the Media folder?
I'm not using multitenancy, so forgive me if this is inaccurate, but this is the method I use for retrieving the full storage path and then selecting FileInfo objects from that:
_storagePath = HostingEnvironment.IsHosted
    ? HostingEnvironment.MapPath("~/Media/") ?? ""
    : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

files = Directory.GetFiles(_storagePath, "*", SearchOption.AllDirectories)
    .Select(f => new FileInfo(f));
You can, of course, filter down the list of files using either Path.Combine with subfolder names, or a Where clause on that GetFiles call.
This is pretty much exactly what FileSystemStorageProvider uses, but I haven't had need of the other calls it makes outside of figuring out what _storagePath should be.
In short, yes, you will likely have to re-implement whatever private functions of FileSystemStorageProvider you need for the task. But you may not need all of them.
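For the temp-file scenario in the question, a minimal sketch of the idea (the helper name is mine, and it assumes the default ~/Media layout without multitenancy):

using System;
using System.IO;
using System.Web.Hosting;

// Hypothetical helper: resolve the physical Media folder as above, then turn a
// storage-relative path (e.g. the question's "tmp/tmp_..." path) into a FileInfo
// that can be handed to the third-party library.
public static FileInfo GetMediaFileInfo(string relativePath)
{
    var storagePath = HostingEnvironment.IsHosted
        ? HostingEnvironment.MapPath("~/Media/") ?? ""
        : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

    return new FileInfo(Path.Combine(storagePath, relativePath));
}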
I was struggling with a similar issue too, and I can say that the IStorageProvider abstraction is pretty restrictive.
You can see this when looking at the code of FileSystemStorageFile. The class already uses FileInfo internally to return data, but that field isn't accessible, and the rest of the code is built on the same abstraction. You would therefore basically have to reimplement everything from scratch (your own implementation of IStorageProvider). The easiest option is to simply call
FileInfo fileInfo = new FileInfo(tempFilePath);
but this would break setups that don't use a file-system-based storage provider, such as AzureBlobStorageProvider.
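If you do take that shortcut, one way to hedge (a sketch, not a definitive pattern) is to only use FileInfo when the file-system provider is actually configured, and keep stream-based handling as a fallback:

// Only safe when Orchard's file-system provider is in use; note the storage
// path is provider-relative, so it may still need to be mapped onto the
// physical Media folder as shown in the other answer.
if (_storageProvider is FileSystemStorageProvider)
{
    var tempFile = new FileInfo(tempFilePath);
    // ... hand tempFile to the third-party library
}
else
{
    // ... fall back to stream-based handling
}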
The proper way to do this would be to get your hands dirty and extend the storage provider interfaces, then update all the code based on them. But as far as I remember, the catch is that you would need to update the Azure implementation as well, and then things get really messy. Because of that, I abandoned this approach on my project.
So I've noticed some strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP request, map it to an outgoing XML (XSD/schema), and then create the file on-prem.
The problem is that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first create the files under a temporary name, let the Logic App finish writing them, and then rename them.
This solution sounded quite simple before we actually started applying it to the Logic App.
If we use the File System connector's Rename File action with the "Name" parameter from the on-prem Create File action, we get:
{
    "statusCode": 404,
    "message": "Resource not found"
}
We get a 404 saying the resource is not found, which complicates a lot of things; I've checked the privileges on the account, so that should not be the issue.
What we also tried is listing all the files in the folder, creating a foreach with a condition, and then calling the Rename File action. That works, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File does work when it's inside a foreach loop and we extract the file names from a listing of the root folder or a normal folder.
But why does it not work when just calling the Rename File action directly? Is this perhaps a bug in the Logic App's Rename File action?
So after discussing with Microsoft Azure support, they have actually confirmed that there is a bug with the "Create File" action.
It looks like all the data and information is lost during that action; the support technicians don't know why, but they have had similar cases reported by others.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it's not just the "Name" tag that is lost from Create File; all of its output values are lost.
So first we initialize a variable, and then we set it in two steps before we create the file:
The name is set to a temp name plus a GUID.
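For illustration (the exact expression is my assumption, not from the original post), the variable value can be built with the Logic Apps concat() and guid() expression functions:

concat('tmp_', guid(), '.xml')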
The next step is creating the file with the temp name set in the "Set Variable Temp FileName" step.
In the Rename File action we use the path where we store the temp file and append \ plus the file name,
and set the "New Name" to the name we actually want.
This proved to work, but it is a workaround; support confirmed that you should be able to just use Rename File after creating the file with a temp name and then change it to the desired name.
But since Create File does not send or pass along any information at all from its output list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has finished creating them, and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue; and the workaround of renaming the file also failed.
The cause seems to be that the Azure On Prem Gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted) thus causing LogicApps to think the step failed (reporting a 404 error), and thus confusion.
Our workaround was to create a Windows service hosted on the file servers (so it can respond to file changes before anything else on the network). The service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files.

When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the On Prem Gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted, so the consumer can't remove the file, buying time for the On Prem Gateway to perform its post-write read and report success.

Our service releases its lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                | NotifyFilters.LastWrite
                | NotifyFilters.FileName
                | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // FileShare.ReadWrite means we never block the gateway's own write,
            // but while we hold this read handle the file cannot be deleted.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}
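Usage is then just a matter of constructing and starting an instance from whatever host you choose (the path, filter, and lock time below are illustrative):

var interceptor = new Interceptor(@"\\fileserver\share\out", "*.xml", 30);
interceptor.Start();

// ... keep the host running; on shutdown:
interceptor.Dispose();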
I currently have a project in a loose ES6 module format, and my database connection is hard-coded. I want to turn this into an npm module and am now facing the issue of how best to allow the end user to configure the code. My first attempt was to rewrite it as classes to be instantiated, but that made using the code more convoluted than before, so I am looking at alternatives and exploring my configuration options. It looks like writing to process.env would be one way, but I am pondering potential issues, no-nos, and other options I have not considered.
Is having the user write config to process.env an acceptable method of configuring an npm module? It's a bit like a global write, so I'm dealing with namespace considerations, for one. I have also considered using package.json, but that's not going to work for things like credentials. Likewise, using an rc file is cumbersome. I have not found any docs on the proper methodology, if any exists.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically five options as I see it:

1. Hardcode - not an option.
2. Create a configured scope, such as classes - what I have now, and bleh.
3. Use a config library such as node-config - not really a user-friendly option for an npm module.
4. Store as globals/env vars. As suggested in a comment, I can wrap that process.env write in an exported function and thereby ensure a complex, non-colliding namespace while abstracting it from the end user.
5. Ask the user to create some .rc file - I would if I were big time like AWS, but not in this case.
I mention this npm use case, but it really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope, at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it has helped me wrap my brain around the options. I am thinking of something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then the user calls setOptions(), and I call getOptions() internally. Since Node caches a module the first time it is required, my options object will stay configured as I pass it around.
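Internally, consuming code can then read the shared options at call time. A minimal sketch (the db.js module, the database driver, and the property names are made up for illustration):

// db.js
import { getOptions } from './options.js'
import database from 'some-db-driver'

export function connect() {
    // reads whatever the user configured via setOptions()
    return database.connect(getOptions().database)
}

The user just has to call setOptions({ database: ... }) once, before anything else touches the module.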
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
    let db = database.connect(options.database);
    ...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
    let db = database.connect(options.database);
    return {
        getUsers() { return db.getUsers() }
    }
}
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
    constructor(options) {
        ...
    }
    ...
}

export default options => {
    return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say three or fewer), you can also allow them to be passed as separate arguments instead of passing an object.
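For instance, a sketch of that last variant (reusing the hypothetical database helper and getUsers method from above):

export default (connectionString, logLevel = 'info') => {
    let db = database.connect(connectionString)
    return {
        getUsers() { return db.getUsers() }
    }
}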
I started working with CodedUI a few months ago to automate a desktop (WPF) application.
I'm checking out the best ways to create a framework for my application.
As I have seen in other automation tools, I feel the heart of a UI automation framework, in any tool, is the way its object repository is created, i.e. how well the UI objects are defined. A clean and well-defined object repository always proves very helpful when it comes to updating your tests.
I am trying to discover the best way to store my UI objects so that, in case of any UI changes in my application, I have to put in minimum effort to update my automation tests.
Also, if an object changes in the application, updating it in only one place should solve the problem.
This can be any kind of change, like:
-> a change in just a property (this, I feel, would be very easy to update in the automation test; the easiest way would be to simply update the .uitest file (the XML file), if possible)
-> a change in hierarchy and position
-> an entirely new object added
For the 2nd and 3rd kinds of change, updating scripts becomes a difficult job, especially if the UI object is referred to in many places, in many test methods or modules.
Also, I have generally seen that in test methods, variable declarations are used to create references to the UIMap objects, and those variables are then used in the test method code.
So in this case, if the UI of my application changes, I will have to update the variable declaration in each of the test methods. I want to reduce this effort to changing the declaration in only one place. Of course, I cannot have all the code inside only one test method. One way that came to my mind is this:
Couldn't I simply have one common place for all these variable declarations? We can give a unique and understandable name to each UI object, e.g. the declarations would look like:
UITabPage UITabPage = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage;
WpfRow UIRow = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow;
WpfText UIEquipmentTagText = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UITagCell.UIEquipmentTagText;
WpfCheckBox UIEquipmentCheckBox = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UICheckBoxCell.UICheckBox;
....
....
and use these variables wherever required. That way, in case of any changes, there will be only one place where you need to update these objects.
But for this, these variables must be made static. What could be the problem with making these object variables static?
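One way to sketch that common place (the class name is mine, and the property chains are just the declarations from above) is a static class exposing properties, so each access re-resolves the control through the UIMap instead of caching a reference that can go stale:

public static class UIObjects
{
    private static readonly UIMap Map = new UIMap();

    public static UITabPage TabPage
    {
        get { return Map.UISimWindow.UISelectEquipmentTabList.UITabPage; }
    }

    public static WpfRow Row
    {
        get { return TabPage.UIEquipmentDetailsTable.UIRow; }
    }

    public static WpfText EquipmentTagText
    {
        get { return Row.UITagCell.UIEquipmentTagText; }
    }
}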
Please provide your suggestions on this topic. Maybe what I am thinking is not possible or practical; I just want to choose the best approach before I go too far with the automation scripts and realize later that it wasn't a good one.
Thanks in Advance,
Shruti
Look into using descriptive programming instead of using the UIMaps.
Make a static class with generic helper functions to assist. Here are some examples of how to set it up.
For example:
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;

// The class name is arbitrary; the point is a single static home for the helpers.
public static class ControlHelpers
{
    public static WinWindow ParentWin(string parentControlName)
    {
        var parentWin = new WinWindow();
        parentWin.SearchProperties.Add("ControlName", parentControlName);
        return parentWin;
    }

    public static WinWindow ChildWin(string childWinControlName, string parentControlName)
    {
        var childWin = new WinWindow(ParentWin(parentControlName));
        childWin.SearchProperties.Add("ControlName", childWinControlName);
        return childWin;
    }

    public static WinButton Button(string buttonName, string childWinControlName, string parentControlName)
    {
        var button = new WinButton(ChildWin(childWinControlName, parentControlName));
        button.SearchProperties.Add("Name", buttonName);
        return button;
    }

    public static void ClickButton(string buttonName, string childWinControlName, string parentControlName)
    {
        Mouse.Click(Button(buttonName, childWinControlName, parentControlName));
    }

    public static void ChangeFocus(WinWindow newFocus)
    {
        newFocus.SetFocus();
    }

    public static void ChangeFocus(string childWinControlName, string parentControlName)
    {
        ChildWin(childWinControlName, parentControlName).SetFocus();
    }
}

// Usage:
ControlHelpers.ChangeFocus(ControlHelpers.ChildWin("WelcomeForm", "MainForm"));
ControlHelpers.ClickButton("&OK", "WelcomeForm", "MainForm");
I'm fairly new to Golang; previously used Python.
I am having a difficult time applying bufio inside an object.
type fout struct {
    filename string
    fo       File
    bfo      Writer
}

func (a *fout) init() {
    a.fo, _ := os.Open(a.filename)
    a.bfo := bufio.NewWriter(fo)
}
Basically, I'd like to create objects, each with its own filename, with bufio used for writing.
Can anyone help me please?
Thank you
A few things in the code sample:
- Every use of a name from another package needs to be prefixed with the package name, so fo File has to be fo *os.File.
- You normally declare *bufio.Writer and *os.File as pointers (see the bufio and os docs at http://golang.org/pkg).
- You want plain =, not :=, when assigning to fields like a.fo and a.bfo.
- Don't throw away errors, particularly if you're used to exceptions, or you'll have impossible-to-debug problems. (For a trivial learning script you can do if err != nil { panic(err) }, but for real use you almost always want to return them.)
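Putting those points together, a corrected sketch might look like this (I've used os.Create on the assumption the file is being written to, since os.Open opens read-only):

package fileout

import (
    "bufio"
    "os"
)

type fout struct {
    filename string
    fo       *os.File      // pointer type, prefixed with its package
    bfo      *bufio.Writer
}

func (a *fout) init() error {
    var err error
    a.fo, err = os.Create(a.filename) // plain =, and keep the error
    if err != nil {
        return err
    }
    a.bfo = bufio.NewWriter(a.fo)
    return nil
}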
It could also help to review the tour, pick up tricks and advice from the various talks and blog posts, maybe walk through Go By Example (I admit I haven't personally used it, but it sounds like it could be useful when getting started), look at some open-source Go code (projects on GitHub, the stdlib, anything), and run through the surprisingly readable spec once you're at the level where you want to know how the language really works.
I want to change the reference to the working directory to a different place using LibGit2Sharp in a Visual C++ project. It seems to me that Repository::Init() with RepositoryOptions can set the working directory to a non-default place. What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init(). Repository::Info::WorkingDirectory seems to be a read-only property, so I can't change it through this route.
Any thoughts on how to accomplish this? Or is the equivalent of git_repository_set_workdir() simply not exposed in LibGit2Sharp?
What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init().
Repository.Init() puts a folder under source control by creating a new repository on the filesystem. It returns an instance of the created repository.
The constructor of the Repository type gives you access to an existing repository. This constructor accepts an optional RepositoryOptions parameter to override some options.
In order to fulfill your request, I'd go with something like this:
var path ="D:\path\to\your\repo";
using (var repo = Repository.Init(path)
{
// Do nothing here
}
var newWorkdir ="D:\path\to\your\other\working\directory";
var options = new RepositoryOptions { WorkingDirectoryPath = newWorkdir };
using (var repo = new Repository(path, options))
{
// Do YOUR amzing stuff ;-)
}
Update:
The config file of the repo does not have the worktree attribute set to the right place, and the new working directory does not have a .git file pointing to the repo folder, as what you would expect.
This is the expected behavior. Passing a RepositoryOptions type to the constructor temporarily overrides some settings of the Repository. Once the repo is disposed, those temporary settings are lost.
I checked the libgit2sharp source code for Repository.cs and happened to notice that when it calls git_repository_set_workdir, it only passes two arguments, as opposed to three.
The bound libgit2 method is invoked with three params, the third one being set to false, because we do not want to persist the temporary settings when instantiating a repository.
Back to your original question: "What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init()."
This is currently not possible with LibGit2Sharp. However, it might be possible to make this happen DURING the call to Repository.Init() through the addition of an optional parameter. If this looks like something that may fit your need, I'd suggest you open an issue on this topic, or even better, send a Pull Request ;-)
Another option would be for you to manually set the core.worktree config variable to the expected location (You'd have to deal with the creation of the gitlink by yourself, though):
The code below demonstrates this last option:
var path ="D:\path\to\your\repo";
// Note the use of forward slashes here
var newWorkdir ="D:/path/to/your/other/working/directory";
using (var repo = Repository.Init(path)
{
repo.Config.Set("core.worktree", newWorkdir);
}
using (var repo = new Repository(path))
{
// Do YOUR amzing stuff ;-)
}
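As for the gitlink: the .git entry in the working directory is just a plain text file whose single gitdir: line points back at the actual repository folder. A sketch of creating it by hand (untested, and assuming newWorkdir already exists on disk):

// Write the gitlink so the working directory points back at the repo
File.WriteAllText(
    Path.Combine(newWorkdir, ".git"),
    "gitdir: " + Path.Combine(path, ".git"));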
Update 2:
LibGit2Sharp has just been updated with PR #453.
Among other things, this makes repo.Init() accept a separate git directory, which should fit your requirements.