I want to change the reference to the working directory to a different place using LibGit2Sharp in a Visual C++ project. It seems to me that Repository::Init() with RepositoryOptions can set the working directory to a non-default place. What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init(). Repository::Info::WorkingDirectory seems to be a read-only property, so I can't change it through this route.
Any thoughts on how to accomplish this? Or is the equivalent of git_repository_set_workdir() simply not exposed in LibGit2Sharp?
"What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init()."
Repository.Init() puts a folder under source control by creating a new repository on the filesystem. It returns an instance of the created repository.
The constructor of the Repository type gives you access to an existing repository. This constructor accepts an optional RepositoryOptions parameter to override some options.
In order to fulfill your request, I'd go with something like this:
var path = @"D:\path\to\your\repo";

using (var repo = Repository.Init(path))
{
    // Do nothing here
}

var newWorkdir = @"D:\path\to\your\other\working\directory";
var options = new RepositoryOptions { WorkingDirectoryPath = newWorkdir };

using (var repo = new Repository(path, options))
{
    // Do YOUR amazing stuff ;-)
}
Update:
The config file of the repo does not have the worktree attribute set to the right place, and the new working directory does not have a .git file pointing to the repo folder, as you would expect.
This is the expected behavior. Passing a RepositoryOptions type to the constructor temporarily overrides some settings of the Repository. Once the repo is disposed, those temporary settings are lost.
I checked the LibGit2Sharp source code for Repository.cs and happened to notice that when it calls git_repository_set_workdir, it only passes two arguments, as opposed to three.
The bound libgit2 method is invoked with three params, the third one being set to false, because we do not want to persist the temporary settings when instantiating a repository.
Back to your original question: "What I am trying to do here, however, is to change the reference to the working directory for the repo AFTER it is created by Repository::Init()."
This is currently not possible with LibGit2Sharp. However, it might be possible to make this happen DURING the call to Repository.Init() through the addition of an optional parameter. If this looks like something that may fit your needs, I'd suggest you open an issue regarding this topic, or even better, send a Pull Request ;-)
Another option would be for you to manually set the core.worktree config variable to the expected location (you'd have to deal with the creation of the gitlink yourself, though; see the sketch after the snippet below):
The code below demonstrates this last option:
var path = @"D:\path\to\your\repo";

// Note the use of forward slashes here
var newWorkdir = "D:/path/to/your/other/working/directory";

using (var repo = Repository.Init(path))
{
    repo.Config.Set("core.worktree", newWorkdir);
}

using (var repo = new Repository(path))
{
    // Do YOUR amazing stuff ;-)
}
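As for the gitlink: it is just a plain-text .git file in the working directory whose single gitdir: line points back at the actual repository folder. A minimal sketch of creating it by hand (this is my own suggestion, not a LibGit2Sharp API; paths reused from above):

// The gitlink is a ".git" *file* (not a folder) in the working directory.
// Its single "gitdir:" line points at the real repository's git directory.
File.WriteAllText(
    Path.Combine(@"D:\path\to\your\other\working\directory", ".git"),
    "gitdir: D:/path/to/your/repo/.git\n");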
Update 2:
LibGit2Sharp has just been updated with PR #453.
Among other things, this makes Repository.Init() accept a separate git directory, which should fit your requirements.
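A rough sketch of what that looks like (the two-string overload and its parameter order are my reading of the PR; double-check against the LibGit2Sharp version you're using):

var workdir = @"D:\path\to\your\other\working\directory";
var gitdir = @"D:\path\to\your\repo";

// Init creates the repository under gitdir while wiring its working
// directory (gitlink and core.worktree included) to workdir.
using (var repo = Repository.Init(workdir, gitdir))
{
    // repo.Info.WorkingDirectory now points at workdir
}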
A NixOS configuration is built using the /etc/nixos/configuration.nix file. This configuration has a nix.package property.
On a NixOS instance, I want to print the version/hash (i.e., unique identifier) of the nix.package object that was used to build the current instance. Ideally, this would be stored inside a lockfile, but I don't believe the current version of nixos-rebuild uses those.
Should this not be possible, can I explicitly store this hash somewhere during the build process by modifying my /etc/nixos/configuration.nix?
Yes, you can access this attribute via NixOS' config parameter and use it in your configuration, or as part of a package.
For example, this module causes the version and the store path to be written to files in /etc upon activation.
{ config, lib, ... }:
{
  config = {
    environment.etc."x-nix-version".text =
      config.nix.package.version;
    environment.etc."x-nix-path".text =
      "${config.nix.package}";
  };
}
Alternatively, you can extract it from a potentially not-yet-built configuration using the nixos-option command or nix repl '<nixpkgs/nixos>'.
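For instance (a rough sketch; nixos-option prints the whole option value, while the repl lets you drill down to attributes like version):

nixos-option nix.package

nix repl '<nixpkgs/nixos>'
nix-repl> config.nix.package.version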
So I've noticed some strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem.
The problem is that the system where we save the file does not cooperate well with the logic app; the logic app sometimes fails because the system takes the file before the logic app can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first create the files under a temporary name, let the logic app finish writing them, and then rename them.
This solution sounded quite simple before we actually started applying it to the logic app.
If we take the File System connector, which has a Rename File action, and use the “Name” parameter from the on-prem Create File action, we get:
{
    "statusCode": 404,
    "message": "Resource not found"
}
The resource is not found, which complicates a lot of things. I've checked the privileges on the account; that should not be the issue.
What we also tried is listing all files in the folder, creating a foreach, and then adding a rule and the Rename File action. This works, but the logic app does not cope well with receiving a lot of files at once with that solution.
So Rename File works when it's in a foreach loop and we extract the file names in a list from the root folder or a normal folder.
But why does it not work when just using the Rename File action on its own? Is this perhaps a bug in the Logic App Rename File action?
After discussing with Microsoft Azure support, they have actually confirmed that there is a bug with the “Create File” action.
It looks like all the data and information is actually lost during that action; the support technicians do not know why that is happening, but they have had similar cases reported by others.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it's not just the “name” tag that is lost from Create File (all of its output values are actually lost).
So first we initialize a variable and then set its value in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name set in the “Set Variable Temp FileName” action.
On the Rename File action we use the path where we store the temp file and append \“FILENAME”,
and add the “New Name” which we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use “Rename File” after creating the file with a temp name and then change it to the desired name.
But since Create File does not send or pass along any information at all from this list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the logic app has finished creating them and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue; and the workaround of renaming the file also failed.
The cause seems to be that the Azure On Prem Gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), hence the confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so they'd be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files.
When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e., it doesn't have to wait for the On Prem Gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted. The consumer therefore can't remove the file, buying time for the On Prem Gateway to perform its post-write read and report success.
Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config) : this(config.Path, config.Filter, config.TimeToLockInSeconds) { Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'."); }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // Open with FileShare.ReadWrite so the gateway can finish writing,
            // while our open read handle prevents the file from being deleted.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}
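For completeness, a hedged usage sketch (the drop-folder path and filter here are made up for illustration; in the real service these came from the configuration file):

// Watch a hypothetical drop folder for new or renamed .xml files and
// hold each match for 30 seconds so the consumer can't grab it early.
using (var interceptor = new Interceptor(@"C:\drop", "*.xml", 30))
{
    interceptor.Start();
    Console.WriteLine("Watching... press Enter to stop.");
    Console.ReadLine();
}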
I have already created a test blueprint that works, so I think I've got the general idea, but I would like to make sure that I am approaching this correctly.
I want to extend the field type prompt to offer custom types alongside String, int, boolean etc.
This means I need to modify the templates, like templates/src/main/java/package/domain/Entity.java.ejs
My blueprint only had generators/client and generators/entity-client, so I guess I have to:
create generators/entity-server
create index.js
create files.js (can I copy that from here https://github.com/jhipster/generator-jhipster/blob/master/generators/entity-server/files.js ?)
create the templates in entity-server/templates
create generators/entity
copy and modify generators/entity/prompts.js: do I just write a new prompts.js, or do I have to copy over everything in generators/entity and only change what I want to change?
For the templates, can I copy them from the JHipster repo?
Should I? If not, why not, and what is the alternative?
If copying them is the right move, do I have to copy everything? Or just the ones I want to modify? (I haven't checked yet whether I will need to modify everything.)
When JHipster is updated, I suppose I either manually merge the new files, or I risk my code slowly diverging more and more from the JHipster code?
Is there a simpler method to achieve what I am trying to do ?
It would be nice if I could just say I want to add TypeX and TypeY to that prompt and provide limited templates that only cover those types, like a template for the import, one for the field, and one for the setter and getter, and if only the import is provided, a generic template is used.
I'll try to answer all your questions.
First, to create a blueprint I suggest using https://github.com/jhipster/generator-jhipster-blueprint, even in another folder, and copying whatever you need into your current project. I think it's easier, and you can choose which generators you want to add, e.g. entity-server and entity.
Prompts phase
If you want to modify the prompt phase, you can merge your phase with the JHipster one like this:
get prompting() {
    const phaseFromJHipster = super._prompting();
    const phaseFromMe = {
        askForTheme: prompts.askForTheme,
        setMySharedConfigOptions() {
            this.configOptions.theme = this.theme;
        }
    };
    return { ...phaseFromJHipster, ...phaseFromMe };
}
(source: https://github.com/avdev4j/samSuffit/tree/master/generators/client)
But by doing this you can't modify existing questions; for that, you would have to copy all the existing questions into your blueprint.
Templates management
Your blueprint is tied to a JHipster version. As I like to say in my talks, you should copy and modify templates from JHipster, except for configuration files, because those are a bit tricky to handle. For them, I prefer to use the JHipster API like replaceContent(), or the needle API, which lets you insert code into existing files (like a dependency in the pom.xml file).
Of course you can do it whichever way you want, but from experience I prefer to control my templates and merge them when I upgrade the JHipster version.
You should only copy the templates you want to modify, and merge JHipster's writing phase with your own. JHipster uses Yeoman, which uses mem-fs to handle file generation. Every file is created in memory and only dumped to disk at the final step, so you can easily override a file without compromising performance.
get writing() {
    const phaseFromJHipster = super._writing();
    /* eslint-disable */
    const phaseFromSam = {
        writeSamFiles() {
            if (this.clientFramework === 'angularX') {
                return writeAngularFiles.call(this);
            }
        }
    };
    /* eslint-enable */
    return { ...phaseFromJHipster, ...phaseFromSam };
}
JHipster upgrade
I suggest checking the templates when upgrading JHipster and applying modifications if needed; otherwise, you could run into bugs. Also, I suggest pinning an exact JHipster version (like 6.1.0) in your blueprint's package.json.
As far as I know, there is no way to do what you want. I'm thinking about a way to modify prompts easily without copying all the other questions, if you want to contribute ;).
You can check the sample blueprint I use in my talks:
https://github.com/avdev4j/samSuffit/
I hope it helps; feel free to ask more questions.
Regards,
I'm trying to create a custom ImageFilter that requires me to temporarily write the image to disk, because I'm using a third party library that only takes FileInfo objects as parameters. I was hoping I could use IStorageProvider to easily write and get the file but I can't seem to find a way to either convert an IStorageFile to FileInfo or get the full path to the Media folder of the current tenant to retrieve the file myself.
public class CustomFilter : IImageFilterProvider
{
    public void ApplyFilter(FilterContext context)
    {
        if (context.Media.CanSeek)
        {
            context.Media.Seek(0, SeekOrigin.Begin);
        }

        // Save temporary image
        var fileName = context.FilePath.Split(new char[] { '\\' }, StringSplitOptions.RemoveEmptyEntries).LastOrDefault();
        if (!string.IsNullOrEmpty(fileName))
        {
            var tempFilePath = string.Format("tmp/tmp_{0}", fileName);
            _storageProvider.TrySaveStream(tempFilePath, context.Media);
            IStorageFile temp = _storageProvider.GetFile(tempFilePath);
            FileInfo tempFile = ???

            // Do all kinds of things with the temporary file

            // Convert back to Stream and pass along
            context.Media = tempFile.OpenRead();
        }
    }
}
FileSystemStorageProvider does a ton of heavy lifting to construct paths to the Media folder so it's a shame that they aren't publicly accessible. I would prefer not to have to copy all of that initialization code. Is there an easy way to directly access files in the Media folder?
I'm not using multitenancy, so forgive me if this is inaccurate, but this is the method I use for retrieving the full storage path and then selecting FileInfo objects from that:
_storagePath = HostingEnvironment.IsHosted
    ? HostingEnvironment.MapPath("~/Media/") ?? ""
    : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

files = Directory.GetFiles(_storagePath, "*", SearchOption.AllDirectories)
    .AsEnumerable()
    .Select(f => new FileInfo(f));
You can, of course, filter down the list of files using either Path.Combine with subfolder names, or a Where clause on that GetFiles call.
This is pretty much exactly what FileSystemStorageProvider uses, but I haven't had need of the other calls it makes outside of figuring out what _storagePath should be.
In short, yes, you will likely have to re-implement whatever private functions of FileSystemStorageProvider you need for the task. But you may not need all of them.
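If you want that path logic in one place, a small helper along these lines works (the class and method names are mine, and it assumes the default single-tenant ~/Media layout from the snippet above):

using System;
using System.IO;
using System.Web.Hosting;

public static class MediaPathHelper
{
    // Hypothetical helper: maps a storage-relative path (e.g. "tmp/tmp_foo.jpg")
    // to a FileInfo under the site's Media folder, mirroring the snippet above.
    public static FileInfo MapMediaFile(string relativePath)
    {
        var storagePath = HostingEnvironment.IsHosted
            ? HostingEnvironment.MapPath("~/Media/") ?? ""
            : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

        return new FileInfo(Path.Combine(storagePath, relativePath));
    }
}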
I was struggling with a similar issue, and I can say that the IStorageProvider abstraction is pretty restrictive.
You can see this when viewing the code of FileSystemStorageFile. The class already uses FileInfo to return data, but the underlying FileInfo itself isn't accessible, and other code is based on this. Therefore you would basically have to reimplement everything from scratch (your own implementation of IStorageProvider). The easiest option is to simply call
FileInfo fileInfo = new FileInfo(tempFilePath);
but this would break setups where a non-file-system-based storage provider is used, like AzureBlobStorageProvider.
The proper way to do this would be to get your hands dirty, extend the storage provider interfaces, and update all the code based on them. But as far as I can remember, the issue is that you then need to update the Azure implementation as well, and things get really messy. For that reason I abandoned this approach on my project.
I am using the botbuilder framework. I have defined several namespaces for the dialogs I have created, such as help or default. For all of these I have also created json files in my locale/en/ directory, and all is well.
However, I have a few sentences that are very common, and I don't feel like copying them over to each of the individual namespaces. I have tried using index.json as a 'fallback' in case the namespace file doesn't define the string, but it doesn't work for me, contrary to what the documentation seems to suggest.
/locale
    /en
        /help.json
        /default.json
        /index.json <-- Doesn't work
/dialogs
    /help.js
    /default.js
bot.js
Say I have the following library in help.js:
const lib = new builder.Library('help')
lib.dialog('/', (session) => {
    session.send('custom_cancel')
})
module.exports = lib
The library is used in bot.js:
bot.library(require('./dialogs/help'))
And index.json has this content:
{
    "custom_cancel": "My custom cancel"
}
Whereas help.json is empty:
{}
Because help.json does not have custom_cancel, the bot will actually send custom_cancel as the string.
Again, I can copy-paste the strings to both locations and the problem goes away, but that seems like an ugly solution to me.
I have tried the more explicit version, which seems to help in more cases, but I am not fully convinced yet.
session.localizer.gettext(session.preferredLocale(), 'custom_cancel')
You can use the third argument for the namespace. It seems that '' will point to the index.json file.
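Put together, that looks like this (a sketch based on the statement above; the empty-string namespace resolving to index.json is my reading of the behavior, not something I can vouch for from the docs):

session.localizer.gettext(session.preferredLocale(), 'custom_cancel', '')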