MLflow - How to migrate or copy a run from one experiment to another?

I am trying to move a run in MLflow from one experiment to another. Does anybody know if it's possible? If so, how? (I use the Python API.)

https://github.com/amesar/mlflow-export-import
You can copy a run from one experiment to another - either in the same tracking server or between two tracking servers. Caveats apply if they are Databricks MLflow tracking servers.
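If you only need a one-off copy within a single tracking server and would rather not pull in that tool, a rough sketch along these lines with the plain MLflow Python client should get you most of the way. The run and experiment IDs below are placeholders, and only params, the latest metric values, and non-system tags are copied, not metric history or artifacts:

from mlflow.tracking import MlflowClient

client = MlflowClient()

source_run_id = "<source-run-id>"                 # placeholder
target_experiment_id = "<target-experiment-id>"   # placeholder

source = client.get_run(source_run_id)
copy = client.create_run(target_experiment_id)

# Re-log params, latest metric values and (non-system) tags onto the new run.
for key, value in source.data.params.items():
    client.log_param(copy.info.run_id, key, value)
for key, value in source.data.metrics.items():
    client.log_metric(copy.info.run_id, key, value)
for key, value in source.data.tags.items():
    if not key.startswith("mlflow."):
        client.set_tag(copy.info.run_id, key, value)

client.set_terminated(copy.info.run_id)

If you also need the artifacts, client.download_artifacts on the source run followed by client.log_artifacts on the new run should cover the common cases.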

For local users, I directly move run folders and modify the experiment id in meta.yaml.

For local users, I directly move run folders and modify the experiment id in meta.yaml.
To elaborate: every run has its own folder, which contains subfolders such as "artifacts", "metrics", "params", and "tags"; you will also find a "meta.yaml" file in that directory.
Open the meta.yaml file and you should find the experiment_id; simply changing that number should do the trick.
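If you want to script that manual edit, a small sketch like the one below covers both the folder move and the meta.yaml fix; it assumes the default local ./mlruns layout of mlruns/<experiment_id>/<run_id>, and the IDs are placeholders:

import shutil
from pathlib import Path

mlruns = Path("mlruns")
run_id = "<run-id>"            # placeholder
source_experiment_id = "1"     # placeholder
target_experiment_id = "2"     # placeholder

source_dir = mlruns / source_experiment_id / run_id
target_dir = mlruns / target_experiment_id / run_id

# Move the whole run folder under the target experiment...
shutil.move(str(source_dir), str(target_dir))

# ...then point experiment_id in meta.yaml at the new experiment.
meta = target_dir / "meta.yaml"
lines = []
for line in meta.read_text().splitlines():
    if line.startswith("experiment_id:"):
        line = f"experiment_id: {target_experiment_id}"
    lines.append(line)
meta.write_text("\n".join(lines) + "\n")

# Note: on the local file store, the artifact_uri entry in meta.yaml may also
# embed the old experiment id in its path and then needs the same kind of rewrite.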

Related

How to hide PHP code on the server from another person with root access?

Good day!
I have PHP scripts, classes, and configs that are all interconnected. I need to give a person access to the server so that he can work with these scripts (run as root), changing only the config files, while not being able to view the source code.
I've researched various free obfuscators, which convert the code into something like this:
<?php include(base64_decode('Li4vY29uZmlnLnBocA=='));include(base64_decode('cHJpdmF0ZS92ZW5kb3IvYXV0b2xvYWQucGhw'));$krc_5bf7f45b=[];foreach($bhi_6f9322e1 as $xol_e8b7be43){$xol_e8b7be43=explode(base64_decode('Og=='),$xol_e8b7be43);try{$uic_c59361f8=new \xee_d9cb1642\cko_659fc60();$uic_c59361f8->ldc_aa08cb10($xol_e8b7be43[0],$xol_e8b7be43[1]);$krc_5bf7f45b[]=$uic_c59361f8;}catch(Exception $wky_efda7a5a)
But what if the config files reference variable names, and obfuscating the main working code renames those variables so they no longer match? Surely I shouldn't force the user to run every corrected config through the obfuscator each time? So far this option seems to be the only one.
Is it possible on an Ubuntu server to somehow limit the ability to copy, view, or download certain files, or apply some other method of protecting and hiding them, while still being able to run the code? I thought about hiding the code deep in the file system under randomly named directories and running it through symlinks by file name, or something like that. Is that possible?
Another option is not to provide root access to the server at all, but to launch via the browser and give FTP access only for uploading the config to a separate folder. But there are a few complications: the scripts run for up to a week and must be executed as root. How can this be solved?

Setting up a trigger to watch new folders in Azure Logic Apps

I am trying to create a logic app that will transfer files, as they are created, from my FTP server to my Azure file share. The folder my trigger is watching is structured by date (see below). Each day that a file is added, a new folder is created, so I need the trigger to check new subfolders, but I don't want to go into the app every day to change which folder the trigger looks at. Is this possible?
Here's how my folder (called DATA) is structured; each day that a file is added, a new folder is created:
-DATA-
2016-10-01
2016-10-02
2016-10-03
...
The FTP connector uses configurable polling, where you set how often it should look for a file. The trigger currently does not support dynamic folders. However, what you could try is the following:
Trigger your logic app by recurrence (same principle as the FTP trigger in fact)
Action: Create a variable to store the date time (format used in your folder naming)
Action: Do a list files in folder (here you should be able to dynamically set the folder name using the variable you created)
For-each file in folder
Action: Get File Content
Whatever you need to do with the file (calling a nested logic app is smart in case you need to do multiple processing actions on each file, or need to handle resubmits of the flow per file)
In order to avoid picking up every file each time, you will need to find a way to exclude files which have been processed in an earlier run. So either rename the file after it's processed to an extension you can exclude in the next run, or move the file to a subfolder "Processed\datetime" in the root.
This solution will require more actions and thus will be more expensive. I haven't tried it out, but I think this should work. Or at least it's the approach I would try to set up.
Unfortunately, what you're asking is not possible with the current FTP Connector. And there aren't any really great solutions right now... :(
As an aside, I've seen this pattern several times and, as you are seeing, it just causes more problems than it could ever solve, which realistically is zero. :)
If you own the FTP Server, the best thing to do is put the files in one folder.
If you do not own the FTP Server, politely mention to the owner that this pattern is causing problems and doesn't help you in any way, so please put the files in one folder ;)

How persistent is data I put on my Azure WebApp via FTP?

I've been searching around and can't find any clear answers to this. I need a small amount of data - talking kilobytes, probably not ever reaching megabyte range - available as a file on my Azure instance, outside the web app itself, for a web job to work with. I won't get into why this is necessary, but it is (alternatives have been explored), and the question is now where to put those files. The obvious answer seems to be to connect to the FTP, create a directory, plop them there and work with them there.
I did a quick test and I'm able to create a "downloads" directory within the "data" directory, drop some files in it, and work with them there. It works great for this very small, simple need that I have.
How long will that data stay there? Is that directory purged at any point automatically by the servers? Is that directory part of any backups that are maintained? How "safe" is something I manually put outside the wwwroot folder?
It will never be purged. The only folder that can get purged is the %TEMP% folder. All other folders that you have write access to will be persisted forever.
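For what it's worth, a web job can reach that area through the HOME environment variable that App Service sets (d:\home on Windows plans). A minimal sketch, assuming the same data\downloads directory described in the question:

import json
import os

# HOME points at the persistent d:\home area on an App Service instance;
# "data/downloads" mirrors the directory created manually in the question.
base = os.path.join(os.environ["HOME"], "data", "downloads")
os.makedirs(base, exist_ok=True)

path = os.path.join(base, "settings.json")
with open(path, "w") as f:
    json.dump({"example": True}, f)

with open(path) as f:
    print(json.load(f))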

Command for sharing the same folder under multiple names

I’m currently working on an improvement of our deployment process and I’d really like to automate one of the new steps. Through Windows UI I can share the same folder on the network under multiple different names.
There is a command-prompt command for sharing ("NET SHARE _shared$=c:\Example"), but it only allows adding one share name for the folder. Is there a prompt command for this, and/or is there a way to do this programmatically, like with the Windows API or even WMI?
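As far as I know there is no single NET SHARE invocation that creates several names at once, but the same folder can be shared under multiple names by running it once per alias; a small Python sketch for scripting that in a deployment step (the share names and path are placeholders), with WMI's Win32_Share.Create being the programmatic alternative:

import subprocess

folder = r"c:\Example"                 # placeholder path
aliases = ["shared1$", "shared2$"]     # placeholder share names

for name in aliases:
    # NET SHARE accepts one share name per call, so run it once per alias.
    subprocess.run(["net", "share", f"{name}={folder}"], check=True)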

Source code location for debugging multiple instances of an application

Hi, I have an application running separately (one instance per customer) in different folders, one for each customer.
Each customer is a separate user on my machine.
At the moment I have the source code in each of these folders, where I rebuild the code for each instance. Would it be better if I did something like the following?
create a shared folder where I build the code
deploy the binary in each user folder.
allow permission for each user to access the source code in READ ONLY mode.
when it is time to debug, gdb run in each user's folder will be able to read the source code, and debugging will happen.
Do you think this could be a better approach, or are there better practices?
My only concern is that each user would have the chance to read the source code, but since the users will not directly access their folders (they are under my control), this should not trouble me.
I am using CentOS 6.4, SVN, and G++/GDB.
in different folders
There are no "folders" on UNIX, they are called directories.
I rebuild the code per each instance
Why would you do that?
Is the code identical (it sounds like it is)? If so, build the application once. There is no reason at all to have multiple copies of the resulting binary, or the sources.
If you make the directory with sources and binaries world-readable, then every user will be able to debug it independently.
