Copying experiments from one MLflow server to another MLflow server - mlflow

I have a user on a Linux machine, and I run an MLflow server from this user; artifacts are stored in the local mlruns folder. Let's call this user A. Then I run another MLflow server from another Linux user, user B. I wanted to move older experiments that reside in the mlruns directory of user A to the MLflow server run by user B. I simply moved the mlruns directory of user A into the home directory of user B and ran mlflow from there again. When I opened the MLflow UI in a browser, I saw that the artifact location was correctly set to user B's mlruns folder, but I couldn't see the experiments moved over from user A's mlruns directory. How can I make them show up in the UI as well?

You can use the mlflow-export-import tooling to migrate experiments and runs between tracking servers.
See: https://github.com/amesar/mlflow-export-import
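A minimal sketch of that flow, assuming the tool's export-experiment and import-experiment console scripts, an experiment named my-experiment, and placeholder server URLs; the command names and flags here follow the project's README and may differ between versions, so verify against the current docs:

pip install mlflow-export-import
# Export one experiment from the old tracking server.
export MLFLOW_TRACKING_URI=http://old-server:5000
export-experiment --experiment my-experiment --output-dir /tmp/exported
# Import it into the new tracking server.
export MLFLOW_TRACKING_URI=http://new-server:5000
import-experiment --experiment-name my-experiment --input-dir /tmp/exported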

I managed to migrate experiments and runs from the older server to the new server by following these steps:
1. Copied the mlruns directory to the new server's location.
2. Created a new PostgreSQL database with the exact same content as the older server's database.
3. Changed the artifact_uri field of the runs table and the artifact_location field of the experiments table in the database to reflect the new location of the experiments and runs.
4. Started the server like this:
mlflow server --backend-store-uri postgresql://<user>:<password>@<host>/<db-name> --default-artifact-root <new-artifact-location> -h 0.0.0.0 -p 8000
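For step 3, a minimal sketch of the SQL, assuming the default MLflow schema (the experiments and runs tables named above) and placeholder paths you would replace with your own; back up the database before running updates like these:

# Rewrite the stored artifact locations to point at the new mlruns folder.
psql -d <db-name> -c "UPDATE experiments SET artifact_location = replace(artifact_location, '/home/userA/mlruns', '/home/userB/mlruns');"
psql -d <db-name> -c "UPDATE runs SET artifact_uri = replace(artifact_uri, '/home/userA/mlruns', '/home/userB/mlruns');"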

Related

Jenkins Slave Node: Can I use it to take over a build done on a different domain?

I have successfully set up Jenkins on local domain as a test. It builds from SCM, zips the build, extracts to a unique timestamp folder, and then copies over the files to the IIS folder.
I now have to set it up to deploy to an Azure VM. Now things are getting hairy.
I can get the file to copy across, but it takes a long time. Unzipping literally takes an hour.
Cross-domain user rights are also making things difficult, as the user running the Jenkins service does not exist on the production boxes, which are on Azure domains.
What are my options?
Should I install a slave node on the production box, "activate" the slave from the master, and then let the slave:
1. copy the file over from Azure storage to the production box?
2. extract the files?
3. copy the files to the IIS folder?
Well, there's no clear answer to this; try what works best for you. The main options I see are:
1. Use a slave node in Azure: upload the zip somewhere (an Azure storage account or whatever) and let the slave node handle the download/unpacking/etc.
2. Use remote PowerShell to connect directly to the servers in Azure, download the zip from the web (or Azure storage or whatever) and extract it.
3. Use a tool like Octopus, which does essentially the same thing but is built with deployments in mind.

Moving neo4j database from Windows to Ubuntu

I created a Neo4j database using Cypher queries through the browser and some Python (py2neo) routines.
Now, I have to transfer this database to another neo4j instance on my Linux desktop.
What I did:
Zipped the contents of the folder default.graphdb.
Unzipped the contents of the zip file to data/graph.db in my Linux installation.
Also, the user:pass of the database is the same.
But when I go to the browser, I can't find any of that data. The directory does point to the folder that I extracted to (/home/goelakash/neo4j-community-2.3.0/data/graph.db).
How do I get that database?
EDIT - messages.log
https://drive.google.com/file/d/0B3JPglmAz1b5ak1vRWR5Z0p5UVE/view?usp=sharing
The data files should be located directly in data/graph.db, so check e.g. whether there is a file called neostore.nodestore.db. If so, check the permissions: the system user running Neo4j needs full recursive read/write permission on the graph.db folder.
Also make sure that you're using the same version of Neo4j on Windows and Linux (or upgrade the store following the reference manual).
For more insight, attach the startup sequence from data/graph.db/messages.log.
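A quick way to run those checks on the Linux box, assuming the store path from the question; the user and group that run Neo4j are placeholders:

cd /home/goelakash/neo4j-community-2.3.0/data/graph.db
ls neostore.nodestore.db   # the store files must sit directly in graph.db, not in a subfolder
# Give the user that starts Neo4j full recursive read/write access to the store.
sudo chown -R <neo4j-user>:<neo4j-group> /home/goelakash/neo4j-community-2.3.0/data/graph.db
sudo chmod -R u+rwX /home/goelakash/neo4j-community-2.3.0/data/graph.db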

How can I bundle my database file into my app, ready for a side load install?

I am creating a Windows 10 Universal App which uses a local SQLite Database.
In order for the app to use the database file, it must be placed in:
C:\Users\<Username>\AppData\Local\Packages\<Name of Package>\Local State
Now, I understand this is the 'local' file structure for the application. However, I have a pre-made database that the app needs to interact with, and it should therefore be bundled as part of the app on install.
Is there a method of including my database in a usable fashion when distributing my application via a side-load install?
Furthermore, this problem is of paramount importance, as this C:\ directory will not exist when pushing my application to a mobile phone or another Windows 10 (non-desktop) device.
You cannot package the database directly as read-write data (local state). If you only ever need to read from the database, you can just include it in your project and read it from Package.Current.InstalledLocation.
If you need to write to the database, but it contains some initial values you want to ship with your app, then you still need to include the database in your project, but then copy it from the InstalledLocation to ApplicationData.Current.LocalFolder if it doesn't exist when your app starts up.
You can always export your existing database as an SQL script and save it in your project assets.
On the first run of your application, you can create the SQLite file in your LocalFolder and run the script's CREATE and INSERT queries.
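For the export side of that approach, a one-line sketch with the sqlite3 command-line tool (the file names are examples); running the script on first launch then happens in your app code:

# Dump the pre-made database to a script of CREATE and INSERT statements.
sqlite3 premade.db .dump > Assets/seed.sql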

How to package synced folder in vagrant box

What I want and achieved so far:
I want to create a custom Vagrant box, including a configuration and an application, to reuse it in different client or server environments.
Specifically, I managed to create a Vagrant box based on Ubuntu (precise/64) that has node.js installed, and to package it on my dev machine with
vagrant package my-box --output filename.box
I am able to copy the filename.box to a remote server and vagrant up the box there. Node.js is installed within the vagrant box as expected.
The problem is that I am not able to package the files in the synced folder /vagrant: after starting the box on the remote server, the synced folder is empty.
Therefore the application I developed on the local machine is not included in the box.
I tried to find a solution or any information about this behavior, but apart from this unanswered post I couldn't find anything on the net.
My questions:
How can I preserve the files in the synced folder and package them into filename.box for reuse in the server environment?
Is this even possible? Is the behavior I see a bug, or is Vagrant not meant to package these files as well?
I didn't do any configuration for the synced folders so far. Is it possible to package files from a different synced folder than the regular /vagrant?
If this is not possible at all, what are best practices for the deployment or reuse of Vagrant environments including applications?
1-3) No. This is not possible and not intended to work the way you expect.
Think of VirtualBox's shared folder as a mounted volume on a remote machine. It's not part of the file system of your virtual machine; the actual data is saved on the host machine, not the virtual one.
4) If you want to add data into your box, just copy it over to the VM before you pack it. No need to use shared folders.
You cannot package a synced folder but what you are desiring is absolutely possible.
The easiest way to accomplish this is to put the data in some other directory in the box (thus ensuring it gets packaged with the box), and then, during the Vagrant box's provisioning, move or copy the data to the synced directory.
Once the box is up and running, the synced directory will have the files you want in it.
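As a minimal sketch of that approach, assume the application files were copied to /opt/myapp inside the VM before packaging, and that the Vagrantfile registers a shell provisioner with config.vm.provision "shell", path: "copy-app.sh" (the directory and script name are examples):

# copy-app.sh -- runs inside the guest when the box is provisioned.
# Copy the data that was baked into the box out into the synced folder.
cp -r /opt/myapp/. /vagrant/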

Can I migrate the installed OpenAM to another machine?

Thank you everyone in advance... ^^;
My manager asked me to migrate the installed OpenAM (already used) to another machine (newly obtained).
I tried to migrate it at the file level
(the .openamcfg folder, the openam folder, the whole Tomcat folder, ...)
But after the file migration, the first access to /openam showed the initial setup wizard again (it did not use the installed configuration).
So I would have to do the first steps all over again (setting the amadmin password, and so on...).
Hmm... is there any solution for migrating a pre-installed OpenAM instance?
If not, I can tell her there's no way to migrate it.
Migrating the files should be enough; however, you must make sure that:
The .openamcfg folder is in Tomcat's home folder
The file inside the .openamcfg folder contains a valid path to the openam folder
The .openamcfg and openam folders are readable and writable by the user who runs the Tomcat service
The webapp context (the part of the URL after the server's IP address, typically /openam) stays the same
Also, when you copy the files, you must first properly stop the Tomcat service, especially if you use OpenAM's embedded datastore.
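A sketch of those checks as shell commands on the new machine, assuming Tomcat runs as the user tomcat with home /home/tomcat and is installed under /opt/tomcat (all of these names are examples):

# Stop Tomcat cleanly before copying the files over.
/opt/tomcat/bin/shutdown.sh
# The file inside .openamcfg must contain the path of the copied openam folder.
cat /home/tomcat/.openamcfg/*
# Make both folders readable and writable by the Tomcat user.
sudo chown -R tomcat:tomcat /home/tomcat/.openamcfg /home/tomcat/openam
/opt/tomcat/bin/startup.sh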
