It has been a while since I last touched MLflow (I plan to refresh my knowledge of it now), but I remember that it stores metadata, parameters, models, etc. in a local folder (mlruns?) inside your ML repo.
How can all of this be automatically saved (uploaded?) to a NAS location?
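For example, I imagine something like the following could work, assuming the NAS share is mounted on the local filesystem (the /mnt/nas path below is purely hypothetical):

import mlflow

# Point the tracking store at the mounted NAS share instead of the default ./mlruns
mlflow.set_tracking_uri("file:///mnt/nas/mlruns")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # params/metrics/artifacts now land on the NAS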
We are trying to use the cloud hot folder functionality, and to do so we are modifying our existing hot folder implementation, which was not originally written for use in the cloud.
Following the steps on this help page:
https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/SHIP/en-US/4abf9290a64f43b59fbf35a3d8e5ba4d.html
We are trying to test the cloud functionality locally. I have an Azurite Docker container running on my machine and have modified the mentioned properties in the local.properties file, but the files are not being picked up by hybris in any of the cases we have tried.
First, in our local Azurite storage we have a blob container called hybris. Within this container we have the folders master/hotfolder, and according to the docs, uploading a sample.csv file there should trigger a hot folder import.
We also have a mapping for our hot folder import that scans the files within this folder: #{baseDirectory}/${tenantId}/sample/classifications. baseDirectory is configured using a property like so: ${HYBRIS_DATA_DIR}/sample/import
Can we keep these mappings within our hot folder xml definitions, or do we need to change them?
How should the blob container be named in order for it to be accessible to hybris?
Thank you very much,
I would be very happy to provide any further information.
In the end I did manage to run cloud hot folder imports on my local machine.
It was a matter of correctly configuring a number of properties used by the cloudhotfolder and azurecloudhotfolder extensions.
Simply use the following properties to set the desired behaviour of the system:
# make this node part of the groups that process hot folder files
cluster.node.groups=integration,yHotfolderCandidate
# Azurite's well-known development-storage account and key; note the local blob endpoint and port
azure.hotfolder.storage.account.connection-string=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:32770/devstoreaccount1;
# path inside the container that is polled for incoming files
azure.hotfolder.storage.container.hotfolder=${tenantId}/your/path/here
# only file names matching this pattern are picked up and routed to the mappings
cloud.hotfolder.default.mapping.file.name.pattern=^(customer|product|url_media|sampleFilePattern|anotherFileNamePattern)-\\d+.*
# base URL used to resolve media/image references during import
cloud.hotfolder.default.images.root.url=http://127.0.0.1:32785/devstoreaccount1/${azure.hotfolder.storage.container.name}/master/path/to/media/folder
# catalog used in the generated import header
cloud.hotfolder.default.mapping.header.catalog=YourProductCatalog
And that is it. If there are existing routings from a traditional hot folder import, these can also be reused, but their file names must be matched by the value of the cloud.hotfolder.default.mapping.file.name.pattern property.
I am trying the same thing - setting up a local dev environment to test the cloud hot folder. It seems you have had some success. Can you share where you located the azurecloudhotfolder extension, which is called out here: https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/SHIP/en-US/4abf9290a64f43b59fbf35a3d8e5ba4d.html
Thanks
I am migrating from Batch AI to the new Azure Machine Learning service. Previously I had my Python scripts on an Azure Files share, and those scripts ran directly from there.
In the new service when you create an Estimator you have to provide a source directory and an entry script. The documentation states the source directory is a local directory that is copied to the remote computer.
However, the Estimator constructor also allows you to specify a datastore name that is supposed to specify the datastore for the project share.
To me, this sounds like you should be able to specify a datastore and have the source directory resolved relative to it; however, this does not work - it still expects to find the source directory on the local machine.
# Assumes ds (a registered datastore), script_params and compute_target
# were created earlier with the Azure ML SDK.
from azureml.train.dnn import TensorFlow

tf_est = TensorFlow(source_directory='./script',
                    source_directory_data_store=ds,
                    script_params=script_params,
                    compute_target=compute_target,
                    entry_script='helloworld.py',
                    use_gpu=False)
Does anybody know if it's possible to run a training job using a datastore for execution?
I created a Spring Boot web project and have already uploaded it to a server (CentOS 7).
Currently, images uploaded to the application are stored inside the static package in the jar file.
This makes the jar file very large and hard to edit.
Can someone give me an idea of how to store the images somewhere else on the server, and how to reference a picture outside the jar from my HTML?
First of all, you have to decide in which directory you are going to store your files, and create it:
mkdir /path/to/your/dir
Then assign the newly created directory to your application user:
chown <your user>:<users group> /path/to/your/dir
Then don't forget to give read/write permissions on the created directory to the user under which you run your app:
chmod 700 /path/to/your/dir - this gives only that user read, write, and traverse access to the directory (the execute bit on a directory is required to reach the files inside it; whether the files themselves are executable is controlled by their own permission bits).
Then just replace the path you already have with the new one (pointing to the newly created directory).
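If the images also need to be reachable from your HTML, one option (a sketch, assuming Spring Boot 2.x static-resource handling; the directory path is the example one from above) is to add the external directory to the static resource locations in application.properties:

# serve files from the external directory in addition to the jar's static package
spring.resources.static-locations=classpath:/static/,file:/path/to/your/dir/

With that in place, a file saved as /path/to/your/dir/cat.jpg can be referenced from HTML as <img src="/cat.jpg">.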
Please be aware that there is a lot of security to consider when you store files on your own server.
By the way, consider reading about different storage options like AWS S3, Google Cloud Storage and Ceph.
Take into account that if you store files on your own server, then you have to look after them yourself (for example: keep an eye on disk space, make sure you have mirroring across disks, and so on). With AWS S3, for example, you don't need to worry about any of that, and it's very cheap.
I know that a good way to store data like DB passwords is via environment variables, but setting environment variables manually for every server instance created is time-consuming.
I'm planning to deploy my project to the cloud (using AWS Elastic Beanstalk or Heroku).
Where should I store my DB password?
I don't think the .ebextensions config is a good option, because it's tracked in VCS.
Don't ever store secrets in source control. A common practice is to put them either in a secure file or in something like https://www.vaultproject.io/, then inject them (programmatically, via a script or some other deployment/configuration tool) into the environment when you bring up your VM (or container, or whatever).
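On the consuming side it is then trivial to read; a minimal Python sketch (the DB_PASSWORD variable name is just an example - your deployment tooling would inject it):

import os

# The secret comes from the environment at runtime and never lives in source control.
db_password = os.environ["DB_PASSWORD"]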
My recommendation is to create a properties file stored in the resources folder of your application, so the code can access it without needing environment variables. One properties file can contain all the databases' user IDs and passwords, and the deploy job can pick the right entries based on the URL mappings in the file. For example, look at a Spring/Hibernate example project that uses a properties file, or at Ant deploy scripts. Hope it helps.
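For illustration, such a properties file (every name and value below is made up) might look like:

# db.properties - one entry per database, keyed by environment/URL
db.dev.url=jdbc:mysql://localhost:3306/app
db.dev.username=appuser
db.dev.password=changeme
db.prod.url=jdbc:mysql://prod-db:3306/app
db.prod.username=appuser
db.prod.password=changeme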
I accidentally deleted a table from the Data tab of my Mobile Service. Is there any way I can recover it?
I used the default free database that comes with a mobile service. I don't really care about the data in the table; what I really want are the scripts that ran on it.
Update: in order to retrieve the data I did the following.
I cloned the mobile service, reverted it to a previous commit, copied the deleted table and its script files, pulled again from the server, put the table and the script files back where they belonged, added the files to the git tracking index, and pushed the commit to master.
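Roughly, the git side of that looked like the following sketch (the commit hash is a placeholder, and the table/ paths should be adjusted to match your clone's layout):

git clone <mobile-service-git-url> recovery
cd recovery
# restore the deleted table definition and its scripts from an earlier commit
git checkout <commit-before-the-delete> -- table/users.json "table/users.*.js"
git add table/
git commit -m "Restore deleted users table and scripts"
git push origin master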
Now the files are there in the Azure Mobile Service, but the table is not displayed in the GUI.
I tried restarting the Azure Mobile Service, but it is still not there.
To confirm that the table and its files were indeed there, I even cloned the mobile service again, and this time the table folder contained users.json and its script files, but sadly they are still not visible in the Azure portal.
To get the table to show up again in the UI, you need to use the portal's create table command. It will essentially no-op if it detects the table already exists in SQL. I don't believe it will touch your table scripts; however, it may override the .json permissions file.
If it does override those files, then after creating the table through the UI you can revert the commit that changed the json/js table files as part of that process.
At that point you should be good again.