Storing MLflow artifacts to S3 while having a SQL database as backend - mlflow

When using a SQL database as the backend for MLflow, are the artifacts stored in the same database or in the default ./mlruns directory?
Is it possible to store them in a different location, such as AWS S3?

Yes, you can use different artifact locations for each experiment while keeping the same backend registry. Here is an example that shows it.
In this example, my backend registry is "mlruns.db" and the artifacts are stored in their respective locations.
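A minimal sketch of what such a setup might look like in Python (the experiment names and the bucket name are made up for illustration):

import mlflow

# Same backend registry (SQLite file) for every experiment.
mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Each experiment gets its own artifact location: one local, one on S3.
exp_local = mlflow.create_experiment("local-artifacts", artifact_location="./my_artifacts")
exp_s3 = mlflow.create_experiment("s3-artifacts", artifact_location="s3://<bucket_name>/mlflow")

with mlflow.start_run(experiment_id=exp_s3):
    mlflow.log_param("alpha", 0.1)
    mlflow.log_artifact("model.pkl")  # assumes model.pkl exists; it is uploaded to this experiment's S3 location

Run metadata (params, metrics, tags) goes to mlruns.db in both cases; only the artifact files follow the per-experiment location.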

Yes. You can make use of the mlflow server command as below.
mlflow server --backend-store-uri=sqlite:///mlflow.db --default-artifact-root="s3://<bucket_name>" --host 0.0.0.0 --port 80
Also, don't forget to install boto3 and configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables so that MLflow can read from and write to the bucket.
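For completeness, a hedged sketch of the client side, assuming the server above is reachable at a hypothetical <server_host> and that the AWS keys are available wherever the artifacts are uploaded from:

import os
import mlflow

# The keys are set inline here only for illustration; in practice they would
# come from the shell, CI secrets, or an instance profile.
os.environ["AWS_ACCESS_KEY_ID"] = "..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."

mlflow.set_tracking_uri("http://<server_host>:80")

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_artifact("model.pkl")  # artifact goes to s3://<bucket_name>; run metadata goes to mlflow.db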

Related

Passing arguments to Docker containers via Kubernetes Deployment YML

I'm using a CI/CD pipeline to deploy a Node.js app on my Kubernetes cluster. We use several sensitive environment variables locally, and we would like to deploy them as environment variables within the cluster to be used by the different containers...
Which strategy should I go with?
TIA
There are many tools that let you inject secrets into Kubernetes safely.
Natively, you can use the "Secrets" object: https://kubernetes.io/docs/concepts/configuration/secret/
and mount the secret as an environment variable.
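As a rough sketch of that native approach, here is how a Secret could be created with the official Kubernetes Python client (the secret name, key, and value are made up; the Deployment reference is shown in the comment):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="app-secrets"),
    string_data={"DB_PASSWORD": "change-me"},  # stored base64-encoded by the API server
)
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)

# In the Deployment spec, the container then picks it up as an env var:
#   env:
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: app-secrets
#         key: DB_PASSWORD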
Alternatively, you can use some open-source tools that make this process more secure by encrypting the secrets; here are some I recommend:
https://learnk8s.io/kubernetes-secrets-in-git
https://www.vaultproject.io/docs/platform/k8s

DC/OS private registry with authentication fails

I have a running DC/OS cluster on Azure and I'm trying to configure it to use private registry credentials.
I'm running an Azure private registry with the admin account enabled. I can log in and use the images.
I followed the guide provided by DC/OS, but it recommends saving the credentials on the nodes themselves. I want to use Azure File Storage instead.
I saved the config.json file used to authenticate to the login server on a blob and provide its URI in the deployment configuration.
config.json:
{
  "auths": {
    "stageon.azurecr.io": {
      "auth": "..."
    }
  }
}
Now the deployment just keeps running without any output, so I assume it's hanging on pulling the image.
I am providing the direct URL to the file, and when I access it through a web browser it returns the JSON.
Has anyone done something similar before? I found this thread for Amazon, but I can't seem to get it working.
I've used a customization to acs-engine a few times to push registry credentials to the agent nodes.
This approach makes sure that the credentials will be present even when you add nodes later on.
The code is here: https://github.com/xtophs/acs-engine-1/tree/xtoph-registry. Example cluster API model is at: https://github.com/xtophs/acs-engine-1/blob/xtoph-registry/examples/privateregistry/dcos1.8.4.json

Error when deploying from codeship to amazon aws

I have a local git repo and I am trying to do continuous integration and deployment using Codeship. https://documentation.codeship.com
I have the GitHub repo hooked up to continuous integration and it seems to work fine.
I have an AWS account and a bucket on there with my access keys and permissions.
When I run the deploy script I get this error:
How can I fix the error?
I had this very issue when using aws-cli and relying on the following files to hold AWS credentials and config for the default profile:
~/.aws/credentials
~/.aws/config
I suspect there is an issue with this technique, as reported on GitHub: Unable to locate credentials.
I ended up using the Codeship project's environment variables for the following:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Now, this is not ideal. However, my AWS IAM user has very limited access, enough only to perform the specific task of uploading to the bucket used for the deployment.
Alternatively, depending on your needs, you could also check out the Codeship Pro platform; it allows you to have an encrypted file with environment variables that are decrypted at runtime, during your build.
On both the Basic and Pro platforms, if you want or need to use credentials in a file, you can store the credentials in environment variables (as suggested by Nigel) and then echo them into the file as part of your test setup.
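The original suggestion is a shell echo in the setup commands; an equivalent sketch in Python (paths and the "default" profile name are the standard AWS defaults) that rebuilds the credentials file from those environment variables:

import os
from pathlib import Path

# Recreate ~/.aws/credentials from the Codeship environment variables so that
# tools expecting the file (rather than the env vars) still find the keys.
creds = Path.home() / ".aws" / "credentials"
creds.parent.mkdir(parents=True, exist_ok=True)
creds.write_text(
    "[default]\n"
    f"aws_access_key_id = {os.environ['AWS_ACCESS_KEY_ID']}\n"
    f"aws_secret_access_key = {os.environ['AWS_SECRET_ACCESS_KEY']}\n"
)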

AWS Elastic Beanstalk NodeJS Project access S3 Bucket

I want to access an S3 bucket from my Node.js application without writing and committing the credentials for this bucket in my application. I see that it is possible to set a .config file in the .elasticbeanstalk folder where you can specify RDS databases. In the application you can then use this RDS instance without setting any credentials, via variables like process.env.RDS_HOSTNAME. I want the same for the S3 bucket, but process.env.S3_xxx doesn't work. How should the .config look?
Alternatively, you can explicitly set an environment variable from Elastic Beanstalk at http://console.aws.amazon.com:
Step 1: Go to the above URL, log in, and open your Elastic Beanstalk app.
Step 2: Open the Configuration tab and, within it, open the software configuration.
Step 3: Scroll to Environment Properties and add your variable there, i.e. Property Name: S3_xxx, Property Value: "whatever value".
Now you can access this variable in your app using process.env.S3_xxx, without any .config file in your app.
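The question is about a Node.js app (hence process.env.S3_xxx), but as an illustration of the same idea in Python with boto3, assuming the environment property is named S3_xxx:

import os
import boto3

bucket = os.environ["S3_xxx"]  # set in the Elastic Beanstalk console, never committed to the repo
s3 = boto3.client("s3")        # credentials come from the instance profile / environment, not from code
s3.upload_file("report.csv", bucket, "reports/report.csv")  # hypothetical file and key names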

Configure AWS credentials to work with both the CLI and SDKs

In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
After lots of searching, it was the excerpt from this article that caused a eureka moment.
If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named config. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the aws configure command that you can use to set credentials from the command line will put the credentials in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't found any interface for maintaining ~/.aws/credentials other than hand-editing it just yet.
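A quick way to confirm the shared file is being picked up (shown here with Python and boto3 for illustration, though the question uses the Node.js SDK):

import boto3

# With the keys in ~/.aws/credentials, the SDK resolves them with no explicit
# configuration, just like the CLI does.
creds = boto3.Session().get_credentials()
print(creds.access_key)  # should match what `aws configure` wrote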
