`mlflow server` - Difference between `--default-artifact-root` and `--artifacts-destination` - mlflow

I am using mlflow server to set up an MLflow tracking server. mlflow server has two command options that accept an artifact URI: --default-artifact-root <URI> and --artifacts-destination <URI>.
From my understanding, --artifacts-destination is used when the tracking server is serving the artifacts.
Based on scenarios 4 and 5 from the MLflow Tracking documentation:
mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --default-artifact-root s3://bucket_name --host remote_host --no-serve-artifacts
mlflow server \
--backend-store-uri postgresql://user:password@postgres:5432/mlflowdb \
# Artifact access is enabled through the proxy URI 'mlflow-artifacts:/',
# giving users access to this location without having to manage credentials
# or permissions.
--artifacts-destination s3://bucket_name \
--host remote_host
In these two scenarios, both --default-artifact-root and --artifacts-destination accept an S3 bucket URI, s3://bucket_name, as the argument. I fail to see why we need two separate command options for setting the artifact URI.
Their descriptions are
--default-artifact-root <URI>
Directory in which to store artifacts for any new experiments created. For tracking server backends that rely on SQL, this option is required in order to store artifacts. Note that this flag does not impact already-created experiments with any previous configuration of an MLflow server instance. By default, data will be logged to the mlflow-artifacts:/ URI proxy if the --serve-artifacts option is enabled. Otherwise, the default location will be ./mlruns.
--artifacts-destination <URI>
The base artifact location from which to resolve artifact upload/download/list requests (e.g. 's3://my-bucket'). Defaults to a local './mlartifacts' directory. This option only applies when the tracking server is configured to stream artifacts and the experiment's artifact root location is http or mlflow-artifacts URI.
What is the reason for having the two command options? What happens if both are specified? Does one URI take precedence over the other?

At first it looks confusing because you have a lot of flexibility: you can use both of them or only one of them. Let's explain it a bit more :-)
--default-artifact-root sets the directory in which artifacts are stored for every new experiment.
NOTE: Its default value depends on whether --serve-artifacts is enabled (mlflow-artifacts:/ if enabled, ./mlruns otherwise).
--artifacts-destination sets the backing location against which the server resolves proxied artifact requests.
NOTE: This option only applies when the tracking server is configured to stream artifacts (--serve-artifacts is enabled) AND the experiment's artifact root location is an http or mlflow-artifacts URI.
Case 1: Use both --default-artifact-root & --artifacts-destination:
mlflow server \
--default-artifact-root mlflow-artifacts:/ \
--artifacts-destination s3://my-root-bucket \
--host remote_host \
--serve-artifacts
Here the two options answer different questions, so neither takes precedence: new experiments record mlflow-artifacts:/ as their artifact root (what clients see), while the server resolves that proxy URI to s3://my-root-bucket (where the artifacts actually land).
Case 2: Use only --artifacts-destination:
mlflow server \
--artifacts-destination s3://my-root-bucket \
--host remote_host \
--serve-artifacts
Case 3: Use only --default-artifact-root:
mlflow server \
--default-artifact-root s3://my-root-bucket/mlartifacts \
--serve-artifacts
In this case the server can resolve all the following patterns to the configured proxied object store location of s3://my-root-bucket/mlartifacts:
https://<host>:<port>/mlartifacts
http://<host>/mlartifacts
mlflow-artifacts://<host>/mlartifacts
mlflow-artifacts://<host>:<port>/mlartifacts
mlflow-artifacts:/mlartifacts
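To see the proxy from the client side, a quick check like the following should work against Case 1 (a sketch: remote_host, port 5000, model.pkl, and the run ID are placeholders):
export MLFLOW_TRACKING_URI=http://remote_host:5000
# The upload goes through the tracking server, which forwards it to the
# configured object store; the client never needs S3 credentials:
mlflow artifacts log-artifact --local-file model.pkl --run-id <run_id>
# List what the server stored for that run:
mlflow artifacts list --run-id <run_id>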

Related

Fetch secrets and certificates from AzureKeyVault inside Docker container

I have a .NET Framework console application. Inside this application, I'm fetching secrets and certificates from Key Vault using a tenant ID, client ID and client secret.
The application fetches secrets and certificates properly.
Now I have containerized the application using Docker. After running the image I'm unable to fetch secrets and certificates. I'm getting the below error:
"Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container was set up behind an Nginx reverse proxy.
If yes, then try removing the upstream section from the Nginx reverse proxy and set proxy_pass to use the docker-compose service's hostname.
After any change make sure to restart WSL and Docker.
Check whether DNS is resolving the host names successfully; if not, try adding the below under your service in your docker-compose.yml file (a docker run equivalent is shown after this list).
dns:
  - 8.8.8.8
If the above doesn't work, stop WSL from auto-generating /etc/resolv.conf by adding the below to /etc/wsl.conf:
[network]
generateResolvConf = false
then add a DNS server to /etc/resolv.conf:
nameserver 8.8.8.8
Try restarting WSL by running the below command as an Admin:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
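If the container is started with plain docker run rather than Compose, the equivalent of the dns: key is the --dns flag (the image name is a placeholder):
docker run --dns 8.8.8.8 myconsoleapp:latest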
For more detail, please refer to the links below:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow

ERROR: Registering runner with gitlab-runner

I read other posts and the solutions described, but they didn't work for me.
I have my own GitLab server running on AWS at the URL mygitlab.com. The GitLab server works fine with a lot of projects.
I have another server, S1, in the same AWS network as my GitLab server. The servers see each other; telnet works fine on ports 80 (HTTP) and 443 (HTTPS) from my server S1 to the GitLab server.
For my project named "test" on my GitLab server, I go to the project's web page, then to "Settings -> CI/CD", expand the "Runners" section, and go to the "Specific runners" section, which says: "These runners are specific to this project." I copy the given URL (mygitlab.com) and the specific token.
On my server S1 I installed gitlab-runner, then I launch:
sudo gitlab-runner register --url https://mygitlab.com --registration-token mytoken
I have this error :
ERROR: Registering runner... forbidden (check registration token) runner=mytoken
PANIC: Failed to register the runner. You may be having network problems.
I tried http instead of https and got the same error.
I tried and checked solutions I read here and on other forums:
be sure the token is specific to the project: done!
try to "Reset registration token": done!
is there 127.0.0.1 localhost in /etc/hosts: done!
checking network between servers: done!
Thanks for any tips and ideas to test!
It may be that you run your GitLab instance behind a reverse proxy such as Nginx, and that you or a colleague configured it to allow certain IPs and block traffic from other sources.
You also have to use https://... in the URL,
e.g. https://gitlab.com
Enter the GitLab instance URL e.g. https://gitlab.com
Enter the registration token e.g. GR1348941iADNi
Enter a description for the runner e.g. linux
Enter tags for the runner (comma-separated) e.g. linuxos, local-runner, local-shell
Enter optional maintenance note for the runner e.g. git-cicd
Enter an executor e.g. virtualbox, docker+machine, docker-ssh, ssh, parallels, shell, docker-ssh+machine, kubernetes, custom, docker
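These prompts can also be answered up front with flags; a non-interactive sketch using the example values above (URL and token are placeholders):
sudo gitlab-runner register \
  --non-interactive \
  --url https://mygitlab.com \
  --registration-token mytoken \
  --description linux \
  --tag-list "linuxos,local-runner,local-shell" \
  --executor shell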

How Azure pipelines can get source from Internal TFS and External Git? How can I update the proxy?

I am setting up Azure Pipelines. I have a few that get their sources from GitHub, and I am trying to set up pipelines that reach a TFS on the intranet. I created a service connection of type "Azure Repos/Team Foundation Server" using this Other Git URL: https://tfs.myCie.com/defaultcollection/MyProject/_versionControl
When I run the pipeline, it takes some time, then it displays a 504 Timeout error, but the pipeline is still pending. After a while, it goes into error with this message in the step "Checkout repository@master to s":
git -c http.proxy="http://myProxy.myCie.com:80" fetch --force --tags --prune --progress --no-recurse-submodules origin
fatal: unable to access 'https://tfs.myCie.com/defaultcollection/myProject/_versionControl/': OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to tfs.oecd.org:443
##[warning] Git fetch failed with exit code 128, back off 3.667 seconds before retry.
The security team says that I should use a PAC file to set up the proxy and that this should enable both intranet and Internet calls, but I don't see how to update the proxy settings of my self-hosted Windows agent.
Can I specify a file? Can there be a configuration for Internet and another one for intranet?
I don’t see how to update the proxy settings of my Self-Hosted Windows
Agent. Can I specify a file?
For the agent you need to create a .proxy file with the proxy URL in the root directory of your agent.
Locate the root directory of your build agent (this is the folder that contains run.exe and the _work folder).
Open a Command Prompt at this location.
Type this command, but replace PROXYIP & PORT with your values:
echo http://PROXYIP:PORT > .proxy
Check that your .proxy file is created in the right place.
Optional: If your proxy needs authentication, you must set these environment variables:
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password
Restart the service for your build agent.
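Putting the steps together with the proxy from the question (C:\agent is a placeholder for the agent's root directory):
cd C:\agent
echo http://myProxy.myCie.com:80 > .proxy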
When you know that you need a proxy at the time of the installation, you can configure the proxy settings right when you call config.cmd:
./config.cmd --proxyurl http://127.0.0.1:8888 --proxyusername "user" --proxypassword "password"
For details, please refer to this blog.
Here is the official document you can refer to.

Is it possible to configure gitlab builtin container registry with self-signed certs?

I'm using a docker gitlab/gitlab-ce:12.7.2-ce.0 image to run GitLab. I'm trying to use the built-in container registry feature. The documentation says: "If you are using the Omnibus GitLab built in Let's Encrypt integration, as of GitLab 12.5, the Container Registry will be automatically enabled on port 5050 of the default domain." Is it possible to configure the GitLab built-in container registry with self-signed certs?
After a few tests, the configuration presented at https://docs.gitlab.com/ee/administration/packages/container_registry.html turned out to be correct.
In addition, I placed the entire CA certificate chain in /etc/gitlab/trusted-certs (in PEM format) so that when the GitLab container starts, the appropriate symlinks appear in the /opt/gitlab/embedded/ssl/certs directory.
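With the Docker image, that amounts to something like the following (the container name and certificate file name are placeholders):
docker cp myCA.pem gitlab:/etc/gitlab/trusted-certs/myCA.pem
docker exec gitlab gitlab-ctl reconfigure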

Proxy configuration for OpenShift Origin

I am setting up an OpenShift Origin server. The configuration I do relies heavily on the walkthrough description:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
After creating a project, I add a new app like this (successfully):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
OpenShift tries to build immediately, only to fail as follows:
F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access 'https://github.com/openshift/ruby-hello-world.git/': Failed connect to github.com:443; Connection refused
I consulted the documentation about the proxy configuration:
https://docs.openshift.com/enterprise/3.0/admin_guide/http_proxies.html#git-repository-access
I concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy:
...
source:
  type: Git
  git:
    uri: "git://github.com/openshift/ruby-hello-world.git"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
...
With that change the build proceeds.
Can the HTTP proxy be configured system wide?
Note: again, I simply downloaded the binaries (client, server) and did not install via Ansible. And I did not find relevant properties in the openshift.local.config folder inside my server binary folder.
After some time I now know enough to answer my own question.
There are two places where one needs to deal with corporate proxy settings.
Docker
This thread will tell you what to do in detail:
Cannot download Docker images behind a proxy
In my case on RHEL 7.2 I needed to edit this file: /etc/sysconfig/docker
I had to add the following entries:
HTTP_PROXY="http://proxy.company.com:4128"
HTTPS_PROXY="http://proxy.company.com:4128"
Then a restart of the docker service was necessary.
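On RHEL 7 that restart is simply:
sudo systemctl restart docker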
Origin Proxy
What I missed originally was the place to configure our corporate proxy settings. Currently I have a cluster (1 master, 1 node) installed via Ansible.
These are the relevant files to edit on the servers:
* /etc/sysconfig/origin-master
* /etc/sysconfig/origin-node
There are already placeholders in these files:
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
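Uncommented and filled in with placeholder values, followed by a restart of the services, that looks roughly like this:
NO_PROXY=master.example.com
HTTP_PROXY=http://proxy.company.com:4128
HTTPS_PROXY=http://proxy.company.com:4128
# then, on the master and the node respectively:
systemctl restart origin-master
systemctl restart origin-node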
Documentation:
https://docs.openshift.org/latest/install_config/http_proxies.html
