Azure Machine Learning Service - Environment variables defined in Environment not accessible from init() method in entry_script

I am deploying a model to AKS via AML using the Python SDK, and I am facing a problem accessing the environment variables defined for the Environment object myenv used for the deployment.
# add environment variable
myenv.environment_variables = {'SOME_ENV_VARIABLE': 'ABC'}
# register to workspace
myenv.register(ws)
This environment object is stated in the inference configuration for the deployment:
myenv = Environment.get(workspace=ws,name="myenv")
inference_config = InferenceConfig(entry_script='score.py',
                                   source_directory=os.path.abspath(__file__ + "//.."),
                                   environment=myenv,
                                   enable_gpu=True,
                                   description="...")
When the model executes, the init() method in the entry_script score.py should be able to access these environment variables by calling os.environ['SOME_ENV_VARIABLE']. However, that is not working. The conda and pip packages defined in myenv are present in the image.
Shouldn't it be possible to access these env variables from the entry_script?

Environment variables set on the Environment object are deprecated. They are runtime variables set during execution, and the implementation has moved to RunConfiguration.
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfig.runconfiguration?view=azure-ml-py#azureml-core-runconfig-runconfiguration-environment-variables
Variables baked into the image can be set in the Dockerfile of the environment or in your base image.
Sorry if some of the docs are misleading; we are working on fixing them.
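For illustration, a minimal sketch of both options (SDK v1; the base image and variable values are placeholders, and ws is the Workspace from the question):
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration

# Option 1 (runtime variables for runs): set them on the RunConfiguration.
run_config = RunConfiguration()
run_config.environment_variables = {'SOME_ENV_VARIABLE': 'ABC'}

# Option 2 (variables baked into the inference image): define them in the
# environment's Dockerfile. The base image below is illustrative.
myenv = Environment(name='myenv')
myenv.docker.base_image = None
myenv.docker.base_dockerfile = (
    "FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04\n"
    "ENV SOME_ENV_VARIABLE=ABC\n"
)
myenv.register(ws)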

You can create an environment by instantiating an Environment object and then setting its attributes: the set of Python packages, environment variables, and so on.
Specify environment variables:
myenv.environment_variables = {"MESSAGE":"Hello from Azure Machine Learning"}
You can add environment variables to your environment. These then become available via os.environ.get in your training script.
For using environments for inferencing, see the sketch of score.py below.
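A minimal sketch of the entry-script side that reads the variable set above (the default value is just for illustration):
# score.py
import os

def init():
    # os.environ.get returns the given default if the variable is absent,
    # which avoids a KeyError at container startup.
    print("MESSAGE =", os.environ.get("MESSAGE", "<not set>"))

def run(raw_data):
    return {"message": os.environ.get("MESSAGE")}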

I just tested environment_variables from an Environment object, and it works. I also included a couple of environment variables for an inference environment.
Python SDK v1.38.0.
The solution I used is more or less the same as what was described above. After the model was deployed to ACI, the environment variables are visible in the container properties (Containers -> Properties).
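For reference, a minimal sketch of that kind of setup (SDK v1; names and values are illustrative):
from azureml.core import Environment
from azureml.core.model import InferenceConfig

env = Environment(name='inference-env')
env.python.conda_dependencies.add_pip_package('azureml-defaults')
# these should show up in the deployed container
env.environment_variables = {'SOME_ENV_VARIABLE': 'ABC'}

inference_config = InferenceConfig(entry_script='score.py', environment=env)
# then deploy with Model.deploy(ws, 'service-name', [model], inference_config, deployment_config)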

Related

Azure DevOps PowerShell task: Programmatic Access of Build Variables

Hi, I want to know if I can programmatically access my release variables in the PowerShell release task. My use case is this: I want to create a generic PowerShell script that can be used to deploy to multiple environments. I want to set an environment variable on the PowerShell task that specifies env=dev or test or prod, etc. Then I want the PowerShell script to dynamically access the appropriate build variables (without creating a massive switch statement) based on that environment variable. I'm able to access the environment variable just fine.
So I have this:
$deployenv = "${Env:kudu.env}" #(this works just fine)
$apiUrl = '$(dev.kudu.url)' #(when hard coded like this it works fine)
Currently $apiUrl is able to retrieve the release variable just fine but I don't want to hard code "dev" in the param name.
I've tried a bunch of things like
$apiUrl = variables["$deployenv.kudu.url"]
So I'm wondering: is there a way to programmatically access these release variables from my PowerShell task?
You're trying to implement a solution to the wrong problem. The actual problem is that you're not using variable scopes properly.
Use the same variable name across your environments and define different values at different scopes: the DEV environment has a Kudu.Url variable set to the dev value, the QA environment has Kudu.Url set to the QA value, and so on.
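With the same name at every scope, the script no longer needs to build the variable name dynamically: the agent exposes pipeline variables to script tasks as environment variables, uppercased and with dots replaced by underscores, so Kudu.Url becomes KUDU_URL (in PowerShell, $env:KUDU_URL). A minimal sketch, shown in Python for illustration:
import os

# Kudu.Url is defined once per environment scope; at runtime the agent
# injects it as KUDU_URL, whichever environment the release runs in.
api_url = os.environ["KUDU_URL"]
print("Deploying against", api_url)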

How to set environment variables in Dockerfile via Azure DevOps

In my project's Dockerfile I have some environment variables, like this:
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
And I would like to pass the password here as an environment variable set in my pipeline.
In Azure DevOps I have two pipelines: one for building the solution and one for building and pushing Docker images to DockerHub. There are options to set variables in both of these pipelines.
I have set the password in both pipelines and edited my password in the Dockerfile to look like this:
ENV SA_PASSWORD=$(SA_PASSWORD)
But that does not seem to be working. What is the correct way of passing environment variables from Azure DevOps into a Docker image?
Also, is this a safe way of passing secrets? Is there any way someone could read secrets from a Docker image?
Thanks!
You can declare an ARG var_name and have ENV reference it. Then you can supply the value when building the image: docker build --build-arg var_name=$(VARIABLE_NAME)
For example, add an ARG in the Dockerfile and have the ENV variable refer to it:
ARG SECRET
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=$SECRET
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
You can use the Docker build task and Docker push task separately, since the buildAndPush command cannot accept arguments, and set a variable SECRET in your pipeline.
Then set the build arguments SECRET=$(SECRET) to supply the ARG SECRET.
You can also refer to a similar thread.
I am using the Replace Tokens extension for exactly this kind of task: https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens
However, putting secrets into your Dockerfile might not be the best idea. Usually you would provide secrets, and runtime configuration in general, as environment variables when you actually execute the container.
I suggest setting the environment variables at runtime. If you are deploying to an Azure App Service, app settings are injected into the process as environment variables automatically.
You can then use the same image for multiple environments. With the Deploy Azure App Service task in a release pipeline, you can change the app settings for each environment.
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#configure-environment-variables
Also, is this a safe way of passing secrets? Is there any way someone could read secrets from a Docker image?
This really depends on the approach and on how sensitive your image is. There are usually two mechanisms to look at: build arguments and environment variables.
Build arguments are usually declared in the Dockerfile and supplied via the --build-arg parameter to the Docker builder (docker build). Note that docker build will complain if you declare an argument, do not pass it during the build, and did not supply a default value in the declaration. Build arguments are available only while the image is being built; containers created from the image will not have access to the ARG values unless you also set them with ENV.
ENV declares environment variables, which can be set in the Dockerfile or at the OS level. ENV variables are available to subsequent instructions during the image build, and every container created from the image also has access to them. So if you ssh/exec into the container and look at which environment variables are set, you will find them.
But that does not seem to be working. What is the correct way of passing environment variables from Azure DevOps into a Docker image?
The buildAndPush command ignores build arguments in the task inputs, so either use a Bash step to build the image, or use separate build and push tasks as described earlier.
In the release, choose the Deploy Azure App Service task and provide the required properties in the App settings section under Application and Configuration Settings.

Azure Pipelines Environment variables not working for PowerShell on Target Machines

I'm unable to get the environment variables (for example $env:RELEASE_RELEASENAME) in a task that runs a PowerShell script on a target machine; however, the variables work with the inline PowerShell task.
Does getting environment variables from PowerShell on target machines need special treatment, or am I missing something here?
I have sometimes hit this problem with the Ubuntu hosted agent. My solution is to add the environment variable manually; then I can read it in both an inline script and a script file. Note that the PowerShell on Target Machines task runs the script on the remote host, where the agent's pipeline variables are not present as environment variables, so they have to be passed along explicitly (for example, as script arguments).

Add Environment variables while creating a Google Compute Engine VM Instance

I'm creating a VM instance through a Cloud Function in GCE. I want to add some environment variables to the instance during creation.
I'm referring to this code for instance creation:
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/compute/api/create_instance.py
I don't want to add this to the startup script, because I'm already running a set of tasks in the startup script, and I want one of those tasks to use these environment variables. Is there any other way, like passing values in the config while creating the instance?
When you create a new Cloud Function, you can expand the bottom menu named:
Environment variables, networking, timeouts and more
and set your environment variables from there.
Edit: note that these environment variables are used by the Cloud Function itself (in main.py). However, for the VM instance you may be interested in Instance Metadata and the Metadata Server.
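Since the question is about the VM rather than the function, here is a minimal sketch of the metadata approach, assuming the same request body shape as the linked create_instance.py sample (the keys and values are illustrative):
import requests

# 1) When creating the instance, attach custom key/value pairs as metadata.
#    This fragment slots into the instance config dict from create_instance.py.
instance_config_fragment = {
    "metadata": {
        "items": [
            {"key": "env", "value": "dev"},
            {"key": "some-setting", "value": "ABC"},
        ]
    }
}

# 2) Inside the VM (e.g. from a task launched by the startup script),
#    read a value back from the metadata server.
resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/env",
    headers={"Metadata-Flavor": "Google"},
)
print(resp.text)  # -> "dev"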
You can set environment variables by using the gcloud command or through the GCP Console [1]:
By using gcloud command:
You can use the --set-env-vars flag to define a variable using the gcloud command-line.
e.g.:
gcloud functions deploy FUNCTION_NAME --set-env-vars env1=whatever,env2=whatever FLAGS...
Note: The --set-env-vars and --env-vars-file flags are destructive: they replace all current variables with those provided at deployment. To make additive changes, use the --update-env-vars flag instead.
e.g.:
gcloud functions deploy FUNCTION_NAME --update-env-vars env1=whatever
Through GCP Console:
Open the Functions Overview page in the GCP Console:
Click Create function.
Fill in the required fields for your function.
Expand the advanced settings by clicking More.
In the Environment variables section, set variables by clicking Add variable.
References:
[1] https://cloud.google.com/functions/docs/env-var#setting_environment_variables
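Inside the function, variables set this way are then read as ordinary process environment variables; a minimal Python sketch (the function name is illustrative):
import os

def handler(request):
    # env1 is the name supplied via --set-env-vars above
    return os.environ.get("env1", "env1 is not set")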

Set Config property based on deployment platform (dev, staging, qa, production) in Node.js

In Node.js, I want to set a config property value based on the platform (dev, staging, qa, production) it gets deployed to.
So, for example, for the dev and staging environments I want to set the value '234',
and for prod I want to set the value '456'.
For deployment I am using VSTS.
Should I make use of deployment variables?
For setting config properties per environment, take a look at this question: Node.js setting up environment specific configs to be used with everyauth
In VSTS Release, you can use environment-scoped values per environment.
Share values across all of the tasks within one specific environment by using environment variables. Use an environment-level variable for values that vary from environment to environment (and are the same for all the tasks in an environment). You define and manage these variables in the Variables tab of an environment in a release pipeline.
Source Link: Custom variables
If you want to change values in files, I suggest the Replace Tokens task, a Visual Studio Team Services Build and Release extension that replaces tokens in files with variable values.
