Python: Implementing Feature Flags based on Environment (dev, prod)

I would like the features to be based on the environment. For example, a feature is being worked on or tested, so I could have it on in DEV, but it's not ready for the public, so it's turned off in PROD.
Do I need to implement a custom strategy or can I use one of the existing strategies in a creative way?
If there is a concise example, that would be most helpful.

The easiest way I’ve found to implement feature flags at the environment level is to use a third-party hosted management system. Many feature flag services allow you to control which environment a flag will be enabled in. In the example below, I used DevCycle’s Python SDK, referencing feature flags and environments I had created in the DevCycle dashboard.
First, I installed the SDK from the command line:
$ pip install devcycle-python-server-sdk
Then I initialized the SDK with the SDK key that corresponded to my desired environment. In this case, I used the key for my Dev environment. DevCycle provides SDK keys for each environment you set up.
from __future__ import print_function
from devcycle_python_sdk import Configuration, DVCClient, UserData
from devcycle_python_sdk.rest import ApiException

configuration = Configuration()
# Set up authorization
configuration.api_key['Authorization'] = 'SDK_KEY_FOR_DEV_ENV'
# Create an instance of the API class
dvc = DVCClient(configuration)

# Create user object. All functions require user data to be an instance of the UserData class
user = UserData(
    user_id='test'
)

key = 'enable-feature-flag'  # feature flag key created in the DevCycle Dashboard
try:
    # Fetch the variable value using the identifier key, with a default value and user object
    # The default value can be of type string, boolean, number, or JSON
    flag = dvc.variable(user, key, False)
    # Use the received value of the feature flag
    if flag.value:
        pass  # Put feature code here, or launch the feature from here
    else:
        pass  # Put existing functionality here
except ApiException as e:
    print("Exception when calling DVCClient->variable: %s" % e)
By passing in the SDK key for my Dev environment, 'SDK_KEY_FOR_DEV_ENV', my program gets access to only the features enabled in Dev. You can choose which environment(s) a feature is enabled in directly from the DevCycle dashboard. So if 'enable-feature-flag' was set to true for your Dev environment, you would see your feature. Likewise, you could set 'enable-feature-flag' to false in your Prod environment and replace 'SDK_KEY_FOR_DEV_ENV' with the key for your Prod environment. This would disable the new functionality in Prod.
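To avoid hard-coding the key, you could select it per deployment at startup. A minimal sketch, assuming a hypothetical DEVCYCLE_SDK_KEY environment variable that each environment sets to its own key:
import os

# Hypothetical: the Dev and Prod deployments each set DEVCYCLE_SDK_KEY to their
# own SDK key, so the same code picks up the right environment's flags
configuration.api_key['Authorization'] = os.environ['DEVCYCLE_SDK_KEY']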
Full disclosure: My name is Sandrine and I am a Developer Advocate for DevCycle. I hope this answer helps you get started on environment-specific feature flags.


How to get reference to AzureML Workspace Class in scoring script?

My scoring function needs to refer to an Azure ML registered Dataset, for which I need a reference to the AzureML Workspace object. When I include this in the init() function of the scoring script, it gives the following error:
"code": "ScoreInitRestart",
"message": "Your scoring file's init() function restarts frequently. You can address the error by increasing the value of memory_gb in deployment_config."
On debugging, the issue is:
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code [REDACTED] to authenticate.
How can I resolve this issue without exposing Service Principal Credentials in the scoring script?
I found a workaround to reference the workspace in the scoring script. Below is a code snippet showing how one can do that.
My deploy script looks like this:
from azureml.core import Environment
from azureml.core.model import InferenceConfig

# Add python dependencies for the models
scoringenv = Environment.from_conda_specification(
    name="scoringenv",
    file_path="config_files/scoring_env.yml"
)

# Create a dictionary to set up the env variables
env_variables = {
    'tenant_id': tenant_id,
    'subscription_id': subscription_id,
    'resource_group': resource_group,
    'client_id': client_id,
    'client_secret': client_secret
}
scoringenv.environment_variables = env_variables

# Configure the scoring environment
inference_config = InferenceConfig(
    entry_script='score.py',
    source_directory='scripts/',
    environment=scoringenv
)
What I am doing here is creating an image with the Python dependencies (in scoring_env.yml) and passing a dictionary of the secrets as environment variables. I have the secrets stored in the key vault.
You may define and pass native Python datatype variables.
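The answer doesn't show the Key Vault retrieval itself; as a rough sketch under stated assumptions, the secrets could be fetched at deploy time with the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name below are hypothetical):
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL and secret name; substitute your own
vault = SecretClient(vault_url="https://my-vault.vault.azure.net",
                     credential=DefaultAzureCredential())
client_secret = vault.get_secret("sp-client-secret").value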
Now, in my score.py, I reference these environment variables in init() like this:
import os

tenant_id = os.environ.get('tenant_id')
client_id = os.environ.get('client_id')
client_secret = os.environ.get('client_secret')
subscription_id = os.environ.get('subscription_id')
resource_group = os.environ.get('resource_group')
Once you have these variables, you may create a workspace object using Service Principal authentication, as @Anders Swanson mentioned in his reply.
Another way to resolve this may be to use managed identities for AKS. I did not explore that option.
Hope this helps! Please let me know if you found a better way of solving this.
Thanks!
Does your score.py include a Workspace.get() call with auth=InteractiveLoginAuthentication? You should swap it to ServicePrincipalAuthentication (docs), to which you pass your credentials, ideally through environment variables.
import os
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

svc_pr_password = os.environ.get("AZUREML_PASSWORD")

svc_pr = ServicePrincipalAuthentication(
    tenant_id="my-tenant-id",
    service_principal_id="my-application-id",
    service_principal_password=svc_pr_password)

ws = Workspace(
    subscription_id="my-subscription-id",
    resource_group="my-ml-rg",
    workspace_name="my-ml-workspace",
    auth=svc_pr
)

print("Found workspace {} at location {}".format(ws.name, ws.location))
You can get the workspace object directly from your run.
from azureml.core.run import Run
ws = Run.get_context().experiment.workspace
I came across the same challenge. As you are mentioning AML Datasets, I assume an AML Batch Endpoint suits your scenario. The scoring script for a batch endpoint is meant to receive a list of files as input. When invoking the batch endpoint, you can pass (among other things) AML Datasets (consider that an endpoint is deployed in the context of an AML workspace). Have a look at this.
When running on an AML Compute Cluster, use the following code:
from azureml.core.run import Run

run = Run.get_context()
ws = run.experiment.workspace
Note: this works only when you run on an AML cluster.
Run.get_context() gets the context of the current run on the AML cluster; from that object we can extract the workspace, which allows you to authenticate to the AML workspace from within the cluster.

What does IP_NETWORK and IP_DEVICE in the Decouple Python library mean?

I was reading through the Decouple Python library, but I don't understand what the following code does:
IP_NETWORK = config("IP_NETWORK")
IP_DEVICE = config("IP_DEVICE")
I know that there has to be a .env file set up, where the IP_NETWORK and IP_DEVICE have to be declared. But I'm not sure how this module works.
Also, how do I find the IP_NETWORK and the IP_DEVICE?
I'm not too sure what I'm talking about and may not make sense, but any explanation is appreciated!
Python Decouple library: Strict separation of settings from code
Install:
pip install python-decouple
This library comes in handy for separating your settings parameters from your source code. It's always a good idea to keep your secret key, database URL, passwords, etc. in a separate place (an environment file, i.e. an .ini/.env file) and not in your source code git repository, for security reasons.
It also comes in handy if you want different project settings in different environments (e.g. you might want debug mode on in your development environment but not in production).
How do we decide whether a parameter should go into the source code git repository or into an environment file?
It's a simple trick: parameters related to project settings go straight into the source code, and parameters related to instance settings go into an environment file.
Of the five items below, the first two are project settings and the last three are instance settings.
Locale and i18n;
Middlewares and Installed Apps;
Resource handles to the database, Memcached, and other backing services;
Credentials to external services such as Amazon S3 or Twitter;
Per-deploy values such as the canonical hostname for the instance.
Let's understand how to use it with Django (a Python framework).
First, create a file named .env or .ini in the root of your project, and say the following is the content of that file:
DEBUG=True
SECRET_KEY=ARANDOMSECRETKEY
DB_NAME=Test
DB_USER=Test
DB_PASSWORD=some_strong_password
Now let's see how we can use it with Django. A sample snippet of settings.py:
# other import statements...
from decouple import config

SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', cast=bool)

DATABASES = {
    'default': {
        'NAME': config('DB_NAME'),
        'USER': config('DB_USER'),
        'PASSWORD': config('DB_PASSWORD'),
        # other parameters
    }
}
# remaining code...
Hope this answers your question.
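As for IP_NETWORK and IP_DEVICE specifically: they are not special names in decouple; they are just keys that particular project expects to find in its .env file, presumably a network address and a device address. A minimal, hypothetical sketch of how they would be declared and read (the values are made-up examples):
# In .env:
# IP_NETWORK=192.168.1.0/24
# IP_DEVICE=192.168.1.42

from decouple import config

IP_NETWORK = config("IP_NETWORK")                     # read from .env
IP_DEVICE = config("IP_DEVICE", default="127.0.0.1")  # optional fallback if the key is absent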

Rails 6+: order in which Rails reads SECRET_KEY_BASE (env var versus credentials.yml.enc)

For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
read_env_key || read_key_file || handle_missing_key
end
That seems to say the order of reading values is:
ENV["SECRET_KEY_BASE"]
Rails.application.credentials.secret_key_base
Raise an error
I use Heroku for my hosting, and I have a SECRET_KEY_BASE env variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts in development for AWS S3 buckets, Stripe accounts, etc. The flat-file format of credentials.yml.enc seems to assume developers only need to access these third-party APIs in production. Is there an accepted way to handle environment-specific credentials in Rails yet?
I read through the comment threads on DHH's original PR, as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation, so I'm not certain whether it's the standard or whether it's going away sometime soon.

What is the suggested method to get service versions?

What is the best way to get the list of service versions in Google App Engine in the flexible environment (from a service instance, in Python 3)? I want to authenticate using a service account JSON keys file, and I need to find the current default version (the one receiving most of the traffic).
Is there any lib I can use, like googleapiclient.discovery or google.appengine.api.modules? Or should I build it from scratch and request the REST API apps.services.versions.list using OAuth? I couldn't find any information in the Google docs.
https://cloud.google.com/appengine/docs/standard/python3/python-differences#cloud_client_libraries
Finally I was able to solve it. Simple things on GAE become big problems.
SOLUTION:
I have the path to service_account.json set in the GOOGLE_APPLICATION_CREDENTIALS env variable. Then you can use google.auth.default:
from googleapiclient.discovery import build
import google.auth

# Default credentials are picked up from GOOGLE_APPLICATION_CREDENTIALS
creds, project = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform.read-only'])
service = build('appengine', 'v1', credentials=creds, cache_discovery=False)
data = service.apps().services().get(appsId=APPLICATION_ID, servicesId=SERVICE_ID).execute()
print(data['split']['allocations'])
The return value is the allocations dictionary, with version IDs as keys and traffic shares as values.
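Since the goal was the current default version (the one with most of the traffic), a small follow-up sketch, reusing the data variable from the snippet above:
allocations = data['split']['allocations']
# The version holding the largest traffic share is the effective default
default_version = max(allocations, key=allocations.get)
print(default_version)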
All the best!
You can use Google's Python Client Library to interact with the Google App Engine Admin API in order to get the list of a GAE service's versions.
Once you have google-api-python-client installed, you might want to use the list method to list all services in your application:
list(appsId, pageSize=None, pageToken=None, x__xgafv=None)
The arguments of the method should include the following:
appsId: string, Part of `name`. Name of the resource requested. Example: apps/myapp. (required)
pageSize: integer, Maximum results to return per page.
pageToken: string, Continuation token for fetching the next page of results.
x__xgafv: string, V1 error format. Allowed values: v1 error format, v2 error format
You can find more information on this method in the link mentioned above.
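As a hedged end-to-end sketch combining the pieces above (APPLICATION_ID and the default-credentials setup carry over from the accepted answer):
from googleapiclient.discovery import build
import google.auth

creds, project = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform.read-only'])
appengine = build('appengine', 'v1', credentials=creds, cache_discovery=False)

# Page through all services of the app and print each service's traffic split
request = appengine.apps().services().list(appsId=APPLICATION_ID)
while request is not None:
    response = request.execute()
    for svc in response.get('services', []):
        print(svc['id'], svc.get('split', {}).get('allocations'))
    request = appengine.apps().services().list_next(request, response)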

How to set default application version in Azure Batch using Java SDK

Is there a way to set the default application version in an Azure Batch account using the Java SDK?
The sample script they have on GitHub does not show how to set the default version (https://github.com/Azure-Samples/batch-java-manage-batch-accounts/blob/master/src/main/java/com/microsoft/azure/management/batch/samples/ManageBatchAccount.java).
I was also digging into the interface (https://github.com/Azure/azure-libraries-for-java/blob/master/azure-mgmt-batch/src/main/java/com/microsoft/azure/management/batch/Application.java) for clues, but couldn't see anything that supports updating the default version.
UPDATE:
I was able to get the version update working following @brklein's suggestion (the snippet below is Groovy-style):
BatchApplication batchApplication = batchAccount.applications().get(applicationName)
ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(applicationId, tenantId, appSecret, AzureEnvironment.AZURE)
BatchManager batchManager = BatchManager.authenticate(credentials, subscriptionId)
ApplicationsInner applicationsInner = batchManager.inner().applications()
ApplicationUpdateParameters parameters = new ApplicationUpdateParameters(defaultVersion: DEFAULT_APP_VERSION)
applicationsInner.update(resourceGroupName, batchAccountName, batchApplication.id(), parameters)
It does not appear that the default version is surfaced at the client layer of the SDK.
To get around this, you should be able to call the implementation methods manually, which have the full functionality of the REST API (as they are auto-generated).
To do this, create either CreateApplicationParameters or ApplicationUpdateParameters and set the defaultVersion property. Then you can call the implementation's create or update methods manually (https://github.com/Azure/azure-libraries-for-java/blob/78e8ff2940eba34bc63f8e7be6807a377500f5c7/azure-mgmt-batch/src/main/java/com/microsoft/azure/management/batch/implementation/ApplicationsInner.java#L474).
