Unable to import google logging metric using terraform

I have created the following logging metric resource in Terraform:
resource "google_logging_metric" "proservices_run" {
name = "user/proservices-run"
filter = "resource.type=gae_app AND severity>=ERROR"
project = "${google_project.service.project_id}"
metric_descriptor {
metric_kind = "DELTA"
value_type = "INT64"
}
}
I also have a custom metric named user/proservices-run in Stackdriver. However, the following two import attempts fail:
$ terraform import google_logging_metric.proservices_run proservices-run
google_logging_metric.proservices_run: Importing from ID "proservices-run"...
google_logging_metric.proservices_run: Import complete!
Imported google_logging_metric (ID: proservices-run)
google_logging_metric.proservices_run: Refreshing state... (ID: proservices-run)
Error: google_logging_metric.proservices_run (import id: proservices-run): 1 error occurred:
* import google_logging_metric.proservices_run result: proservices-run: google_logging_metric.proservices_run: project: required field is not set
$ terraform import google_logging_metric.proservices_run user/proservices-run
google_logging_metric.proservices_run: Importing from ID "user/proservices-run"...
google_logging_metric.proservices_run: Import complete!
Imported google_logging_metric (ID: user/proservices-run)
google_logging_metric.proservices_run: Refreshing state... (ID: user/proservices-run)
Error: google_logging_metric.proservices_run (import id: user/proservices-run): 1 error occurred:
* import google_logging_metric.proservices_run result: user/proservices-run: google_logging_metric.proservices_run: project: required field is not set
Using:
Terraform v0.11.14
provider.google = 2.11.0
provider.google-beta = 2.11.0
Edit: I noticed the "project: required field is not set" part of the error message and added the project field to my TF code, but the outcome is still the same.

I ran into the same issue trying to import a log-based metric.
The solution was to set the environment variable GOOGLE_PROJECT=<your-project-id> when running the command:
GOOGLE_PROJECT=MyProjectId \
terraform import \
"google_logging_metric.create_user_count" \
"create_user_count"

Related

Unable to initialize snowflake data source

I am trying to access a Snowflake datasource using the "great_expectations" library.
The following is what I have tried so far:
from ruamel import yaml
import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest

context = ge.get_context()

datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}
print(context.test_yaml_config(yaml.dump(datasource_config)))
I initialized great_expectations before executing the above code:
great_expectations init
but I am getting the error below:
great_expectations.exceptions.exceptions.DatasourceInitializationError: Cannot initialize datasource my_snowflake_datasource, error: 'NoneType' object has no attribute 'create_engine'
What am I doing wrong?
Your configuration seems to be ok, corresponding to the example here.
If you look at the traceback you should notice that the error propagates starting at the file great_expectations/execution_engine/sqlalchemy_execution_engine.py in your virtual environment.
The actual line where the error occurs is:
self.engine = sa.create_engine(connection_string, **kwargs)
And if you search for that sa at the top of that file:
try:
    import sqlalchemy as sa
    make_url = import_make_url()
except ImportError:
    sa = None
So sqlalchemy is not installed, which you don't get automatically in your environment when you install great_expectations. The thing to do is to install snowflake-sqlalchemy, since you want to use sqlalchemy's Snowflake plugin (an assumption based on your connection_string):
/your/virtualenv/bin/python -m pip install snowflake-sqlalchemy
After that you should no longer get the error; it looks like test_yaml_config then just waits for the connection to time out.
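As a quick sanity check, here is a minimal sketch, assuming snowflake-sqlalchemy was installed into the same virtualenv that runs great_expectations (the connection string is the one from the question). Both imports must succeed; otherwise great_expectations falls back to sa = None and you get exactly the 'NoneType' object has no attribute 'create_engine' error:
import sqlalchemy as sa          # pulled in as a dependency of snowflake-sqlalchemy
import snowflake.sqlalchemy      # the Snowflake dialect/plugin itself

print("sqlalchemy version:", sa.__version__)

# create_engine() is lazy: this does not open a connection yet, it only
# proves that sqlalchemy and the snowflake dialect can be resolved.
engine = sa.create_engine(
    "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin"
)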
What worries me greatly is the documented use of a deprecated API of ruamel.yaml.
The function ruamel.yaml.dump is going to be removed in the near future, and you
should use the .dump() method of a ruamel.yaml.YAML() instance.
You should use the following code instead:
import sys

from ruamel.yaml import YAML
import great_expectations as ge

context = ge.get_context()

datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}

yaml = YAML()
yaml.dump(datasource_config, sys.stdout, transform=context.test_yaml_config)
I'll make a PR for great-expectations to update their documentation/use of ruamel.yaml.

Access Github Secrets in Python workflow

I have an issue accessing GitHub Secrets in a CI workflow.
The tests part of the main.yml file:
# Run our unit tests
- name: Run unit tests
  env:
    CI: true
    MONGO_USER: ${{ secrets.MONGO_USER }}
    MONGO_PWD: ${{ secrets.MONGO_PWD }}
    ADMIN: ${{ secrets.ADMIN }}
  run: |
    pipenv run python app.py
I have a database.py file in which I am accessing these environment variables
import os
import urllib.parse
from typing import Dict, List, Union

import pymongo
from dotenv import load_dotenv

load_dotenv()

print("Mongodb user: ", os.environ.get("MONGO_USER"))


class Database:
    try:
        # Build the client from the MONGO_USER / MONGO_PWD environment variables
        client = pymongo.MongoClient(
            "mongodb+srv://" +
            urllib.parse.quote_plus(os.environ.get("MONGO_USER")) +
            ":" +
            urllib.parse.quote_plus(os.environ.get("MONGO_PWD")) +
            "@main.rajun.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"
        )
        DATABASE = client.Main
    except TypeError as NoCredentialsError:
        print("MongoDB credentials not available")
        raise Exception(
            "MongoDB credentials not available"
        ) from NoCredentialsError
...
...
This is the issue I get in the build:
Traceback (most recent call last):
Mongodb user: None
MongoDB credentials not available
This is followed by urllib raising a "bytes expected" error.
I have followed the documentation here, but I still cannot find my mistake.
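For reference, the "bytes expected" error is what urllib.parse.quote_plus raises when os.environ.get returns None, so the traceback is consistent with the secrets never reaching the step. A minimal sketch reproducing that failure mode (the variable name is taken from the question):
import os
import urllib.parse

user = os.environ.get("MONGO_USER")  # None when the secret is not exposed to the step
urllib.parse.quote_plus(user)        # TypeError: quote_from_bytes() expected bytes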

Azure-ML Deployment does NOT see AzureML Environment (wrong version number)

I've followed the documentation pretty well as outlined here.
I've setup my azure machine learning environment the following way:
from azureml.core import Workspace
# Connect to the workspace
ws = Workspace.from_config()
from azureml.core import Environment
from azureml.core import ContainerRegistry
myenv = Environment(name = "myenv")
myenv.inferencing_stack_version = "latest" # This will install the inference specific apt packages.
# Docker
myenv.docker.enabled = True
myenv.docker.base_image_registry.address = "myazureregistry.azurecr.io"
myenv.docker.base_image_registry.username = "myusername"
myenv.docker.base_image_registry.password = "mypassword"
myenv.docker.base_image = "4fb3..."
myenv.docker.arguments = None
# Environment variables (I need Python to look at folders in /root)
myenv.environment_variables = {"PYTHONPATH":"/root"}
# python
myenv.python.user_managed_dependencies = True
myenv.python.interpreter_path = "/opt/miniconda/envs/myenv/bin/python"
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-defaults")
myenv.python.conda_dependencies=conda_dep
myenv.register(workspace=ws) # works!
I have a score.py file configured for inference (not relevant to the problem I'm having)...
I then set up the inference configuration:
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
I set up my compute cluster:
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException
# Choose a name for your cluster
aks_name = "theclustername"
# Check to see if the cluster already exists
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6_Promo")
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
from azureml.core.webservice import AksWebservice
# Example
gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
                                                    num_replicas=3,
                                                    cpu_cores=4,
                                                    memory_gb=10)
Everything succeeds; then I try to deploy the model for inference:
from azureml.core.model import Model
model = Model(ws, name="thenameofmymodel")
# Name of the web service that is deployed
aks_service_name = 'tryingtodeply'
# Deploy the model
aks_service = Model.deploy(ws,
                           aks_service_name,
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=gpu_aks_config,
                           deployment_target=aks_target,
                           overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
And it fails saying that it can't find the environment. More specifically, my environment is version 11, but it keeps trying to find an environment one version higher (i.e., version 12) than the current one:
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 0f03a025-3407-4dc1-9922-a53cc27267d4
More information can be found here:
Error:
{
  "code": "BadRequest",
  "statusCode": 400,
  "message": "The request is invalid",
  "details": [
    {
      "code": "EnvironmentDetailsFetchFailedUserError",
      "message": "Failed to fetch details for Environment with Name: myenv Version: 12."
    }
  ]
}
I have tried to manually edit the environment JSON to match the version that azureml is trying to fetch, but nothing works. Can anyone see anything wrong with this code?
Update
Changing the name of the environment (e.g., my_inference_env) and passing it to InferenceConfig seems to be on the right track. However, the error now changes to the following
Running..........
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: f0dfc13b-6fb6-494b-91a7-de42b9384692
More information can be found here: https://some_long_http_address_that_leads_to_nothing
Error:
{
  "code": "DeploymentFailed",
  "statusCode": 404,
  "message": "Deployment not found"
}
Solution
The answer from Anders below is indeed correct regarding the use of azure ML environments. However, the last error I was getting was because I was setting the container image using the digest value (a sha) and NOT the image name and tag (e.g., imagename:tag). Note the line of code in the first block:
myenv.docker.base_image = "4fb3..."
I reference the digest value, but it should be changed to
myenv.docker.base_image = "imagename:tag"
Once I made that change, the deployment succeeded! :)
One concept that took me a while to get was the bifurcation of registering and using an Azure ML Environment. If you have already registered your env, myenv, and none of the details of your environment have changed, there is no need to re-register it with myenv.register(). You can simply get the already registered env using Environment.get(), like so:
myenv = Environment.get(ws, name='myenv', version=11)
My recommendation would be to name your environment something new: like "model_scoring_env". Register it once, then pass it to the InferenceConfig.
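A minimal sketch of that recommendation, assuming the same workspace setup as in the question (the names "model_scoring_env" and "score.py" are just placeholders): register the environment once, then fetch the registered copy with Environment.get() and hand it to InferenceConfig instead of re-registering it on every deployment.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig

ws = Workspace.from_config()

# One-time registration (configure the docker/python sections as in the question first)
scoring_env = Environment(name="model_scoring_env")
scoring_env.register(workspace=ws)

# Later deployments reuse the registered environment instead of re-registering it
env = Environment.get(ws, name="model_scoring_env")
inference_config = InferenceConfig(entry_script="score.py", environment=env)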

Authentication error using new Pulumi azuread module

I've installed the latest Pulumi azuread module and I get this error when I run pulumi preview:
Previewing update (int):
Type Name Plan Info
pulumi:pulumi:Stack test-int
└─ azuread:index:Application test 1 error
Diagnostics:
azuread:index:Application (test):
error: Error obtaining Authorization Token from the Azure CLI: Error waiting for the Azure CLI: exit status 1
my index.ts is very basic:
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as azuread from "@pulumi/azuread";

const projectName = pulumi.getProject();
const stack = pulumi.getStack();
const config = new pulumi.Config(projectName);
const baseName = `${projectName}-${stack}`;

const testRg = new azure.core.ResourceGroup(baseName, {
    name: baseName
});

const test = new azuread.Application("test", {
    availableToOtherTenants: false,
    homepage: "https://homepage",
    identifierUris: ["https://uri"],
    oauth2AllowImplicitFlow: true,
    replyUrls: ["https://replyurl"],
    type: "webapp/api",
});
Creating resources and AD application with the old module azure.ad works fine.
I have no clue what I am missing now....
EDIT:
index.ts the old way
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";

const projectName = pulumi.getProject();
const stack = pulumi.getStack();
const config = new pulumi.Config(projectName);
const baseName = `${projectName}-${stack}`;

const testRg = new azure.core.ResourceGroup(baseName, {
    name: baseName
});

const test = new azure.ad.Application("test", {
    homepage: "https://homepage",
    availableToOtherTenants: false,
    identifierUris: ["https://uri"],
    oauth2AllowImplicitFlow: true,
    replyUrls: ["https://replyurl"]
});
Result of pulumi preview:
Previewing update (int):
Type Name Plan Info
pulumi:pulumi:Stack test-int
+ └─ azure:ad:Application test create 1 warning
Diagnostics:
azure:ad:Application (test):
warning: urn:pulumi:int::test::azure:ad/application:Application::test verification warning: The Azure Active Directory resources have been split out into their own Provider.
Information on migrating to the new AzureAD Provider can be found here: https://terraform.io/docs/providers/azurerm/guides/migrating-to-azuread.html
As such the Azure Active Directory resources within the AzureRM Provider are now deprecated and will be removed in v2.0 of the AzureRM Provider.
Resources:
+ 1 to create
2 unchanged
EDIT 2:
I'm running this on Windows 10:
az cli = 2.0.68
pulumi cli = 0.17.22
@pulumi/azure = 0.19.2
@pulumi/azuread = 0.18.2
@pulumi/pulumi = 0.17.21
Here are my principal permissions for Azure Active Directory Graph and for Microsoft Graph (screenshots omitted).
I ran into this issue, and after hours I realized that Fiddler was somehow interfering with the Az CLI.

Python SnowflakeOperator setup snowflake_default

Good day. I cannot find how to do the basic setup for airflow.contrib.operators.snowflake_operator.SnowflakeOperator to connect to Snowflake. snowflake.connector.connect works fine.
When I do it with SnowflakeOperator :
op = snowflake_operator.SnowflakeOperator(sql = "create table test(*****)", task_id = '123')
I get the following:
airflow.exceptions.AirflowException: The conn_id snowflake_default isn't defined
I tried to insert the connection into the backend SQLite DB:
INSERT INTO connection(
conn_id, conn_type, host
, schema, login, password
, port, is_encrypted, is_extra_encrypted
) VALUES (*****)
But after that I get an error:
snowflake.connector.errors.ProgrammingError: 251001: None: Account must be specified.
Passing an account kwarg into the SnowflakeOperator constructor does not help. It seems I cannot pass the account into the DB or into the constructor, but it's required.
Please help me: let me know what data I should insert into the backend local DB to be able to connect via SnowflakeOperator.
Go to Admin -> Connections and update the snowflake_default connection like this:
Based on the source code (airflow/contrib/hooks/snowflake_hook.py:53), we need to add extras like this:
{
  "schema": "schema",
  "database": "database",
  "account": "account",
  "warehouse": "warehouse"
}
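If you prefer to create the connection in code rather than through the UI, here is a minimal sketch, assuming a standard Airflow installation (all connection values are placeholders), that writes the same snowflake_default entry into Airflow's metadata database:
import json

from airflow import settings
from airflow.models import Connection

# Build the connection object with the extras the SnowflakeHook expects
conn = Connection(
    conn_id="snowflake_default",
    conn_type="snowflake",
    login="my_user",
    password="my_password",
    schema="my_schema",
    extra=json.dumps({
        "account": "my_account",
        "database": "my_database",
        "warehouse": "my_warehouse",
    }),
)

# Persist it in the Airflow metadata database
session = settings.Session()
session.add(conn)
session.commit()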
With this context:
$ airflow version
2.2.3
$ pip install snowflake-connector-python==2.4.1
$ pip install apache-airflow-providers-snowflake==2.5.0
You have to specify the Snowflake Account and Snowflake Region twice like this:
airflow connections add 'my_snowflake_db' \
--conn-type 'snowflake' \
--conn-login 'my_user' \
--conn-password 'my_password' \
--conn-port 443 \
--conn-schema 'public' \
--conn-host 'my_account_xyz.my_region_abc.snowflakecomputing.com' \
--conn-extra '{ "account": "my_account_xyz", "warehouse": "my_warehouse", "region": "my_region_abc" }'
Otherwise it doesn't work, throwing the Python exception:
snowflake.connector.errors.ProgrammingError: 251001: 251001: Account must be specified
I think this might be because the airflow command parameter --conn-host expects a full domain including the subdomain (the my_account_xyz.my_region_abc part), whereas for Snowflake these values are usually specified as query parameters, similar to this template (although I did not check all the combinations of the airflow connections add command and the DAG execution):
"snowflake://{user}:{password}@{account}{region}{cloud}/{database}/{schema}?role={role}&warehouse={warehouse}&timezone={timezone}"
Then a dummy Snowflake DAG like this one (just running SELECT 1;) will find its own way to the Snowflake cloud service and will work:
import datetime
from datetime import timedelta

from airflow.models import DAG
# https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/operators/snowflake.html
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

my_dag = DAG(
    "example_snowflake",
    start_date=datetime.datetime.utcnow(),
    default_args={"snowflake_conn_id": "my_snowflake_db"},
    schedule_interval="0 0 1 * *",
    tags=["example"],
    catchup=False,
    dagrun_timeout=timedelta(minutes=10),
)

sf_task_1 = SnowflakeOperator(
    task_id="sf_task_1",
    dag=my_dag,
    sql="SELECT 1;",
)
