Locally hosting a Python Azure Function fails on M1 - python-3.x

As the title says, I want to host an Azure Function locally with VS Code, but I get an error.
Python version: 3.9.12 (python3).
Azure Functions Core Tools
Core Tools Version: 4.0.4483 Commit hash: N/A (64-bit)
Function Runtime Version: 4.1.3.17473
host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": ""
  }
}
Error Message:
Functions:
HttpTrigger1: [GET,POST] http://localhost:7071/api/HttpTrigger1
For detailed output, run func with --verbose flag.
....
[2022-05-09T06:52:10.300Z] from . import dispatcher
[2022-05-09T06:52:10.300Z] File "/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/azure_functions_worker/dispatcher.py", line 19, in <module>
[2022-05-09T06:52:10.300Z] import grpc
[2022-05-09T06:52:10.300Z] File "/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/__init__.py", line 23, in <module>
[2022-05-09T06:52:10.300Z] from grpc._cython import cygrpc as _cygrpc
[2022-05-09T06:52:10.300Z] ImportError: dlopen(/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/_cython/cygrpc.cpython-39-darwin.so, 0x0002): tried: '/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/_cython/cygrpc.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cygrpc.cpython-39-darwin.so' (no such file), '/usr/lib/cygrpc.cpython-39-darwin.so' (no such file)
[2022-05-09T06:52:13.512Z] Host lock lease acquired by instance ID '0000000000000000000000008F1C7F2E'.

After reproducing this on our end, we observed that an arm64 Python will never be able to load an x86_64 shared library, so you need to enable Rosetta, which works on a process-by-process basis.
Steps to be followed:
Enable the Rosetta option for iTerm (Get Info → Open using Rosetta).
Install Homebrew, Azure Functions Core Tools, and Python from within that Rosetta terminal.
Then run your Azure Function again; a quick architecture check is sketched below.
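For reference, here is a minimal check (my own sketch, not part of the original answer) to confirm which architecture the interpreter reports; a native Apple Silicon Python cannot load the x86_64 cygrpc library from the error above, while a Rosetta (x86_64) one can:
# Illustrative check only: print the machine architecture this Python process sees.
# 'arm64' means a native Apple Silicon interpreter; 'x86_64' means it is running
# under Rosetta and should be able to load the bundled x86_64 grpc worker.
import platform

print(platform.machine())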
REFERENCES:
Support running on M1 Macs [Python]

Related

unable to initialize snowflake data source

I am trying to access a Snowflake datasource using the "great_expectations" library.
The following is what I tried so far:
from ruamel import yaml
import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest
context = ge.get_context()
datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}
print(context.test_yaml_config(yaml.dump(datasource_config)))
I initialized great_expectations before executing the above code:
great_expectations init
but I am getting the error below:
great_expectations.exceptions.exceptions.DatasourceInitializationError: Cannot initialize datasource my_snowflake_datasource, error: 'NoneType' object has no attribute 'create_engine'
What am I doing wrong?
Your configuration seems to be ok, corresponding to the example here.
If you look at the traceback you should notice that the error propagates starting at the file great_expectations/execution_engine/sqlalchemy_execution_engine.py in your virtual environment.
The actual line where the error occurs is:
self.engine = sa.create_engine(connection_string, **kwargs)
And if you search for that sa at the top of that file:
try:
    import sqlalchemy as sa

    make_url = import_make_url()
except ImportError:
    sa = None
So sqlalchemy is not installed; it is not pulled in automatically when you install great_expectations. The thing to do is to install snowflake-sqlalchemy, since you want to use sqlalchemy's Snowflake plugin (an assumption based on your connection_string).
/your/virtualenv/bin/python -m pip install snowflake-sqlalchemy
After that you should no longer get an error; it then looks like test_yaml_config is simply waiting for the connection to time out.
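As a quick sanity check afterwards (my sketch, not from the original answer), you can confirm the dialect is importable in the same virtualenv before re-running test_yaml_config:
# Illustrative check: both imports should succeed in the virtualenv where
# great_expectations is installed once snowflake-sqlalchemy is in place.
import sqlalchemy as sa
import snowflake.sqlalchemy  # provided by the snowflake-sqlalchemy package

print("sqlalchemy", sa.__version__)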
What worries me greatly is the documented use of a deprecated API of ruamel.yaml.
The function ruamel.yaml.dump is going to be removed in the near future, and you
should use the .dump() method of a ruamel.yaml.YAML() instance.
You should use the following code instead:
import sys
from ruamel.yaml import YAML
import great_expectations as ge
context = ge.get_context()
datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}
yaml = YAML()
yaml.dump(datasource_config, sys.stdout, transform=context.test_yaml_config)
I'll make a PR for great-expectations to update their documentation/use of ruamel.yaml.

Cypress build error in Azure pipeline: Cannot find module '@cypress/code-coverage/task'

Here is my config:
// cypress/plugins/index.js
module.exports = (on, config) => {
  require('@cypress/code-coverage/task')(on, config);
  // require('@bahmutov/cypress-extends')(on, config);
  return config
}
I am getting an ERROR when trying to run Cypress in an Azure pipeline script (within a cypress/included container). This error doesn't occur when I run it locally.
The function exported by the plugins file threw an error.
We invoked the function exported by `/root/e2e/cypress/plugins/index.js`, but it threw an error.
Error: Cannot find module '@cypress/code-coverage/task'
Require stack:
- /root/e2e/cypress/plugins/index.js
- /root/.cache/Cypress/9.1.1/Cypress/resources/app/packages/server/lib/plugins/child/run_plugins.js
The only unusual thing I am doing is this:
// cypress/config/cypress.local.json
{
  "extends": "../../cypress.json",
  "baseUrl": "https://localhost:4200"
}
And a normal cypress.json config:
// /cypress.json
{
  "baseUrl": "http://localhost:4200",
  "proxyUrl": "",
  "defaultCommandTimeout": 10000,
  "video": false,
  "screenshotOnRunFailure": true,
  "experimentalStudio": true,
  "projectId": "seixri",
  "trashAssetsBeforeRuns": true,
  "videoUploadOnPasses": false,
  "retries": {
    "runMode": 0,
    "openMode": 0
  },
  "viewportWidth": 1000,
  "viewportHeight": 1200
}
The problem here might be that Cypress does not support extending the configuration file in the way you did, as also stated here: https://www.cypress.io/blog/2020/06/18/extending-the-cypress-config-file/
In my opinion there are two suitable solution approaches:
1. Approach: Use separate configuration files (my recommendation)
As extending an existing configuration file does not work, I would recommend having separate configuration files, e.g. one for local usage and one for execution in Azure pipelines. You could then simply add two separate commands to your package.json, like:
"scripts": {
  "cy:ci": "cypress run --config-file cypress/cypress.json",
  "cy:local": "cypress run --config-file cypress/cypress.local.json"
},
Docs: https://docs.cypress.io/guides/references/configuration
2. Approach: Set configuration options in your tests
Cypress gives you the option to overwrite configurations directly in your tests. For example, if you have configured the following in cypress.json:
{
  "viewportWidth": 1280,
  "viewportHeight": 720
}
You can change the viewportWidth in your test like:
Cypress.config('viewportWidth', 800)
Docs: https://docs.cypress.io/api/cypress-api/config#Syntax

Ansible K8s Install Fails - No module named Kubernetes, Failed to import the required Python Library

I am running my playbook with the following command:
ansible-playbook kubernetes_cluster.yml -b --ask-pass -vvv
My error message is:
Traceback (most recent call last):
  File "/tmp/ansible_k8s_payload_2BRCdT/ansible_k8s_payload.zip/ansible/module_utils/k8s/common.py", line 33, in <module>
    import kubernetes
ImportError: No module named kubernetes
fatal: [kube1.idm.nac-issa.org]: FAILED! => {
    "changed": false,
    "error": "No module named kubernetes",
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "force": false,
            "host": null,
            "kind": "Namespace",
            "kubeconfig": "/root/.kube/config",
            "merge_type": null,
            "name": "",
            "namespace": null,
            "password": null,
            "proxy": null,
            "resource_definition": null,
            "src": null,
            "state": "present",
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "msg": "Failed to import the required Python library (openshift) on kube1.idm.nac-issa.org's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"
My environment:
ansible 2.9.16
config file = /home/ansuser/ansible/ansible.cfg
configured module search path = ['/home/ansuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
I followed the directions from a similar question and performed the pip installs on the target host (not the management host):
Ansible K8s module: Failed to import the required Python library (openshift) on Python /usr/bin/python3
and Cannot execute k8s module
but that hasn't made any difference
pip3 install openshift pyyaml kubernetes and also sudo pip3 install --upgrade --user openshift
I'm stuck figuring out what to try next. Any ideas are appreciated!
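For what it's worth, the error reports /usr/bin/python on the target while the packages were installed with pip3; a quick diagnostic sketch (not from the original post) to see what that exact interpreter can import:
# Save as a small script and run it on the target host with the same interpreter
# Ansible reports in the error message, i.e. /usr/bin/python <script>.
# An ImportError here means pip3 installed the packages for a different interpreter,
# and ansible_python_interpreter would need to point Ansible at that python3 instead.
import sys

print(sys.executable, sys.version)

import kubernetes
import openshift

print("kubernetes imported from:", kubernetes.__file__)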

Debugging python in docker container using debugpy and vs code results in timeout/connection refused

I'm trying to set up native debugging for a Python script running in Docker for Visual Studio Code using debugpy. Ideally I'd like to just hit F5 and be on my way (including a build phase if needed). Currently I'm bouncing between a timeout caused by debugpy.listen(5678), shown inline in the VS Code editor itself (Exception has occurred: RuntimeError: timed out waiting for adapter to connect), and a connection refused error.
I created a launch.json from the documentation provided by microsoft:
launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Integration (test)",
      "type": "python",
      "request": "attach",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/test",
          "remoteRoot": "/test"
        }
      ],
      "port": 5678,
      "host": "127.0.0.1"
    }
  ]
}
building the image looks like this so far:
Dockerfile
FROM python:3.7-slim-buster as base
RUN apt-get -y update; apt-get install -y vim git cmake
WORKDIR /
RUN mkdir .cache src in out config log
COPY requirements.txt .
RUN pip install -r requirements.txt; rm requirements.txt
#! TODO: config folder needs to be a mapped volume so they can change creds without rebuild
WORKDIR /src
COPY test ../test
COPY config ../config
COPY src/ .
#? D E B U G I M A G E
FROM base as debug
RUN pip install debugpy
CMD python -m debugpy --listen 0.0.0.0:5678 ../test/edu.employer._test.py
#! P R O D U C T I O N I M A G E
# FROM base as prod
# CMD [ "python", "/test/edu.employer._test.py" ]
Some examples I found try to simplify things with a docker-compose.yaml, but I'm unsure if I need one at this point.
docker-compose.yaml
services:
  tester:
    container_name: tester
    image: employer/test:1.0.0
    build:
      context: .
      target: debug
      dockerfile: test/edu.employer._test.Dockerfile
    volumes:
      - ./out:/out
      - ./.cache:/.cache
      - ./log:/log
    ports:
      - 5678:5678
which I based off the CLI command: docker run -it -v $(pwd)/out:/out -v $(pwd)/.cache:/.cache -v $(pwd)/log:/log employer/test:1.0.0;
The "critical" parts of my script just listen and wait for the debugger:
from __future__ import absolute_import
# Standard
import os
import sys
# 3rd Party
import debugpy
debugpy.listen(5678)
debugpy.wait_for_client()
# 1st Party. NOTE: All source files are in /src, so we can add that path here for testing
# and batch import all integrations files. Not very clean however
sys.path.insert(0, os.path.join('/', 'src'))
import integrations as ints
You have to configure the debugger with debugpy.listen(("0.0.0.0", 5678)).
This happens because, by default, debugpy listens on localhost only. Since your Docker container is effectively another host from the debugger's point of view, you have to bind to 0.0.0.0.
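Put concretely, a minimal sketch based on the snippet from the question, with only the listen address changed:
# In the containerised test script: bind to all interfaces so the VS Code debugger
# on the host can reach the container through the published port 5678.
import debugpy

debugpy.listen(("0.0.0.0", 5678))
print("Waiting for debugger attach on port 5678...")
debugpy.wait_for_client()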
Turns out I needed to create a tasks.json file and provide the details on running the image...
tasks.json
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "type": "docker-run",
      "label": "docker-run: debug",
      "dependsOn": ["docker-build"],
      "dockerRun": {
        "image": "employer/test:1.0.0"
        // "env": {
        //   "FLASK_APP": "path_to/flask_entry_point.py"
        // }
      },
      "python": {
        "args": [],
        "file": "/test/edu.employer._test.py"
      }
    }
  ]
}
and define a preLaunchTask:
{
  "name": "Docker: Python",
  "type": "docker",
  "request": "launch",
  "preLaunchTask": "docker-run: debug",
  "python": {
    "pathMappings": [
      {
        "localRoot": "${workspaceFolder}/test",
        "remoteRoot": "/test"
      }
    ],
    // "projectType": "django"
  }
}

Unable to build local AMLS environment with private wheel

I am trying to write a small program using the AzureML Python SDK (v1.0.85) to register an Environment in AMLS and use that definition to construct a local Conda environment when experiments are being run (for a pre-trained model). The code works fine for simple scenarios where all dependencies are loaded from Conda/public PyPI, but when I introduce a private dependency (e.g. a utils library) I am getting an InternalServerError with the message "Error getting recipe specifications".
The code I am using to register the environment is (after having authenticated to Azure and connected to our workspace):
environment_name = config['environment']['name']
py_version = "3.7"
conda_packages = ["pip"]
pip_packages = ["azureml-defaults"]
private_packages = ["./env-wheels/utils-0.0.3-py3-none-any.whl"]
print(f"Creating environment with name {environment_name}")
environment = Environment(name=environment_name)
conda_deps = CondaDependencies()
print(f"Adding Python version: {py_version}")
conda_deps.set_python_version(py_version)
for conda_pkg in conda_packages:
    print(f"Adding Conda dependency: {conda_pkg}")
    conda_deps.add_conda_package(conda_pkg)

for pip_pkg in pip_packages:
    print(f"Adding Pip dependency: {pip_pkg}")
    conda_deps.add_pip_package(pip_pkg)

for private_pkg in private_packages:
    print(f"Uploading private wheel from {private_pkg}")
    private_pkg_url = Environment.add_private_pip_wheel(workspace=ws, file_path=Path(private_pkg).absolute(), exist_ok=True)
    print(f"Adding private Pip dependency: {private_pkg_url}")
    conda_deps.add_pip_package(private_pkg_url)

environment.python.conda_dependencies = conda_deps
environment.register(workspace=ws)
And the code I am using to create the local Conda environment is:
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
print(f"Building environment...")
amls_environment.build_local(workspace=ws)
The exact error message being returned when build_local(...) is called is:
Traceback (most recent call last):
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 814, in build_local
    raise error
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 807, in build_local
    recipe = environment_client._get_recipe_for_build(name=self.name, version=self.version, **payload)
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\_restclient\environment_client.py", line 171, in _get_recipe_for_build
    raise Exception(message)
Exception: Error getting recipe specifications. Code: 500
: {
  "error": {
    "code": "ServiceError",
    "message": "InternalServerError",
    "detailsUri": null,
    "target": null,
    "details": [],
    "innerError": null,
    "debugInfo": null
  },
  "correlation": {
    "operation": "15043e1469e85a4c96a3c18c45a2af67",
    "request": "19231be75a2b8192"
  },
  "environment": "westeurope",
  "location": "westeurope",
  "time": "2020-02-28T09:38:47.8900715+00:00"
}
Process finished with exit code 1
Has anyone seen this error before, or is anyone able to provide some guidance on what the issue may be?
The issue was with our firewall blocking the required requests between AMLS and the storage container (I presume to get the environment definitions/private wheels).
We resolved this by updating the firewall with appropriate ALLOW rules for the AMLS service to contact and read from the attached storage container.
Assuming that you'd like to run the script on a remote compute, my suggestion would be to pass the environment you just retrieved (via Environment.get) to a RunConfiguration, then pass that to a ScriptRunConfig, Estimator, or PythonScriptStep:
from azureml.core import ScriptRunConfig
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
src = ScriptRunConfig(source_directory=project_folder, script='train.py')
# Set compute target to the one created in previous step
src.run_config.target = cpu_cluster.name
# Set environment
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
src.run_config.environment = amls_environment
run = experiment.submit(config=src)
run  # displays the run details when executed in a notebook cell
Check out the rest of the notebook here.
If you're looking for a local run, this notebook might help.
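For a purely local run, the same pattern should also work with the default local target (a sketch, assuming ws, project_folder, environment_name and environment_version are defined as earlier; the experiment name is a placeholder):
# Reuse the registered environment for a local submission instead of build_local().
from azureml.core import Environment, Experiment, ScriptRunConfig

amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
src = ScriptRunConfig(source_directory=project_folder, script='train.py')
src.run_config.target = 'local'               # run on the submitting machine
src.run_config.environment = amls_environment
run = Experiment(ws, 'local-debug').submit(config=src)
run.wait_for_completion(show_output=True)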
