Error from Azure function: ModuleNotFoundError - python-3.x

I followed the official documentation to set up my requirements.txt file. My understanding was that the function should be able to use modules if they are in requirements.txt. Here is an example of what that file looks like, with all the modules and their versions written in this way:
azure-common==1.1.12
azure-keyvault==1.0.0
azure-mgmt-keyvault==1.0.0
azure-mgmt-nspkg==2.0.0
azure-mgmt-resource==1.2.2
azure-storage-blob==12.3.1
azure-mgmt-subscription==0.5.0
azure-mgmt-network==10.2.0
azure-functions==1.2.1
However, when I look at the function's logs, I can see that it keeps throwing the error, "ModuleNotFoundError: No module named 'azure.common'". This is the first module I try to import in __init__.py. What am I doing wrong?
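For reference, a minimal sketch of the kind of __init__.py involved (the HTTP-trigger signature and body here are illustrative placeholders, not my real code):

import logging

import azure.common          # this is the import that raises ModuleNotFoundError
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Python HTTP trigger function processed a request.")
    return func.HttpResponse("OK")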

It seems the modules you use in your function are all old versions (such as azure-common==1.1.12, azure-keyvault==1.0.0, and so on), so please install the latest versions. You can look up the latest release of each module; for example, to install the latest azure-common, just run pip install azure-common (no version number needed) and it will install the latest version of the module.
Then use the command below in VS Code to generate the requirements.txt automatically:
pip freeze > requirements.txt
Then deploy the function code from your local machine to Azure with this command:
func azure functionapp publish <function app name> --build remote
This deploys the code to Azure and installs the modules according to the requirements.txt you just generated.
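Putting the steps together, a sketch of the full local sequence (the package list is taken from your requirements.txt; run this from the function project folder, ideally inside its virtual environment):

# upgrade everything to the latest released versions (no version pins)
pip install --upgrade azure-common azure-keyvault azure-mgmt-keyvault azure-mgmt-nspkg azure-mgmt-resource azure-storage-blob azure-mgmt-subscription azure-mgmt-network azure-functions
# regenerate requirements.txt from what is actually installed
pip freeze > requirements.txt
# deploy and let Azure install the modules during the remote build
func azure functionapp publish <function app name> --build remote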
Hope it helps~

Related

Some AWS lambda functions stopped working with "No module named setuptools._distutils" error

I have an application with many serverless functions deployed to AWS Lambda. These functions use the Python 3.7 runtime. Yesterday, after deploying some minor changes, a few of these functions stopped working with the following error:
[ERROR] Runtime.ImportModuleError: Unable to import module 'functions.graphql.lambda.user_balance': No module named 'setuptools._distutils'
The weird thing is that the functions throwing this error were not changed, and the other functions are working without any issues. No Python module was added or removed.
Just to check if the code change has anything to do with this error, I tried deploying a previous version. But the error persists.
I used the serverless framework for deployment.
It looks like this is an issue that started happening for all Python users as of yesterday, when setuptools got updated but pip did not.
According to this GitHub issue, there is a temporary workaround until this is actually fixed.
Setting the environment variable SETUPTOOLS_USE_DISTUTILS=stdlib is a workaround, e.g.:
export SETUPTOOLS_USE_DISTUTILS=stdlib
pip3 install ....
My assumption would be that you could add this as an environment variable for the Lambda, possibly through the Serverless config?
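A sketch of where that variable could go in serverless.yml (untested; it only illustrates the placement suggested above, and setting it at run time may or may not be enough if the failure happens while pip packages the dependencies):

# serverless.yml (fragment)
provider:
  name: aws
  runtime: python3.7
  environment:
    SETUPTOOLS_USE_DISTUTILS: stdlib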
This is a bug in setuptools: https://github.com/pypa/setuptools/issues/2353. Follow the temporary workaround below.
Linux
export SETUPTOOLS_USE_DISTUTILS=stdlib
Windows
set SETUPTOOLS_USE_DISTUTILS=stdlib
After that, execute the pip command.
pip install XXXXX

Trying to add modules (speech_py_impl) to Python Azure Functions through the Kudu console, but facing issues with virtualenv

Below is the error when I try to create a virtual environment from the function (voicetotext) folder:
root@a8686ca40:/home/site/wwwroot/voicetotext# python -m virtualenv myenv
usage: virtualenv [--version] [--with-traceback] [-v | -q] [--app-data APP_DATA] [--reset-app-data]
                  [--upgrade-embed-wheels] [--discovery {builtin}] [-p py]
                  [--creator {builtin,cpython3-posix,venv}] [--seeder {app-data,pip}] [--no-seed]
                  [--activators comma_sep_list] [--clear] [--system-site-packages]
                  [--symlinks | --copies] [--no-download | --download]
                  [--extra-search-dir d [d ...]] [--pip version] [--setuptools version]
                  [--wheel version] [--no-pip] [--no-setuptools] [--no-wheel]
                  [--no-periodic-update] [--symlink-app-data] [--prompt prompt] [-h]
                  dest
virtualenv: error: argument dest: the destination . is not write-able at /home/site/wwwroot/voicetotext
SystemExit: 2
Please guide me on how to import modules into Azure Functions (Python).
The initial error from the function is "no module named speech_py_impl"... I read on the internet that we should add the package "libasound2", but when I try to add this module through Kudu, I am stuck. If there is any alternative, please advise. Thanks!
It is not recommended to use Kudu to install your modules when using Python Azure Functions. That environment is not always persistent, and you may lose changes. Additionally, modules installed through Kudu may not be accessible to your function code.
The proper approach is to develop locally and then publish to Azure. To use custom modules such as the one you referred to, you need a requirements.txt file in your function app root directory listing all your dependencies. When developing in a local environment with all the dependencies installed, you can run pip freeze > requirements.txt. Once you are ready to deploy, you can use the VS Code extension for Azure Functions or the azure-functions-core-tools CLI. For more information on this process, please follow the development guide -- https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#package-management
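For illustration (assuming speech_py_impl comes from the azure-cognitiveservices-speech package, which the question does not confirm), the requirements.txt in the project root could be as small as:

azure-functions
azure-cognitiveservices-speech

Azure then installs these packages during the remote build when you publish, so nothing needs to be installed through Kudu.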

Google App Engine: cv2 libSM.so.6 error on Flask app

I have a Flask app that I want to deploy on Google App Engine. Everything checks out fine: the requirements file contains all the modules, and they get installed successfully. But towards the end I get this error:
from .cv2 import *
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
I read on various blogs and other similar stackoverflow questions that you need to install libsm6 using:
sudo apt-get install libsm6
But even after I did that, I still get the same error. How do I solve this?
The App Engine runtime includes a fixed set of system packages, and unfortunately libsm6 is not one of them. In addition, it's not possible to install additional system packages.
However, this is an ideal use case for Cloud Run, which lets you define your own runtime via a Dockerfile. See the quickstart for an example: https://cloud.google.com/run/docs/quickstarts/build-and-deploy
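A minimal sketch of such a Dockerfile (the file names app.py and requirements.txt and the gunicorn entry point are assumptions, not taken from the question):

# Sketch of a Cloud Run image for a Flask app that needs OpenCV's system libraries
FROM python:3.9-slim
# install the shared libraries cv2 typically needs, including libsm6
RUN apt-get update && apt-get install -y --no-install-recommends \
        libsm6 libxext6 libxrender1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY . .
# Cloud Run provides $PORT; app:app assumes app.py defines a Flask object named app
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app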

"Package init file not found (or not a regular file)" - error when building sdist for namespace package

I have a namespace package with folder structure as:
CompanyName\DepartmentName\SubDepartmentName\PkgName
Python 3.3 onwards supports namespace packages, so I have not placed __init__.py files in the following folders:
CompanyName
DepartmentName
SubDepartmentName
In setup.py I use setuptools.find_namespace_packages() instead of setuptools.find_packages().
When I try to build the sdist using the following commands:
python -m pip install --upgrade pip
python -m pip install --upgrade setuptools wheel
python setup.py sdist
I get the following error:
package init file 'CompanyName\__init__.py' not found (or not a regular file)
package init file 'CompanyName\DepartmentName\__init__.py' not found (or not a regular file)
package init file 'CompanyName\DepartmentName\SubDepartmentName\__init__.py' not found (or not a regular file)
I have this set up as a command line task in an Azure DevOps pipeline and have set 'Fail on standard error' to true. The pipeline fails due to the above error.
Though Package init file not found (or not a regular file) is more like a warning than an error locally, it will cause the build pipeline to fail if you set Fail on standard error to true when using VSTS.
(Screenshots omitted: the output locally, in VSTS with Fail on standard error at its default of false, and in VSTS with Fail on standard error set to true.)
1. For this, you can choose to turn off the Fail on standard error option, because the Python namespace package is generated successfully even though that message occurs. So in this situation, I think you can simply suppress that message.
2. Another direction is to resolve the message when generating the package. Since the message has something to do with the definitions in your setup.py file, you should use setuptools.find_namespace_packages(), as this document suggests.
Because mynamespace doesn't contain an __init__.py, setuptools.find_packages() won't find the sub-package. You must use setuptools.find_namespace_packages() instead or explicitly list all packages in your setup.py.
In addition: it's not really recommended to remove all __init__.py files from packages; check this detailed description from AndreasLukas: if you want to run a particular initialization script when the package or any of its modules or sub-packages is imported, you still require an __init__.py file.
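For illustration, a minimal setup.py for this layout could look like the sketch below (the name, version, and include pattern are placeholders, not taken from the question):

import setuptools

setuptools.setup(
    name="pkgname",      # placeholder
    version="0.0.1",     # placeholder
    # Pick up CompanyName.DepartmentName.SubDepartmentName.PkgName even though
    # the namespace folders contain no __init__.py files.
    packages=setuptools.find_namespace_packages(include=["CompanyName.*"]),
)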

Domino10 appDevPack: "Error: Cannot find module '@domino/domino-db'"

I just installed the latest Domino 10.0.1 server on my Linux machine and also installed and configured the latest Proton package. As far as I can tell, it's all running fine.
Next I plan to try my first Node-RED flow using the new Domino10 nodes. So I installed the 'node-red-contrib-dominodb' palette.
Finally, I tried my first very simple flow, trying to query node-demo.nsf as described here. From what I read there, I assumed that it's sufficient to install the palette, but that is obviously not the case:
As soon as I hit 'Deploy' I receive this error:
Error: Cannot find module '@domino/domino-db'
So I thought that maybe I still have to do a global install in Node.js using
npm install -g <package-path>/domino-domino-db-1.1.0.tgz
This indeed created a local @domino/domino-db module inside my Node.js npm\node_modules folder. But obviously my Node-RED environment doesn't know about it.
The question is: how do I register/install that npm package for my local Node-RED environment?
IBM's instructions (https://flows.nodered.org/node/node-red-contrib-dominodb#Installation) say to view this guide (https://github.com/stefanopog/node-red-contrib-dominodb/blob/master/docs/Using%20the%20new%20Domino%20V10%20NodeRED%20nodes%202.pdf) for installing the domino-db module.
The link is broken, here's an old copy: https://github.com/stefanopog/node-red-contrib-dominodb/blob/a723ef88498c5bfa243abd956a7cc697f0a42610/docs/Using%20the%20new%20Domino%20V10%20NodeRED%20nodes%202.pdf
I believe the section you want is called "Import the tarball". The steps before that require you to unpack and then re-pack the module... which is unnecessary. Just use the tgz that was in the AppDev Pack to begin with.
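If you install it yourself, the key point is that Node-RED resolves extra modules from its user directory (typically ~/.node-red), not from the global npm folder. A sketch of the manual install (the <package-path> placeholder is from the question; adjust it to wherever the tgz actually lives):

cd ~/.node-red
npm install <package-path>/domino-domino-db-1.1.0.tgz

After restarting Node-RED, the dominodb nodes should be able to resolve @domino/domino-db.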
