lxml library in AWS Lambda - python-3.x

I've included this library as a layer to my Lambda function, but when I go to test it I get the error: cannot import name 'etree' from 'lxml'
There are multiple posts about people having this issue, and some say that I need to build it so the C libraries get compiled. Most posts say to look for another file or folder named 'lxml', which I've verified is not the issue.
I'm able to run the same code I've deployed to my layer on my local Linux workstation, and it runs without an issue.

Turns out my Lambda was running Python version 3.8, and that version is not compatible with the version of lxml I am using (4.5.1). Changing the runtime to 3.7 fixed the issue. Hope this helps someone.
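For anyone comparing setups, the failing import is the standard one; here is a minimal handler sketch that imports fine once the Lambda runtime matches the Python version the layer was built for (the handler name and the sample XML are just for illustration, not from my actual deployment):
from lxml import etree  # the import that fails when the layer and runtime don't match

def handler(event, context):
    # parse a tiny XML document just to prove the compiled extension loaded
    root = etree.fromstring("<root><item>hello</item></root>")
    return {"item": root.findtext("item")}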

Related

python package installation error while creating a webjob in azure

I am creating a WebJob which has the following Python dependencies (azure-storage-blob==12.8.1, azure) along with other dependencies. The problem is that my code has been stuck at the output below for almost 3-4 hours now.
Downloading azure_common-1.1.8-py2.py3-none-any.whl (7.9 kB)
pip is looking at multiple versions of azure-core to determine which version is compatible
with other requirements. This could take a while.
[08/12/2021 19:55:54 > d827c9: INFO] INFO: This is taking longer than usual. You might need to
provide the dependency resolver with stricter constraints to reduce runtime. If you want to
abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what
happened here: https://pip.pypa.io/surveys/backtracking
The thing is that if I install a specific version of azure, it is not compatible with azure-storage-blob and throws an error when importing blob storage; and if I don't install azure, or install a version of azure that is not compatible with azure-storage-blob==12.8.1, it throws the error below:
from azure.keyvault import KeyVaultAuthentication, KeyVaultClient
ImportError: cannot import name 'KeyVaultAuthentication'
Does anyone know how to install Python packages while creating an Azure WebJob, and a solution to overcome this issue?
I have another question related to a triggered WebJob: suppose I install the packages successfully, will it install all the packages every time it runs, or will it do so only on the first run and keep the packages saved in the environment?
Check what the dependency tree looks like locally by running pip freeze, then pin those exact versions in your requirements to prevent the dependency-resolution timeouts.
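For example, a requirements.txt pinned from local pip freeze output might look like the sketch below (the azure-core and msrest pins are illustrative; use whatever versions pip freeze reports alongside azure-storage-blob==12.8.1 on your machine):
azure-storage-blob==12.8.1
azure-core==1.17.0    # illustrative pin taken from a local environment
msrest==0.6.21        # illustrative pin taken from a local environment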

When importing matplotlib, I get the error: No module named 'numpy.core._multiarray_umath'

I am using the matplotlib library in my Python project, which in turn uses numpy. I have deployed the libraries as AWS Lambda layers and I am importing them in my AWS Lambda function. When I test my Lambda function, it throws the following error:
Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "/var/lang/bin/python3.8"
* The NumPy version is: "1.18.5"
Original error was: No module named 'numpy.core._multiarray_umath'
Any idea what could be the possible reason and how to resolve it?
I am answering my own question so that if anyone faces this issue in the future, the solution below might work for them as well.
The problem was that I built the required packages in a Windows 10 environment and then deployed them as layers to be used by the AWS Lambda function. Lambda functions and layers run on Linux behind the scenes, so the packages built in the Windows environment were not compatible with the Lambda function. When I built the required packages again in a Linux environment, deployed them as layers, and used them with the Lambda function, it worked like a charm!
This Medium article helped me solve my issue.
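Once the layer is built on Linux, a handler along these lines should import cleanly; this is only a minimal sketch (the handler name, the Agg backend choice, and the /tmp paths are assumptions for illustration, since Lambda's filesystem is only writable under /tmp):
import os
os.environ.setdefault("MPLCONFIGDIR", "/tmp")  # give matplotlib a writable config dir before importing it

import matplotlib
matplotlib.use("Agg")  # headless backend; there is no display in Lambda
import matplotlib.pyplot as plt
import numpy as np

def handler(event, context):
    # draw a simple sine curve just to exercise the numpy C-extensions
    x = np.linspace(0, 2 * np.pi, 100)
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x))
    fig.savefig("/tmp/plot.png")
    return {"statusCode": 200}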

Why does a serverless Lambda deploy error with: No module named '_sqlite3'?

There are other similar questions like mine, but I think none of them looks complete or fits/answers my case.
I'm deploying a Python 3.6 application on AWS Lambda via the Serverless Framework.
With this application I'm using diskcache to perform some small file caching (not using sqlite directly at all).
I'm using the "serverless-python-requirements" plugin in order to have all my dependencies (defined in the requirements.txt file) packaged and uploaded (diskcache in this case).
When the application is live on AWS and I request it, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer below I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there is no need (and it won't work) to add sqlite as a requirement...
Then I wonder why AWS Lambda is unable to find sqlite once deployed.
Any hints, please?
Thanks
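For context, the caching in the handler is just the basic diskcache pattern (diskcache keeps its data in a SQLite database under the hood, which is presumably why _sqlite3 comes up at all); a minimal sketch of what I'm doing, with illustrative directory, key, and helper names:
from diskcache import Cache

# /tmp is the only writable path inside a Lambda container
cache = Cache("/tmp/diskcache")

def handler(event, context):
    value = cache.get("expensive-result")
    if value is None:
        value = compute_expensive_result(event)  # hypothetical helper, stands in for the real work
        cache.set("expensive-result", value, expire=300)  # keep it for five minutes
    return {"statusCode": 200, "body": value}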

Where is the BlockBlobService Class Located in Python Azure Module?

I am pretty new to using the Microsoft Azure service and am trying to follow the tutorial at https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python, using Python 3.5.6 in the conda 4.5.11 distribution on a Windows PC.
The first problem I am facing while importing azure is that I cannot see the version the usual way. That means
azure.__version__
gives an error.
Then, this line of code gives me an error saying it can import neither BlockBlobService nor PublicAccess. It seems both have been deprecated, or I am using some old version myself.
from azure.storage.blob import BlockBlobService, PublicAccess #Option 1
However, the following import is working.
from azure.storage.blob import BlobService #Option 2
But the problem with this is that after I create a local file and try to upload it with the create_blob_from_path method (as advised in the tutorial), the method is either non-existent or deprecated.
I looked around the web for a solution to this BlockBlobService issue, and it seems there has been persistent confusion around the correct module hierarchy and class names to import. One user, for example, got some official documentation from the library that advised this, which also does not work:
from azure.storage import BlobService #Option 3
Still, someone else reported a complaint with this one, which works on my system at least. But it does not import the needed Blob object:
import azure.storage.blob #Option 4
Further, according to this documentation, https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blockblobservice.blockblobservice?view=azure-python
the BlockBlobService class is located in the azure.storage.blob.blockblobservice module. But the interpreter throws an import error when I try to import that as well.
Most of the proposed solutions are around upgrading/downgrading versions, but, silly me, somehow I cannot even find the version of the azure module like I do for other modules. Also, it seems many of the solutions are for pip3 running on Linux, whereas I am using conda 4.5.11 on Windows. So how do I make the azure API work?
As of November 2020, Azure maintains two versions of storage SDK:
v12 (Link)
v2.1 (Link)
2.1 is considered to be the legacy version of the API (Link):
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see Quickstart: Manage blobs with Python v12 SDK.
BlockBlobService is located in v2.1 and should be avoided. Use the v12 version instead.
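For example, the v12 equivalent of the quickstart upload looks roughly like the sketch below (the connection string, container, and file names are placeholders, not taken from the question):
from azure.storage.blob import BlobServiceClient  # azure-storage-blob >= 12

# placeholder connection string and names for illustration
service = BlobServiceClient.from_connection_string("<your-connection-string>")
blob = service.get_blob_client(container="quickstartblobs", blob="example.txt")

with open("example.txt", "rb") as data:
    blob.upload_blob(data)  # replaces create_blob_from_path from the v2.1 SDK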
On Windows, you should use pip install azure.
My environment is Windows 10 with Python 3.6.5, but I didn't use conda.
First, in cmd, run pip install azure (screenshot omitted).
Then, in PyCharm, try the from ... import ... statements for BlockBlobService and PublicAccess; both resolve, and you can also see the BlockBlobService location on disk (screenshots omitted).
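With the legacy SDK installed that way, the quickstart's upload looks roughly like this sketch (account name, key, and container name are placeholders):
from azure.storage.blob import BlockBlobService, PublicAccess  # legacy v2.1 SDK

# placeholder credentials for illustration
blob_service = BlockBlobService(account_name="myaccount", account_key="mykey")
blob_service.create_container("quickstartblobs", public_access=PublicAccess.Container)
blob_service.create_blob_from_path("quickstartblobs", "example.txt", "example.txt")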

Trouble importing HTTPTokenAuth from flask_httpauth

I am trying to use token authentication for a Flask project.
from flask_httpauth import HTTPBasicAuth # works
from flask_httpauth import HTTPTokenAuth # does not work.
I get the following error
ImportError: cannot import name HTTPTokenAuth
I tried
pip install flask_httpauth --upgrade
But it claims everything is up to date (Flask-HTTPAuth==3.1.1).
The docs suggest this is the proper way to import it, but for some reason it is not working. Any ideas on how I can get the token auth to import?
Edit: I deleted and recreated the virtual environment I was using.
I am using python anywhere.
The problem persists. I discovered that an older version of Flask-HTTPAuth is loaded by default (v2.2.0 instead of v3.1.1). I went into the site-packages directory and saw that HTTPTokenAuth is there and should be importable.
I tried doing
import flask_httpauth
print(flask_httpauth.__version__)
to check the version being used by my app, but that doesn't work for all Python packages, and it seems flask_httpauth doesn't have that attribute.
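When a package has no __version__ attribute, one way to check the installed version is to read the packaging metadata instead; a small sketch, assuming setuptools/pkg_resources is available (it normally is inside a virtualenv):
import pkg_resources

# reads the version recorded in the installed package's metadata
print(pkg_resources.get_distribution("Flask-HTTPAuth").version)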
There are no errors displayed where I have the virtual environment linked on the Web tab of PythonAnywhere.
PythonAnywhere dev here, just reposting the solution that was discovered in #ExperimentsWithCode's forum post. The problem was happening when the code was being run from the editor on PythonAnywhere. This is separate from the configuration done on the "Web" tab, where the virtualenv is specified: people can run any code they want from the editor, regardless of which web app it's associated with, or even code that's not associated with a web app.
So the solution was what #Miguel suggested: use a shebang.
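A minimal sketch of that fix, at the top of the script being run from the editor (the virtualenv path is a placeholder; the real path depends on your account and virtualenv name):
#!/home/yourusername/.virtualenvs/myenv/bin/python
# running the file via this shebang uses the virtualenv's interpreter,
# so its Flask-HTTPAuth 3.1.1 is imported instead of the older system copy
from flask_httpauth import HTTPTokenAuth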
