When importing matplotlib, I get the error: No module named 'numpy.core._multiarray_umath' - python-3.x

I am using the matplotlib library in my Python project, which in turn uses numpy. I have deployed the libraries in AWS Lambda Layers and I am importing them in my AWS Lambda function. When I test my Lambda function it throws the following error:
Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "/var/lang/bin/python3.8"
* The NumPy version is: "1.18.5"
Original error was: No module named 'numpy.core._multiarray_umath'
Any idea what could be the possible reason and how to resolve it?

I am answering my own question so that if anyone faces this issue in the future, the solution below might work for them as well.
The problem was that I compiled the required packages in a Windows 10 environment and then deployed them on layers to be used by the AWS Lambda function. AWS Lambda functions and layers run on Linux behind the scenes, so the packages compiled in the Windows environment were not compatible with the Lambda function. When I compiled the required packages again in a Linux environment, deployed them on layers, and used them with the lambda function, it worked like a charm!
This Medium article helped me in solving my issue.
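For anyone who wants a concrete recipe: a minimal sketch of building a Linux-compatible layer with Docker (the lambci build image and the python/ layer layout are one common convention, and the zip name is illustrative):

# Install the dependencies inside a Linux container that matches the
# Lambda runtime, then zip them in the layout Lambda layers expect.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 \
    pip install -r requirements.txt -t python/
zip -r numpy-matplotlib-layer.zip python/

Uploading the resulting zip as a layer gives the function Linux-built binaries instead of Windows ones.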

Related

lxml library in AWS Lambda

I've included this library as a layer in my lambda function, but when I go to test it I get the error: cannot import name 'etree' from 'lxml'
There are multiple posts about people having this issue, and some say that I need to build it to compile some C libraries. Most posts say to look for another file or folder named 'lxml', which I've verified is not the issue.
I'm able to run the same code I've deployed to my layer on my local Linux workstation, and it runs without an issue.
Turns out my lambda was running Python 3.8, and that version is not compatible with the version of lxml I am using (4.5.1). Changing the runtime to 3.7 fixed the issue. Hope this helps someone.
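For reference, the runtime can also be switched without the console; a minimal sketch using the AWS CLI, where my-function is a hypothetical function name:

aws lambda update-function-configuration \
    --function-name my-function \
    --runtime python3.7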

Why does a serverless lambda deploy error with: No module named '_sqlite3'?

There are other questions similar to mine, but I think none of them looks complete or fits/answers my case.
I'm deploying a Python 3.6 application on AWS lambda via serverless framework.
With this application I'm using diskcache to perform some small file caching (not actually using sqlite at all)
I'm using "serverless-python-requirements" plugin in order to have all my dependencies (defined in requirements.txt file) packed up and uploaded (diskcache in this case)
When the application is live on AWS and I request it, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer below I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there is no need (and it won't work) to add sqlite as a requirement...
Then I wonder why AWS lambda is unable to find sqlite once deployed.
Any hint pls?
Thanks
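One way to narrow this down is to check whether the Python in a Lambda-like Linux environment actually ships the _sqlite3 C extension; a debugging sketch, assuming the lambci build image is available (this check is not from the original thread):

# If the import fails inside the container, the runtime itself lacks the
# extension; if it succeeds, the problem is in how the package is built.
docker run --rm lambci/lambda:build-python3.6 \
    python -c "import _sqlite3; print('sqlite3 OK')"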

importing python modules in aws lambda

I have about 40-42 Python modules to use in my lambda functions. Does that mean I have to make a zip of them all along with my main handler.py and upload it? I know about boto3 but could not get through it, as the documentation is not specific. Could someone help me find the easiest way to get around this?
Unfortunately (with the exception of the aws-sdk) it's up to you to package and include all your modules when deploying to Lambda.
Luckily this is such a common task that there are a few good frameworks out there to help you automate this process, such as:
Serverless Framework (with Python Requirements): https://github.com/UnitedIncome/serverless-python-requirements
Chalice: https://github.com/aws/chalice
n.b. this isn't unique to Python: Node developers have to include their node_modules, and Golang devs have to compile to a static executable.
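If you'd rather package by hand instead of using a framework, the flow is just pip plus zip; a minimal sketch (directory and file names are illustrative):

# Install every dependency next to the handler, then zip the lot.
pip install -r requirements.txt -t build/
cp handler.py build/
cd build && zip -r ../function.zip .

The resulting function.zip is what gets uploaded as the deployment package.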

Uploading lambda code along with Keras libraries on AWS

I have working lambda code, but I have a new idea for a neural network system that I am trying to implement, for which I would require Keras (Python). I tried uploading the same working code along with the Keras dependencies as a ZIP file from my desktop, with an additional "import keras" statement. Funnily, the same code with the additional "import keras" gives me an error saying the remote endpoint could not be reached on the developer portal, but when I remove the "import keras" statement, my code works perfectly fine. Why is this happening even after I include both the Keras dependencies and my lambda code in the ZIP file? I understand that I am not using Keras anywhere in my existing code, but when I include the Keras dependencies and just try to import them, logically, it should still work, right?
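A quick diagnostic, given that the ZIP was built on a Windows desktop (the same failure mode as the numpy answer above): native extensions compiled on Windows ship as .pyd files, while Lambda's Linux runtime needs .so files. A sketch for spotting them, where deployment.zip is a hypothetical package name:

# Windows-compiled extensions (.pyd) cannot be loaded on Lambda's Linux runtime.
unzip -l deployment.zip | grep "\.pyd$"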

AWS Lambda to Firestore error: cannot import name 'cygrpc'

On my AWS Lambda Python 3.6 function I'd like to use Google Firestore (Cloud Firestore BETA) for caching purposes, but as soon as I add
from google.cloud import firestore
to my Python script and upload the ZIP to the AWS Lambda function, the Lambda test comes back with the error
Unable to import module 'MyLambdaFunction': cannot import name 'cygrpc'.
The AWS CloudWatch log doesn't contain any details on the error, just that same error message.
The Lambda function works great on my local dev machine (Windows 10), and I can write to Firestore fine. It also works on AWS if I comment out the import and all Firestore-related lines.
Any tips how I could go about solving this?
The Python client for Firestore relies on the C-based implementation of gRPC. This appears not to work by default in AWS Lambda.
Node.js users have reported similar problems and they've documented a workaround of building a docker image.
This should be similar to getting any other Python package that requires native code to work. Perhaps something like this method for getting scikit to work?
I hope this is enough to get you going in the right direction, but unfortunately I don't know anything about AWS Lambda :-(.
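Building the dependency inside a Linux container is the usual shape of that workaround for native-code packages like grpcio; a minimal sketch, assuming the lambci build image (the vendor/ target directory is illustrative):

# Build/install the C-based dependencies on Linux so the resulting
# binaries match what the Lambda runtime can load.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 \
    pip install google-cloud-firestore -t vendor/

The vendor/ directory is then zipped together with the handler as a normal deployment package.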
Ran into the same issue; I solved it by using the serverless-python-requirements plugin for the Serverless Framework and passing the following in serverless.yml:
custom:
  pythonRequirements:
    dockerizePip: true
Essentially this installs your C-based packages (and all other packages) in a Docker container, where the build works, and then symlinks them into your lambda fn.
A helpful guide can be found on: https://serverless.com/blog/serverless-python-packaging/
Plugin: https://github.com/UnitedIncome/serverless-python-requirements
