Why does a serverless Lambda deploy error with "No module named '_sqlite3'"? - python-3.x

There are other questions similar to mine, but none of them seems complete or fits my case.
I'm deploying a Python 3.6 application to AWS Lambda via the Serverless Framework.
The application uses diskcache to perform some small file caching (I'm not using SQLite directly at all).
I'm using the serverless-python-requirements plugin so that all the dependencies defined in my requirements.txt file (diskcache in this case) are packaged and uploaded.
When the application is live on AWS and I send it a request, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer below I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there is no need (and it won't work) to add sqlite as a requirement...
Which leaves me wondering why AWS Lambda is unable to find sqlite once the function is deployed.
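To narrow this down, a throwaway probe handler (the name probe here is just for illustration) can confirm whether the runtime itself ships the _sqlite3 C extension:

import importlib.util

def probe(event, context):
    # Report whether the _sqlite3 C extension is importable in this
    # runtime, and where the module lives on disk if it is.
    spec = importlib.util.find_spec("_sqlite3")
    return {
        "has_sqlite3": spec is not None,
        "location": spec.origin if spec else None,
    }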
Any hints, please?
Thanks

Related

When importing matplotlib, I get the error: No module named 'numpy.core._multiarray_umath'

I am using the matplotlib library in my Python project, which in turn uses numpy. I have deployed the libraries as AWS Lambda Layers and I am importing them in my AWS Lambda function. When I test my AWS Lambda function, it throws the following error:
Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "/var/lang/bin/python3.8"
* The NumPy version is: "1.18.5"
Original error was: No module named 'numpy.core._multiarray_umath'
Any idea what could be the possible reason and how to resolve it?
I am answering my own question so that if anyone faces this issue in the future, the solution below might work for them as well.
The problem was that I compiled the required packages in a Windows 10 environment and then deployed them as layers to be used by the AWS Lambda function. Lambda functions and layers run on Linux behind the scenes, so the packages compiled in the Windows environment were not compatible with the Lambda function. When I compiled the required packages again in a Linux environment, deployed them as layers, and used them with the lambda function, it worked like a charm!
This Medium article helped me solve my issue.
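For example, here is a sketch of building the layer contents inside Docker so the compiled extensions match the Lambda environment; the lambci/lambda build image is one common choice, and the zip name is just for illustration:

docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 \
    pip install -r requirements.txt -t python/lib/python3.8/site-packages/
zip -r numpy-matplotlib-layer.zip python

Installing into python/lib/python3.8/site-packages matches the directory layout Lambda expects from a Python layer.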

lxml library in AWS Lambda

I've included this library as a layer in my lambda function, but when I test it I get the error: cannot import name 'etree' from 'lxml'
There are multiple posts about people having this issue; some say that I need to build the package so that its C libraries are compiled. Most posts say to look for another file or folder named 'lxml', which I've verified is not the issue.
I'm able to run the same code I've deployed to my layer on my local linux workstation and it runs without an issue.
It turns out my lambda was running Python 3.8, and that runtime is not compatible with the lxml build I am using (4.5.1). Changing the runtime to 3.7 fixed the issue. Hope this helps someone.
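As a quick sanity check, a handler like this (the function name is illustrative) reports both versions from inside the target runtime, so a successful invocation confirms the layer matches it:

import sys
from lxml import etree

def handler(event, context):
    # LXML_VERSION comes from the compiled extension, so reaching this
    # line at all means the layer's binaries load in this runtime.
    return {
        "python": sys.version,
        "lxml": ".".join(map(str, etree.LXML_VERSION)),
    }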

ImportError: No module named 'google.appengine' [duplicate]

I'm testing Google App Engine and trying to run a simple function to upload files to either the Blobstore or Cloud Storage. I'm typing the Python code directly in the Cloud Shell of my instance. The code is failing when I call:
from google.appengine.ext import blobstore
I get the error code:
Traceback (most recent call last):
File "upload_test.py", line 1, in <module>
from google.appengine.api import users
ImportError: No module named 'google.appengine'
Even though the documentation says "You can use Google Cloud Shell, which comes with git and Cloud SDK already installed", I've tried installing a bunch of libraries:
gcloud components install app-engine-python
pip install google-cloud-datastore
pip install google-cloud-storage
pip install --upgrade google-api-python-client
I'm still getting the same error. How can I get the appengine library to work? Alternatively, is this the wrong method for creating an app that allows the user to upload files?
The google.appengine module is baked into the first-generation Python (2.7) runtime. It's not available to install via pip, in the second-generation (3.7) runtime, or in Cloud Shell.
The only way to use it is by writing and deploying a first-generation App Engine app.
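For context, "first-generation" here means an app whose app.yaml pins the python27 runtime, roughly like this (the script name is illustrative):

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app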
Thanks @Dustin Ingram
I found the answer on this page.
The current "correct" way of uploading to Cloud Storage is to use google.cloud.storage. The tutorial I linked above explains how to implement it.
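For reference, a minimal upload sketch with google.cloud.storage (bucket and object names are placeholders):

from google.cloud import storage

def upload_file(bucket_name, source_path, destination_name):
    # Uses the application default credentials available in Cloud Shell.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination_name)
    blob.upload_from_filename(source_path)
    return blob.public_url

upload_file("my-bucket", "local_file.txt", "uploads/remote_file.txt")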
The impression I have, however, is that this uses twice the bandwidth of the google.appengine solution. Originally, the front-end would receive an upload URL and send the file directly to the Blobstore (or to Cloud Storage). Now the application uploads to the back-end, which in turn uploads to Cloud Storage.
I'm not too worried, as I will not be dealing with excessively large files, but it seems strange that the ability to upload directly has been discontinued.
In any case, my problem has been solved.

AWS Lambda to Firestore error: cannot import name 'cygrpc'

On my AWS Lambda Python 3.6 function I'd like to use Google Firestore (Cloud Firestore BETA) for caching purposes, but as soon as I add
from google.cloud import firestore
to my Python script and upload the ZIP to the AWS Lambda function, the Lambda test comes back with the error:
Unable to import module 'MyLambdaFunction': cannot import name 'cygrpc'.
AWS CloudWatch log doesn't contain any details on the error, just that same error message.
The Lambda function works great on my local dev machine (Windows 10), and I can write to Firestore fine. It also works on AWS if I comment out the import and all Firestore-related lines.
Any tips how I could go about solving this?
The Python client for Firestore relies on the C-based implementation of gRPC, which appears not to work by default in AWS Lambda.
Node.js users have reported similar problems and they've documented a workaround of building a docker image.
This should be similar to getting any other Python package that requires native code to work. Perhaps something like this method for getting scikit to work?
I hope this is enough to get you going in the right direction, but unfortunately I don't know anything about AWS Lambda :-(.
I ran into the same issue and solved it by using the serverless-python-requirements plugin for the Serverless Framework and setting, in serverless.yml:

custom:
  pythonRequirements:
    dockerizePip: true

Essentially, this installs your C-based packages (and all the other packages) in a Docker container, where the build works correctly, and then symlinks them into your Lambda function's package.
A helpful guide can be found on: https://serverless.com/blog/serverless-python-packaging/
Plugin: https://github.com/UnitedIncome/serverless-python-requirements
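Putting it together, a minimal serverless.yml sketch (the service, function, and handler names are placeholders):

service: firestore-cache

provider:
  name: aws
  runtime: python3.6

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

functions:
  main:
    handler: handler.main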

CRUD operations to couchbase from AWS Lambda using couchbase sdk for node.js

I need to run CRUD operations on my bucket (database) in Couchbase, which is deployed on an EC2 instance, and the code I have runs on AWS Lambda. However, when I test this code on Lambda by passing details in the body, I get the error: "errorMessage": "/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /var/task/node_modules/couchbase/build/Release/couchbase_impl.node)". This error occurs because my function requires an npm module called "couchbase", which is used for executing the CRUD operations on my Couchbase bucket.
Can you help me figure out what the problem might be? Is a file missing from the Node.js environment running on Lambda, or do I need to implement this in a different way to get it working?
Thanks in advance.
I was able to solve this issue by locally compiling node_modules with the same Node.js version (v0.10.36) that Lambda uses and uploading the zip file to Lambda.
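In other words, the native addon must be compiled on an OS and runtime matching Lambda's execution environment; as a sketch (the entry-point file name is illustrative, and the Amazon Linux host is an assumption):

# On an Amazon Linux machine, with Node.js matching the Lambda runtime:
npm install couchbase        # compiles couchbase_impl.node against this system
zip -r function.zip index.js node_modules
# then upload function.zip to the Lambda function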
