CRUD operations on Couchbase from AWS Lambda using the Couchbase SDK for Node.js

I need to run CRUD operations on my bucket (database) in Couchbase, which is deployed on an EC2 instance, and the code I have runs on AWS Lambda. However, when I try to test this code on Lambda by passing details in the body, I get the error: "errorMessage": "/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /var/task/node_modules/couchbase/build/Release/couchbase_impl.node)". This error occurs because my function requires an npm module called "couchbase", which is used to execute CRUD operations on my Couchbase bucket.
Can you help me figure out what the problem might be? Is the file missing from the Node.js environment running on Lambda, or do I need to implement this differently to get it working?
Thanks in advance.

I was able to solve this issue by compiling node_modules locally with the same Node.js version (v0.10.36) that Lambda uses and uploading the zip file to Lambda.
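Roughly, the build steps look like the following (a sketch, assuming the handler lives in index.js, nvm is used to switch Node versions, and the build runs on a Linux machine whose libstdc++ is compatible with the Lambda environment):

nvm install 0.10.36 && nvm use 0.10.36               # match the Node.js version Lambda runs
npm install couchbase                                # compiles the native binding against that version
zip -r lambda-function.zip index.js node_modules     # package the code plus the locally built modules

The resulting lambda-function.zip is what gets uploaded to the Lambda console (or pushed with the AWS CLI).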

Related

How do I get the exact version numbers of Node.js my AWS Lambda currently supports?

How do I get the exact version of Node.js, e.g. 18.1.2, that I can run on AWS Lambda? The documentation gives me 18.x, which is not very specific.
It seems like:
AWS doesn't show the Node.js version in the UI, so you need to log process.version from inside the function (see the snippet below)
the version can change at any time, and no one will notify you
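A minimal handler that reports the exact runtime version could look like this (a sketch; returning the value is just for convenience):

exports.handler = async () => {
  // process.version holds the exact Node.js version string, e.g. 'v18.18.2'
  console.log(process.version);
  return process.version;
};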

AWS Lambda internal node module

As far as I know, there are Node modules that AWS Lambda installs automatically.
Is the request module included among them, or is it excluded?
If anyone knows about this, I would appreciate it if you could let me know.
Thank you.
The request module is not installed by default. Only the Node.js standard library and aws-sdk are available by default; the rest of your modules must come from either your Lambda layers or the function's own node_modules. You can find more details here.
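To illustrate the difference (a sketch only; which aws-sdk version is preinstalled depends on the runtime you pick):

// aws-sdk ships with the Node.js Lambda runtime, so it can be required without bundling it yourself
const AWS = require('aws-sdk');

// request is NOT preinstalled; this only works if request is present in the deployment
// package's node_modules or provided by a Lambda layer
const request = require('request');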

AWS CloudFormation with node.js 10.x Update Error "ZipFile can only be used when Runtime is set to <older node.js versions>"

We are using a CloudFormation template to deploy some intermediate code to a Lambda function.
We are using the ZipFile property to add inline code through CloudFormation.
The current runtime for the Lambda function is Node.js 8.10.
We need to update the Node version to 10.x.
While updating the Lambda using CloudFormation we are getting the error below:
ZipFile can only be used when Runtime is set to either of nodejs,
nodejs4.3, nodejs6.10, nodejs8.10, python2.7, python3.6, python3.7.
I believe that this is a known issue. https://forums.aws.amazon.com/thread.jspa?threadID=303166&tstart=0
As of this writing, it is still an issue. My suggestion is to put some super basic code in an S3 bucket, reference that instead of using inline ZipFile code, and deploy your actual code after the Lambda function is created. Alternatively, you can just upload your zip artifact to an S3 bucket. If your code is proprietary, be careful about S3 access.
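For example, the function's Code property can point at an S3 object instead of inline ZipFile code, roughly like this (a sketch; the bucket, key, and role names are placeholders):

MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: nodejs10.x
    Handler: index.handler
    Role: !GetAtt MyFunctionRole.Arn   # placeholder IAM role defined elsewhere in the template
    Code:
      S3Bucket: my-artifact-bucket     # placeholder bucket holding the zipped code
      S3Key: my-function.zip           # placeholder key of the deployment package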

Why does a serverless Lambda deploy error with: No module named '_sqlite3'?

There are other questions similar to mine, but I don't think any of them is complete or fits/answers my case.
I'm deploying a Python 3.6 application on AWS Lambda via the Serverless Framework.
In this application I'm using diskcache to perform some small file caching (not actually using sqlite directly at all).
I'm using the "serverless-python-requirements" plugin in order to have all my dependencies (defined in the requirements.txt file, diskcache in this case) packed up and uploaded.
When the application is live on AWS and I request it, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer to the question below I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there is no need (and it won't work) to add sqlite as a requirement...
Then I wonder why AWS Lambda is unable to find sqlite once deployed.
Any hints, please?
Thanks

AWS Lambda to Firestore error: cannot import name 'cygrpc'

On my AWS Lambda Python 3.6 function I'd like to use Google Firestore (Cloud Firestore BETA) for caching purposes, but as soon as I add
from google.cloud import firestore
to my Python script and upload the ZIP to the AWS Lambda function, the Lambda test comes back with the error
Unable to import module 'MyLambdaFunction': cannot import name 'cygrpc'.
The AWS CloudWatch log doesn't contain any details on the error, just that same error message.
The Lambda function works great on my local dev machine (Windows 10), and I can write to Firestore fine. It also works on AWS if I comment out the import and all Firestore-related lines.
Any tips on how I could go about solving this?
The Python client for Firestore relies on the C-based implementation of gRPC. This appears not to work by default in AWS Lambda.
Node.js users have reported similar problems, and they've documented a workaround of building a Docker image.
This should be similar to getting any other Python package that requires native code to work. Perhaps something like this method for getting scikit to work?
I hope this is enough to get you going in the right direction, but unfortunately I don't know anything about AWS Lambda :-(.
I ran into the same issue and solved it by using the serverless-python-requirements plugin for the Serverless Framework and passing the following under the custom section of serverless.yml:
pythonRequirements:
  dockerizePip: true
Essentially this installs your C-based packages (and all other packages) in a Docker container, where the build works, and then symlinks them to your Lambda function.
A helpful guide can be found at: https://serverless.com/blog/serverless-python-packaging/
Plugin: https://github.com/UnitedIncome/serverless-python-requirements
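For context, in serverless.yml the plugin declaration and the option usually sit together roughly like this (a sketch with placeholder service and handler names):

service: my-firestore-service          # placeholder service name

provider:
  name: aws
  runtime: python3.6

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true                 # build native (C-based) requirements inside a Docker image

functions:
  main:
    handler: handler.main              # placeholder handler module and function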
