Google Cloud Platform API for Python and AWS Lambda Incompatibility: Cannot import name 'cygrpc' - python-3.x

I am trying to use the Google Cloud Platform API (specifically, the Vision API) for Python with AWS Lambda. Thus, I have to create a deployment package for my dependencies. However, when I try to create this deployment package, I get several compilation errors, regardless of the version of Python (3.6 or 2.7). With version 3.6, I get the issue "Cannot import name 'cygrpc'". With 2.7, I get some unknown error with the .pth file. I am following the AWS Lambda Deployment Package instructions here. They recommend two options, and neither works; both result in the same issue. Is GCP just not compatible with AWS Lambda for some reason? What's the deal?
Neither Python 3.6 nor 2.7 work for me.
NOTE: I am posting this question here to answer it myself because it took me quite a while to find a solution, and I would like to share my solution.

TL;DR: You cannot compile the deployment package on your Mac or whatever PC you use. You have to do it using a specific OS/"setup", the same one that AWS Lambda uses to run your code. To do this, you have to use EC2.
I will provide here an answer on how to get Google Cloud Vision working on AWS Lambda for Python 2.7. This answer is potentially extendable to other APIs and other programming languages on AWS Lambda.
So my journey to a solution began with this initial posting on GitHub with others who had the same issue. One solution someone posted was:
I had the same issue "cannot import name 'cygrpc'" while running
the lambda. Solved it with pip install google-cloud-vision in the AMI
amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 instance and exported the
lib/python3.6/site-packages to aws lambda. Thank you @tseaver
This is partially correct, unless I read it wrong, but regardless it led me on the right path. You will have to use EC2. Here are the steps I took:
Set up an EC2 instance by going to EC2 on Amazon. Do a quick read about AWS EC2 if you have not already. Set one up for amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2 or something along those lines (i.e. the most updated one).
Get your EC2 .pem file. Go to your Terminal. cd into your folder where your .pem file is. ssh into your instance using
ssh -i "your-file-name-here.pem" ec2-user@ec2-ip-address-here.compute-1.amazonaws.com
Create the following folders on your instance using mkdir: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
On your EC2 instance, cd into google-cloud-vision. Run the command:
pip install google-cloud-vision -t .
Note: If you get "bash: pip: command not found", then enter "sudo easy_install pip".
Repeat step 4 with the following packages, while cd'ing into the respective folder: protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
Copy each folder to your computer. You can do this using the scp command. Again, in your Terminal (not your EC2 instance, and not the Terminal window you used to access your EC2 instance), run the command below (this example is for your "google-cloud-vision" folder; repeat it with every folder):
sudo scp -r -i your-pem-file-name.pem ec2-user@ec2-ip-address-here.compute-1.amazonaws.com:~/google-cloud-vision ~/Documents/your-local-directory/
Stop your EC2 instance from the AWS console so you don't get overcharged.
For your deployment package, you will need a single folder containing all your modules and your Python scripts. To begin combining all of the modules, create an empty folder titled "modules." Copy and paste all of the contents of the "google-cloud-vision" folder into the "modules" folder. Now place only the folder titled "protobuf" from the "protobuf" (sic) main folder in the "Google" folder of the "modules" folder. Also from the "protobuf" main folder, paste the Protobuf .pth file and the -info folder in the Google folder.
For each module after protobuf, copy and paste in the "modules" folder the folder titled with the module name, the .pth file, and the "-info" folder.
You now have all of your modules properly combined (almost). To finish combination, remove these two files from your "modules" folder: googleapis_common_protos-1.5.3-nspkg.pth and google_cloud_vision-0.34.0-py3.6-nspkg.pth. Copy and paste everything in the "modules" folder into your deployment package folder. Also, if you're using GCP, paste in your .json file for your credentials as well.
Finally, put your Python scripts in this folder, zip the contents (not the folder), upload to S3, and paste the link in your AWS Lambda function and get going!
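As a rough sketch of that final step (the bucket, key, and function names below are placeholders, not from my setup):
# Zip the *contents* of the deployment folder, not the folder itself
cd your-deployment-folder
zip -r ../deployment-package.zip .
cd ..
# Upload to S3 and point the Lambda function at the uploaded object
aws s3 cp deployment-package.zip s3://your-bucket/deployment-package.zip
aws lambda update-function-code --function-name your-function-name --s3-bucket your-bucket --s3-key deployment-package.zip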
If something here doesn't work as described, please forgive me and either message me or feel free to edit my answer. Hope this helps.

Building off the answer from @Josh Wolff (thanks a lot, btw!), this can be streamlined a bit by using a Docker image for Lambdas that Amazon makes available.
You can either bundle the libraries with your project source or, as I did below in a Makefile script, upload it as an AWS layer.
layer:
	set -e ;\
	docker run -v "$(PWD)/src":/var/task "lambci/lambda:build-python3.6" /bin/sh -c "rm -R python; pip install -r requirements.txt -t python/lib/python3.6/site-packages/; exit" ;\
	pushd src ;\
	zip -r my_lambda_layer.zip python > /dev/null ;\
	rm -R python ;\
	aws lambda publish-layer-version --layer-name my_lambda_layer --description "Lambda layer" --zip-file fileb://my_lambda_layer.zip --compatible-runtimes "python3.6" ;\
	rm my_lambda_layer.zip ;\
	popd ;
The above script will:
Pull the Docker image if you don't have it yet (the above uses Python 3.6)
Delete the python directory (only useful when running it a second time)
Install all requirements to the python directory, created in your project's /src directory
ZIP the python directory
Upload the AWS layer
Delete the python directory and zip file
Make sure your requirements.txt file includes the modules listed above by Josh: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2
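For reference, a minimal requirements.txt along those lines might look like the following (versions left unpinned on purpose; pin them if you need reproducible builds). It is written into the src folder because that is what the Makefile above mounts into the container:
# Create src/requirements.txt with the modules Josh listed
cat > src/requirements.txt <<'EOF'
google-cloud-vision
protobuf
google-api-python-client
httplib2
uritemplate
google-auth-httplib2
EOF
With that file in place, running make layer from the project root builds, zips, and publishes the layer.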

There's a fast solution that doesn't require much coding.
Cloud9 uses the Amazon Linux AMI, so using pip in its virtual environment should make it work.
I created a Lambda from the Cloud9 UI and, from the console, activated the venv for the EC2 machine. I proceeded to install google-cloud-speech with pip. That was enough to fix the issue.
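For reference, it was roughly the following in the Cloud9 terminal; the virtual environment path is an assumption (adjust it to wherever Cloud9 created the venv for your function):
# Activate the function's virtualenv that Cloud9 set up, then install the SDK into it
source ~/environment/my-function/venv/bin/activate   # path is an assumption
pip install google-cloud-speech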

I was facing the same error using the google-ads API.
{
  "errorMessage": "Unable to import module 'lambda_function': cannot import name 'cygrpc' from 'grpc._cython' (/var/task/grpc/_cython/__init__.py)",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}
My Lambda runtime was Python 3.9 and architecture x86_64.
If somebody encounters a similar ImportModuleError, then see my answer here: Cannot import name 'cygrpc' from 'grpc._cython' - Google Ads API

Related

How to make venv completely portable?

I want to create a venv environment (not virtualenv) using the following commands:
sudo apt-get install python3.8-venv
python3.8 -m venv venv_name
source venv_name/bin/activate
But it seems that it contains dependencies on the system where it is created, and this creates problems whenever I want to make it portable. That means, when I copy this folder along with my project and run it on another machine, I want it to work without making any changes.
But I am unable to activate the environment properly (it gets activated, but the interpreter still uses the system's python and pip).
Therefore, I tried making another venv on the second computer and copied the lib and lib64 folders from the older venv to this newer venv (without replacing existing files), but I am getting the following error this time:
File "/usr/local/lib/python3.8/ctypes/__init__.py" line 7, in <module>
from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
But the interesting thing is, if you notice, the newly created venv on the new machine is also searching for the missing package in its local directory and not in the venv.
How do I make the venv portable along with all its dependencies and reliably deploy in another device just by activating it?
Disclaimer: None of this is my work, I just found this blog-post and will briefly summarize: https://aarongorka.com/blog/portable-virtualenv/ archived
Caveat: This only works (semi-reliably) among Linux machines. Don't use in production!
The first step is to get copies of your python-executables in the venv/bin folder, so be sure to specify --copies when creating the virtual environment:
python3 -m venv --copies venv
All that's left seems to be changing the hardcoded absolute paths into relative paths, using your tool of choice. In the blogpost, they use pwd after changing to the venv-parent-directory whenever venv/bin/activate is run.
sed -i '43s/.*/VIRTUAL_ENV="$(cd "$(dirname "$(dirname "${BASH_SOURCE[0]}" )")" && pwd)"/' venv/bin/activate
Then, similarly, all pip scripts need to be adapted to execute with the local python:
sed -i '1s|.*|#!/usr/bin/env python|' venv/bin/pip
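A rough way to apply the same fix to every console script in venv/bin (a sketch, assuming a Linux layout and GNU sed; the check skips the binary python executables, which have no shebang line):
for f in venv/bin/*; do
  # only rewrite text scripts that start with a shebang
  if head -c 2 "$f" | grep -q '^#!'; then
    sed -i '1s|.*|#!/usr/bin/env python|' "$f"
  fi
done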
BUT, the real problem starts when installing new modules. I would expect most modules to behave nicely, but there will be those that hardcode expected path-structures or similarly thwart any work towards replacing path dependencies.
However: I find this trick very useful for sharing a single folder among developers when hunting elusive bugs.

Powershell Compress-Archive not publishing Node.js AWS lambda layer correctly

I work at a company that deploys Node.js and C# AWS Lambda functions. I work on a Windows machine. Our Azure pipeline build environment is also a Windows environment.
I wrote a PowerShell script that packages Lambda functions and layers as zip files and publishes them to AWS. My issue is deploying Node.js Lambda layers.
When I use the Compress-Archive PowerShell command to zip the layer files, it preserves the Windows \ in file paths. When this gets unzipped in AWS, / is expected in file paths. So the file structure is incorrect for a Node.js runtime and my Lambda function that uses the layer cannot find the needed modules.
One way I made this work from my local machine is to install the 7zip utility to zip the files. It zips the files with / file paths, and this works correctly when unzipped for a Lambda layer using the Node.js runtime. But when I use this PowerShell script in the Azure pipeline, I cannot install the 7zip utility on the build server.
Is there a way to zip files with / in file paths instead of \ that does not require using a third-party utility?
Compress-Archive doesn't keep the folder structure correctly; more details and workarounds can be found here. Apart from that, you can use the Archive Files task (link here), or install 7zip using Chocolatey: choco install 7zip.install -y.

Unable to import Pandas in AWS Lambda

I am new to AWS Lambda and I want to run code on Lambda for a machine learning API. The functions that I want to run on Lambda are, in summary: one that reads some CSV files to create a pandas DataFrame and search in it, and another that runs some pickled machine learning models in response to requests from a Flask application. To do this, I need to import pandas, joblib and possibly scikit-learn in versions that are compatible with Amazon Linux. I am using a Windows machine.
In general, I am going with the approach of using Lambda's layers by uploading zip files. Lambda has a pre-built layer with SciPy and NumPy, so I will not include them; if I did, I would exceed Lambda's layer size limit anyway.
To be more specific, I have done the following:
Downloaded and extracted Linux-compatible versions of the libraries listed above. For example, from this link I downloaded "pandas-0.25.0-cp35-cp35m-manylinux1_x86_64.whl" and unzipped it into a folder.
The unzipped libraries are in the following directory:
lambda_layers\python\lib\python3.7\site-packages
They are zipped into a file and uploaded to an S3 bucket for creating a layer.
I imported the packages:
import json
import boto3
import pandas as pd
I got the following error from Lambda:
{
"errorMessage": "Unable to import module 'lambda_function': C extension: No module named 'pandas._libs.tslibs.conversion' not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.",
"errorType": "Runtime.ImportModuleError"
}
The folder structure should be standard; you can also use Docker to create the zipped Linux-compatible library and upload it to AWS Lambda layers. Below are the tested commands to create the zipped library for an AWS Lambda layer:
Create and navigate to a directory:
$mkdir aws1
$cd aws1
Write the below commands in a Dockerfile and exit with CTRL + D:
$cat> Dockerfile
FROM amazonlinux:2017.03
RUN yum -y install git \
python36 \
python36-pip \
zip \
&& yum clean all
RUN python3 -m pip install --upgrade pip \
&& python3 -m pip install boto3
You can provide any name for the image:
$docker build -t pythn1/lambda .
Run the image:
$docker run --rm -it -v ${PWD}:/var/task pythn1/lambda:latest bash
Specify the packages which you want to zip in requirements.txt and exit with CTRL + D:
$ cat > requirements.txt
pandas
sklearn
You can try using the correct file structure (/python/lib/python3.6/site-packages/) here, but I have not tested it yet:
$pip install -r requirements.txt -t /usr/lib/python3.6/dist-packages/
Navigate to the below directory:
$cd var/task
Create a zip file:
$ zip -r ./layers.zip /usr/lib/python3.6/dist-packages/
You should be able to see a layers.zip file in the aws1 folder. If you provided the correct folder structure while installing, then the steps below are not required. But with the folder structure I used, the below commands are required:
Unzip layers.zip.
Exit Docker or open a new terminal and navigate to the folder where you unzipped the file. The unzipped files will be in the folder structure /usr/lib/python3.6/dist-packages/.
Copy these files to the correct folder structure:
$ mkdir -p ./python/lib/python3.6
$ cp -r ./usr/lib/python3.6/dist-packages ./python/lib/python3.6/site-packages
Zip them again :
$ zip -r ./lib_python.zip ./python
Upload the zip file to the layer, and add that layer to your Lambda function. Also, make sure that you select the right running environment while creating the layer.
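Untested, but the steps above can likely be collapsed by installing straight into the layer layout inside the container, so no re-copying is needed afterwards (this reuses the lambci/lambda build image mentioned in an earlier answer; adjust the Python version to your runtime):
# Install directly into the layer folder structure, then zip it
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.6/site-packages/"
zip -r layers.zip python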
Following this document - https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path, you should zip the python/lib/python3.7/site-packages/ folder containing pandas (and your other dependencies) for your Python layer.
Make sure you add the layer to your function and follow the documentation for the right permissions.
I appreciate the answers that were given; I am just posting my own answer (that I found after a whole day of looking) here for reference purposes.
I followed this guide and also this guide.
In summary, the steps I took are:
Connect to my Amazon EC2 instance (running on Linux) through ssh. I wanted to deploy an application on Beanstalk, so it was already up for me anyway.
Follow the steps in the first guide to install Python 3.7.
Follow the steps in the second guide to install the libraries. One of the key notes is not to install with pip install -t, since that will lead to the libraries and the C extensions not being built.
Zip the directory found in python/lib/python3.7/site-packages/ as mentioned by the answers here (although I did follow the directory guide in my first attempts).
Get the file from the EC2 instance through FileZilla.
Follow the Lambda layers guide and it is done.

libffi-d78936b1.so.6.0.4: cannot open shared object file Error on AWS Lambda function

I am trying to deploy a Python Lambda package with the watson_developer_cloud SDK. Cryptography is one of the many dependencies this package has. I have built this package on a Linux machine. My package includes the hidden file .libffi-d78936b1.so.6.0.4 too. But it is still not accessible to my Lambda function. I am still getting the 'libffi-d78936b1.so.6.0.4: cannot open shared object file' error.
I have built my packages on Vagrant server, using instructions from here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
Exact error:
Unable to import module 'test_translation': libffi-d78936b1.so.6.0.4: cannot open shared object file: No such file or directory
On a side note, as explained in this solution, I have already created my package using zip -r9 $DIR/lambda_function.zip . instead of *. But it is still not working for me.
Any direction is highly appreciable.
The libffi-d78936b1.so.6.0.4 is in a hidden folder named .libs_cffi_backend.
So to add this hidden folder in your lambda zip, you should do something like:
zip -r ../lambda_function.zip * .[^.]*
That will create a zip file in the directory above with the name lambda_function.zip, containing all files in the current directory (the first *) and everything starting with a dot except . and .. (the .[^.]* pattern).
In a situation like this, I would invest some time setting up a local SAM environment so you can:
1 - Debug your Lambda
2 - Check what is being packaged and the file hierarchy
https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html
Alternatively you can remove this import and instrument your lambda function to print some of the files and directories it "sees".
I strongly recommend giving SAM a try though, since it will make not only this debugging way easier but also any further test you need to perform down the road. Lambdas are tricky to debug.
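A minimal SAM session might look like the following (the function's logical ID and the event file are placeholders; it assumes a template.yaml already describes your function):
# Run the function locally inside a Lambda-like Docker container
sam local invoke MyFunction --event event.json
# Or start a local HTTP endpoint if the function sits behind API Gateway
sam local start-api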
A little late, and I would comment on Frank's answer but I don't have enough reputation.
I was including the hidden directory .libs_cffi_backend in my deployment package, but for some reason Lambda could not find the libffi-d78936b1.so.6.0.4 file located within.
After copying this file into the same 'root' level directory as my lambda handler it was able to load the dependency and execute.
Also, make sure all the files in the deployment package are readable: chmod -R 644 .

How to install rabbitmq and erlang on centOS without root user?

Can anyone help me with the installation?
I have installed virtualenv and am trying to install both of these, but I am not sure whether it is correct or not.
I know this is an old version, but it worked for me on a different Linux build (Mate OS). Follow the steps in this blog post which I have simplified below.
Download the below
ERLANG from OTP R16B03-1 Source File
RabbitMQ from RabbitMQ Server.tar.gz
Installing Erlang
Extract the ERLANG file
cd to the source folder
run $ ./configure
run $ make
Open Makefile and change /Users/deepkrish/Application/erlang to a suitable directory
The line you are looking for is the below:
# prefix from configure, default is /usr/local (must be an absolute path)
prefix = /Users/deepkrish/Application/erlang
Run $ make install
Once Erlang is installed as a non-root user, add erlang/bin to PATH in .bash_profile like below:
export ERLANG="/Users/deepkrish/Application/erlang/bin"
export PATH=${ERLANG}:${PATH}
Now execute the profile by running $ source .bash_profile or log off and log in again.
Check with $ erl -version. This should give you the below:
Erlang (SMP,ASYNC_THREADS,HIPE) (BEAM) emulator version 5.10.4
Installing RabbitMQ
Untar the RabbitMQ.tar file and $ cd to the extracted folder
run $ make
This should create a scripts folder. Now change into it: $ cd scripts
Now change the below in the rabbitmq-defaults file. This will change where we run and log RabbitMQ. You can change it to the folder you want to run RabbitMQ from, as below:
### next line potentially updated in package install steps
SYS_PREFIX=~/Application/RabbitMQ
Save and close the file
Now create a directory: mkdir -p ../etc/rabbitmq. Note that if you don't have access to the /etc directory you can also change it to somewhere else.
Enable the management plugin: ./rabbitmq-plugins enable rabbitmq_management
Start the RabbitMQ server: $ ./rabbitmq-server &
I use the below script file to start RabbitMQ server whenever I log on.
#!/bin/sh
cd /home/myusername/myproject/RabbitMQ/rabbitmq-server-3.2.3/scripts
export ERLANG="/home/myusername/myproject/RabbitMQ/erlang/bin"
export PATH=${ERLANG}:${PATH}
./rabbitmq-server &
