PowerShell Compress-Archive not publishing Node.js AWS Lambda layer correctly - node.js

I work at a company that deploys Node.js and C# AWS Lambda functions. I work on a Windows machine, and our Azure pipeline build environment is also Windows.
I wrote a PowerShell script that packages Lambda functions and layers as zip files and publishes them to AWS. My issue is deploying Node.js Lambda layers.
When I use the Compress-Archive PowerShell cmdlet to zip the layer files, it preserves the Windows \ separators in the archive's file paths. When the archive gets unzipped in AWS, / separators are expected, so the file structure comes out wrong for the Node.js runtime and my Lambda function that uses the layer cannot find the modules it needs.
One way I made this work from my local machine is to install the 7-Zip utility to zip the files. It writes the entries with / path separators, and the layer unzips correctly for the Node.js runtime. But I cannot install the 7-Zip utility on the build server when this PowerShell script runs in the Azure pipeline.
Is there a way to zip files with / instead of \ in the file paths that does not require a third-party utility?

Compress-Archive writes entries with \ separators, which breaks the folder structure when the archive is extracted on non-Windows systems; more details and workarounds can be found here. Apart from that, you can use the Archive Files task (link here), or install 7-Zip using Chocolatey: choco install 7zip.install -y.
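If you'd rather avoid third-party tools entirely, you can drive the .NET compression classes from PowerShell yourself and force / separators in the entry names. A minimal sketch, assuming the layer content sits in a .\layer folder next to the script; the folder and archive names are placeholders:

Add-Type -AssemblyName System.IO.Compression
Add-Type -AssemblyName System.IO.Compression.FileSystem

$source  = (Resolve-Path '.\layer').Path   # placeholder: folder containing nodejs\node_modules\...
$zipPath = Join-Path (Get-Location) 'layer.zip'
Remove-Item $zipPath -ErrorAction Ignore   # Create mode fails if the file already exists

$archive = [System.IO.Compression.ZipFile]::Open($zipPath, 'Create')
try {
    Get-ChildItem -Path $source -Recurse -File | ForEach-Object {
        # Build the entry name relative to $source and swap \ for /
        $entryName = $_.FullName.Substring($source.Length + 1).Replace('\', '/')
        [void][System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile($archive, $_.FullName, $entryName)
    }
}
finally {
    $archive.Dispose()
}

Because each entry name is built by hand, the archive uses / regardless of what the underlying framework would write by default, so it should unzip with the correct structure for the Node.js runtime.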

Related

Google Cloud Platform API for Python and AWS Lambda Incompatibility: Cannot import name 'cygrpc'

I am trying to use Google Cloud Platform (specifically, the Vision API) for Python with AWS Lambda. Thus, I have to create a deployment package for my dependencies. However, when I try to create this deployment package, I get several compilation errors, regardless of the version of Python (3.6 or 2.7). Considering the version 3.6, I get the issue "Cannot import name 'cygrpc'". For 2.7, I get some unknown error with the .path file. I am following the AWS Lambda Deployment Package instructions here. They recommend two options, and both do not work / result in the same issue. Is GCP just not compatible with AWS Lambda for some reason? What's the deal?
Neither Python 3.6 nor 2.7 work for me.
NOTE: I am posting this question here to answer it myself because it took me quite a while to find a solution, and I would like to share my solution.
TL;DR: You cannot compile the deployment package on your Mac or whatever PC you use. You have to do it using a specific OS/"setup", the same one that AWS Lambda uses to run your code. To do this, you have to use EC2.
I will provide here an answer on how to get Google Cloud Vision working on AWS Lambda for Python 2.7. This answer is potentially extendable to other APIs and other programming languages on AWS Lambda.
So my journey to a solution began with this initial posting on GitHub with others who have the same issue. One solution someone posted was:
I had the same issue "cannot import name 'cygrpc'" while running the lambda. Solved it with pip install google-cloud-vision in the AMI amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 instance and exported the lib/python3.6/site-packages to aws lambda. Thank you @tseaver
This is partially correct, unless I read it wrong, but regardless it led me on the right path. You will have to use EC2. Here are the steps I took:
Set up an EC2 instance by going to EC2 on Amazon. Do a quick read about AWS EC2 if you have not already. Set one up for amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2 or something along those lines (i.e. the most updated one).
Get your EC2 .pem file. Go to your Terminal. cd into your folder where your .pem file is. ssh into your instance using
ssh -i "your-file-name-here.pem" ec2-user#ec2-ip-address-here.compute-1.amazonaws.com
Create the following folders on your instance using mkdir: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
On your EC2 instance, cd into google-cloud-vision. Run the command:
pip install google-cloud-vision -t .
Note: if you get "bash: pip: command not found", then run sudo easy_install pip (source).
Repeat step 4 with the following packages, while cd'ing into the respective folder: protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
Copy each folder to your computer. You can do this using the scp command. Again, run this in your local Terminal, not on your EC2 instance and not in the Terminal window you used to access your EC2 instance (below is an example for your "google-cloud-vision" folder; repeat this for every folder):
sudo scp -r -i your-pem-file-name.pem ec2-user@ec2-ip-address-here.compute-1.amazonaws.com:~/google-cloud-vision ~/Documents/your-local-directory/
Stop your EC2 instance from the AWS console so you don't get overcharged.
For your deployment package, you will need a single folder containing all your modules and your Python scripts. To begin combining all of the modules, create an empty folder titled "modules". Copy and paste all of the contents of the "google-cloud-vision" folder into the "modules" folder. Now place only the folder titled "protobuf" from the "protobuf" main folder into the "Google" folder of the "modules" folder. Also from the "protobuf" main folder, paste the protobuf .pth file and the -info folder into the "Google" folder.
For each module after protobuf, copy and paste in the "modules" folder the folder titled with the module name, the .pth file, and the "-info" folder.
You now have all of your modules properly combined (almost). To finish combination, remove these two files from your "modules" folder: googleapis_common_protos-1.5.3-nspkg.pth and google_cloud_vision-0.34.0-py3.6-nspkg.pth. Copy and paste everything in the "modules" folder into your deployment package folder. Also, if you're using GCP, paste in your .json file for your credentials as well.
Finally, put your Python scripts in this folder, zip the contents (not the folder), upload to S3, and paste the link in your AWS Lambda function and get going!
If something here doesn't work as described, please forgive me and either message me or feel free to edit my answer. Hope this helps.
Building off the answer from @Josh Wolff (thanks a lot, btw!), this can be streamlined a bit by using a Docker image that mirrors the Lambda runtime environment (the lambci/lambda build images).
You can either bundle the libraries with your project source or, as I did below in a Makefile script, upload it as an AWS layer.
# pushd/popd are bash builtins, so run this recipe with bash
SHELL := /bin/bash

layer:
	set -e ;\
	docker run -v "$(PWD)/src":/var/task "lambci/lambda:build-python3.6" /bin/sh -c "rm -R python; pip install -r requirements.txt -t python/lib/python3.6/site-packages/; exit" ;\
	pushd src ;\
	zip -r my_lambda_layer.zip python > /dev/null ;\
	rm -R python ;\
	aws lambda publish-layer-version --layer-name my_lambda_layer --description "Lambda layer" --zip-file fileb://my_lambda_layer.zip --compatible-runtimes "python3.6" ;\
	rm my_lambda_layer.zip ;\
	popd ;
The above script will:
Pull the Docker image if you don't have it yet (the above uses Python 3.6)
Delete the python directory (only useful when running a second time)
Install all requirements to the python directory, created in your project's /src directory
ZIP the python directory
Upload the AWS layer
Delete the python directory and the zip file
Make sure your requirements.txt file includes the modules listed above by Josh: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2
There's a fast solution that doesn't require much coding.
Cloud9 runs on an Amazon Linux AMI, so using pip in its virtual environment should make it work.
I created a Lambda from the Cloud9 UI, activated the venv for the EC2 machine from the console, and proceeded to install google-cloud-speech with pip. That was enough to fix the issue.
I was facing the same error using the google-ads API:
{
  "errorMessage": "Unable to import module 'lambda_function': cannot import name 'cygrpc' from 'grpc._cython' (/var/task/grpc/_cython/__init__.py)",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}
My Lambda runtime was Python 3.9 and architecture x86_64.
If somebody encounters a similar ImportModuleError, see my answer here: Cannot import name 'cygrpc' from 'grpc._cython' - Google Ads API

How to auto-generate deploy.cmd in new Azure CLI?

I'm following this guide to create a web app with a custom deploy.cmd file. The article suggests that I can get a copy of the current deploy.cmd file (which I'll then modify) using the following command:
azure site deploymentscript --python
Unfortunately, when I install the Azure CLI using the MSI linked in the article, there is no azure binary on my path. I do have az -- is this a newer version of the same CLI? -- but I can't find an equivalent deployment script generation command for that executable.
I found a deploy.cmd file using Kudu (under D:\home\site\deployments\tools) but am not sure if that's the appropriate file to use. Can anyone suggest the right Azure CLI command for deployment script generation, or confirm that the deploy.cmd file I found is the right one to modify? Thanks in advance!
Based on my knowledge, there is no equivalent to azure site deploymentscript in Azure CLI (2.0), so you cannot generate a custom deployment script with Azure CLI 2.0.
It helps to know the difference between Azure CLI 2.0 (az) and Azure CLI 1.0 (azure):
Azure CLI 2.0: Our next-generation CLI written in Python, for use with the Resource Manager deployment model.
Azure CLI 1.0: Our CLI written in Node.js, for use with both the classic and Resource Manager deployment models.
For your scenario, you could refer to this link to install Azure CLI 1.0.
Instead of using the command line to generate a starter deployment script, there is an alternative approach that is often easier:
Deploy your repo without any deployment scripts.
Go to the site's Kudu Console.
From the Tools menu, choose 'Download deployment script'. You'll get a zip containing the .deployment and deploy.cmd files.
Commit both files at the root of your repo
Tweak them as needed
For more information, please refer to this link.
You can use kuduscript to generate the deployment script.
npm install -g kuduscript
kuduscript --python
Here is the list of options
Options:
-h, --help output usage information
-V, --version output the version number
-r, --repositoryRoot [dir path] The root path for the repository (default: .)
--aspWAP <projectFilePath> Create a deployment script for .NET web application, specify the project file path
--aspNetCore <projectFilePath> Create a deployment script for ASP.NET Core web application, specify the project file path
--aspWebSite Create a deployment script for basic website
--go Create a deployment script for Go website
--node Create a deployment script for node.js website
--ruby Create a deployment script for ruby website
--php Create a deployment script for php website
--python Create a deployment script for python website
--functionApp [projectFilePath] Create a deployment script for function App, specify the project file path if using msbuild
--basic Create a deployment script for any other website
--dotNetConsole <projectFilePath> Create a deployment script for .NET console application, specify the project file path
-s, --solutionFile <file path> The solution file path (sln)
-p, --sitePath <directory path> The path to the site being deployed (default: same as repositoryRoot)
-t, --scriptType <batch|bash|posh> The script output type (default: batch)
-o, --outputPath <output path> The path to output generated script (default: same as repository root)
-y, --suppressPrompt Suppresses prompting to confirm you want to overwrite an existing destination file.
--no-dot-deployment Do not generate the .deployment file.
--no-solution Do not require a solution file path (only for --aspWAP otherwise ignored).

Upload and install to AWS server

I have a test server that checks out my source from github and deploys it locally to my test server.
Now that I have it running and tested, I want to upload that working directory to my AWS servers. How do I do this?
I have access to AWS via PuTTY.
Then, when I have it all on the AWS server, can I install it as I would on any other Ubuntu server?
There are many ways to do this:
Make a tar ball, scp it to your server, untar and install (see the sketch after this list)
Use Ansible to checkout code on your server
Best option: use AWS CodeDeploy. See: Using AWS CodeDeploy to Deploy an Application from GitHub
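A minimal sketch of the first option, run from the Windows machine (it assumes the OpenSSH client and tar.exe that ship with Windows 10+, or PuTTY's pscp/plink equivalents; the key file, host and directory names are placeholders):

# Pack the tested working directory into a tar ball
tar -czf app.tar.gz -C .\my-working-dir .

# Copy it to the server and unpack it there
$keyFile = "$HOME\.ssh\my-aws-key.pem"                       # placeholder key
$remote  = 'ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com'  # placeholder host
scp -i $keyFile app.tar.gz "${remote}:/home/ubuntu/"
ssh -i $keyFile $remote 'mkdir -p ~/app && tar -xzf app.tar.gz -C ~/app'

# Then log in and install it as you would on any other Ubuntu server.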
Another option, which worked best for me, is to create an AWS instance and log into it.
Set that up as you would any other server.
Then you can save the image and use it as a base in future.
I used Ubuntu, but there are numerous AWS images available.

Run PreSync/PostSync commands via WPP deploy.cmd

I'm trying to figure out how to run a pre/post command using the deploy.cmd generated by VS/MSBuild. I understand there are PreSync/PostSync commands which can be set on the msdeploy command line, but this is fixed within the Web Deploy package inside of the x.deploy.cmd.
How do I go about customizing the output of this file so that I can run the deploy command with specific parameters?
The intention is that a non-developer will pick up the package zip file and import the application into IIS. We use IIS to host some Windows services, so to deploy we need to stop and uninstall the service before deployment and then install and restart it in the post-deploy stage.
For certain servers we allow auto deployments from TFS and hook in this pre/post command using the .targets file of the MSBuild WPP pipeline. However, we want this to be available to the manual deploy command files.
PreSync/PostSync are features of the msdeploy command line and are not supported by the package/manifest providers, or even the API. They are equivalent to running msdeploy a second time, so there's no way you'll be able to include their functionality while directly importing the package into IIS.
I'd recommend having a batch/powershell file on the server that the user runs after copying the package into the same directory.
The .cmd file that MSBuild generates is a boilerplate script that you can simply change to call your pre/post PowerShell scripts. Just overwrite the one generated by the build with your custom one, as in the sketch below.
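For illustration, a minimal sketch of the pre/post scripts such a customized deploy.cmd might call (the service name and binary path are placeholders, not anything WPP generates):

# PreDeploy.ps1 -- stop and uninstall the existing Windows service, if present
$serviceName = 'MyWindowsService'    # placeholder
$svc = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($svc) {
    Stop-Service -Name $serviceName -Force
    sc.exe delete $serviceName | Out-Null
}

# PostDeploy.ps1 -- reinstall the service from the deployed binaries and start it
$serviceName = 'MyWindowsService'                        # placeholder
$binPath     = 'C:\Services\MyApp\MyWindowsService.exe'  # placeholder
New-Service -Name $serviceName -BinaryPathName $binPath
Start-Service -Name $serviceName

The customized deploy.cmd would invoke these with powershell.exe -ExecutionPolicy Bypass -File before and after its msdeploy call.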

How do I deploy Node.js applications as a single executable file? [duplicate]

This question already has answers here:
How to make exe files from a node.js app?
(20 answers)
Closed 7 years ago.
Supposed I have written a Node.js application, and I now would like to distribute it. Of course, I want to make it easy for the user, hence I do not want him to install Node.js, run npm install and then manually type node app.js.
What I'd prefer was a single executable file, e.g. an .exe file on Windows.
How could I approach this?
I am aware of this thread, anyway this is only about Windows. How could I achieve this in a platform-independent manner? Any ideas? Best practices? ...?
The perfect solution would be a "compiler" I could give a source folder to. The source folder would contain the app itself in various .js files, the node_modules folder and some metadata, such as package.json. The output should be binaries for various platforms, such as Windows, OS X and Linux.
Oh, and what's important: I do not want to make any changes to the source code, so calls to require with relative paths should still work, even if this relative path is now inside the packaged app.
Any ideas?
PS: I do not want the user to install Node.js independently, it should be included inside the executable as well.
Meanwhile I have found the (for me) perfect solution: nexe, which creates a single executable from a Node.js application including all of its modules.
It's the next best thing to an ideal solution.
First, we're talking about packaging a Node.js app for workshops, demos, etc. where it can be handy to have an app "just running" without the need for the end user to care about installation and dependencies.
You can try the following setup:
Get your apps source code
npm install all dependencies (via package.json) to the local node_modules directory. It is important to perform this step on each platform you want to support separately, in case of binary dependencies.
Copy the Node.js binary – node.exe on Windows, (probably) /usr/local/bin/node on OS X/Linux – to your project's root folder. On OS X/Linux you can find the location of the Node.js binary with which node.
For Windows:
Create a self-extracting archive; 7zip_extra supports a way to execute a command right after extraction (see the sketch after this list): http://www.msfn.org/board/topic/39048-how-to-make-a-7-zip-switchless-installer/.
For OS X/Linux:
You can use tools like makeself or unzipsfx (I don't know if this is compiled with CHEAP_SFX_AUTORUN defined by default).
These tools will extract the archive to a temporary directory, execute the given command (e.g. node app.js) and remove all files when finished.
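For the Windows route, a rough sketch of the 7-Zip SFX approach the linked guide describes (it assumes you have 7z.exe on your PATH and the 7zS.sfx module from the 7-Zip extras package; all file names are placeholders):

# 1. Pack the app folder (source, node_modules, node.exe) into a 7z archive
7z a app.7z .\myapp\*

# 2. Write the SFX config that runs the app right after extraction
@'
;!@Install@!UTF-8!
Title="My Node.js App"
RunProgram="node.exe app.js"
;!@InstallEnd@!
'@ | Set-Content config.txt -Encoding UTF8

# 3. Concatenate SFX module + config + archive into a single executable
cmd /c copy /b 7zS.sfx + config.txt + app.7z MyApp.exe

Running MyApp.exe extracts everything to a temporary directory and launches node.exe app.js, which matches the behavior described above.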
Not to beat a dead horse, but the solution you're describing sounds a lot like Node-Webkit.
From the GitHub page:
node-webkit is an app runtime based on Chromium and node.js. You can write native apps in HTML and JavaScript with node-webkit. It also lets you call Node.js modules directly from the DOM and enables a new way of writing native applications with all Web technologies.
These instructions specifically detail the creation of a single file app that a user can execute, and this portion describes the external dependencies.
I'm not sure if it's the exact solution, but it seems pretty close.
Hope it helps!
JXcore will allow you to turn any Node.js application into a single executable, including all dependencies, on either Windows, Linux, or Mac OS X.
Here is a link to the installer:
https://github.com/jxcore/jxcore-release
And here is a link to how to set it up:
http://jxcore.com/turn-node-applications-into-executables/
It is very easy to use and I have tested it in both Windows 8.1 and Ubuntu 14.04.
FYI: JXcore is a fork of NodeJS so it is 100% NodeJS compatible, with some extra features.
In addition to nexe, browserify can be used to bundle up all your dependencies as a single .js file. This does not bundle the actual Node executable, it just handles the JavaScript side, and like nexe it does not handle native modules. The command-line options for pure Node compilation would be browserify --output bundle.js --bare --dg false input.js.
There are a number of steps you have to go through to create an installer, and it varies for each operating system. For example:
on Mac OS X you need to create a .pkg; there are instructions on how to do that here: https://coolaj86.com/articles/how-to-create-an-osx-pkg-installer.html
on Ubuntu Linux you need to create a .deb; there are instructions on how to do that here: https://coolaj86.com/articles/how-to-create-a-debian-installer.html
on Microsoft Windows you need to create a .exe or .msi; there are instructions on how to do that using the Inno Setup installer here: https://coolaj86.com/articles/how-to-create-an-innosetup-installer.html
You could create a git repo and set up a link to the Node git repo as a dependency. Then any user who clones your repo could also install Node.
#git submodule [--quiet] add [-b branch] [-f|--force]
git submodule add /var/Node-repo.git common
You could easily package up a script to automatically clone the git repo you have hosted somewhere and "install" from that one script file.
#!/bin/sh
#clone git repo
git clone your-repo.git
