The following is in the context of Node.js and a monorepo (based on Lerna).
I have an AWS stack with several AWS Lambda functions inside, deployed by means of AWS CloudFormation. Some of the lambdas are simple (a single small module) and can be inlined:
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-from-wbr-inlinecode
import * as fs from 'fs';
import { Function, Code, Runtime } from '@aws-cdk/aws-lambda';

const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromInline(fs.readFileSync(require.resolve(<relative path to lambda module>), 'utf-8')),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});
Some have no dependencies and are packaged as follows:
const someLambda = new Function(this, 'some-lambda', {
  code: Code.fromAsset(<relative path to folder with lambda>),
  handler: 'index.handler',
  runtime: Runtime.NODEJS_12_X
});
But in the case of relatively large lambdas with dependencies, as I understand it, the only way to package them (proposed by the API) is @aws-cdk/aws-lambda-nodejs:
import * as cdk from "@aws-cdk/core";
import * as lambdaNJS from "@aws-cdk/aws-lambda-nodejs";

export function createNodeJSFunction(
  scope: cdk.Construct, id: string, nodejsFunctionProps: Partial<lambdaNJS.NodejsFunctionProps>
) {
  const params: lambdaNJS.NodejsFunctionProps = Object.assign({
    parcelEnvironment: { NODE_ENV: 'production' },
  }, nodejsFunctionProps);
  return new lambdaNJS.NodejsFunction(scope, id, params);
}
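For completeness, a minimal sketch of how this helper might be called from inside a stack (the entry path and handler name below are assumptions, not from the original setup):

// hypothetical usage inside a Stack constructor; entry/handler values are assumptions
const someLambda = createNodeJSFunction(this, 'some-lambda', {
  entry: 'packages/some-lambda/src/index.ts',
  handler: 'handler',
});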
For standalone packages it works well, but in the case of the monorepo it just hangs on synth of the stack.
I am just looking for alternatives, because I believe it is not a good idea to bundle (with Parcel) back-end sources.
I've created the following primitive library to zip only the required node_modules despite package hoisting.
https://github.com/redneckz/slice-node-modules
Usage (from monorepo root):
$ npx @redneckz/slice-node-modules \
    -e packages/some-lambda/lib/index.js \
    --exclude 'aws-*' \
    --zip some-lambda.zip
--exclude 'aws-*': the AWS SDK is provided by the Lambda runtime by default, so there is no need to package it.
Here is an example using CloudFormation and a template.yaml.
Create a Makefile with the following targets:
# Application
APPLICATION=application-name

# AWS
PROFILE=your-profile
REGION=us-east-1
S3_BUCKET=${APPLICATION}-deploy

install:
	rm -rf node_modules
	npm install

clean:
	rm -rf build

build: clean
	mkdir build
	zip -qr build/package.zip src node_modules
	ls -lah build/package.*

deploy:
	sam package \
		--profile ${PROFILE} \
		--region ${REGION} \
		--template-file template.yaml \
		--s3-bucket ${S3_BUCKET} \
		--output-template-file ./build/package.yaml
	sam deploy \
		--profile ${PROFILE} \
		--region ${REGION} \
		--template-file ./build/package.yaml \
		--stack-name ${APPLICATION}-lambda \
		--capabilities CAPABILITY_NAMED_IAM
Make sure the S3 bucket is created; you could add this step as another target in the Makefile.
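For example, a minimal sketch of such a target, reusing the variables defined above (aws s3 mb simply creates the bucket if it does not already exist):

create-bucket:
	aws s3 mb s3://${S3_BUCKET} --profile ${PROFILE} --region ${REGION}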
How to build and deploy on AWS?
make build
make deploy
I have struggled with this as well, and I was using your slice-node-modules successfully for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.
I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.
For example, from the README, something like this might be in your Lambda function's package.json:
"scripts": {
...
"clean": "rimraf build lambda",
"compile": "tsc -p tsconfig.build.json",
"package": "l2l -i build -o lambda",
"build": "yarn run clean && yarn run compile && yarn run package"
},
In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.
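For a plain-JavaScript project, a hedged variant of those scripts could simply copy the sources instead of compiling them (copyfiles here is an assumed dev dependency, not part of the original setup; a plain cp -r would do as well):

"scripts": {
  "clean": "rimraf build lambda",
  "compile": "copyfiles -u 1 \"src/**/*.js\" build",
  "package": "l2l -i build -o lambda",
  "build": "yarn run clean && yarn run compile && yarn run package"
},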
It's likely that your solution is still working fine for your needs, but I thought I would share this here as an alternative in case others are in a similar situation and have trouble using slice-node-modules.
I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), I have them generated not by Cloud Build itself but in the Dockerfile (a simple Node script executed via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I throw this Dockerfile into Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and so on but found no error (such as a permission error or something similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them in the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to the Artifact Registry and to Cloud Run. On Cloud Run itself the files are gone. I can't check the steps in between, but when building locally the files are generated and they persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build step (the steps before and after are just the normal deployment-to-Cloud-Run steps):
...
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact it could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
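For illustration, the build step from the cloudbuild.yaml above would then look like this (only the --network=cloudbuild flag is new; everything else is unchanged):

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']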
TLDR: How can one send .env files to a submodule during the pm2 deployment process while avoiding the fatal: destination path '/home/projects/client' already exists and is not an empty directory error message from git?
I'm running an automatic deployment script which clones some .env files from my local machine into my production machine, using pm2.
The below script does the following:
Clones the .env files from my local machine onto the host for both my server and the client
Deploys the project with pm2
Is supposed to update the submodules ***
Installs the dependencies for my server and builds the server
Installs the dependencies for my front-end (create-react-app) and builds the front-end
Runs the project
deploy: {
  production: {
    user: "harrison",
    host: hosts,
    key: "~/.ssh/id_rsa",
    ref: "origin/master",
    repo: process.env.GIT_REPO,
    path: process.env.PROJECT_PATH,
    "pre-deploy-local": `./deployEnvs.sh ${process.env.PROJECT_PATH} ${hostsBashArgs} && \
      ./deployClientEnvs.sh ${process.env.PROJECT_PATH} ${hostsBashArgs}`,
    "post-deploy": `source ~/.zshrc && \
      git submodule update --init --recursive && \
      yarn install --ignore-engines && \
      yarn prod:build && \
      cd client && \
      yarn install --ignore-engines && \
      yarn build && \
      cd ../ && \
      yarn prod:serve`,
  },
*** The problem that I'm having relates to the part of the script that downloads the submodule (everything else works fine).
The script tells me that the client folder is not empty (this is because it's necessary for me to send the .env files to the client folder before building the application).
Is it possible for me to somehow send the .env files to the client folder after updating the submodule? How can I send the .env files from my local machine to the submodule and avoid the "this directory is not empty" message?
For clarity, here is my folder structure:
Copy these env files to another directory on the remote server, then have the post-deploy action fetch them at the right time (after the submodules are initialized).
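A minimal sketch of what the adjusted post-deploy could look like, assuming the .env files were staged beforehand in a hypothetical ~/envs/client directory on the server:

// ~/envs/client is a hypothetical staging directory populated before the deploy
"post-deploy": `source ~/.zshrc && \
  git submodule update --init --recursive && \
  cp ~/envs/client/.env* ./client/ && \
  yarn install --ignore-engines && \
  yarn prod:build && \
  cd client && \
  yarn install --ignore-engines && \
  yarn build && \
  cd ../ && \
  yarn prod:serve`,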
Can someone please provide information on how to deploy Strapi to AWS Elastic Beanstalk?
I have found many resources on how to deploy Strapi to other platforms such as DigitalOcean and Heroku, but I am very curious about deploying Strapi to Elastic Beanstalk. Is that possible, and how can I do it?
First you need an EBS application & environment (Web Server) running Node version 12 (as of now). You'll also need to change the package.json in your Strapi project and update the engines part, like this (major version must match EBS Node version):
"engines": {
"node": "12.X.Y", // minor (X) & patch (Y) versions are up to you
...
},
You must switch your project to use NPM instead of Yarn (EBS currently only supports NPM out-of-the-box); to do this I recommend a tool like synp.
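For reference, a hedged example of how synp might be invoked from the project root to generate a package-lock.json from the existing yarn.lock (the flag is from memory, so double-check it against the tool's documentation):

npx synp --source-file yarn.lock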
Then create a Procfile which will describe how you want EBS to run your app:
web: npm run start
Then to deploy manually, you could first (in the project root) run npm install, then npm run build to build the Strapi Admin (React) application. After the Strapi Admin has been built, make sure to remove the node_modules folder, because EBS will automatically install dependencies for you. (*)
The last step is to zip the whole project (again, in the project root, run: zip -r application.zip .), upload the zip to AWS EBS and let it do its magic. Hopefully it should then install dependencies and start your application automatically.
Side note: When using some specific dependencies in your project (one example is sharp), the EBS may fail to install your dependencies, to fix this, add a .npmrc file to your project root with the following contents:
unsafe-perm=true
Side note #2: You need to set some environment variables in the EBS configuration panel in order for Strapi to work (like database credentials etc.).
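For illustration, a minimal sketch of a Strapi v3-style config/database.js that reads such variables (the variable names and the PostgreSQL client are assumptions; adjust them to your own setup):

// config/database.js (Strapi v3 layout; env variable names are assumptions)
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        host: env('DATABASE_HOST'),
        port: env.int('DATABASE_PORT', 5432),
        database: env('DATABASE_NAME'),
        username: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});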
(*) Although you could include node_modules in your app, zip it, and upload it to EBS (which could work), sometimes zipping node_modules may break some dependencies, so I recommend removing it and letting EBS install dependencies for you.
If you want to deploy Strapi on Elastic Beanstalk with AWS CodePipeline the following steps worked for me:
Navigate to Elastic Beanstalk and Create a new application with the corresponding Node version for the application
Platform: Node.js
Platform Branch: Node.js 12 running on 64bit Amazon Linux 2
Platform Version: 5.4.6
Select Sample Application to start (we will connect this to AWS CodePipeline in a later step)
Set up the code repository on GitHub (if one doesn’t already exist)
Navigate to AWS CodeBuild and select create build project
In the Source Section connect to your Github Repository
In the Environment Section select the following configurations
Environment Image: Managed image
Operating System: Ubuntu
Runtimes: Standard
Image: aws/codebuild/standard:5.0
Role name: AWS will create one for you
Buildspec
Select “Use a buildspec file” - We will have to add a buildspec.yml file to our project in step 4
Leave the other default settings and continue with Create build project
Update your Strapi Code
Add the Procfile, .npmrc, and update the package.json file accordingly as suggested by Richárd Szegh
Add the .ebignore file for Elastic Beanstalk
Add the following buildspec.yml and .ebignore file into your project
buildspec.yml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - rm -rf node_modules
artifacts:
  files:
    - '**/*'
.ebignore
# dependencies
node_modules/
# repository/project stuff
.idea/
.git/
.gitlab-ci.yml
README.md
# misc
.DS_Store
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
.env.local
.env.development.local
.env.test.local
.env.production.local
# non prod env files
.env.development
.env.test
Navigate to AWS CodePipeline
Click Create pipeline
Pipeline Settings
Pipeline name: Name accordingly
Service role: New Service Role
Role name: AWS will create a default name for you
Source Stage:
Connect to your repository, in this case GitHub (Version 2)
Connect To Github
Repository Name: select repository accordingly
Branch Name: select branch accordingly
Build Stage:
Build Provider: AWS CodeBuild
Region: Select the region where you initially created the CodeBuild project in Step 3
Project Name: Select the CodeBuild project you created
Environment Variables: Add any environment variables
Deploy Stage:
Deploy Provider: AWS Elastic Beanstalk
Region: Select the region where you initially created the EB
Application name: Select the Application Name you created in Step 1
Environment name: Select the Environment Name you created in Step 1
Create pipeline
Now you can push changes to the repository and CodePipeline will pick up the changes, run the build, and deploy to Elastic Beanstalk
This seems to work for me on an AWS Elastic Beanstalk t3.small instance. I wanted to use the Free Tier t3.micro, but it didn't work for me; it seems the t3.micro's 1 GB of memory was not enough, while the t3.small has 2 GB.
1) Added a deploy script to package.json:
"scripts": {
  "deploy": "NODE_ENV=production npm run build && NODE_ENV=production npm run start"
},
2) Created a .npmrc file and added:
unsafe-perm=true
3) Created a Procfile and added:
web: npm run deploy
4) I used AWS CodePipeline to trigger an EB deploy when I push an update to Bitbucket (I can also disable the pipeline when not in use to save $$$).
5) I used the AWS RDS PostgreSQL Free Tier; the latest version of PostgreSQL didn't have a Free Tier option, but a previous version did have the Free Tier checkbox to select it.
6) I used an AWS S3 bucket to store images.
I am inexperienced with Dev Ops, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# Entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy together rather than separate. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing and running gcloud app deploy, things kicked off, however for the last several hours this is what I've seen in my terminal window:
Hours seems like a higher magnitude of time than what seems right for deploying an application, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload (a sketch follows below). LINK
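For illustration, a minimal .gcloudignore sketch for this kind of Node project (the exact entries are assumptions; keep whatever your Docker build actually needs):

# .gcloudignore (entries are assumptions)
.git
.gitignore
node_modules/
client/node_modules/
npm-debug.log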
You may need to change the file structure of your current project.
Additionally, it might be worth researching the use of the Standard nodejs10 runtime. It uploads and starts much faster than the Flexible alternative (a custom env is part of App Engine Flex). Then you can deploy each part to a different service, for example as sketched below.
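A minimal sketch of what a Standard-environment app.yaml for the Node backend could look like when split into its own service (the service name and instance class here are assumptions):

# app.yaml for the backend, deployed as a separate service (names are assumptions)
runtime: nodejs10
service: api
instance_class: F1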
I'm getting an awfully unfortunate error on Lambda:
Unable to import module 'lib/index': Error
at require (internal/module.js:20:19)
Which is strange because there is definitely a function called handler getting exported from lib/index...not sure if the whole subdirectory thing has been an issue for others so I wanted to ask.
sam-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Does something crazy
Resources:
  SomeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lib/index.handler
      Role: arn:aws:iam::...:role/lambda-role
      Runtime: nodejs6.10
      Timeout: 10
      Events:
        Timer:
          Type: Schedule
          Properties:
            Schedule: rate(1 minute)
Module structure
|-- lib
|   `-- index.js
`-- src
    `-- index.js
I have nested it here because I'm transpiling ES6 during my build process using the following, excerpt from package.json:
"build": "babel src -d lib"
buildspec.yaml
version: 0.1

phases:
  install:
    commands:
      - npm install
      - aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm prune --production
artifacts:
  files:
    - 'node_modules/**/*'
    - 'lib/**/*'
    - 'compiled-template.yaml'
The aws cloudformation package command ships the built assets, but in the buildspec shown it runs in the install phase, before the build has produced them. Moving it to post_build ensures it captures everything needed, including the lib/index in question:
post_build:
  commands:
    - npm prune --production
    - aws cloudformation package ...
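Put together, the relevant phases of the buildspec could then look roughly like this (reusing the exact flags from the original install phase):

phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm prune --production
      - aws cloudformation package --template-file sam-template.yaml --s3-bucket some-bucket --output-template-file compiled-sam-template.yaml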
You are trying to import lib/index, which will try to find a package named lib (as if you had run npm install --save lib), but you are most likely trying to import a file relative to your own project, and you are not giving it a relative path in your import.
Change 'lib/index' to './lib/index' - or '../lib/index' etc. - depending where it is and see if it helps.
By the way, if you're trying to import the file lib/index.js and not a directory lib/index/, then you may use the shorter ./lib path, as in:
const lib = require('./lib');
Of course you didn't show even a single line of your code so I can only guess what you're doing.
Your handler should be lib/index.handler, considering your index.js file is in the lib subdirectory.
The reference to the handler must be relative to the root of the Lambda package being executed.
Example:
If the lambda file is placed at the path:
x-lambda/yyy/lambda.py
then the handler must be:
yyy/lambda.lambda_handler
This assumes that lambda.py defines the function lambda_handler().