Deploy Strapi to Elastic Beanstalk - node.js

Can someone please provide information on how to deploy Strapi to AWS Elastic Beanstalk?
I have found many resources on how to deploy Strapi to other platforms such as DigitalOcean and Heroku, but I am curious about deploying Strapi to Elastic Beanstalk. Is that possible, and how can I do it?

First you need an Elastic Beanstalk (EB) application & environment (Web Server) running Node version 12 (as of now). You'll also need to change the package.json in your Strapi project and update the engines part, like this (the major version must match the EB Node version):
"engines": {
"node": "12.X.Y", // minor (X) & patch (Y) versions are up to you
...
},
You must switch your project to use npm instead of Yarn (EB currently only supports npm out of the box); to do this I recommend a tool like synp.
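For reference, a hedged sketch of the Yarn-to-npm switch with synp (assuming yarn.lock sits in the project root; synp's --source-file flag writes a package-lock.json next to it):
npx synp --source-file ./yarn.lock   # generate package-lock.json from yarn.lock
rm yarn.lock                         # drop the Yarn lockfile so EB uses npm
npm install                          # verify the lockfile resolves cleanly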
Then create a Procfile, which tells EB how to run your app:
web: npm run start
Then, to deploy manually, you could first (in the project root) run npm install, then npm run build to build the Strapi Admin (React) application. After the Strapi Admin has been built, make sure to remove the node_modules folder, because EB will automatically install dependencies for you. (*)
The last step is to zip the whole project (again, in the project root, run: zip -r application.zip .), upload the zip to Elastic Beanstalk, and let it do its magic. It should then install dependencies and start your application automatically.
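Putting the manual steps together, a minimal sketch (run from the project root; application.zip is an arbitrary name, and the -x flag just keeps the git history out of the bundle):
npm install                            # install dependencies locally
NODE_ENV=production npm run build      # build the Strapi Admin panel
rm -rf node_modules                    # EB installs dependencies itself
zip -r application.zip . -x '*.git*'   # bundle the project for upload
Then upload application.zip through the EB console ("Upload and deploy").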
Side note: when using some specific dependencies in your project (one example is sharp), EB may fail to install your dependencies; to fix this, add a .npmrc file to your project root with the following contents:
unsafe-perm=true
Side note #2: you need to set some environment variables in the EB configuration panel in order for Strapi to work (like database credentials etc.).
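For illustration, the variables can also be set from the AWS CLI instead of the console; a sketch using conventional Strapi database variable names (the environment name and all values are placeholders):
aws elasticbeanstalk update-environment \
  --environment-name my-strapi-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=DATABASE_HOST,Value=mydb.xxxxxx.us-east-1.rds.amazonaws.com \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=DATABASE_USERNAME,Value=strapi \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=DATABASE_PASSWORD,Value=changeme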
(*) Although you could include node_modules in your app, zip it, and upload it to EB (which can work), zipping node_modules sometimes breaks some dependencies, so I recommend removing it and letting EB install dependencies for you.

If you want to deploy Strapi on Elastic Beanstalk with AWS CodePipeline the following steps worked for me:
Navigate to Elastic Beanstalk and Create a new application with the corresponding Node version for the application
Platform: Node.js
Platform Branch: Node.js 12 running on 64bit Amazon Linux 2
Platform Version: 5.4.6
Select Sample Application to start (we will connect this to AWS CodePipeline in a later step)
Set up the code repository on GitHub (if one doesn’t already exist)
Navigate to AWS CodeBuild and select create build project
In the Source Section connect to your GitHub repository
In the Environment Section select the following configurations
Environment Image: Managed image
Operating System: Ubuntu
Runtimes: Standard
Image: aws/codebuild/standard:5.0
Role name: AWS will create one for you
Buildspec
Select “Use a buildspec file” - We will have to add a buildspec.yml file to our project in step 4
Leave the other default settings and continue with Create build project
Update your Strapi Code
Add the Procfile and .npmrc, and update the package.json file, as suggested by Richárd Szegh
Add the following buildspec.yml (for CodeBuild) and .ebignore (for Elastic Beanstalk) files to your project
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - rm -rf node_modules
artifacts:
  files:
    - '**/*'
.ebignore
# dependencies
node_modules/
# repository/project stuff
.idea/
.git/
.gitlab-ci.yml
README.md
# misc
.DS_Store
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
.env.local
.env.development.local
.env.test.local
.env.production.local
# non prod env files
.env.development
.env.test
Navigate to AWS CodePipeline
Click Create pipeline
Pipeline Settings
Pipeline name: Name accordingly
Service role: New Service Role
Role name: AWS will create a default name for you
Source Stage:
Connect to your repository, in this case GitHub (Version 2), via Connect to GitHub
Repository Name: select repository accordingly
Branch Name: select branch accordingly
Build Stage:
Build Provider: AWS CodeBuild
Region: Select the region where you created the CodeBuild project in Step 3
Project Name: Select the CodeBuild project you created
Environment Variables: Add any environment variables
Deploy Stage:
Deploy Provider: AWS Elastic Beanstalk
Region: Select the region where you initially created the EB environment
Application name: Select the Application Name you created in Step 1
Environment name: Select the Environment Name you created in Step 1
Create pipeline
Now you can push changes to the repository and CodePipeline will pick up the changes, run the build, and deploy to Elastic Beanstalk

This seems to work for me on an AWS Elastic Beanstalk t3.small instance. I wanted to use the free-tier t3.micro, but it didn't work for me; the t3.micro's 1GB of memory was apparently not enough, while the t3.small has 2GB.
1) Add a deploy script to package.json:
"scripts": {
  "deploy": "NODE_ENV=production npm run build && NODE_ENV=production npm run start"
},
2) Create a .npmrc file and add:
unsafe-perm=true
3) Create a Procfile and add:
web: npm run deploy
I used AWS CodePipeline to trigger the EB deploy when I push an update to Bitbucket (I can also disable the pipeline when it's not in use to save $$$)
I used the AWS RDS PostgreSQL free tier; the latest PostgreSQL version didn't have a free-tier option, but a previous version did have the free-tier checkbox to select it
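For illustration, a minimal Strapi v3 config/database.js that reads the RDS credentials from environment variables (a sketch; the variable names are conventional, not mandated, and the pg client is assumed to be installed):
// config/database.js (Strapi v3)
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',                       // RDS PostgreSQL
        host: env('DATABASE_HOST'),
        port: env.int('DATABASE_PORT', 5432),
        database: env('DATABASE_NAME', 'strapi'),
        username: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});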
I used an AWS S3 bucket to store images
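And a hedged sketch of wiring Strapi v3 uploads to S3 via the strapi-provider-upload-aws-s3 package (the bucket and variable names are placeholders):
// config/plugins.js (Strapi v3), after: npm install strapi-provider-upload-aws-s3
module.exports = ({ env }) => ({
  upload: {
    provider: 'aws-s3',
    providerOptions: {
      accessKeyId: env('AWS_ACCESS_KEY_ID'),
      secretAccessKey: env('AWS_ACCESS_SECRET'),
      region: env('AWS_REGION', 'us-east-1'),
      params: {
        Bucket: env('AWS_BUCKET_NAME'),   // the S3 bucket that stores the images
      },
    },
  },
});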

Related

Problem deploying MERN app with Docker to GCP App Engine - should deploy take multiple hours?

I am inexperienced with DevOps, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy together rather than separate. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing and running gcloud app deploy, things kicked off; however, the deploy has been sitting in my terminal window for the last several hours.
Hours seems like an order of magnitude more time than deploying an application should take, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload.
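For illustration, a minimal .gcloudignore for this layout (the entries are assumptions about what a MERN project like this would exclude):
# .gcloudignore - not uploaded by gcloud app deploy
.git/
.gitignore
node_modules/
client/node_modules/
docker-compose.yml
docker-compose.debug.yml
npm-debug.log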
You may need to change the file structure of your current project.
Additionally, it might be worth researching the Standard nodejs10 runtime further. It uploads and starts much faster than the Flexible alternative (a custom env is part of App Engine Flex). Then you can deploy each part to a different service.
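If you do move to the standard environment, a minimal app.yaml sketch for the Node side (nodejs10 being the then-current standard runtime; no Dockerfile is involved):
# app.yaml - App Engine standard environment, no Dockerfile needed
runtime: nodejs10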

GCP Cloud build ignores timeout settings

I use Cloud Build for copying the configuration file from storage and deploying the app to App Engine flex.
The problem is that the build fails every time it lasts more than 10 minutes. I've specified timeout in my cloudbuild.yaml, but it looks like it's ignored. I also configured app/cloud_build_timeout and set it to 1000. Could somebody explain to me what is wrong here?
My cloudbuild.yaml looks like this:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ["cp", "gs://myproj-dev-247118.appspot.com/.env.cloud", ".env"]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    timeout: 1000s
timeout: 1600s
My app.yaml uses a custom env that is built from a Dockerfile, and looks like this:
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  NODE_ENV: dev
The Dockerfile also contains nothing special, just installing dependencies and building the app:
FROM node:10 as front-builder
WORKDIR /app
COPY front-end .
RUN npm install
RUN npm run build:web
FROM node:12
WORKDIR /app
COPY api .
RUN npm install
RUN npm run build
COPY .env .env
EXPOSE 8080
COPY --from=front-builder /app/web-build web-build
CMD npm start
When running gcloud app deploy directly for an App Engine Flex app, from your local machine for example, under the hood it spawns a Cloud Build job to build the image that is then deployed to GAE (you can see that build in Cloud Console > Cloud Build). This build has a 10min timeout that can be customized via:
gcloud config set app/cloud_build_timeout 1000
Now, the issue here is that you're issuing the gcloud app deploy command from within Cloud Build itself. Since each individual Cloud Build step is running in its own Docker container, you can't just add a previous step to customize the timeout since the next one will use the default gcloud setting.
You've got several options to solve this:
Add a build step to first build the image with docker build, then upload it to Google Container Registry. You can set a custom timeout on these steps to fit your needs. Finally, deploy your app with gcloud app deploy --image-url=IMAGE-URL (see the sketch after this list).
Create your own custom gcloud builder where app/cloud_build_timeout is set to your custom value. You can derive it from the default gcloud builder Dockerfile and add /builder/google-cloud-sdk/bin/gcloud config set app/cloud_build_timeout 1000
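A hedged cloudbuild.yaml sketch of the first option (the image name and timeout values are placeholders; $PROJECT_ID is a built-in Cloud Build substitution):
steps:
  # build the image ourselves so the step-level timeout applies
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/my-app", "."]
    timeout: 1600s
  # push it so App Engine can pull it
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/my-app"]
  # deploy the prebuilt image - no nested 10-minute build is spawned
  - name: gcr.io/cloud-builders/gcloud
    args: ["app", "deploy", "--image-url=gcr.io/$PROJECT_ID/my-app"]
timeout: 2400s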
Just in case you are using Google Cloud Build with Skaffold, remember to check skaffold.yaml and confirm you have set the timeout option inside the googleCloudBuild section of build. For example:
build:
  googleCloudBuild:
    timeout: 3600s
Skaffold will ignore the gcloud config of the machine where you are running the deploy. For example, it will ignore this CLI command: gcloud config set app/cloud_build_timeout 3600

AdonisJS app deployment on AWS ElasticBeanstalk using AWS CodePipeline fails - missing .env

I've recently started using AdonisJS for API development.
I'm using AWS Elastic Beanstalk together with AWS CodeCommit and AWS CodePipeline to deploy new code on each git push.
Since the .env file is not present in the git repository, I've added env variables through the Elastic Beanstalk web console.
But deployment failed when I tried to run the node ace migration:run command.
Activity execution failed, because:
Error: ENOENT: no such file or directory, open '/tmp/deployment/application/.env'
1 Env.load
  /tmp/deployment/application/node_modules/@adonisjs/framework/src/Env/index.js:110
2 new Env
  /tmp/deployment/application/node_modules/@adonisjs/framework/src/Env/index.js:42
3 Object.app.singleton [as closure]
  /tmp/deployment/application/node_modules/@adonisjs/framework/providers/AppProvider.js:29
4 Ioc._resolveBinding
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Ioc/index.js:231
5 Ioc.use
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Ioc/index.js:731
6 AppProvider.boot
  /tmp/deployment/application/node_modules/@adonisjs/framework/providers/AppProvider.js:337
7 _.filter.map
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Registrar/index.js:147
8 arrayMap
  /tmp/deployment/application/node_modules/lodash/lodash.js:639
(ElasticBeanstalk::ExternalInvocationError)
Then I tried to add the ENV_SILENT=true flag before each command, as stated in the AdonisJS documentation, but that did not help.
So then I tried to upload the .env file to an S3 bucket and copy its contents during deployment.
But it seems it does not work, since I'm getting the same error (no .env file).
These are my 2 config files from the .ebextensions folder
01_copy_env.config (I'm using x-xxxxxxxxxxxx here for security)
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-x-xxxxxxxxxxxx"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
files:
  "/tmp/deployment/application/.env":
    mode: "000755"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://elasticbeanstalk-us-east-x-xxxxxxxxxxxx.s3.us-east-2.amazonaws.com/variables.txt
02_init.config
container_commands:
  01_node_binary:
    command: "ln -sf `ls -td /opt/elasticbeanstalk/node-install/node-v10* | head -1`/bin/node /bin/node"
    leader_only: true
  02_migration:
    command: "node ace migration:run"
  03_init_seed:
    command: "node ace seed"
The only time the whole thing works is when I add the .env file to git and deploy it with the rest of the code. But that is not the way to go, so if anyone knows a solution to my problem I would really appreciate it. Thanks!
Add a new variable ENV_SILENT=true to your global environment variables in the Elastic Beanstalk console, as described in the Adonis documentation.
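For illustration, the same variable can also be set in code through an .ebextensions config file instead of the web console; a minimal sketch (the file name is arbitrary):
# .ebextensions/00_env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    ENV_SILENT: "true"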

How do you deploy a Node.js application to Google App Engine without any Docker files?

I am deploying a new Node.js application to Google App Engine. I use Docker for local development so I have the following files in my root folder:
Dockerfile
.dockerignore
docker-compose.yml
docker-compose.debug.yml
When I run gcloud app deploy, I get the following error:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /Users/Nag/Code/project/web-service/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
As per the Google App Engine app.yaml settings, I can include a skip_files key to ignore certain files. I included the skip_files key in my app.yaml like this, but it still throws the same error:
# [START app_yaml]
runtime: nodejs
env: flex
skip_files:
  - /docker/i
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
env_variables:
  NODE_ENV: development
# [END app_yaml]
I would like to continue using the nodejs runtime when the application is running on App Engine and use Docker only on my local machine. I can't git-ignore the Docker files because other developers on the team need access to them in the repository. Can someone please help me understand why App Engine still recognizes my Docker files even though I am skipping them? Thanks.

Continuous Deployment of a Node.js app to Heroku using GitLab

There are tutorials covering the deployment of Ruby and Python apps, but I can't find good documentation or examples for Node.js.
http://docs.gitlab.com/ce/ci/examples/test-and-deploy-python-application-to-heroku.html
http://docs.gitlab.com/ce/ci/examples/test-and-deploy-ruby-application-to-heroku.html
Does anyone have a .gitlab-ci.yml to share?
create a project:
npm init -y
npm i   # install dependencies
add the following lines to package.json:
"engines": {
"node": "8.12.0", //node version
"npm": "6.4.1" //npm version
},
"scripts": {
"start": "node app.js", //heroku will using the following script to run node app
}
create a Heroku project
select NEW -> Create new app
set the App name & choose a region
click on Create app
GitLab setup: create a new repo, or add to an existing project, as described on the GitLab website
create a .gitlab-ci.yml file
image: node:latest

stages:
  - production

production:
  type: deploy
  stage: production
  image: ruby:latest
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=APPNAME_OF_HEROKU_APP --api-key=$HEROKU_API_KEY # for security, add the Heroku API key to the CI/CD settings
  only:
    - master # branch name to deploy to Heroku
Setting HEROKU_API_KEY:
Settings -> CI/CD -> Variables -> Expand
Input variable key -> the variable name used in .gitlab-ci.yml
Input variable value -> your Heroku API key
Get the Heroku API key:
Heroku Dashboard -> Account Settings
set the Runner on GitLab:
Settings -> CI/CD -> Runners -> Expand
Specific runners:
install gitlab-runner for your platform (Windows, Linux, macOS) following the setup steps in the GitLab docs
Shared runners:
make sure shared runners are enabled (the toggle reads "Disable shared Runners" when they are already on)
push the files to GitLab and it will automatically deploy to Heroku:
git add .                 # add all the files
git commit -m "message"   # commit the files
git push origin master
I have found a detailed article for continuous integration on Heroku:
https://medium.com/@seulkiro/deploy-node-js-app-with-gitlab-ci-cd-214d12bfeeb5
Sample .gitlab-ci.yml file :
https://gitlab.com/seulkiro/node-heroku-dpl
