pm2 deploy master latest - node.js

I must be missing something. I have a basic ecosystem.json layout, with just one extra post-deploy step that runs webpack to build the production bundle.
I get no errors from pm2 deploy ecosystem.json staging, yet I never get the current ref. The only time I get the latest code is when I wipe the directories out and run pm2 deploy ecosystem.json staging setup.
I've tried pm2 deploy ecosystem.json staging update with no luck.
Deployment is stuck at the original deploy commit. I confirm this with pm2 deploy ecosystem.json current.
What am I missing?
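For reference, a minimal sketch of the kind of ecosystem.json deploy block described above; the user, host, repo, and path values are placeholders, not taken from the question, and npm run build stands in for the webpack production build:

{
  "apps": [{ "name": "app", "script": "server.js" }],
  "deploy": {
    "staging": {
      "user": "deploy",
      "host": "staging.example.com",
      "ref": "origin/master",
      "repo": "git@example.com:user/repo.git",
      "path": "/var/www/app",
      "post-deploy": "npm install && npm run build && pm2 startOrRestart ecosystem.json --env staging"
    }
  }
}

On each pm2 deploy ecosystem.json staging run, pm2 is expected to fetch and check out the head of ref before running post-deploy.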

Related

Deploy Strapi to Elastic Beanstalk

Can someone please provide information on how to deploy Strapi to AWS Elastic Beanstalk?
I have found many resources on how to deploy Strapi to many other platforms, such as DigitalOcean and Heroku, but I am very curious about deploying Strapi to Elastic Beanstalk. Is that possible, and if so, how do I do it?
First you need an Elastic Beanstalk (EB) application & environment (Web Server) running Node version 12 (as of now). You'll also need to change the package.json in your Strapi project and update the engines part, like this (the major version must match the EB Node version):
"engines": {
"node": "12.X.Y", // minor (X) & patch (Y) versions are up to you
...
},
You must switch your project to use NPM instead of Yarn (EB currently only supports NPM out-of-the-box). To do this, I recommend a tool like synp, as sketched below.
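A sketch of that conversion, assuming synp's standard --source-file flag (run from the project root, then remove yarn.lock):

npx synp --source-file ./yarn.lock   # writes a package-lock.json next to yarn.lock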
Then create a Procfile, which describes how you want EB to run your app:
web: npm run start
Then, to deploy manually, you could first run npm install (in the project root), then npm run build to build the Strapi Admin (React) application. After the Strapi Admin has been built, make sure to remove the node_modules folder, because EB will automatically install dependencies for you. (*)
The last step is to zip the whole project (again, from the project root, run: zip -r application.zip .), upload the zip to AWS EB, and let it do its magic. It should then install dependencies and start your application automatically.
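Collected into one shell sketch (nothing here beyond the steps above):

npm install
npm run build          # builds the Strapi Admin (React) app
rm -rf node_modules    # EB reinstalls dependencies itself
zip -r application.zip .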
Side note: when using certain dependencies in your project (one example is sharp), EB may fail to install your dependencies. To fix this, add a .npmrc file to your project root with the following contents:
unsafe-perm=true
Side note #2: you need to set some environment variables in the EB configuration panel in order for Strapi to work (like database credentials etc.).
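For example, a Strapi v3 config/database.js that reads such variables could look like the following sketch; the connector, client, and variable names are assumptions that depend on your Strapi version and database:

// config/database.js
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',                   // assumption: PostgreSQL
        host: env('DATABASE_HOST'),           // set these in the EB configuration panel
        port: env.int('DATABASE_PORT', 5432),
        database: env('DATABASE_NAME'),
        username: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});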
(*) Although you could include node_modules in the zip you upload to EB (which can work), zipping node_modules sometimes breaks dependencies, so I recommend removing it and letting EB install dependencies for you.
If you want to deploy Strapi on Elastic Beanstalk with AWS CodePipeline, the following steps worked for me:
Navigate to Elastic Beanstalk and Create a new application with the corresponding Node version for the application
Platform: Node.js
Platform Branch: Node.js 12 running on 64bit Amazon Linux 2
Platform Version: 5.4.6
Select Sample Application to start (we will connect this to AWS CodePipeline in a later step)
Set up the code repository on GitHub (if one doesn’t already exist)
Navigate to AWS CodeBuild and select create build project
In the Source Section connect to your Github Repository
In the Environment Section select the following configurations
Environment Image: Managed image
Operating System: Ubuntu
Runtimes: Standard
Image: aws/codebuild/standard:5.0
Role name: AWS will create one for you
Buildspec
Select “Use a buildspec file” - We will have to add a buildspec.yml file to our project in Step 4
Leave the other default settings and continue with Create build project
Update your Strapi Code
Add the Procfile and .npmrc, and update the package.json file accordingly, as suggested by Richárd Szegh
Add the .ebignore file for Elastic Beanstalk
Add the following buildspec.yml and .ebignore file into your project
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - rm -rf node_modules
artifacts:
  files:
    - '**/*'
.ebignore
# dependencies
node_modules/
# repository/project stuff
.idea/
.git/
.gitlab-ci.yml
README.md
# misc
.DS_Store
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
.env.local
.env.development.local
.env.test.local
.env.production.local
# non prod env files
.env.development
.env.test
Navigate to AWS CodePipeline
Click Create pipeline
Pipeline Settings
Pipeline name: Name accordingly
Service role: New Service Role
Role name: AWS will create a default name for you
Source Stage:
Connect to your repository, in this case GitHub (Version 2)
Connect To Github
Repository Name: select repository accordingly
Branch Name: select branch accordingly
Build Stage:
Build Provider: AWS CodeBuild
Region: Select the region where you created the CodeBuild project in Step 3
Project Name: Select the CodeBuild project you created
Environment Variables: Add any environment variables
Deploy Stage:
Deploy Provider: AWS Elastic Beanstalk
Region: Select the region where you initially created the EB
Application name: Select the Application Name you created in Step 1
Environment name: Select the Environment Name you created in Step 1
Create pipeline
Now you can push changes to the repository and CodePipeline will pick up the changes, run the build, and deploy to Elastic Beanstalk
This seems to work for me on an AWS Elastic Beanstalk t3.small instance. I wanted to use the free-tier t3.micro, but its 1GB of memory was not enough; the t3.small has 2GB.
Added a deploy script to package.json:
"scripts": {
"deploy": "NODE_ENV=production npm run build && NODE_ENV=production npm run start"
},
Created a .npmrc file and added:
unsafe-perm=true
Created a Procfile and added:
web: npm run deploy
I used AWS CodePipeline to trigger an EB deploy when I push an update to Bitbucket (I can also disable the pipeline when it's not in use to save $$$)
I used the AWS RDS PostgreSQL free tier; the latest PostgreSQL version didn't have a free-tier option, but a previous version did have the free-tier checkbox to select
I used AWS S3 bucket to store images
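The answer doesn't name the upload provider; assuming the strapi-provider-upload-aws-s3 package, a Strapi v3 config/plugins.js sketch would be:

// config/plugins.js
module.exports = ({ env }) => ({
  upload: {
    provider: 'aws-s3',
    providerOptions: {
      accessKeyId: env('AWS_ACCESS_KEY_ID'),   // hypothetical variable names
      secretAccessKey: env('AWS_ACCESS_SECRET'),
      region: env('AWS_REGION'),
      params: {
        Bucket: env('AWS_BUCKET_NAME'),
      },
    },
  },
});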

Getting the following error while deploying a node-react app to heroku

I am trying to deploy a node-react app on heroku.
If you are deploying your app as a plain node.js application, you can run into this issue. Create your heroku app using the create-react-app buildpack instead:
https://elements.heroku.com/buildpacks/mars/create-react-app-buildpack
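A sketch of wiring that buildpack up with the Heroku CLI (app-name is a placeholder):

heroku create app-name --buildpack mars/create-react-app
# or, for an existing app:
heroku buildpacks:set mars/create-react-app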
Hope this helps!
A 503 error code means the server cannot handle the request because it is either unresponsive or overloaded, as stated on MDN. This could be the result of incorrectly initializing the application on Heroku. Run the following from your project directory in the terminal to deploy to Heroku:
cd my-project
git init
heroku git:remote -a app-name
git add .
git commit -am "comment"
git push heroku master
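If the 503 persists after a clean deploy, the Heroku logs usually show why the dyno failed to boot; a quick check (app-name is a placeholder):

heroku logs --tail --app app-name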
More details about deploying a Node app to Heroku can be found here.

Problem deploying MERN app with Docker to GCP App Engine - should deploy take multiple hours?

I am inexperienced with Dev Ops, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy them together rather than separately. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing gcloud and running gcloud app deploy, things kicked off, but the deploy has now been sitting in my terminal window for the last several hours.
Hours seems like a much higher magnitude of time than a deploy should take, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload. LINK
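A sketch of such a file; the entries are assumptions based on the MERN layout described in the question (gitignore-style syntax, so node_modules/ matches at any depth, including client/node_modules):

# .gcloudignore
.git
.gitignore
node_modules/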
You may need to change the file structure of your current project.
Additionally, it might be worth researching the Standard nodejs10 runtime further. It uploads and starts much faster than the Flexible alternative (runtime: custom is part of App Engine Flex, so it cannot be combined with env: standard as in the app.yaml above). Then you can deploy each part to a different service.
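For the Standard route the Dockerfile is dropped entirely; a minimal app.yaml sketch:

# app.yaml
runtime: nodejs10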

heroku deploy docker image with github

I have a nodejs express app serving a site. I deployed it with Heroku, using buildpack/nodejs and GitHub. Every time I push to GitHub, Heroku detects the push and runs the npm start script.
The problem is that I need to switch to a Docker image containing the nodejs app. I did this and it works locally; I can run it with docker run -d -p 8000:8000 exporter.
I added the docker.yml file to the root folder and pushed to GitHub. But Heroku still runs the npm script in the package.json, ignoring the docker.yml.
Is there a way to make Heroku create the container from the Dockerfile every time I push to GitHub?
For Heroku to understand your heroku.yml file (note the name: it must be heroku.yml, not docker.yml), you need a few things.
First off, you need to make sure that the Dockerfile is in the root directory.
Second, you need to ensure the heroku.yml builds and runs the docker environment.
Finally, make sure you set your heroku stack to container.
So, given that we want to ensure the directory tree looks like this:
|-my_app
  |-app_contents
  |-Dockerfile
  |-heroku.yml
  |-etc...
And that the heroku.yml file looks something like this:
build:
  docker:
    web: Dockerfile
run:
  web: docker run -d -p 8000:8000 exporter
and finally run this in your heroku repo:
heroku stack:set container
Then just make sure you push your changes up.
If this doesn't help, I would recommend updating your post with the following:
The file tree
The Dockerfile
The heroku.yml file
Thanks to Taylor Cochran's answer I managed to solve the problem.
I first tried to follow this link: https://devcenter.heroku.com/articles/container-registry-and-runtime
It worked, but I had to do it from the CLI.
After that I removed the entire project and remade it. I followed Taylor Cochran's instructions and pushed from the Heroku CLI. I saw that it worked, and I then added the GitHub deploy. Now, every time I push to GitHub, the new Docker container is automatically built and deployed by Heroku.
NB: I changed web: docker run -d -p 8000:8000 exporter to web: npm start
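So the working heroku.yml presumably ended up looking like this (a sketch combining the answer's file with that change):

build:
  docker:
    web: Dockerfile
run:
  web: npm start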

how to run a node npm script in a deployment hook in codeship

As part of my codeship to heroku deployment hook, I'd like to run database updates/migrations before the app starts. How can I trigger an npm script or a command line script in heroku using the codeship deployment step?
I tried putting it in as part of my npm start script, but it seems to have trouble connecting to the database that way, e.g.
from package.json
"start": "./node_modules/.bin/knex migrate:latest && node server.js"
If you add a custom deploy script to Codeship after the Heroku deployment step, it should run after the app is running, so you'll have database access. You have access to the Heroku toolbelt, so you should be able to run:
heroku run --app YOUR_APP_NAME -- ./node_modules/.bin/knex migrate:latest
