I'm trying to use AWS CodeDeploy to deploy my application. Everything seems to be working fine, but I'm getting the following error:
[stderr]/opt/codedeploy-agent/deployment-root/f1ea67bd-97bo-08q1-b3g4-7b14becf91bf/d-WJL0QLF9H/deployment-archive/scripts/start_server.sh:
line 3: pm2: command not found
Below is my start_server.sh file.
#!/bin/bash
pm2 start ~/server.js -i 0 --name "admin" &
I have tried using SSH to connect to my server as the ubuntu user and running that bash file, and it works perfectly with no errors. So I know that PM2 is installed and working correctly for that user.
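(A debugging sketch of my own, not part of the original setup: you could temporarily replace the body of start_server.sh with something like the following to see what the non-interactive shell used by the CodeDeploy agent actually sees, and compare it with your SSH session.)

#!/bin/bash
# Log the environment the CodeDeploy hook runs with.
echo "PATH is: $PATH"
command -v pm2  || echo "pm2 is not on PATH for this shell"
command -v node || echo "node is not on PATH for this shell"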
Below is also my appspec.yml file.
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: ubuntu
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: ubuntu
Also, not sure if this will help, but here is my stop_server.sh file.
#!/bin/bash
npm install pm2 -g
pm2 stop admin || true
pm2 delete admin || true
Any ideas?
Perform the below steps:

which node
sudo ln -s /home/ubuntu/.nvm/versions/node/v12.13.1/bin/node /usr/bin/node

(Use the path printed by which node as the source of the link; the symlink is created at /usr/bin/node.)

which pm2
sudo ln -s /home/ubuntu/.nvm/versions/node/v12.13.1/bin/pm2 /usr/bin/pm2

(Again, use the path printed by which pm2; the symlink is created at /usr/bin/pm2.)

Then in start_server.sh and stop_server.sh call pm2 by its full path (and run start_server.sh as ubuntu), for example:

sudo /usr/bin/pm2 status
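If you'd rather not hard-code the node version in those ln commands, here is a small sketch (my addition, assuming node and pm2 were installed through nvm for the ubuntu user) that resolves the paths dynamically before creating the links:

#!/bin/bash
# Sketch: find the nvm-managed node and pm2 binaries for the ubuntu user
# and link them into /usr/bin so non-interactive CodeDeploy hooks can find them.
NODE_BIN="$(sudo -u ubuntu bash -c 'export NVM_DIR="$HOME/.nvm"; . "$NVM_DIR/nvm.sh"; which node')"
PM2_BIN="$(sudo -u ubuntu bash -c 'export NVM_DIR="$HOME/.nvm"; . "$NVM_DIR/nvm.sh"; which pm2')"
sudo ln -sf "$NODE_BIN" /usr/bin/node
sudo ln -sf "$PM2_BIN" /usr/bin/pm2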
Hope this will help you!!
All of the lifecycle events happen in the following order, if they have scripts to run:
ApplicationStop
DownloadBundle (reserved for CodeDeploy)
BeforeInstall
Install (reserved for CodeDeploy)
AfterInstall
ApplicationStart
ValidateService
If your deployment has reached the ApplicationStart step, that means your ApplicationStop lifecycle event has already succeeded. Can you make sure that "pm2 stop admin" succeeded (which would mean pm2 is installed)?
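One way to verify is to look at the agent and per-deployment logs on the instance (a sketch; the exact paths can vary by agent version and OS):

# Agent log (standard location on Linux):
sudo tail -n 50 /var/log/aws/codedeploy-agent/codedeploy-agent.log
# Per-deployment script output, including what ApplicationStop printed:
sudo find /opt/codedeploy-agent/deployment-root -name scripts.log -exec tail -n 50 {} \;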
The usual fix in cases like this is to use the full path to pm2.
#!/bin/bash
/usr/local/bin/pm2 start ~/server.js -i 0 --name "admin" &
If you run
npm install pm2 -g
at the ApplicationStop step, then it won't run until the second time you deploy, because ApplicationStop is executed from the previous deployment's archive bundle.
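One way around this, sketched below under the assumption that node was installed through nvm for the ubuntu user, is to install pm2 from a BeforeInstall or AfterInstall hook instead, so it is already present before ApplicationStart runs on the very first deployment:

#!/bin/bash
# Hypothetical afterinstall.sh (run as ubuntu): make sure pm2 exists
# before the ApplicationStart hook tries to use it.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
command -v pm2 >/dev/null || npm install -g pm2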
I just ran into this problem again.
I was able to solve it by ensuring that the following code runs at the beginning of all of my CodeDeploy script files.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
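For example, the start script from the question would then look something like this (a sketch, assuming pm2 and node were installed through nvm for the ubuntu user):

#!/bin/bash
# Load nvm so node and pm2 installed under ~/.nvm are on PATH in the
# non-interactive shell CodeDeploy uses for hooks.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
pm2 start ~/server.js -i 0 --name "admin"   # pm2 daemonizes itself, no trailing & needed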
Related
In my GitLab CI script I'm trying to install and start a Java application remotely. The installation part goes fine, but when I try to start the service using this command:
ssh $DEPLOY_USER@$DEPLOY_HOST "sudo service my-service start"
I get the following error:
dzdo: service: command not found
All the previous commands with sudo rights are executed successfully. What's wrong with this one?
Adding bash to the command solved the issue:
ssh $DEPLOY_USER@$DEPLOY_HOST "sudo bash service my-service start"
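If wrapping the command in bash is not an option, another common workaround (a sketch; the exact path to service depends on the distribution) is to call it by absolute path, so the restricted PATH used by sudo/dzdo doesn't matter:

ssh "$DEPLOY_USER@$DEPLOY_HOST" "sudo /usr/sbin/service my-service start"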
I have a configuration that runs a Node Docker container on Azure as a Web App.
Everywhere I read, it clearly says that App Settings (environment variables) set on the Web App will be injected into the container at runtime, but that's clearly not the case for me.
"When your app runs, the App Service app settings are injected into
the process as environment variables automatically"
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#configure-environment-variables
Dockerfile:
FROM node
# Open SSH for Azure
RUN apt-get update && apt-get install -y ssh
RUN echo "root:Docker!" | chpasswd
RUN mkdir /run/sshd
COPY sshd_config /etc/ssh/
# Copy resources
WORKDIR /app
COPY package*.json ./
# Run install
RUN npm install
COPY . .
# Copy startup script and make it executable
COPY startup.sh /app
RUN ["chmod", "+x", "/app/startup.sh"]
startup.sh:
#!/bin/sh
echo "Start sshd"
/usr/sbin/sshd
echo "Start node"
node server.js
And still, with (as an example):

secret: process.env.ADDSEARCH_SECRET

process.env.ADDSEARCH_SECRET clearly returns null, as do all the rest of the variables.
What am I missing?
Passing the values in with build_args during the build step of the Docker image works, but I really don't want to do it that way, for several reasons.
You need to prefix the environment variable with APPSETTING_, e.g. APPSETTING_ADDSEARCH_SECRET.
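To check what actually reaches the container, one option (a sketch of mine) is to open an SSH session into the running container (which is what the sshd setup in the Dockerfile above enables) and dump its environment:

# Inside the running App Service container:
env | sort | grep -i -e addsearch -e appsetting   # shows whether, and under which name, the setting arrives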
I deleted my previous question because it was not very clear and the problem was not clearly explained. I have an instance on AWS, a repository on GitLab, and GitLab CI is set up.
I made a little app in Node.js because I wanted to try all this new stuff.
But when GitLab CI runs the script, pm2 creates a "source" directory in my folder and copies all my files into it, and that directory is apparently the current working directory (CWD).
That's a surprising behavior, and I'm not comfortable with it.
Does anyone know why? Is it normal? Why can't my files stay in ~/projet2/, as I set it up?
When I run pm2 show projet2, I can see the exec cwd is /home/ubuntu/projet2/source, while source is a folder I never created!
.gitlab-ci.yml
# This file is a template, and might need editing before it works on your project.
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/node/tags/
image: node:alpine

stages:
  - deploy

deploy:
  stage: deploy
  before_script:
    # Install ssh-agent if not already installed, it is required by Docker.
    # (change apt-get to yum if you use a CentOS-based image)
    - 'which ssh-agent || ( apk add --update openssh )'
    # Add bash
    - apk add --update bash
    # Add git
    - apk add --update git
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
    - echo "$SSH_PRIVATE_KEY" > "./pk.pem"
    - chmod 400 ./pk.pem
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # For Docker builds disable host key checking. Be aware that by adding that
    # you are susceptible to man-in-the-middle attacks.
    # WARNING: Use this only with the Docker executor, if you use it with shell
    # you will overwrite your user's SSH config.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    # In order to properly check the server's host key, assuming you created the
    # SSH_SERVER_HOSTKEYS variable previously, uncomment the following two lines
    # instead.
    # - mkdir -p ~/.ssh
    # - '[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts'
  script:
    - npm i -g pm2
    - pm2 deploy ecosystem.config.js production setup
    - pm2 deploy ecosystem.config.js production
  only:
    - master
ecosystem.config.js
module.exports = {
  apps: [{
    name: 'projet2',
    script: '/home/ubuntu/projet2/index.js',
    cwd: '/home/ubuntu/projet2/'
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'xxxxxxxxxxxx',
      ref: 'origin/master',
      repo: 'git@gitlab.com:xxxxxxx/projet2.git',
      key: './pk.pem',
      path: '/home/ubuntu/projet2/',
      'post-deploy': 'npm install && pm2 startOrRestart /home/ubuntu/projet2/ecosystem.config.js'
    }
  }
}
The answer is: Yes! This is normal behavior!
It is to be expected, since you are running things with pm2 now, and pm2 knows how to handle it.
By running:
pm2 deploy ecosystem.config.js someName
pm2 opens an SSH connection to the provided host, using the provided user and key. On a successful connection, pm2 does a git pull of the branch given in ref from the provided repo. The pulled data is placed in the directory given in path, with the addition of a source subdirectory. After a successful pull, the post-deploy command is triggered, which runs npm install and then whatever else you tell it to do. The creation of the source folder is built into pm2's deploy mechanism and is to be expected; it shouldn't bother you too much.
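In other words, with path set to /home/ubuntu/projet2/ you should expect something like this on the server (a sketch):

ls /home/ubuntu/projet2
# source/   <- pm2's checkout of the repo; this becomes the exec cwd
pm2 show projet2 | grep -i cwd   # reports /home/ubuntu/projet2/source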
I am trying to start a Node program using pm2 via Ansible. The problem is that the pm2 start command is not idempotent under Ansible: it gives an error when run again.
This is my Ansible play:
- name: start the application
  become_user: ubuntu
  command: pm2 start app.js -i max
  tags:
    - app
Now, if I run this the first time, it runs properly, but when I run it again I get an error telling me that the script is already running.
What would be the correct way to get around this error and handle pm2 properly via Ansible?
Before starting the script you should delete the previous process, like this:
- name: delete existing pm2 processes if running
  command: "pm2 delete {{ server_id }}"
  ignore_errors: True
  become: yes
  become_user: rw_user

- name: start pm2 process
  command: 'pm2 start -x -i 4 --name "{{server_id}}" server.js'
  become: yes
  become_user: rw_user
  environment:
    NODE_ENV: "{{server_env}}"
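The same delete-then-start idea as a plain shell sketch, with my-app standing in for your process name:

pm2 delete my-app 2>/dev/null || true   # ignore "process not found" on the first run
pm2 start server.js --name my-app -i 4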
I would use
pm2 reload app.js -i max
It will allow you to reload configuration ;-)
I ended up on this page looking for a way to avoid starting PM2 multiple times when I rerun my playbook. I also wanted PM2 to reload the server when it was already running and pick up the new code I might have deployed. It turns out that PM2 has such an interface:
- name: Start/reload server
  command: '{{path_to_deployed_pm2}} startOrReload pm2.ecosystem.config.js'
The startOrReload command requires a so-called "ecosystem" file to be present. See the documentation for more details: Ecosystem File.
This is a minimal pm2.ecosystem.config.js that is working for me:
module.exports = {
  apps: [{
    script: 'app.js',
    name: "My app"
  }],
};
Here we can use Ansible's "register" keyword to perform a conditional start/restart.
Register the output of the following command:

- shell: pm2 list | grep <app_name> | awk '{print $2}'
  register: APP_STATUS
  become: yes
and then use APP_STATUS.stdout to make the start and restart tasks conditional. This way we don't need a pm2 delete step.
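The shell logic behind that register step is roughly the following sketch, where <app_name> is a placeholder:

if pm2 list | grep -q "<app_name>"; then
  pm2 restart "<app_name>"   # process already known to pm2
else
  pm2 start app.js --name "<app_name>" -i max   # first run on this host
fi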
I am trying to launch a Node.js app on a production EC2 server with the pm2 process manager.
When I ssh into the instance and run pm2 start app.js, PM2 starts just fine and has access to all environment variables. Everything good.
However, when I run pm2 start app.js from a CodeDeploy hook script called applicationstart.sh, the app fails with an errored status because it is missing all environment variables.
Here is the appspec.yml where the script is added, so that it is launched with each deployment and calls pm2 start:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/teller-install
hooks:
  AfterInstall:
    - location: scripts/afterinstall.sh
      timeout: 1000
      runas: root
  ApplicationStart:
    - location: scripts/applicationstart.sh
      timeout: 300
      runas: ubuntu
Here is the applicationstart script:
#!/bin/bash
echo "Running Hook: applicationstart.sh"
cd /home/ubuntu/teller-install/Server/
pm2 start app.js
exit 0
I am logged in as ubuntu when I run the script over SSH, and I set the script in the appspec.yml to run as ubuntu as well.
Why would there be any difference between starting pm2 from the terminal and starting it from a deployment hook script?
I can provide any information necessary; I'm in dire need of a solution. Thanks!
I had to add source /etc/profile before calling pm2 start app.js to load in the environment variables.
The script now looks like this:
#!/bin/bash
echo "Running Hook: applicationstart.sh"
cd /home/ubuntu/teller-install/Server/
source /etc/profile
pm2 start app.js
exit 0
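An alternative that achieves the same thing (a sketch of mine, not taken from the original answer) is to run the start command through a login shell, which sources /etc/profile and the user's profile files automatically:

#!/bin/bash
echo "Running Hook: applicationstart.sh"
# bash -l starts a login shell, so pm2 sees the same environment
# as an interactive SSH session as the ubuntu user.
bash -lc "cd /home/ubuntu/teller-install/Server && pm2 start app.js"
exit 0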