I am writing an Ansible playbook to perform various pm2 functions.
I have searched a bit and cannot find an example of someone setting up pm2-logrotate.
I believe I am close, but I'm not sure my shell commands are working. When I SSH into the child node and run sudo pm2 ls, it says In-memory PM2 is out-of-date, do: $ pm2 update, even though I am running that command from my playbook. What am I missing here?
---
# RUN playbook
# ansible-playbook -K pm2-setup.yml
- name: Setup pm2 and pm2-logrotate
  hosts: devdebugs
  remote_user: ansible
  become: true
  tasks:
    - name: Install/Update pm2 globally
      community.general.npm:
        name: pm2
        global: yes
        state: latest

    - name: Update In-memory pm2
      ansible.builtin.shell: pm2 update

    - name: Install/Update pm2-logrotate globally
      ansible.builtin.shell: pm2 install pm2-logrotate

    - name: Copy pm2-logrotate config
      ansible.builtin.copy:
        src: /home/ubuntu/files/pm2-logrotate-conf.json
        dest: /home/ubuntu/.pm2/module_conf.json
        owner: root
        group: root
        mode: '0644'
...
Bonus question: is there a way to skip the shell commands if they aren't needed (i.e. if pm2-logrotate is already installed)?
I was mixing up the users on my server. I fixed this by running the pm2 update command as the ubuntu user rather than root.
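For anyone hitting the same thing, here is a minimal sketch of how those two shell tasks could be pinned to the right user and skipped when they are not needed; the become_user value and the pm2-logrotate install path under /home/ubuntu/.pm2/modules are assumptions based on the paths above, not something taken from the pm2 docs:

    - name: Update In-memory pm2
      ansible.builtin.shell: pm2 update
      become: true
      become_user: ubuntu  # talk to the ubuntu user's pm2 daemon, not root's

    - name: Install pm2-logrotate only if it is not already installed
      ansible.builtin.shell: pm2 install pm2-logrotate
      become: true
      become_user: ubuntu
      args:
        # assumed install location under the ubuntu user's pm2 home
        creates: /home/ubuntu/.pm2/modules/pm2-logrotate

The creates guard also answers the bonus question: once that directory exists, the task reports ok and the shell command is not run again.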
Related
I have the following ansible playbook
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
#pwd
- name: Executing node
become: true
become_method: sudo
become_user: root
shell: |
cd ~/backend
npm install
npm run build
pm2 stop default
pm2 start npm --name "backend" -- run start --port 3030
pm2 status
cd ~/backend/dist
pm2 start main.js --update-env
pm2 status
I have the following issue with the task Executing node. On my remote server, if I log onto the machine via SSH as the root user and run each of the commands manually, pm2 starts the services as expected, which I can confirm from pm2 status. I can also verify that the service is bound to port 3030 and listening on it (as expected).
But when I try doing the same through the Ansible playbook, pm2 does start the service, but for some reason it does not bind the service to port 3030: I can see there is nothing listening on 3030.
Can someone please help?
Major edit 1
I also tried breaking the entire task up into smaller ones and running them as individual commands through the command module. But the results are still the same.
Updated roles playbook:
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
#dest: /home/ubuntu/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
- name: NPM install
become: true
command: npm install
args:
chdir: /root/backend
register: shell_output
- name: NPM build
become: true
command: npm run build
args:
chdir: /root/backend
register: shell_output
- name: PM2 start backend
become: true
command: pm2 start npm --name "backend" -- run start
args:
chdir: /root/backend
register: shell_output
- name: PM2 check backend status in pm2
become: true
command: pm2 status
args:
chdir: /root/backend
register: shell_output
- name: PM2 start main
become: true
command: pm2 start main.js --update-env
args:
chdir: /root/backend/dist
register: shell_output
- debug: var=shell_output
No change in the result even with the above:
running the same commands manually, either one by one in the bash shell or through a .sh script (as below), works just fine.
#!/bin/bash
cd ~/backend
pm2 start npm --name "backend" -- run start
pm2 status
cd dist
pm2 start main.js --update-env
pm2 status
pm2 save
Sometimes even the manual steps do not get the port up and listening while the service is still up, but this happens rarely and I am unable to reproduce it consistently.
My primary question is why this is not working. What am I doing wrong? My assumption is that, conceptually, it should work.
My secondary question is: how can I make this work?
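One difference worth ruling out, and this is an assumption on my part rather than something established in the thread: Ansible runs these commands in a non-interactive, non-login shell, so the environment pm2 and the app see can differ from an interactive root SSH session. If the app reads its port from an environment variable, passing it explicitly is one way to check that. A sketch, in which the PORT and NODE_ENV names are purely illustrative:

- name: PM2 start backend (with an explicit environment)
  become: true
  command: pm2 start npm --name "backend" -- run start
  args:
    chdir: /root/backend
  # PORT and NODE_ENV are assumed names; use whatever variables the app
  # actually reads when it decides which port to bind
  environment:
    PORT: "3030"
    NODE_ENV: "production"
  register: pm2_start_output

- name: Show what pm2 reported
  debug:
    var: pm2_start_output.stdout_lines

Comparing the environment the process actually sees between the manual run and the Ansible run should show whether that is the difference.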
I am trying to start a node program using pm2 via Ansible. The problem is that the pm2 start command is not idempotent under Ansible. It gives an error when run again.
This is my ansible play
- name: start the application
  become_user: ubuntu
  command: pm2 start app.js -i max
  tags:
    - app
Now if I run this the first time it runs properly, but when I run it again I get an error telling me that the script is already running.
What would be the correct way to get around this error and handle pm2 properly via Ansible?
Before starting the script you should delete the previous one, like this:
- name: delete existing pm2 processes if running
  command: "pm2 delete {{ server_id }}"
  ignore_errors: True
  become: yes
  become_user: rw_user

- name: start pm2 process
  command: 'pm2 start -x -i 4 --name "{{ server_id }}" server.js'
  become: yes
  become_user: rw_user
  environment:
    NODE_ENV: "{{ server_env }}"
I would use
pm2 reload app.js -i max
It will allow you to reload configuration ;-)
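Wrapped in a task, that might look roughly like the sketch below; it assumes the app was originally started with pm2 start app.js as in the question, and that the pm2 daemon belongs to the ubuntu user:

- name: reload the application and pick up configuration changes
  become_user: ubuntu
  command: pm2 reload app.js -i max
  tags:
    - app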
I ended up on this page looking for a way to start PM2 without errors when I rerun my playbook. I also wanted PM2 to reload the server when it was already running and pick up the new code I might have deployed. It turns out that PM2 has such an interface:
- name: Start/reload server
  command: '{{path_to_deployed_pm2}} startOrReload pm2.ecosystem.config.js'
The startOrReload command requires a so-called "ecosystem" file to be present. See the documentation for more details: Ecosystem File.
This is a minimal pm2.ecosystem.config.js that is working for me:
module.exports = {
  apps: [{
    script: 'app.js',
    name: "My app"
  }],
};
Here we can use register to perform a conditional restart/start. Register the output of the following command:

- shell: pm2 list | grep <app_name> | awk '{print $2}'
  register: APP_STATUS
  become: yes

and then use APP_STATUS.stdout to make the start and restart tasks conditional. This way we don't need a pm2 delete step.
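A sketch of what that condition could look like; the app name my_app, the script path app.js, and the assumption that an empty stdout means pm2 does not know about the app yet are illustrative rather than taken from the answer above:

- name: check whether the app is already registered with pm2
  shell: pm2 list | grep my_app | awk '{print $2}'
  register: APP_STATUS
  become: yes
  changed_when: false

- name: start the app if pm2 does not know about it yet
  command: pm2 start app.js --name my_app
  become: yes
  when: APP_STATUS.stdout == ""

- name: restart the app if it is already running under pm2
  command: pm2 restart my_app
  become: yes
  when: APP_STATUS.stdout != ""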
I'm trying to use AWS CodeDeploy to deploy my application. Everything seems to be working fine but I'm getting the following error.
[stderr]/opt/codedeploy-agent/deployment-root/f1ea67bd-97bo-08q1-b3g4-7b14becf91bf/d-WJL0QLF9H/deployment-archive/scripts/start_server.sh:
line 3: pm2: command not found
Below is my start_server.sh file.
#!/bin/bash
pm2 start ~/server.js -i 0 --name "admin" &
I have tried using SSH to connect to my server as user ubuntu and running that bash file and it works perfectly with no errors. So I know that PM2 is installed and working correctly on that user.
Below is also my appspec.yml file.
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: ubuntu
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: ubuntu
Also not sure if this will help but here is my stop_server.sh file.
#!/bin/bash
npm install pm2 -g
pm2 stop admin || true
pm2 delete admin || true
Any ideas?
Perform the steps below. First find where nvm has put the node and pm2 binaries, then symlink them into /usr/bin so they are on the default PATH:
which node
(prints something like /home/ubuntu/.nvm/versions/node/v12.13.1/bin/node)
sudo ln -s /home/ubuntu/.nvm/versions/node/v12.13.1/bin/node /usr/bin/node
which pm2
(prints something like /home/ubuntu/.nvm/versions/node/v12.13.1/bin/pm2)
sudo ln -s /home/ubuntu/.nvm/versions/node/v12.13.1/bin/pm2 /usr/bin/pm2
Then in start_server.sh and stop_server.sh call pm2 through its full path (and run start_server.sh as ubuntu), for example:
sudo /usr/bin/pm2 status
Hope this will help you!
All of the lifecycle events happen in this order, if they have scripts to run:
ApplicationStop
DownloadBundle (reserved for CodeDeploy)
BeforeInstall
Install (reserved for CodeDeploy)
AfterInstall
ApplicationStart
ValidateService
If your deployment has reached the ApplicationStart step, your ApplicationStop lifecycle event has already succeeded. Can you make sure that "pm2 stop admin" succeeded (which would mean pm2 is installed)?
The usual fix in cases like that is to use the full path to pm2.
#!/bin/bash
/usr/local/bin/pm2 start ~/server.js -i 0 --name "admin" &
If you run
npm install pm2 -g
at the ApplicationStop step, then it won't run until the second time you deploy, because ApplicationStop is executed from the previous deployment's archive bundle.
I just ran into this problem again.
I was able to solve it by ensuring that the following code is running at the beginning of all of my CodeDeploy script files.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
What is the proper way to set up a node.js app on an Ubuntu server with Ansible?
Now I'm trying to register pm2 as a service like code below:
- name: install pm2
  npm:
    name: pm2
    global: yes
    state: present

- name: create pm2 init.d script
  template:
    src: pm2_init_config.j2
    dest: "/etc/init.d/pm2"
    backup: yes

- name: ensure pm2 service is started
  service:
    name: pm2
    state: started
    enabled: yes
but I get a strange error:
pm2: unrecognized service in the Ansible console
The pm2_init_config is similar to this one.
If I SSH to the box and run sudo service pm2 start, everything works as expected.
The change below fixed the problem:
- name: create pm2 init.d script
  template:
    src: pm2_init_config.j2
    dest: "/etc/init.d/pm2"
    backup: yes
    mode: 0751
I don't know why it works. Can somebody explain this trick with mode?
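A likely explanation (my own reading, not confirmed in the thread): scripts under /etc/init.d have to be executable before service or update-rc.d will treat them as services, and without an explicit mode the template module creates the file with a default, non-executable mode, which is why pm2 showed up as an unrecognized service. Mode 0751 simply adds the execute bits. An equivalent, arguably more readable, form of the same task:

- name: create pm2 init.d script
  template:
    src: pm2_init_config.j2
    dest: "/etc/init.d/pm2"
    backup: yes
    mode: "u=rwx,g=rx,o=x"  # same as 0751; init.d scripts must be executable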
I've come across a problem with Ansible hanging when trying to start a forever process on an Ansible node. I have a very simple API server that I'm creating in Vagrant and provisioning with Ansible like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential
    - name: Run NPM Update
      shell: /usr/bin/npm update
    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart
    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js
    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest
    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false
    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force it to run, Ansible just hangs, presumably while running the forever process, and the VM never finishes provisioning to the point where I can interact with it.
I've referenced essentially two points online; the only sources I can find.
As stated in the comments, the variable content needs to be included in your question for anyone to provide a correct answer, but to overcome this I suggest you do it like so:
- name: "Check list of Node.js apps running."
command: /usr/bin/forever list|grep '/vagrant/server.js'|wc -l
register: forever_list
changed_when: false
- name: "Start example Node.js app."
command: /usr/bin/forever start /vagrant/server.js
when: forever_list.stdout == "0"
which should prevent ansible from starting the JS app if it's already running.