How can I run Puppet 6 on-demand against multiple remote Linux nodes?

On Puppet 3, I used to use sudo mco puppet runonce -I /hostname-pattern-1/ -I /hostname-pattern-2/ to run Puppet agents on-demand against any node matching one of the hostname patterns.
As of Puppet 5.5.4, MCollective is deprecated, so I can no longer use the mco command.
With Puppet 6, how can I do what I used to be able to do with the mco command?
My server and all my nodes are running Ubuntu 20 (Linux). I'm specifically using Puppet 6.19.1 and Puppet Server 6.14.1.
I know puppet agent -t can be used to run Puppet on-demand, but that has to be done locally on each node, so how can I apply that command (or something equivalent) from the Puppet server to any node matching a pattern?
I know I could hardcode a bunch of hostnames in a Bash script and use SSH to remotely execute the command, but hardcoding hostnames is not as convenient as specifying hostname patterns.

Have you checked out Choria?
Puppet Bolt is another alternative: you could write a task to do something more complex, or just run ad-hoc commands, e.g.
bolt command run 'puppet agent -t' --targets servers
You can connect Bolt to your PuppetDB so you won't have to create and maintain a static inventory.
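For example, a sketch assuming Bolt has already been configured to talk to your PuppetDB (the PQL query and hostname pattern here are illustrative, not something from your setup):
bolt command run 'puppet agent -t' --query 'inventory[certname] { certname ~ "hostname-pattern-1" }'
The --query flag asks PuppetDB for matching nodes at run time, which replaces the mco-style -I hostname patterns.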

Related

Puppet agent daemon should not start during startup

I have installed the Puppet agent on my servers.
1. My agent is running automatically and stopping my Apache, which is installed via Puppet:
puppet agent --configprint runinterval
1800
2. I can kill the process, but I do not want it to start again when the server reboots.
Does anyone have any ideas?
Hmm. After installing the agent, I ran it for the first time to request a certificate from the master with
sudo puppet agent --verbose --no-daemonize --onetime
which (among other things) instructs the agent not to keep running as a daemon.
Then, after signing the cert on the master, I am able to run the agent on demand with
sudo puppet agent -t
The -t (--test) flag on the agent effectively adds --onetime --verbose --no-daemonize --no-usecacheonfailure --detailed-exitcodes --no-splay --show_diff --no-use_cached_catalog to the agent run. Therefore the agent always runs with --no-daemonize this way, and I have not run into the problem of the agent running automatically.
Not sure if this addresses your use case.
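If the goal is only to keep the agent from starting at boot, disabling the service is more direct. A minimal sketch, assuming the agent package installed a service unit named puppet on a systemd-based system:
sudo systemctl disable --now puppet
On older SysV-style Debian/Ubuntu systems, setting START=no in /etc/default/puppet achieves the same thing.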

Update my node.js code in multiple instances

I have an Elastic Load Balancer in AWS. I have my Node.js code deployed on 3 instances and I'm using pm2 to update the code, but I currently have to do it manually, like this:
Connect by SSH to each machine
Git pull on each machine
pm2 reload all on each machine
How can I update the code on ALL machines when I push a new commit to master or another branch (like a production branch)?
Thanks.
You can just write a script, for example in Bash, to solve this:
# This will run your local script update.sh on the remote
ssh serverIp1 "bash -s" < ./update.sh
Then in your local update.sh you can add the commands to pull and reload:
# This code will run on the remote host
git pull
pm2 reload all
# Other commands to run on the remote host
You can also have a script that does all of this for all your machines:
ssh serverIp1 "bash -s" < ./update.sh
ssh serverIp2 "bash -s" < ./update.sh
ssh serverIp3 "bash -s" < ./update.sh
or even better:
for ip in serverIp1 serverIp2 serverIp3; do
  ssh "$ip" "bash -s" < ./update.sh
done
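If the hosts are independent, you could also run the updates in parallel; a small sketch:
# run update.sh on every host concurrently, then wait for all of them to finish
for ip in serverIp1 serverIp2 serverIp3; do
  ssh "$ip" "bash -s" < ./update.sh &
done
wait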
An alternative is Elastic Beanstalk, especially if you are using a "pure" Node solution (not a lot of extra services on the instances). With Beanstalk, you supply a Git ref or ZIP file of your project, and it handles the deployment (starting up new instances, health checks, getting them onto the load balancer, removing old instances, etc.). In some ways it is an automated-deployment version of what you have now, because you will still have EC2 instances, a load balancer, etc.
Use a tool like Jenkins (self-hosted) or Travis CI to run your builds and deployments. FYI, many alternatives are available; Jenkins and Travis are just two of the most popular.
OK, thanks for your answers, but I think the best option for me is AWS CodeDeploy.
I don't know why I didn't find this before asking the question...

What does Puppet do when a service's status fails?

I have this in the event log for one of my nodes in the Puppet Dashboard:
Changed (1)
Service[openstack-keystone] (/etc/puppetlabs/puppet/modules/keystone/manifests/init.pp:129)
Property Message
ensure ensure changed 'stopped' to 'running'
But how can I see what command Puppet is actually using to change the service's state from stopped to running?
And how can I change it, if I don't think Puppet is doing the correct thing?
You can run puppet agent -t --debug to manually start a puppet run and see the commands being run.
To change the commands, you can consider specifying the provider or the start, stop, status, and restart commands on the service resource. Check out the type reference for more information on the service type's parameters.
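For example, a hedged sketch of overriding those commands on the resource (the paths here are illustrative, not what the keystone module actually uses):
service { 'openstack-keystone':
  ensure => running,
  # illustrative overrides -- adjust the commands for your platform
  start  => '/sbin/service openstack-keystone start',
  stop   => '/sbin/service openstack-keystone stop',
  status => '/sbin/service openstack-keystone status',
}
Alternatively, set the provider attribute explicitly (for example provider => 'redhat') to pin which init system Puppet drives.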
1) If you want to see Puppet's background work, i.e. how it applies the catalog:
Step 1) Stop the Puppet master and agent daemons, e.g. /etc/init.d/puppetmaster stop.
Step 2) Run the Puppet master and agent as foreground processes to watch what they do:
- puppet master --no-daemonize (run the master as a foreground process)
- puppet master --debug --no-daemonize (debug the master)
- puppet agent --no-daemonize (run the agent in the foreground)
- puppet agent --debug --no-daemonize (run the agent in the foreground with debug output)
2) If you think Puppet is not doing the right thing, you can write your own type and provider, or use an exec resource to run the commands yourself. And if it still does not work as you expect, you can write a script to execute on the agent nodes.
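A hedged sketch of that exec fallback (the service name and commands are illustrative):
exec { 'start-openstack-keystone':
  command => '/sbin/service openstack-keystone start',
  # only run the start command when the status check reports the service as stopped
  unless  => '/sbin/service openstack-keystone status',
}
The unless check keeps the exec idempotent: Puppet skips the start command whenever the status command exits 0.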

Puppet agent on Debian client does not start on boot

I have a Debian VM with puppet client installed.
All is well when I manually run:
puppet agent
After I run it, I can see via "service puppet status" that the process is running OK.
I want this process (starting the puppet agent) to happen automatically on system boot.
I followed the instructions and changed /etc/init.d/puppet so that it starts:
START=yes
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/puppet
DAEMON_OPTS=""
NAME=agent
DESC="puppet agent"
PIDFILE="/var/run/puppet/${NAME}.pid"
BUT when I boot the system, this service does not start!
What am I doing wrong?
You need to set START=yes in /etc/default/puppet instead of right in the initscript.
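A minimal sketch of the relevant bit (the initscript sources this file as shell, hence the variable syntax):
# /etc/default/puppet
# Start puppet agent on boot?
START=yes
After that, sudo service puppet start brings the agent up immediately, and the initscript will start it on the next boot.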
As an aside, this question would be more suitable on ServerFault.

How to simultaneously deploy Node.js web app on multiple servers with Jenkins?

I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using an SVN server to manage my project's source code.
To manage the app simply and clearly, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find out how to set up Jenkins to deploy to remote servers simultaneously.
Could you help me?
You should look into supervisor. It's language- and application-type-agnostic; it just takes care of (re)starting applications.
So in your jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either launch an svn update on each host or copy the current content to them (I'd recommend the latter, because there are many ways to make SVN fail, and it lets you include SVN_REVISION in some .js file, for instance)
On each host you execute fuser -k -n tcp $DAEMON_PORT, which kills the application currently listening on TCP port $DAEMON_PORT (the one your Node.js app uses)
And the best part is that it will automatically start your Node.js app at system startup (provided supervisor is installed correctly; apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord sub-config for a Node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many configuration parameters.
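After dropping a config like this into /etc/supervisor/conf.d/, make supervisord pick it up; a minimal sketch:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart my-node-app
reread discovers the new file, update applies it (starting the program), and restart is what you'd run after each deployment.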
Note: there is also a Node.js package called supervisor; it's not the one I'm talking about here, and I haven't tested it.
On Linux, you need to SSH to your hosts and run commands to get the application updated:
1. Work out the application-update workflow as a shell script. In particular, you need to daemonize your Node app so that a Jenkins job finishing will not kill your app when it exits. Here's a nice article on how to do this: Running node.js Apps With Upstart; or you can use a pure Node.js tool like forever. Assume you worked out an init script at /etc/init.d/myNodeApp.
2. SSH to your Linux hosts from Jenkins. You need to make sure the SSH private key has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user.
Here's an example shell step in the Jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /your/app/dir; svn update; service myNodeApp restart"
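To hit both servers from a single build step, you could loop over them (the hostnames below are hypothetical):
# Jenkins "Execute shell" build step: deploy to every app server in turn
for host in app-server-1 app-server-2; do
  ssh "$host" "service myNodeApp stop; cd /your/app/dir; svn update; service myNodeApp restart"
done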
