Puppet agent on fail function?

Do Puppet agents have any type of on failure ability?
I want to create a rule that says: if a Puppet agent's check-in fails due to an SSL issue, it should remove its SSL certificates and attempt the check-in again.
I know all the commands I want to run, it's just a matter of finding a way to execute a script on SSL failure.
Any suggestions on how to do this?

Do Puppet agents have any type of on failure ability?
Not a built-in one, no. They do log failures, of course. Or I guess the PE version might have something like that -- I wouldn't know.
I know all the commands I want to run, it's just a matter of finding a way to execute a script on SSL failure.
Any suggestions on how to do this?
When run in --onetime mode, the agent's exit code conveys its success or failure, but you'd still have to analyze the log or console output to determine the nature of any failure. To use this for scheduled runs, you would want an external scheduler such as cron to run the agent, rather than letting it run as a daemon itself. Some folks recommend that as good general practice anyway.
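For example, a cron-driven wrapper along these lines could do what you describe. This is only a sketch: the ssldir path and the grep pattern are assumptions you'd confirm for your setup (puppet config print ssldir tells you the real path).

    #!/bin/sh
    # Run the agent once; --detailed-exitcodes distinguishes failures
    # (0 = no changes, 2 = changes applied, 1/4/6 = errors).
    LOG=/tmp/puppet-run.log
    puppet agent --onetime --no-daemonize --detailed-exitcodes > "$LOG" 2>&1
    STATUS=$?

    if [ "$STATUS" -ne 0 ] && [ "$STATUS" -ne 2 ]; then
        # Crude check: was SSL the cause of the failure?
        if grep -qi 'ssl' "$LOG"; then
            # Assumed ssldir; verify with `puppet config print ssldir`.
            rm -rf /etc/puppetlabs/puppet/ssl
            puppet agent --onetime --no-daemonize
        fi
    fi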

Related

What is the advantage of using a cron job inside your code versus outside your code?

I have to run a repetitive task in Node.js, and I've seen there are existing packages for this, like this one:
https://www.npmjs.com/package/node-cron
The platform where I'm hosted also offers built-in cron jobs:
https://www.netlify.com/docs/webhooks/
So my question is: when is it better to use the platform, and when a package?
Thanks.
From the URL posted, I didn't see any method of setting up a cron job using webhooks. Unless you were thinking of setting up a webhook that listens for a POST sent by a Linux cron job or the like?
Regardless, to the actual question of platform versus package: both have pros and cons, but based purely on your question I would go with the platform.
If you choose to use a package, you will have to write the code that calls the package (which you need to test, maintain, and run). You need to ensure that the node process is always up and running, that it is re-spawned if it dies or exits, and that it gets kicked off again if the operating system reboots. All of these problems can be solved easily (PM2, for instance), but the fact is you need to think of them and solve them yourself, or the cron job might not run when you want it to.
When using the platform you know that it is well tested, that it will work as documented, and that it will be resilient to failure modes that you might not be aware of.
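For scale, the package route itself is only a few lines. A minimal sketch using node-cron's schedule API, with a placeholder task body:

    // Run a task every five minutes using node-cron.
    const cron = require('node-cron');

    cron.schedule('*/5 * * * *', () => {
        // Placeholder: replace with the real repetitive task.
        console.log('running the repetitive task');
    });

Everything around that snippet, keeping the process alive and restarting it after crashes or reboots, is exactly the operational burden described above.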

Can I install two Chef clients on a Linux server so that both clients can manage the server?

I want to install a Chef client on a Linux server to manage the server by executing shell commands. However, one of the recipes in the run list contains a reboot command, and the remaining recipes don't continue to execute after the server reboots. Since I haven't found a way around this, I wonder if I can install two Chef clients on the server and have them execute different recipes, so that the remaining recipes continue after the reboot. Can anyone help? Thanks.
Putting two clients on a single device, or two configuration management tools on the same box in general, is a Bad Idea. Even if you could do it, the increased cognitive load of determining what to update, where, and when is going to open you up to mistakes.
The proper approach is to put restart flags in your recipes: before you call the restart resource, set a flag (which can be a file's contents or mere existence, an environment variable, or any number of other persistent data objects) to indicate that a restart was performed. If it is to be periodic, you can instead look at something like the last time a file was accessed via its atime property. Then wrap the steps that require a reboot in logic that guards against a reboot if the no-restart flag is set, or triggers one if the restart flag is set, your choice. That way you'll have one Chef converge with a restart that skips part of your run list, and then a later run that skips the now-unnecessary restart. (A minimal sketch follows below.)
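A sketch of the flag-file version, using Chef's built-in file and reboot resources; the flag path and resource names are illustrative:

    # Reboot exactly once: the flag file records that it already happened.
    flag = '/var/chef/reboot-done'

    reboot 'post_update' do
      reason 'One-time reboot required by this recipe'
      action :nothing
    end

    # create_if_missing only fires the notification on the run that
    # actually creates the flag, so later converges skip the reboot.
    file flag do
      content 'rebooted by chef'
      action :create_if_missing
      notifies :reboot_now, 'reboot[post_update]', :immediately
    end

After the reboot interrupts that first run, the next converge finds the flag in place and proceeds through the rest of the run list.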
Another good option is to simply pay more attention to how your resources are ordered. If the restart is in your last run-list item and is notified with the :delayed timing, then it will be the last resource to run, meaning the rest of your recipes will already have converged. If you need a complete converge every time, then that is the option you should embrace.
Option 1 is a Ruby-centric solution and will require you to embrace dev work. Option 2 is more pure Chef with some Ruby sprinkled in and you can read up on notifying resources in the docs here: https://docs.chef.io/resource_common.html#notifications
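A sketch of that option too (resource names are illustrative): a :delayed notification defers the reboot until everything else in the run has converged.

    reboot 'end_of_run' do
      reason 'Reboot after the full run list has converged'
      action :nothing
    end

    package 'kernel' do
      action :upgrade
      # :delayed queues the reboot until the end of the Chef run.
      notifies :reboot_now, 'reboot[end_of_run]', :delayed
    end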
There is an option 3 where you change the runlist during the chef run, which you could use to remove the recipes that require the reboot, but I think you'd benefit more from option 1 or 2.

How to create a nodejs instance to run cron jobs at set schedule?

I need to create a Node.js "server" which won't actually serve any assets or content, but will just run a scheduled job to fetch content from one database and update another. The schedule of the job should be configurable, and it should be possible to cancel the job at any time. Basically, what I need is to run a Node script periodically. In the past I have created node/express projects, but I am having a hard time understanding how to implement such a Node instance that will run on a remote machine, and how to start or terminate it. I found an npm package called "node-schedule" which runs jobs periodically, but how do I put this package on a remote machine instance and run it?
One possibility that was considered was to schedule a cron job on the remote machine which would execute "node updateDB.js" on a set schedule, but it is a requirement to keep everything in the Node package and not depend on cron.
Sounds like a job for ssh.
Personally I wouldn't use Node.js for this; it should be pretty trivial to do, with Node or otherwise, so I'm honestly not sure where you are stuck. I have nothing against Node, and I don't see why it would be necessary for this task, but you could certainly use it for such a thing.
EDIT: After reading your comment I'm convinced someone thinks Node is a good tool for this task. I guess I don't understand where you are stuck. What part are you stuck on?
I think you should be able to puzzle this out pretty fast. The link below should be enough to put it together. http://book.mixu.net/node/ch9.html
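As a starting point, scheduling and cancelling with node-schedule takes only a few lines. A sketch, with the job body as a placeholder for the real update logic:

    // Run the sync every five minutes; keep a handle so it can be cancelled.
    const schedule = require('node-schedule');

    const job = schedule.scheduleJob('*/5 * * * *', () => {
        // Placeholder for the updateDB.js work.
        console.log('syncing databases...');
    });

    // To cancel the job at any time:
    // job.cancel();

On the remote machine you'd keep this process alive with something like PM2 or a systemd unit, since Node itself won't restart after a crash or reboot.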
If you need to execute ad hoc commands on a remote server, you could use Node to call an Ansible playbook; in that case you'll need to put the public SSH key of the instance issuing the commands on the target instance(s). There are other ways to skin this cat, but based on the information given, that's how I'd do it: Node and Ansible (which requires Python), plus SSH.
Oh neato, maybe if I were forced to use NodeJS I'd use this package. https://www.npmjs.com/package/ssh2-exec
Did you find an answer to your problem? Share it here.

Is it possible to use Jenkins server to run custom tasks one by one?

Is it possible to use Jenkins server to run custom tasks one by one?
By task I mean executing an external Groovy program designed as an independent performance and integration test for a specific deployment.
If it is possible, then how do I:
Define tasks in Jenkins and group them, so they can be started by starting the group.
See the output of each task (an output log).
Stop execution of the whole group if a task returns a specific outcome like "-1".
And all of this should start automatically after the software has been built and deployed.
I feel there has to be a way to do this with Jenkins using its out-of-the-box functionality, I'm just not sure how. Or am I wrong, and are we looking at a custom plugin as the solution?
Thanks a lot!
P.S. I am not asking for detailed answer, just a general direction would be Ok. Also Jenkins is not a requirement, it can be another similar CI server.
It sounds like this could work as a simple Jenkins job with an Execute Shell build step.
The Console Output for the job will contain the output from the processes that you run externally, and the exit status of the script can mark the job as failed (any non-zero exit code does this by default).
On Unix systems, a #! at the beginning of the first line denotes the interpreter to use for the script.
To chain this together with your other Jenkins steps, you can use the "Build after other projects are built" build trigger, with your deployment job as the starting point.
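A minimal Execute Shell step along those lines, assuming the Groovy runtime is on the PATH; the script name is a placeholder:

    #!/bin/bash
    # Run one external Groovy test; its output lands in the Console Output.
    groovy performance-test.groovy
    status=$?

    # An exit code of -1 from the JVM shows up here as 255; any non-zero
    # status fails the build, which stops the downstream chain.
    if [ "$status" -ne 0 ]; then
        echo "Task failed with status $status" >&2
        exit 1
    fi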
It is possible, but be careful. Normally Jenkins is used to run build jobs and to deploy software to a QA or staging server. It does not touch Production. But when you start doing this in Jenkins you increase the risk that someone will accidentally run a production job that should not have been run. So if you do decide to use Jenkins for this, set up an entirely separate instance of Jenkins that does nothing other than run these jobs. Then go to Manage Jenkins->Configure Global Security and set up login users. At the least, use "logged in users can do anything" but it would be better to set up "matrix-based security". Then run any jobs that you need by using an Execute Shell step. You can schedule jobs by using a Build Trigger, and you can connect jobs sequentially by setting up Build Other Projects in the post build section. If you want to do more complex job chaining, look into the Join Plugin.
Just keep this Jenkins entirely separate from the Jenkins which you use for CI.

Is there really no easy way to test puppet scripts on a remote machine?

I'm experimenting with Puppet scripts for deployment.
I find the hardest part about the process of writing those scripts is iteratively testing them.
I don't want to run puppet apply on my local development machine; that's liable to screw stuff up. I have a clean-slate remote box where I want to apply. I also don't see how a puppetmaster can help me; I might be using a puppetmaster at a later point for production deployments, but for now I just want to get my code working.
So I put together a quick shell script that rsyncs the different directories from my local Puppet module path to /tmp on the remote machine and then runs puppet apply. This is terribly inconvenient: it's slow, especially when all I'm chasing is a syntax error.
I think what I really want is something like a puppetd <-> puppetmaster connection, where puppetd on the remote machine receives an already compiled manifest. Just an ad-hoc one over an SSH connection, without having to actually set up a puppetmaster and deal with certificates etc. Something like puppet apply user@host.
There seems to be nothing of the sort, but how do other people deal with this? My experience of working on Puppet scripts is incredibly frustrating as it is.
I'd recommend using Vagrant. If you're not testing the puppetmaster setup itself, you can use the built-in Puppet provisioner integration.
Once you have everything set up, you can run vagrant provision, or just run puppet apply on the Vagrant VM.
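A minimal Vagrantfile using the built-in Puppet provisioner might look like this sketch; the box name and paths are placeholders:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"   # placeholder box

      # `vagrant up` / `vagrant provision` runs puppet apply on manifests/site.pp.
      config.vm.provision "puppet" do |puppet|
        puppet.manifests_path = "manifests"
        puppet.manifest_file  = "site.pp"
        puppet.module_path    = "modules"
      end
    end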
Here's a related article you may find helpful as well.
I would also take a look at Puppet rspec tests, using rspec-puppet and puppetlabs-spec-helper. Running rspec-puppet-init will break puppet doc and Geppetto, and maybe some other things, due to the symlinks it creates, and there are some issues with Hiera, but otherwise the tests are easy to set up, work well, and can also be tied into Jenkins/Hudson.
I usually have two levels of testing for my Puppet scripts.
Unit tests for quick feedback: Written using rspec-puppet, these compile a Puppet catalog for the class/define/etc. being tested and make assertions about it. I run them locally each time I make a minor change, and on the build server each time I check in. The tests run quickly (<10 seconds) and pick up syntax and dependency issues. (A minimal example appears after the scenario below.)
Functional tests to make sure it really works: Written using Cucumber with the Aruba library. When I'm finished implementing a feature and the unit tests for it pass, these tests provision a VM (using Vagrant) with the appropriate Puppet manifest(s), log in, and make assertions about the VM's state. The tests themselves look something like:
Given I am SSHed into Vagrant box "webserver"
When I type "php --version"
Then the output should include "PHP 5.4.11"
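For comparison, a first-level unit test is just a few lines. A sketch, where the nginx class is a placeholder for whatever class you're testing:

    require 'spec_helper'

    # Compile the catalog for the class and assert on its contents.
    describe 'nginx' do
      it { is_expected.to compile.with_all_deps }
      it { is_expected.to contain_package('nginx') }
    end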
Vagrant is the most useful environment for rapid infrastructure development that I've found. It will most closely (99%) mirror your production setup, and you can account for those tiny differences in Puppet so everything works as expected. It takes about 30 minutes to get going and will pay you back many times over in time saved messing around with file-copy scripts :)
If it's helpful to visualize, on my desktop I have 3 terminals side by side:
Terminal 1) Editing puppet manifests, classes, ruby code, etc
Terminal 2) Running 'vagrant provision' which simply does a puppet apply along with any facts you want to pass, etc.
Terminal 3) 'vagrant ssh' into the box so I can poke around as puppet is doing its work
Hope this helps!
Why don't you want to run a puppetmaster? It's created for exactly this situation.
If you absolutely cannot run a puppetmaster, then you would have to wrap your puppet calls in another script that first downloads the manifests (with curl or wget) and applies them after a successful download. Given that the puppetmaster is a fairly simple application to run, I don't see how not using it would be any better.
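That wrapper is only a couple of lines. A sketch, with a placeholder URL and path:

    #!/bin/sh
    set -e
    # Fetch the manifest, then apply it only if the download succeeded.
    curl -fsS -o /tmp/site.pp http://example.com/manifests/site.pp
    puppet apply /tmp/site.pp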
I stumbled across rump while looking at another question. If you're using git, it might be useful. There's a slide deck available.
From the README.md: "Rump helps you run Puppet locally against a Git checkout."
You may be interested in citac, a toolkit for automated testing of Puppet scripts. It is available on Github: https://github.com/citac/citac
Citac systematically executes your Puppet manifest in various configurations, imitating transient system faults, different resource execution orders, and more. The generated test reports inform you about issues with non-idempotent resources, convergence-related issues, etc.
The tool uses Docker containers for execution, hence your system remains untouched while testing. State changes are tracked during execution of the Puppet script, and detailed test reports are generated.
To get an idea of which bugs the tool is able to detect, a large-scale evaluation with more than 150 public Puppet scripts has been performed. The results are available here: http://citac.github.io/eval/
Please feel free to provide feedback, pull requests, etc. Happy testing!
