How To Get Percent Uptime Using Puppet Facts API - puppet

I am able to get the current uptime of a given set of nodes using the Puppet Facts REST Endpoint like this:
curl -X GET https://puppet:8081/pdb/query/v4/facts --data-urlencode 'query=["=", "name", "uptime"]' -k -H "X-Authentication: xxxxxxxxxxxxxxxxxx"
That provides a response that looks like this
{"certname":"xyz","environment":"production","name":"uptime","value":"107 days"}
But, it doesn't tell me the previous uptime, and so I don't have a way of calculating the downtime of this Node. Is there a way to use Puppet facts to see historical data over time such that I could see over the course of a given timeframe that a Node had x seconds of total uptime and y seconds of total downtime?

No, is the short answer.
Facts on the server are primarily there to provide Puppet with the information it needs to compile a catalog; they're not designed to give historical data, just a snapshot of "this is what the server looks like right now".
The command you're looking for on Linux is probably last reboot.
If you want to extract data from a machine remotely using Puppet, it would be much more appropriate to use Puppet tasks if you have Puppet Enterprise; if not, the same functionality is available using Puppet Bolt tasks.
On Puppet Enterprise
Just create a tasks directory in your module and drop a script in it to do what you want.
Or use an existing one, such as https://github.com/puppetlabs/puppetlabs-reboot (look in its tasks directory). On Puppet Enterprise you'll be able to see those tasks in the console.
Bolt
You'll need to install Bolt; you could use the same module as above. This is a good guide to getting started with Puppet Bolt:
https://www.youtube.com/watch?v=9Z7nYlspUJw
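As a rough illustration of the Bolt approach, here's a minimal sketch of pulling reboot history from a node. The hostname and the task name are assumptions for illustration only; the plain last reboot command works on most Linux boxes regardless.
# Sketch only: adjust targets, transport and credentials to your environment.
bolt command run 'last reboot | head -n 5' --targets node01.example.com
# A script dropped into your module's tasks/ directory (e.g. mymodule/tasks/last_boot.sh)
# becomes runnable as a task (this task name is hypothetical):
bolt task run mymodule::last_boot --targets node01.example.com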

Related

Trigger puppet run on update of manifest / facts

I'm working on a tool which manages WordPress instances using Puppet. The flow is the following: the user adds the data of the new WordPress installation in the web interface, and then that web interface is supposed to send a message to the Puppet master telling it to deploy the new instance to the selected machine.
Currently the setup is done via a manifest file which contains the declaration of all WordPress instances, and that is applied manually via puppet apply on the puppet agent. This brings me to my 2 questions:
Are manifests the correct way of doing this? If so, is it possible to apply them from the puppet master to a specific node instead of going to the agent?
Is it possible to automatically have a puppet run triggered once the list of instances is altered?
To answer your first question: yes, there's absolutely a way of doing this via a puppetmaster. What you have at the moment is a masterless setup, which assumes you're distributing your configuration with some kind of version control (like git) or a manual process. This is a totally legitimate way of doing things if you don't want a centralized master.
If you want to use a master, you'll need to drop your manifest into the $modulepath of your master (it varies depending on your version; you can find it by running puppet config print modulepath on your master) and then point the puppet agent at the master.
If you want to go down the master route, I'd suggest following the puppet documentation which will help you get started.
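For reference, the master-based flow boils down to roughly this (hostnames and module names here are assumptions):
# On the master: find where modules should live, and drop your wordpress module there.
puppet config print modulepath
# On each agent: point it at the master and trigger a run.
puppet agent -t --server puppet.example.com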
The second question brings me on to a philosophical argument of 'is this really what you want to do?'
Puppet traditionally (in my opinion) is a declarative config management tool that is designed to make your systems look a certain way. You write code to say 'this is how I want it to look' and Puppet will converge the system to make it look that way. What you're looking to do is more of an orchestration task (i.e. when X happens, do Y). There are ways of doing this with Puppet, like using mcollective (to trigger a puppet run) driven by a webhook, but I think there are better tools for the job.
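If you did go the mcollective route, the trigger your web interface fires would boil down to something like this (the node name is an assumption, and it presumes the mcollective puppet agent plugin is installed):
# Hedged sketch: kick off an immediate puppet run on one node from a webhook handler.
mco puppet runonce -I wordpress-target.example.com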
I'd suggest looking at ansible, saltstack or Chef's knife tool to do deploys like this.

Extract SAS Enterprise Guide into Unix Server runnable batch?

We have built a project in Enterprise Guide for the purpose of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code, which we would be able to run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to work on a Linux server an installation of WINE is required, which is almost completely out of the question on a production machine that already runs a number of other important applications.
Is there a way to specify a complete project export into SAS code, provided that I give the specific order in which to run the process flows? I have tried the ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately this ordered list itself cannot later be exported as SAS code.
Our current solution is the following:
We export each single process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines to run all the extracted code in the order that we want. This is of course a possible solution, but definitely not the most elegant one.
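For concreteness, running that %include wrapper as a Linux batch job would look roughly like this (the SAS install path, file names, and cron schedule are all assumptions):
# Run the wrapper program (the one containing the %include lines) in batch:
/opt/sas/SASFoundation/9.4/sas -sysin /sas/jobs/run_all_flows.sas -log /sas/logs/run_all_flows_$(date +%Y%m%d).log
# Example crontab entry to run it nightly at 02:00:
# 0 2 * * * /opt/sas/SASFoundation/9.4/sas -sysin /sas/jobs/run_all_flows.sas -log /sas/logs/run_all_flows.log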
Question 2:
Since I don't know how exactly the code is exported afterwards, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling the server directly, you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you: the SAS Platform Suite, which is designed in part for scheduling, and using SAS Management Console to schedule via operating system tools. Both are very helpful.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of manipulating the order of process flows - you have to move each program around, which would be tedious. It also, of course, gives you less flexibility if you need to run programs multiple times.
Question 2: As Stig Eide points out in comments, make sure your system option LRECL is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.

Set up Puppet in the first place

I am trying to understand the best practice for setting up Puppet in the first place. Let's say I have 1000 existing servers that need to be managed by Puppet.
Do I manually install the Puppet agent on each, or is there a better way?
Sorry if this question is too generic; I just want to have some idea.
1000 servers could be a lot for a single master instance. Of course it will depend on the master's specs and other factors related to the Puppet runs.
There are a few questions you need to answer first to determine how you are going to go about it, such as:
Puppet Enterprise or open source?
What is the current configuration nightmare you are trying to solve?
What is the current configuration data related to the challenge or problem you have?
What are the current business roles (e.g. web server, load balancer, database, etc.) related to the problem you have? What makes a role in terms of configuration?
I would suggest that you start small first, to learn more about the Puppet DSL and its ecosystem (master, agent, PuppetDB, console/dashboard). I also recommend you start with the free 10-node Puppet Enterprise, as it will let you focus more on the problem at hand rather than on how to configure the Puppet masters and agents, how to scale them, etc.
One more thing: install the Puppet agent everywhere you can, in noop/disabled mode, so that you at least collect facts, and run it in a masterless fashion using puppet apply when you need to. I find noop mode useful as it tells you what needs to be changed; you can then enforce changes using --no-noop. A sketch of that workflow follows.
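A minimal sketch of that noop-first, masterless pattern (the paths are the usual AIO defaults and are assumptions here):
# Keep the scheduled agent service from enforcing anything while you evaluate:
puppet agent --disable "not yet managed by puppet"
# Dry-run your manifests locally and review what would change:
puppet apply --noop --modulepath /etc/puppetlabs/code/environments/production/modules /tmp/site.pp
# Once the reported changes look right, enforce them:
puppet apply --no-noop --modulepath /etc/puppetlabs/code/environments/production/modules /tmp/site.pp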
Hope that will get you started.
To answer your question: yes, the Puppet agent would need to be installed on every node. If you are managing 1000 nodes, I would assume you have your own OS image. In this case, it's best to add the agent to the OS image and use that image on all 1000 nodes.

What is the puppet way to send configurations along with request from the puppet agent

I want to make some dynamic configuration decisions on the Puppet master side before it deploys to a Puppet agent. So I want to send a significant amount of configuration detail along with the agent's request to the master. Is there a proper way to do this in Puppet?
Regards,
Malintha Adiakri
Yes! There is Facter. This is how I use it and what I find most robust, but there are other ways to define new facts.
For example, if you want to add the role of the server, you can do:
export FACTER_ROLE=jenkins
Now the command facter role will print jenkins. Yay!
After running puppet agent, all facts known to the system will be passed to the puppetmaster. Be aware that the puppet service will not know the fact you just defined, because it runs in a different scope.
I put my facts in a .facts file and source it before the run.
This is my script that runs from cron:
#!/bin/bash
# Load the FACTER_* environment variables so they become facts for this run.
source /root/.facts
# Run the agent against the master; the exported FACTER_* variables show up as facts.
puppet agent -t --server puppetmaster.example.com --pluginsync
While the previous answer is correct, I'm opening this as a new one because it's significant. Defining FACTER_factname variables in the agent's environment is a nice and quick way to override some facts. If you wish to rely on your own facts for production purposes, you should look to custom facts instead.
In its basic form, you use it by deploying Ruby code snippets to your boxen. For an easier approach, take special note of external facts. Those are probably the best solution for your problem.
Also note that as of Facter 2 your facts can contain complex data structures, so you don't have to serialize everything into strings. If the amount of data from the agent is high, as you emphasize, that may be helpful.
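To make the external-facts suggestion concrete, here's a minimal sketch (the facts.d path is the AIO default and the fact names are made up for illustration):
# Drop a plain key=value file on the agent; each line becomes a fact.
mkdir -p /opt/puppetlabs/facter/facts.d
cat > /opt/puppetlabs/facter/facts.d/app_config.txt <<'EOF'
role=jenkins
deploy_tier=staging
EOF
facter role   # prints: jenkins
Structured (YAML/JSON) external facts work the same way if you need nested data rather than flat strings.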

Is there really no easy way to test puppet scripts on a remote machine?

I'm experimenting with Puppet scripts for deployment.
I find the hardest part about the process of writing those scripts is iteratively testing them.
I don't want to run puppet apply on my local development machine; that's liable to screw stuff up. I have a clean-slate remote box where I want to apply it. I also don't see how a puppetmaster can help me; I might be using a puppetmaster at a later point for production deployments, but for now, I just want to get my code working.
So I put together a quick shell script that would rsync the different directories from my local puppet module path to /tmp on the remote machine, and then run puppet apply. This is terribly inconvenient. It's slow, especially if we're talking about a syntax error.
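Roughly the loop I mean, for concreteness (the host and paths here are placeholders):
rsync -az --delete ./modules/ user@testbox:/tmp/puppet/modules/
rsync -az ./manifests/site.pp user@testbox:/tmp/puppet/site.pp
ssh user@testbox 'sudo puppet apply --modulepath /tmp/puppet/modules /tmp/puppet/site.pp'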
I think what I really want is something like a puppetd <-> puppetmaster connection, where puppetd on the remote machine receives an already-compiled manifest. Just an ad hoc one over an SSH connection, without having to actually set up a puppetmaster, deal with certificates, etc. Something like puppet apply user@host.
There seems to be nothing of the sort, but how do other people deal with this? As it is, the experience of working on a Puppet script is incredibly frustrating to me.
I'd recommend using Vagrant. If you're not testing the puppet master setup, you can use the built-in Puppet provisioner integration.
Once you have everything set up, you can run vagrant provision or just run puppet apply on the Vagrant VM.
Here's a related article you may find helpful as well.
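The day-to-day loop with Vagrant ends up looking roughly like this (the verification command is just an example of poking at the VM):
vagrant up                                  # boot the clean-slate VM the first time
vagrant provision                           # re-run the Puppet provisioner after editing manifests
vagrant ssh -c 'systemctl status nginx'     # check the result inside the VM
vagrant destroy -f && vagrant up            # rebuild from scratch when you want a truly clean slate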
I would also take a look at Puppet rspec tests, using rspec-puppet and puppetlabs-spec-helper. The rspec-puppet-init command will break puppet doc and Geppetto and maybe some other things due to the symlinks, and there are some issues with hiera, but the tests are otherwise easy to set up and work well, and can also be tied into Jenkins/Hudson.
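The basic setup/run loop looks something like this (it assumes the gems are in your module's Gemfile and a Rakefile that pulls in puppetlabs_spec_helper's rake tasks):
cd mymodule/
rspec-puppet-init         # scaffolds spec/ and the symlinks mentioned above
bundle exec rake spec     # compiles catalogs and runs the rspec-puppet examples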
I usually have two levels of testing for my Puppet scripts.
Unit tests for quick feedback: Written using rspec-puppet, these compile a Puppet catalog for the class/define/etc being tested, and make assertions about it. Run locally each time I make a minor change, and on the build server each time I check in. The tests run quickly (<10 seconds), and pick up syntax and dependency issues.
Functional tests to make sure it really works: Written using Cucumber with the Aruba library. When I'm finished implementing a feature and the unit tests for it pass, these tests provision a VM (using Vagrant) with the appropriate Puppet manifest(s), log in, and make assertions about the VM's state. The tests themselves look something like:
Given I am SSHed into Vagrant box "webserver"
When I type "php --version"
Then the output should include "PHP 5.4.11"
Vagrant is the most useful environment for rapid infrastructure development that I've found. It will most closely (99%) mirror your production setup, and you can account for those tiny differences in Puppet so everything works as expected. It takes about 30 minutes to get going with it and will pay you back many times over in time saved messing around with file-copy scripts :)
If it's helpful to visualize, on my desktop I have 3 terminals side by side:
Terminal 1) Editing puppet manifests, classes, ruby code, etc
Terminal 2) Running 'vagrant provision' which simply does a puppet apply along with any facts you want to pass, etc.
Terminal 3) 'vagrant ssh' into the box so I can poke around as puppet is doing its work
Hope this helps!
Why don't you want to run a puppetmaster? It was created for exactly this situation.
If you absolutely cannot run a puppetmaster, then you would have to wrap your puppet calls in another script that first downloads the files (with curl or wget) and applies them after a successful download. Given that the puppetmaster is a fairly simple application to run, I don't see how not using it would be any better.
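Such a wrapper would amount to something like this (the URL, archive layout, and paths are assumptions):
# Fetch the current manifests/modules and apply them locally only if the download succeeds.
curl -fsSL https://config.example.com/puppet-code.tar.gz -o /tmp/puppet-code.tar.gz \
  && mkdir -p /tmp/puppet && tar -xzf /tmp/puppet-code.tar.gz -C /tmp/puppet \
  && puppet apply --modulepath /tmp/puppet/modules /tmp/puppet/manifests/site.pp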
I stumbled across rump while looking at another question. If you're using git, it might be useful. There's a slide deck available.
From the README.md: "Rump helps you run Puppet locally against a Git checkout."
You may be interested in citac, a toolkit for automated testing of Puppet scripts. It is available on Github: https://github.com/citac/citac
Citac systematically executes your Puppet manifest in various configurations, imitating transient system faults, different resource execution orders, and more. The generated test reports inform you about issues with non-idempotent resources, convergence-related issues, etc.
The tool uses Docker containers for execution, hence your system remains untouched while testing. State changes are tracked during execution of the Puppet script, and detailed test reports are generated.
To get an idea of which bugs the tool is able to detect, a large-scale evaluation with more than 150 public Puppet scripts has been performed. The results are available here: http://citac.github.io/eval/
Please feel free to provide feedback, pull requests, etc. Happy testing!
