Is it possible to use a node-local config file (Hiera?) that is used by the puppet master to compile the update list during a puppet run?
My use case is that Puppet will make changes to users' .bashrc files and to their home directories, but I would like to be able to control which users are affected using a file on the actual node itself, not in the site.pp manifest.
Sure, there are various ways to do this.
All information the master has about the current state of the target node comes in the form of node facts, provided to it by the node in its catalog request. A local file under local control, whose contents should be used to influence the contents of the node's own catalog, would fall into that category. Puppet supports structured facts (facts whose values have arbitrarily-nested list and/or hash structure), which should be sufficient for communicating the needed data to the master.
There are two different ways to add your own facts to those that Puppet will collect by default:
1. Write a Ruby plugin for Facter, and let Puppet distribute it automatically to nodes, or
2. Write an external fact program or script in the language of your choice, and distribute it to nodes as an ordinary file resource.
Either variety could read your data file and emit a corresponding fact (or facts) in appropriate form. The Facter documentation contains details about how to write facts of both kinds; "custom facts" (Facter plugins written in Ruby) integrate a bit more cleanly, but "external facts" work almost as well and are easier for people who are unfamiliar with Ruby.
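As a concrete sketch, an external fact can be as simple as a shell script. Here the file path /etc/managed_users.txt and the fact name managed_users are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical external fact: builds a "managed_users" fact from a
# node-local, locally controlled user list (one username per line).
# Drop this script, made executable, into Facter's external-facts
# directory (e.g. /etc/puppetlabs/facter/facts.d/ on modern installs).

emit_managed_users() {
  # $1: path to the user list file; emit nothing if it is absent
  [ -f "$1" ] || return 0
  echo "managed_users=$(paste -sd, "$1")"
}

emit_managed_users /etc/managed_users.txt
```

Facter treats each key=value line the script prints as a fact, so the master sees the value at catalog compilation time.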
In principle, you could also write a full-blown custom type and accompanying provider, and let the provider, which runs on the target node, take care of reading the appropriate local files. This would be a lot more work, and it would require structuring the solution a bit differently than you described. I do not recommend it for your problem, but I mention it for completeness.
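To tie this back to the original use case: once the node publishes such a fact, the manifests compiled on the master can iterate over it. A sketch, assuming a fact named managed_users holding an array of usernames and a hypothetical profile module:

```puppet
# Assumes the node supplies a structured fact 'managed_users'
# (an array of usernames); fact name and module are hypothetical.
$users = $facts['managed_users']

$users.each |String $user| {
  file { "/home/${user}/.bashrc":
    ensure => file,
    source => 'puppet:///modules/profile/bashrc',
    owner  => $user,
  }
}
```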
I have an agent connected to a master in Puppet, and I need to copy a manifest file and some other resources from the master using the agent. Is this possible?
I'm not sure what your use-case is here, but I do not believe this is possible.
In a simple master-agent setup, the agent sends facts to its configured master. In exchange, the master combines those facts, site-specific hiera data, and resource definitions in applicable manifests, compiles a catalog, and sends that catalog to the agent. By design, I don't think agents can access uncompiled manifests. However, where I am more certain is in your ability to see which resources are under Puppet's management in the agent's $vardir, more specifically inside $vardir/state. If you'd like to see the compiled catalog, that's available in $vardir/catalog.
Depending on what you're trying to achieve, maybe it would be enough to see the dependency model on a given agent. You can generate the directed acyclic graph with puppet agent -t --graph, which will populate $vardir/state/graphs with Graphviz dot files. With Graphviz installed, you can then generate visuals in formats like SVG by running dot expanded_relationships.dot -Tsvg -o expanded_relationships.svg
Not quite the full output of the manifests used to compile an agent's catalog, but there's a lot to chew on there.
I'm working on a tool which manages WordPress instances using puppet. The flow is the following: the user adds the data of the new WordPress installation in the web interface and then that web interface is supposed to send a message to the puppet master to tell it to deploy it to the selected machine.
Currently the setup is done via a manifest file which contains the declaration of all WordPress instances, and that is applied manually via puppet apply on the puppet agent. This brings me to my 2 questions:
Are manifests the correct way of doing this? If so, is it possible to apply them from the puppet master to a specific node instead of going to the agent?
Is it possible to automatically have a puppet run triggered once the list of instances is altered?
To answer your first question: yes, there's absolutely a way of doing this via a puppet master. What you have at the moment is a masterless setup, which assumes you're distributing your configuration with some kind of version control (like git) or a manual process. This is a totally legitimate way of doing things if you don't want a centralized master.
If you want to use a master, you'll need to drop your manifest in the $modulepath of your master (it varies depending on your version, you can find it using puppet config print modulepath on your master) and then point the puppet agent at the master.
If you want to go down the master route, I'd suggest following the puppet documentation which will help you get started.
The second question brings me to a philosophical argument of 'is this really what you want to do?'
Puppet traditionally (in my opinion) is a declarative config management tool that is designed to make your systems look a certain way. You write code to determine 'this is how I want it to look' and Puppet will converge to make it look that way. What you're looking to do is more of an orchestration task (i.e. when X, do Y). There are ways of doing this with Puppet, like using mcollective (to trigger a puppet run) driven by a webhook, but I think there are better tools for the job.
I'd suggest looking at ansible, saltstack or Chef's knife tool to do deploys like this.
Well, the question is not new, but I am still unable to find a nice solution.
I am distributing binaries (100-300 MB files) via the puppet fileserver, and it performs really badly, I'm sure because of the MD5 checks. I now have more than 100 servers, and my puppet master works really hard to handle all that MD5 computation. In Puppet 3.x, the checksum attribute for file {} does not work. I'm unable to upgrade to Puppet 4.x, and I have no chance to change the flow: files should come from the puppet fileserver.
So I can't believe that there is no custom file type with a working checksum option, but I can't find one :(
Or maybe there is some other way to download files from the puppet fileserver?
Any advice will help!
rsync, or packing the files as a native package, are impossible options for me.
It is indeed reasonable to suppose that using the default checksum algorithm (MD5) when managing large files will have a substantial performance impact. The File resource has a checksum attribute that is supposed to be usable to specify an alternative checksumming algorithm among those supported by Puppet (some of which are not actually checksums per se), but it was buggy in many versions of Puppet 3. At this time, it does not appear that the fix implemented in Puppet 4 has been backported to the Puppet 3 series.
If you need only to distribute files, and don't care about afterward updating them or maintaining their consistency via Puppet, then you could consider turning off checksumming altogether. That might look something like this:
file { '/path/to/bigfile.bin':
  ensure   => 'file',
  source   => 'puppet:///modules/mymodule/bigfile.bin',
  owner    => 'root',
  group    => 'root',
  mode     => '0644',
  checksum => 'none',
  replace  => false,
}
If you do want to manage existing files, however, then Puppet needs a way to determine whether a file already present on the node is up to date. That's one of the two main purposes of checksumming. If you insist on distributing the file via the Puppet file server, and you are stuck on Puppet 3, then I'm afraid you are out of luck as far as lightening the load. Puppet's file server is tightly integrated with the File resource type, and not intended to serve general purposes. To the best of my knowledge, there is no third-party resource type that leverages it. In any case, the file server itself is a major contributor to the problem of File's checksum parameter not working -- buggy versions do not perform any type of checksumming other than MD5.
As an alternative, you might consider packaging your large file in your system's native packaging format, dropping it in your internal package repository, and managing the package (via a Package resource) instead of managing the file directly. That does get away from distributing it via the file server, but that's pretty much the point.
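On the Puppet side, the packaging approach might look like this (the package name and the internal repository are assumptions; building and publishing the package happens outside Puppet):

```puppet
# Hypothetical: bigfile.bin packaged as 'bigfile-data' in an internal
# yum/apt repository reachable from the nodes.
package { 'bigfile-data':
  ensure => installed,
}
```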
How efficient is puppet with handling large files? To give you a concrete example:
Let's assume we're dealing with configuration data (stored in files) in the order of gigabytes. Puppet needs to ensure that the files are up-to-date with every agent run.
Question: Is puppet performing some file digest type of operation beforehand, or just dummy-copying every config file during agent runs?
When using file { 'name': source => <URL> }, the file content is not sent through the network unless there is a checksum mismatch between master and agent. The default checksum type is md5.
Beware of the content property for file. Its value is part of the catalog. Don't assign it with contents of large files via the file() or template() functions.
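To illustrate the difference (module name is hypothetical): the first resource ships only a checksum with the catalog, while the commented-out second variant would embed the entire file body in the catalog itself.

```puppet
# Preferred for big files: content travels out-of-band, and only
# when the agent's checksum does not match the master's.
file { '/opt/data/big.tar.gz':
  ensure => file,
  source => 'puppet:///modules/mymodule/big.tar.gz',
}

# Avoid: this inlines the whole file into the compiled catalog.
# file { '/opt/data/big.tar.gz':
#   ensure  => file,
#   content => file('mymodule/big.tar.gz'),
# }
```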
So yes, you can technically manage files of arbitrary size through Puppet. In practice, I try to avoid it, because all of Puppet's files should be part of a git repo or similar. Don't push your tarballs inside there. Puppet can deploy them by other means (packages, HTTP, ...).
I'm not entirely certain how Puppet's file server works in the latest releases, but in previous versions Puppet read the whole file into memory, and that is why it was not recommended to use the file server to transfer files larger than 1 GB. I suggest you go through these answers and see if they make sense: https://serverfault.com/a/398133
I want to make some dynamic configuration changes on the puppet master side before it performs a deployment on a puppet agent. So I want to send a significant amount of configuration details along with the agent's request to the master. Is there a proper way to do this in Puppet?
Regards,
Malintha Adiakri
Yes! There is Facter. This is how I use it, and what I find most robust, but there are other ways to define new facts.
For example, if you want to add the role of the server, you can do
export FACTER_ROLE=jenkins
Now the command facter role will print jenkins. Yay!
After running puppet agent, all facts known to the system will be passed to the puppet master. Be aware that the puppet service will not know the fact you just defined, because it runs in a different scope.
I put my facts in the file /root/.facts and source it before applying.
This is my script that runs from cron:
#!/bin/bash
source /root/.facts
puppet agent -t --server puppetmaster.example.com --pluginsync
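For completeness, the sourced /root/.facts file is just a list of exports (the fact names below are examples):

```shell
# Hypothetical /root/.facts: every FACTER_* variable exported here
# becomes a fact for any puppet run started from the same shell.
export FACTER_ROLE=jenkins
export FACTER_DATACENTER=eu-west
```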
While the previous answer is correct, I'm opening this as a new one because it's significant. Defining FACTER_factname variables in the agent's environment is a nice and quick way to override some facts. If you wish to rely on your own facts for production purposes, you should look to custom facts instead.
In its basic form, you use it by deploying Ruby code snippets to your boxen. For an easier approach, take special note of external facts. Those are probably the best solution for your problem.
Also note that as of Facter 2 you can contain complex data structures in your facts and don't have to serialize everything into strings. If the amount of data from the agent is high, as you emphasize, that may be helpful.
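For example, a structured external fact can be dropped on the agent as a plain YAML file, with no Ruby involved (the path and fact name here are illustrative):

```yaml
# Hypothetical /etc/puppetlabs/facter/facts.d/users.yaml
# Available on the master as $facts['managed_users'].
managed_users:
  - alice
  - bob
```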