How to download puppet manifest file from master using agent?

I have an agent connected to a master in Puppet, and I need to copy the manifest file and some other resources from the master using the agent - is this possible?

I'm not sure what your use-case is here, but I do not believe this is possible.
In a simple master-agent setup, the agent sends facts to its configured master. In exchange, the master combines those facts, site-specific Hiera data, and the resource definitions in applicable manifests, compiles a catalog, and sends that catalog back to the agent. By design, I don't think agents can access uncompiled manifests. What I am more certain of is your ability to see which resources are under Puppet's management in the agent's $vardir (see the Puppet documentation on the vardir for more info), specifically inside $vardir/state. If you'd like to see the compiled catalog, that's available in $vardir/catalog.
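For example, you can locate the vardir and poke around in its state directory (the path below is the default for a root-run Puppet 4+ agent; yours may differ):

$ sudo puppet config print vardir
/opt/puppetlabs/puppet/cache
$ sudo ls /opt/puppetlabs/puppet/cache/state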
Depending on what you're trying to achieve, maybe it would be enough to see the dependency model on a given agent. You can generate the directed acyclic graph with puppet agent -t --graph, which will populate $vardir/state/graphs with Graphviz dot files. With Graphviz installed, you can then render visuals in formats like SVG, as shown below.
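For instance (again assuming the default Puppet 4+ vardir):

$ sudo puppet agent -t --graph
$ cd /opt/puppetlabs/puppet/cache/state/graphs
$ dot expanded_relationships.dot -Tsvg -o expanded_relationships.svg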
Not quite the full output of the manifests used to compile an agent's catalog, but there's a lot to chew on there.

Related

how to get an infrastructure snapshot using terraform

Is there a way to extract infrastructure details using Terraform?
e.g. get a list of Linux servers' versions, firewall policies, open ports, installed software packages, etc.
My aim is to generate a block of code that describes the current server setup; then I can validate a checklist against that code, so security loopholes can be identified and fixed.
I'm not sure I completely understand your question, but there is no automated way to extract all the details of infrastructure that is not already managed by Terraform. Nevertheless, there is a terraform import command with which you can import an existing resource into your state file (see the Terraform docs).
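For example, to adopt an existing resource into state (the resource address and instance ID here are hypothetical):

$ terraform import aws_instance.web i-0123456789abcdef0

Note that import only updates the state file; you still have to write the matching resource block in your configuration yourself.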
Btw, if you are using Oracle Cloud, the Resource Discovery feature could be an option.

How to call a puppet provider method from puppet manifest?

I'm using the ibm_installation_manager module from the Puppet Forge, and it is a bit basic, because IBM wrote Installation Manager at a time when idempotency wasn't given much thought.
ref: https://forge.puppet.com/puppetlabs/ibm_installation_manager
As such, it does not cater nicely for upgrades: the module will not detect that an upgrade is needed, stop existing processes, do the upgrade, and then start the processes again. It will just detect whether an upgrade is needed and try to install the desired version; if that constitutes an upgrade, great, but it will probably fail due to running instances.
So I need to implement some "stop processes" pre-upgrade functionality.
I need to mention at this point I'm new to ruby and fairly new to puppet.
The provider that the module uses (imcl.rb) has an exists? method.
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
Or how would you approach this problem?
Something like imcl.exists(ibm_pkg["my_imcl_pkg_resource"])
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
No, it is not possible, at least not in any useful way. Your manifests describe how to build a catalog of resources describing the target state of the machine. In a master / agent setup, this happens on the master. The catalog is then used as input to a separate step, in which it is transferred to the target machine and applied there. It is in this second step that providers are engaged.
To the extent that you want the contents of your catalogs to be influenced by the current state of the target machine, the Puppet mechanism for that is to convey the needed state details to the catalog builder in the form of facts. It is relatively straightforward to add your own facts. Indeed, there are at least two distinct, non-exclusive mechanisms, going under the names "external facts" and "custom facts".
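As a sketch, suppose you added a fact named imcl_version that reports the currently installed Installation Manager version (the fact, the stop script, and the version string below are all hypothetical). Your manifest could then arrange to stop the running instances only when an upgrade is actually coming:

$desired = '1.8.9'  # hypothetical target version
# $facts['imcl_version'] is undef (falsey) when nothing is installed yet,
# so this only fires on an upgrade, not a fresh install
if $facts['imcl_version'] and $facts['imcl_version'] != $desired {
  exec { 'stop-was-instances':
    command => '/usr/local/bin/stop-was.sh',  # hypothetical stop script
    before  => Ibm_pkg['my_imcl_pkg_resource'],
  }
}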

Possible to have node local configuration file

Is it possible to use a node-local config file (Hiera?) that is used by the puppet master to compile the update list during a Puppet run?
My use case is that Puppet will make changes to users' .bashrc files and to the users' home directories, but I would like to be able to control which users are affected using a file on the actual node itself, not in the site.pp manifest.
Is it possible to use a node-local config file (Hiera?) that is used by the puppet master to compile the update list during a Puppet run?
Sure, there are various ways to do this.
My use case is that Puppet will make changes to users' .bashrc files and to the users' home directories, but I would like to be able to control which users are affected using a file on the actual node itself, not in the site.pp manifest.
All information the master has about the current state of the target node comes in the form of node facts, provided to it by the node in its catalog request. A local file under local control, whose contents should be used to influence the contents of the node's own catalog, would fall into that category. Puppet supports structured facts (facts whose values have arbitrarily-nested list and/or hash structure), which should be sufficient for communicating the needed data to the master.
There are two different ways to add your own facts to those that Puppet will collect by default:
Write a Ruby plugin for Facter, and let Puppet distribute it automatically to nodes, or
Write an external fact program or script in the language of your choice, and distribute it to nodes as an ordinary file resource.
Either variety could read your data file and emit a corresponding fact (or facts) in appropriate form. The Facter documentation contains details about how to write facts of both kinds; "custom facts" (Facter plugins written in Ruby) integrate a bit more cleanly, but "external facts" work almost as well and are easier for people who are unfamiliar with Ruby.
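For example, an external fact can even be a plain YAML file dropped into Facter's facts.d directory on the node (the path is the Facter default on Puppet 4+; the fact name and user list are hypothetical):

# /etc/puppetlabs/facter/facts.d/local_users.yaml
local_users:
  - alice
  - bob

During compilation the master sees this as a structured fact, and a manifest can iterate over it:

$facts['local_users'].each |$user| {
  file { "/home/${user}/.bashrc":
    ensure => file,
    source => 'puppet:///modules/profile/bashrc',  # hypothetical module file
  }
}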
In principle, you could also write a full-blown custom type and accompanying provider, and let the provider, which runs on the target node, take care of reading the appropriate local files. This would be a lot more work, and it would require structuring the solution a bit differently than you described. I do not recommend it for your problem, but I mention it for completeness.

How to use multiple different puppet masters from one puppet agent?

We need one puppet agent to contact several different puppet masters.
Reason: there are different groups that create different and independent sets of manifests.
Possible groups and their tasks
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own puppet master - the data (manifests and appropriate data) should be strictly separated. If possible, one group should not even see / have access to the manifests of the others (we are using MAC on the puppet agent OSes).
Thoughts and ideas that all failed:
using (only) Hiera is not as flexible as needed - there is the need to have different manifests.
r10k: supports more than one environment, but each environment can only access one set of manifests.
multiple but identical puppet servers using e.g. DNS round robin: this is the other way round; we need different puppet masters.
Some ways that might be possible but...
running multiple instances of puppet agents. That 'feels' strange. Advantage: the access rights can be limited as needed (e.g. the application puppet agent can run under the application user).
patching puppet that it can handle more than one puppet master. Disadvantage: might be some work.
using other mechanisms to split responsibility. Example: use different git-repositories. Create one puppet master. The puppet master pulls all the different repositories and serves the manifests.
My questions:
Is there a straightforward way to implement this requirement with puppet?
If not, is there some best practice how to do this?
I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master, and utilizing environments would be effectively the exact same situation (different masters providing different sets of modules/data). That said, this can be achieved by implementing a standard multi-master infrastructure: one CA master for cert signing, plus multiple compile masters with certs signed by that CA master and configured to forward cert traffic to it, each serving whatever you need. You then end up having to specify which master you want to check in to on each run (a cronjob or some other approach), and each check-in has the potential to change settings set by another master (kinda eliminating the hardening/security concept).
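As a sketch, each agent's puppet.conf could pin the shared CA while selecting a particular compile master (hostnames here are hypothetical):

[agent]
  # the single CA master all certs chain to
  ca_server = puppet-ca.example.com
  # whichever compile master this agent's runs should target
  server = security-master.example.com

The server setting can also be overridden per run with puppet agent -t --server=<host>.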
I would urge you to think more deeply about how to coordinate your varied groups (e.g. git repos for each division's Hiera data and modules, with access control) so that a central master can serve your needs, and access to that master would be the only way to get data/modules from everywhere.
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet, Inc. may even be able to offer consulting to help you get it right.
There are likely other approaches too, just fyi.
I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can test manifest changes instantly - there's no requirement to commit, push, and r10k-deploy the environment as there is if you're just using directory environments and a single (remote) puppet server.
I've found the best way to do that is to just vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all manage package {} resources via apt, etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
--server=puppet.dev.example.com \
--confdir=/etc/puppetlabs/puppet-dev # Runs against dev puppet server

Trigger puppet run on update of manifest / facts

I'm working on a tool which manages WordPress instances using puppet. The flow is the following: the user adds the data for the new WordPress installation in the web interface, and the web interface is then supposed to send a message to the puppet master telling it to deploy the instance to the selected machine.
Currently the setup is done via a manifest file which contains the declaration of all WordPress instances, and that is applied manually via puppet apply on the puppet agent. This brings me to my 2 questions:
Are manifests the correct way of doing this? If so, is it possible to apply them from the puppet master to a specific node instead of going to the agent?
Is it possible to automatically have a puppet run triggered once the list of instances is altered?
To answer your first question: yes, there's absolutely a way of doing this via a puppet master. What you have at the moment is a masterless setup, which assumes you're distributing your configuration with some kind of version control (like git) or a manual process. This is a totally legitimate way of doing things if you don't want a centralized master.
If you want to use a master, you'll need to drop your manifest into the $modulepath of your master (it varies depending on your version; you can find it by running puppet config print modulepath on your master) and then point the puppet agent at the master.
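For example, on a default Puppet 4+ master (your output may differ):

$ puppet config print modulepath --section master
/etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules

and on the node, point the agent at the master (hostname hypothetical):

$ sudo puppet agent -t --server=puppet.example.com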
If you want to go down the master route, I'd suggest following the puppet documentation which will help you get started.
The second question brings me to a philosophical argument of 'is this really what you want to do?'
Puppet traditionally (in my opinion) is a declarative config management tool that is designed to make your systems look a certain way. You write code that says 'this is how I want it to look', and Puppet will converge the system to make it look that way. What you're looking to do is more of an orchestration task (i.e. when X happens, do Y). There are ways of doing this with Puppet, like using mcollective (to trigger a puppet run) driven by a webhook, but I think there are better tools for the job.
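For instance, with the mcollective-puppet-agent plugin installed, a webhook handler could trigger a run on a single node (hostname hypothetical):

$ mco puppet runonce -I wordpress01.example.com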
I'd suggest looking at ansible, saltstack or Chef's knife tool to do deploys like this.
