Puppet: Pass variables from Puppet agent to master?

I'm trying to configure a selection of servers using a Puppet agent/master configuration.
My problem is that this will include configuring an Apache virtual host for each server; as such, I'd like to use the agent's unique hostname within the configuration file that is downloaded from the master.
Is there a way to pass "variables" from the Puppet agent to the Puppet master? Alternatively, is there a way for configurations to "inherit" from each other, so that I can write a specific custom virtual host configuration for each host while keeping all other directives the same between hosts?
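For illustration, here is a minimal sketch of the kind of per-host virtual host being described. It relies on the built-in Facter facts (such as the agent's FQDN) that every agent already reports to the master on each run, so no separate "variable passing" mechanism is needed; the module name and file paths are assumptions:

class myweb::vhost {
  # The agent's facts are available on the master at catalog compile time,
  # so each node gets a vhost rendered with its own name.
  file { '/etc/apache2/sites-available/myweb.conf':
    ensure  => file,
    content => epp('myweb/vhost.conf.epp', {
      'servername' => $facts['networking']['fqdn'],
    }),
  }
}

# templates/vhost.conf.epp in the (hypothetical) myweb module:
<%- | String $servername | -%>
<VirtualHost *:80>
  ServerName <%= $servername %>
  DocumentRoot /var/www/html
</VirtualHost>

The shared directives live in the one template, while anything truly host-specific can be layered in via data keyed by the node's name.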

Related

How to make puppet and terraform work together?

I am creating a virtual machine in Terraform that will appear in Azure. Broadly speaking, once it's created, how can I tell Puppet that the virtual machine exists and have it do the basic config steps? I have Puppet set up with the commands I want it to run when a virtual machine is made. Can I tell it to look for a resource with a name? I am pretty clueless and have not been able to find much information on how the two work together in code.
If I were doing it on cloud infrastructure I'd install the agent, either from a local repo or by downloading and installing from the Puppet downloads site https://puppet.com/try-puppet/puppet-enterprise/download/.
Then, once the agent was installed, I'd run puppet config set server <your puppet server>
Within 30 minutes the agent should run and contact the puppet server.
If you've configured autosign https://puppet.com/docs/puppet/7/ssl_autosign.html then the server will accept the certificate request and start managing the node.
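As a concrete illustration, a minimal bootstrap script of the sort that could be passed to the Terraform-created VM as custom_data / cloud-init; the Ubuntu release package and the server name puppet.example.com are assumptions:

#!/bin/bash
set -e
# Install the Puppet agent from Puppet's release repository (Ubuntu 20.04 example)
wget https://apt.puppet.com/puppet7-release-focal.deb
dpkg -i puppet7-release-focal.deb
apt-get update && apt-get install -y puppet-agent
# Point the agent at the Puppet server
/opt/puppetlabs/bin/puppet config set server puppet.example.com --section main
# Kick off a run immediately instead of waiting for the regular 30-minute interval
/opt/puppetlabs/bin/puppet agent --test

With autosigning enabled on the server, the node then comes under management with no manual certificate-signing step.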

Run gitlab on subdomain in local network?

I installed gitlab on a raspberry pi 4 in my local network and will use it only locally. When I configure in /etc/gitlab/gitlab.rb the external_url 'http://rpi4.local' (and execute sudo gitlab-ctl reconfigure afterwards) it works. I can even configure a different port here.
But the configuration external_url 'http://gitlab.rpi4.local' does not work. Do I need to configure something else, like my /etc/hosts file?
You will need to make the name valid in DNS through some mechanism. There are multiple ways depending on your needs and your options for DNS.
As you mentioned, you can add the name to your /etc/hosts file. This should be done both on the GitLab server and on any workstation from which you wish to access GitLab (assuming Linux-based machines; the process differs for Mac or Windows).
Alternatively, use a valid DNS name such as gitlab.<a-domain-you-own> and add it to your DNS. Many domain registrars offer DNS hosting for free, or you could use a dynamic DNS service if your Raspberry Pi has a dynamic internal address. The advantage of this method is that you won't have to modify any /etc/hosts files, and all workstations will know how to reach your GitLab instance without any changes.
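As a quick illustration of the /etc/hosts route, assuming the Raspberry Pi's address is 192.168.1.50 (substitute your own), the same line goes on the GitLab server and on each Linux workstation:

192.168.1.50   gitlab.rpi4.local   rpi4.local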

Creating EC2 Instances from an AMI with different hostname for Splunk

I am trialling using Splunk to log messages from IIS across our deployment. I notice that when I spin up a new EC2 instance from a custom AMI/Image it has the same PC 'hostname' as the parent image it was created from.
If I have a splunk forwarder setup on this new server it will forward data under the same hostname as the original image, making a distinction for reporting impossible.
Does anyone know of any way that I can either dynamically set the hostname when creating an EC2 instance, OR configure Splunk so that I can specify a hostname for new forwarders?
Many thanks for any help you can give!
If you are building the AMI, just bake in a simple startup script that sets the machine hostname dynamically.
If using a prebuilt AMI, connect to the machine once it's alive and set the hostname (same script).
OR
Via Splunk: the hostname is configured in the files below. Just update these, or run the Splunk setup after you've set the hostname:
$SPLUNK_HOME/etc/system/local/inputs.conf
$SPLUNK_HOME/etc/system/local/server.conf
The script idea above also applies here (guessing you are baking the AMI with Splunk already in it).
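For reference, a minimal sketch of what the relevant entries look like once the hostname has been set (the hostname value is just an example):

$SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = web-iis-042

$SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = web-iis-042

Restart the forwarder after editing so the new identity takes effect.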
Splunk has various pieces of "stale" configuration that should not be shared across multiple instances of Splunk Enterprise or the Universal Forwarder.
You can clean up this stale data using built-in Splunk commands.
./splunk clone-prep-clear-config
See: http://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/Integrateauniversalforwarderontoasystemimage
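A minimal sketch of the order of operations when preparing the image (the install path is an assumption; substitute your own $SPLUNK_HOME):

/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clone-prep-clear-config
# Now create the AMI from this instance; each clone generates a fresh identity when Splunk first starts.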

puppet configuration help needed

I need your help to understand the best implementation approach for the requirement below:
Suppose my puppet master server name is server.example.com, which I need to configure on 500 puppet agent nodes so that they contact the puppet master. One way is to add server=server.example.com to puppet.conf on all the agent nodes, and a second way is to run the command "puppet agent --test --server server.example.com" on all agent nodes. But either of these has to be performed manually or through some kind of automation. Is there a better way?
Another option is to create a CNAME named 'puppet' for the puppet master, so that all agent nodes automatically communicate with it. But if I have multiple puppet masters in the same domain, how can I manage that?
I would highly appreciate it if someone could shed some light on the best practice for achieving this.
Thanks,
Sanjiv
The best practice is to take full advantage of Puppet's automation by adding server=server.example.com (the address of the master) to puppet.conf. Since you are dealing with 500 nodes, a manual approach is not encouraged.
By default, puppet agents check in with the master every 30 minutes. If you want to force the agents to contact the master sooner than that, use a parallel SSH or similar tool to invoke puppet agent --test
If you are considering multiple puppet masters, then you need to ensure that DNS or the proxy server is properly configured in the network and points to the right puppet master at any given time.
This might be helpful: https://docs.puppetlabs.com/guides/scaling_multiple_masters.html
You can manage the client's puppet.conf as a template, where the server setting is filled in from a Puppet variable or read from Hiera. The server name then gets propagated to your clients on their next puppet run.
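As a minimal sketch of that approach, assuming a hypothetical profile class and the puppetlabs-inifile module for the ini_setting resource; the server value would normally be overridden from Hiera via automatic parameter lookup:

class profile::puppet_agent (
  String $server = 'server.example.com',  # override in Hiera, e.g. profile::puppet_agent::server
) {
  # Manage only the server setting in the agent's puppet.conf
  ini_setting { 'puppet agent server':
    ensure  => present,
    path    => '/etc/puppetlabs/puppet/puppet.conf',
    section => 'main',
    setting => 'server',
    value   => $server,
  }
}

Once this is applied, every agent rewrites its own puppet.conf on its next run, so switching masters later is a one-line Hiera change.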

Can my Vagrant VMs use manifests with storeconfigs without a Puppet master?

I'm trying to set up a multi-VM Vagrant environment that spins up an OpenStack controller and a few OpenStack compute nodes and then provisions a VM or two on the compute nodes.
Yes, I'm talking about VMs running on VMs. It sounds a little crazy, but this multi-VM Vagrant environment has already been created at https://github.com/lorin/openstack-ansible and it works perfectly, as I describe at http://wiki.greptilian.com/openstack
I can only draw inspiration from that GitHub repo and its corresponding tech talk, however, because it uses Ansible as the Vagrant provisioner while I have a requirement to use Puppet.
I'd like to use the official Puppet Labs module for OpenStack at https://github.com/puppetlabs/puppetlabs-openstack but it uses storeconfigs, so I get errors like this because my Vagrantfile doesn't include a VM to serve as a Puppet master:
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 142 in file /tmp/vagrant-puppet/modules-0/nova/manifests/init.pp
Resource type anchor doesn't exist at /tmp/vagrant-puppet/modules-0/nova/manifests/db/mysql.pp:18 on node controller.example.com.
I suppose I could tweak my Vagrantfile to spin up a Puppet master along with the OpenStack hosts, but I'm not sure how I'd do that and it seems to introduce extra complexity into the Vagrant environment.
I'm wondering if I can do this with "masterless" Puppet instead. A post at http://semicomplete.com/presentations/puppet-at-loggly/puppet-at-loggly.pdf.html suggests it's possible, saying, "puppet --environment prerun manifests/prerun.pp ... makes storeconfigs work ... puppet --storeconfigs manifests/site.pp ... This is the main puppet run", but I'm confused about the implementation details.
Can anyone point me to a Vagrant repo that runs "masterless" Puppet but uses storeconfigs?
You'll need to configure your storeconfigs with a DB that all Vagrant VMs can reach. Loggly used Amazon RDS, but you can use other DBs, as the Puppet docs show. Assuming you have a DB that all VMs can reach, you run puppet with the storeconfigs option, and you have the correct DB connection info configured in Puppet, you should be good.
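For illustration, a minimal sketch of the legacy ActiveRecord-style storeconfigs settings in puppet.conf that this describes; the MySQL host, database name, and credentials are placeholders:

[main]
storeconfigs = true
dbadapter    = mysql
dbname       = puppet
dbserver     = db.example.com
dbuser       = puppet
dbpassword   = secret

Every VM pointed at the same database should then be able to collect resources exported by the others, with no Puppet master involved.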
