I am deciding between using Puppet or Chef to provision matching development and production environments. I plan to regularly add virtual hosts to Apache. I have looked through the docs of both and I am not certain about this: if I add a virtual host, does the server need to be re-provisioned entirely (destroyed and rebuilt) for the new virtual host to become active? Or can I simply reboot the machine and have the new changes to the Puppet or Chef manifests applied?
Nope, provisioning can run over and over again, even without a reboot. Chef, for example, runs every 30 minutes by default, so a newly defined virtual host is picked up on the next run (or whenever you trigger a run by hand).
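On the Puppet side, a minimal sketch of what such a re-runnable virtual host definition might look like, assuming the puppetlabs-apache module (the site name and docroot are placeholders):

    apache::vhost { 'newsite.example.com':
      port    => 80,
      docroot => '/var/www/newsite',
    }

Adding another apache::vhost resource like this and re-running the agent (or vagrant provision on a Vagrant box) creates the vhost and reloads Apache; nothing needs to be destroyed or rebuilt.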
I am creating a virtual machine in Terraform that will appear in Azure. Broadly speaking, once it's created, how can I tell Puppet that the virtual machine exists and have it do the basic config steps? I have Puppet set up with the commands I want it to run when a virtual machine is created. Can I tell it to look for a resource with a given name? I am pretty clueless and have not been able to find much information on how, in code, the two work together.
If I were doing this on cloud infrastructure I'd install the agent, either from a local repo or by downloading and installing it from the Puppet downloads site https://puppet.com/try-puppet/puppet-enterprise/download/.
Then, once the agent was installed, I'd run puppet config set server <your puppet server>
Within 30 minutes the agent should run and contact the Puppet server.
If you've configured autosign https://puppet.com/docs/puppet/7/ssl_autosign.html then the server will accept the certificate request and start managing the node.
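As a concrete illustration, a bootstrap script along these lines could be handed to the VM at creation time (for example via Terraform's custom_data) so the agent installs itself and points at your server on first boot. This is only a sketch for an EL-family image; the release RPM, server name, and paths are placeholders to check against your OS and Puppet version:

    #!/bin/bash
    # Install the Puppet agent from the public Puppet repo (placeholder release RPM).
    rpm -Uvh https://yum.puppet.com/puppet7-release-el-8.noarch.rpm
    yum install -y puppet-agent

    # Point the agent at the Puppet server and start it; with autosign enabled,
    # the certificate request is accepted and the node is managed from the first run.
    /opt/puppetlabs/bin/puppet config set server puppet.example.com --section main
    systemctl enable --now puppet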
I've got several Virtual Hosts set up on my Apache (Linux) server. I'd like each Virtual Host to have its own set of cronjobs so that they can be managed independently without the risk of one Virtual Host conflicting with another. Is there a way to do something like this?
The only way I can see is to create local users on the machine where those virtual hosts reside and give each user the rights to manage its particular virtual host (config, cron, etc.). Each user then gets its own crontab, so the jobs for one virtual host can't interfere with another's.
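If those hosts are already managed with Puppet (as elsewhere in this thread), a rough sketch of that per-vhost arrangement could look like the following; the user name, command, and schedule are all placeholders:

    # One local user per virtual host, each with its own crontab entry.
    user { 'site1':
      ensure     => present,
      managehome => true,
    }

    cron { 'site1-cleanup':
      ensure  => present,
      user    => 'site1',
      command => '/usr/bin/php /var/www/site1/cron.php',
      minute  => '*/15',
      require => User['site1'],
    }

You can confirm the entry landed in the right place with crontab -u site1 -l.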
I'm using Vagrant to deploy my VMs and my current setup looks like this:
server1 = VM1, VM2, VM3 ( main production server )
server2 = VM1, VM2, VM3 ( backup server )
My question is: can I somehow sync the VMs across the two physical servers so that if one fails I can keep running the VMs on the second one without any downtime?
I know there is the Synced Folders option within Vagrant, but that is not what I need. I basically need to clone the VMs from server1 to server2 periodically, so that in case of downtime they can keep running on the backup server until the main one comes back up.
Thanks a bunch.
Vagrant doesn't inherently support this, since its intended audience is really development environments. It sounds like you're looking for something more like what VMware vSphere does.
I have just started studying Puppet. I understand that Puppet can work in standalone (masterless) mode and in a master/agent configuration.
Ideal Use Case
I have a server on DigitalOcean, and I would like to use the Puppet GUI to manage that remote server from my Mac (if possible without running a VM). Is this possible, or am I obliged to rent another server to run as a master (if I want to use the Puppet GUI)?
I'm trying to set up a multi-VM Vagrant environment that spins up an OpenStack controller and a few OpenStack compute nodes and then provisions a VM or two on the compute nodes.
Yes, I'm talking about VMs running on VMs. It sounds a little crazy, but this multi-VM Vagrant environment has already been created at https://github.com/lorin/openstack-ansible and it works perfectly, as I describe at http://wiki.greptilian.com/openstack
I can only draw inspiration from that GitHub repo and its corresponding tech talk, however, because it uses Ansible as the Vagrant provisioner while I have a requirement to use Puppet.
I'd like to use the official Puppet Labs module for OpenStack at https://github.com/puppetlabs/puppetlabs-openstack but it uses storeconfigs, so I get errors like this because my Vagrantfile doesn't include a VM to serve as a Puppet master:
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 142 in file /tmp/vagrant-puppet/modules-0/nova/manifests/init.pp
Resource type anchor doesn't exist at /tmp/vagrant-puppet/modules-0/nova/manifests/db/mysql.pp:18 on node controller.example.com.
I suppose I could tweak my Vagrantfile to spin up a Puppet master along with the OpenStack hosts, but I'm not sure how I'd do that and it seems to introduce extra complexity into the Vagrant environment.
I'm wondering if I can do this with "masterless" Puppet instead. A post at http://semicomplete.com/presentations/puppet-at-loggly/puppet-at-loggly.pdf.html suggests it's possible, saying, "puppet --environment prerun manifests/prerun.pp ... makes storeconfigs work ... puppet --storeconfigs manifests/site.pp ... This is the main puppet run", but I'm confused about the implementation details.
Can anyone point me to a Vagrant repo that runs "masterless" Puppet but uses storeconfigs?
You'll need to configure storeconfigs with a database that all of the Vagrant VMs can reach. Loggly used Amazon RDS, but you can use other databases, as the Puppet docs show. Assuming you have a DB every VM can reach, you run puppet with the storeconfigs option, and the correct DB connection info is configured in Puppet, you should be good.
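For reference, the ActiveRecord-backed storeconfigs settings of that era lived in puppet.conf on each node. A rough sketch is below, but the hostname and credentials are placeholders, and the setting names should be checked against the configuration reference for your Puppet version:

    [main]
    # Shared database that every Vagrant VM can reach (placeholder host and credentials).
    storeconfigs = true
    dbadapter    = mysql
    dbserver     = db.example.com
    dbname       = puppet
    dbuser       = puppet
    dbpassword   = changeme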