Hiera lookup through wildcards or regex - Puppet

I have a question about Puppet/Hiera, but first a little background about our infrastructure. We currently have some file/registry resources which we use to manage the registry and file system on our Azure virtual machines. A resource group may contain one or many virtual machines. The naming convention for the virtual machines in a resource group follows the pattern DCC-123456-01A, ...-02A, etc. In Hiera we have the information shown below.
  - name: "Root file system Customer Specific"
    path: "customer/%{trusted.certname}/file_system.json"
  - name: "hotfixes customer specific"
    path: "customer/%{trusted.certname}/hotfixes.json"
  - name: "Customer Specific Registry Keys"
    path: "customer/%{trusted.certname}/registry.json"
As you can see, we have created customer-specific .json files. I don't want to create a separate folder for each virtual machine in a resource group, as I have already done for a couple of them in the customer-specific folders. Instead, if Hiera supports some kind of wildcard or regex, I could make only one entry.

Hiera supports globs for file paths. They are documented at https://puppet.com/docs/puppet/latest/hiera_config_yaml_5.html#specifying_file_paths.
With a glob, you should be able to do something like:
  - name: "customer specific files"
    glob: "customer/%{trusted.certname}/*.json"
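Pulling that together, a complete Hiera 5 config using a glob might look like the following sketch; the datadir value and the json_data backend are assumptions (based on the .json files mentioned above), so adjust them to your control repo:

```yaml
# hiera.yaml (version 5) - sketch only; datadir is an assumption
version: 5
defaults:
  datadir: data

hierarchy:
  - name: "Customer specific files"
    glob: "customer/%{trusted.certname}/*.json"
    data_hash: json_data
```

With this single entry, every .json file under the node's customer directory is loaded, so new files are picked up without editing hiera.yaml again.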

Related

How do I nest Terraform stacks beyond top level and module

Terraform addresses re-use of components via Modules. So I might put the definition for an AWS Autoscale Group in a module and then have multiple top level resource files use that ASG. Fine so far.
My question is: how to use Terraform to group and organize multiple top level resource files? In other words, what is the next level of organization?
We have a system that has multiple applications...each application would correspond to a TF resource file and those resource files would use the modules. We have different customers that use different sets of the applications so we need to keep them in their own resource files.
We're asking if there is a TF concept for deploying multiple top level resource files (applications to us).
At some point, you can't, or it doesn't make sense to, abstract any further. You will always have a top-level resource file (e.g. main.tf) describing which modules to use. You can organize these top-level resource files in a couple of ways:
Use Terraform Workspaces
You can use workspaces - in your case, maybe one per client name. Each workspace has its own backing Terraform state. You can then use the terraform.workspace variable in your Terraform code. Workspaces can also be used to target different environments.
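As a sketch of the workspace approach (the module path, names, and client names here are hypothetical, not from the original setup):

```hcl
# Select per-client behaviour from the active workspace.
locals {
  client = terraform.workspace   # e.g. "clienta" or "clientb"
}

module "asg" {
  source = "./modules/asg"       # hypothetical module path
  name   = "app-${local.client}" # one ASG name per client
}
```

You would create and switch workspaces with `terraform workspace new clienta` and `terraform workspace select clienta`; each workspace keeps its own state.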
Use Separate Statefiles
Have one top-level statefile for each of your clients, e.g. clienta.main.tf, clientb.main.tf, etc. You could keep them all in the same repository and use a script to run them individually, or in whatever pattern you prefer; or you could have one repository per client.
You can also combine workspaces with separate statefiles to target individual environments, e.g. staging and production, for each client. The Terraform docs go into more detail about workspaces and some of their downsides.
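The "script to run them all" could be as simple as the following sketch, assuming a hypothetical layout with one directory per client (clients/clienta/main.tf, clients/clientb/main.tf, and so on):

```shell
# Apply every client's top-level configuration in turn.
# The clients/ directory layout is an assumption, not from the question.
apply_all_clients() {
    for client in clients/*/; do
        [ -d "$client" ] || continue    # skip if no clients/ dir exists
        echo "==> applying $client"
        terraform -chdir="$client" init -input=false || return 1
        terraform -chdir="$client" apply -auto-approve || return 1
    done
}
```

Run it from the repository root; Terraform's -chdir flag (0.14+) keeps each client's state and .terraform directory inside its own folder.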

Copy files from one Azure VM to another with a file watch

I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even if default NSGs are applied). So you wouldn't have to do anything special to get that type of communication working.
To answer the second part, you might want to consider just using the built-in FCI (File Classification Infrastructure) rules to execute a short script that does the copy.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS/SMB. It really depends on why you want a copy of the file on two servers.
Hope that helps!
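As a minimal sketch of the file-watch-and-copy side (pure standard library; the polling approach and all paths are assumptions, since the question doesn't name a tool - the destination would be the mapped drive or a UNC share):

```python
import shutil
import time
from pathlib import Path

def sync_once(src: Path, dst: Path) -> list:
    """Copy files from src that are missing in dst or newer than dst's copy.

    Returns the names of the files copied on this pass."""
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        if not f.is_file():
            continue
        target = dst / f.name
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied

def watch(src, dst, interval=5.0):
    """Poll the drop folder forever, mirroring new files to the destination."""
    while True:
        sync_once(Path(src), Path(dst))
        time.sleep(interval)
```

For example, watch(r"C:\drop", r"\\receiver\share") would mirror the drop folder to a (hypothetical) share on the receiving VM every five seconds.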

How to reuse code with a dynamic hiera.yaml in several projects?

I have a single puppet master on which puppet modules reside.
I want to use the same code and a single Puppet master to deploy to different environments for different projects. To store the data I am using Hiera. The challenge is that some of the data is project specific while the code is the same.
Is there a way to select a project-specific file in the Hiera hierarchy at run time? If I run Puppet for project A, it should pick up project A's variables from the hierarchy, and for project B it should pick up project B's data.
This can be achieved by setting up multiple Puppet masters. How can we do it using a single Puppet master?
It is entirely possible! In the hiera.yaml file, you can set up custom hierarchies based on facts, such as:
---
:hierarchy:
  - "%{module_name}/%{::fqdn}"
  - "%{module_name}/%{::domain}"
  - "%{module_name}/global"
  - "global"
In this case, if you were to give distinct domain names to your environments (such as dev.site, prod.site, test.site, etc.), different hiera files would be looked up. It works with any fact that could be useful (for example, the network or environment facts).
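If the environments can't be distinguished by domain, a custom fact works the same way. A sketch, assuming a hypothetical external fact named project (e.g. dropped into /etc/facter/facts.d/project.txt as project=projecta on each node):

```yaml
---
:hierarchy:
  - "%{::project}/%{::fqdn}"
  - "%{::project}/common"
  - "global"
```

Nodes reporting project=projecta then resolve lookups from the projecta/... data files first, while the module code stays identical across projects.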

How do I create several private virtual machine images using Azure ARM?

I want to import a number of private virtual machines that only I can launch using the ARM REST API.
How do I do that? I cannot find instructions.
This question is a little unclear - do you have a number of already pre-defined virtual machine images that you want to start up, is it multiple copies of the same machine for a load balanced scenario or something else?
Also, you say "only I can launch" - what do you mean by that? By definition, when you describe your resources using Azure Resource Manager, you're essentially making a desired-state configuration that you then deploy to Azure, and it will create all those machines for you.
If it's simply a question of creating the configuration file, you can try tools such as http://Armviz.io to set it up. Alternatively, if you already have a group of resources that you'd like to capture into a template, go here:
http://capturegroup.azurewebsites.net
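For reference, an empty ARM deployment template skeleton looks like this (the schema URL varies by API version); tools like those above produce documents of this shape, with your machines listed under resources:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```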

Can any "lxc-*" commands list the searching template path?

Can any lxc-* commands list the template search path? In some OSs the path is /usr/share/lxc/templates/, while in others it may be /usr/local/share/lxc/templates/.
This can't be done with an LXC command, since where the templates are stored depends on how LXC was installed and which storage locations were chosen. So to create a container, you should pick a disk and a location yourself.
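Lacking a dedicated command, one workaround is to simply check the common install prefixes; the two paths below are the usual defaults and are not guaranteed on every distribution:

```shell
# Print whichever candidate template directories actually exist.
find_templates() {
    for d in "$@"; do
        [ -d "$d" ] && echo "$d"
    done
    return 0
}

find_templates /usr/share/lxc/templates /usr/local/share/lxc/templates
```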
