Files for puppet agents - puppet

How do I manage files that are not static but dynamic? Different services need files whose values depend on each agent's characteristics.
For example, in an SSH configuration file the listening interface should be the IP address of the agent machine; in the same way, how can I use the hostname?
And if I run keepalived, how do I give the two agent machines different priority numbers while managing the same file?

You have to manage those files as templates; more info is in the Puppet Labs templating guide: http://docs.puppetlabs.com/guides/templating.html
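For example, here is a minimal sketch for the keepalived case; the module name, file paths, and node names are assumptions for illustration. The ERB template reads Facter facts such as the agent's IP address and hostname, while the per-node priority is passed in from the manifest:

# keepalived/templates/keepalived.conf.erb (hypothetical module layout)
# Facter facts like ipaddress and hostname are available as ERB variables
! configuration rendered for <%= @hostname %>
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    unicast_src_ip <%= @ipaddress %>
    priority <%= @priority %>
}

# site.pp sketch: pass a different priority to each agent
class keepalived ($priority = '100') {
  file { '/etc/keepalived/keepalived.conf':
    ensure  => file,
    content => template('keepalived/keepalived.conf.erb'),
  }
}
node 'lb01.example.com' { class { 'keepalived': priority => '101' } }
node 'lb02.example.com' { class { 'keepalived': priority => '100' } }

The same approach covers the SSH example: a template for that file can reference <%= @ipaddress %> on the ListenAddress line.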

Related

Zabbix: why do two identical Linux servers, with the same configuration and the same template, have different items and triggers on the Zabbix server?

I set up two identical servers with the same configuration and linked the same templates to both hosts on the Zabbix server, but the hosts show different counts of items, triggers, and graphs. Why?
It depends on the number of services, filesystems, processes, etc. found by the template's low-level discovery rules, so the number of items and triggers will not be the same on each host (see the quick check after the links below). Here is the documentation:
https://www.zabbix.com/documentation/devel/en/manual/web_interface/frontend_sections/configuration/templates
https://www.zabbix.com/documentation/devel/en/manual/web_interface/frontend_sections/configuration/templates/items
https://www.zabbix.com/documentation/devel/en/manual/web_interface/frontend_sections/configuration/templates/triggers
https://www.zabbix.com/documentation/devel/en/manual/web_interface/frontend_sections/configuration/templates/discovery
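If you want to confirm this on your hosts, you can query the discovery keys directly with zabbix_get; server1 and server2 below are placeholder hostnames:

# Run from the Zabbix server; each command prints the JSON the discovery rule sees
zabbix_get -s server1 -k vfs.fs.discovery
zabbix_get -s server2 -k vfs.fs.discovery
zabbix_get -s server1 -k net.if.discovery

Any difference in the returned filesystem or interface lists translates directly into different item and trigger counts on the two hosts.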

How to share a file (data) across multiple Docker containers in Azure

I want to run several Docker containers in different regions (Asia, EU, US), each hosting an nginx server.
However, they should all have the same configuration because I need to update hostnames dynamically at runtime (one domain for every new tenant).
So I guess the easiest way would be to share one config file among all containers and reload them.
How can I share data/files among n containers on Azure?
In general, unless you want to use proprietary solutions specific to the platform at hand, the best way to synchronise files between multiple systems is with the help of rsync.
For example, DNS has a specialised protocol, AXFR, for transferring domain zones directly between DNS servers; the author of a newer DNS implementation argues that AXFR is poor and that rsync over ssh works much better (http://cr.yp.to/djbdns/tcp.html). The ssh part is a nice thing about rsync: it works over the plain old ssh protocol for the interconnection between hosts, so it needs no special firewall considerations.
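A minimal sketch of that approach; the user, hostnames, container name, and config path are placeholders:

# Push the shared nginx config from a single source-of-truth host to each region,
# then reload nginx inside the container (assumes the host path is bind-mounted into it)
rsync -avz -e ssh /etc/nginx/conf.d/ deploy@eu-host.example.com:/etc/nginx/conf.d/
ssh deploy@eu-host.example.com 'docker exec nginx nginx -s reload'

Repeat or loop over the US and Asia hosts; because everything travels over ssh, only port 22 needs to be reachable between the hosts.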
Have you considered using an Azure file share?

Copy files from one Azure VM to another with a file watch

I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even if default NSGs are applied). So you wouldn't have to do anything special to get that type of communication working.
To answer the second part, you might want to consider using built-in FCI (File Classification Infrastructure) rules to execute a short script to do the copy. See this link for a short introduction to FCI rules.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS/SMB. It really depends on why you want a copy of the file on two servers.
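For the Azure Files route, a rough sketch; the storage account name, share name, and key are placeholders. Each VM maps the same share, so a file dropped on one VM is immediately visible on the other:

REM Run on both Windows VMs; requires outbound port 445 to the storage account
net use Z: \\<storageaccount>.file.core.windows.net\<sharename> /user:AZURE\<storageaccount> <storage-account-key>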
Hope that helps!

Can we use a single-node configuration after configuring multi-node? (Hadoop)

This question might be a silly one, but since I am new to Hadoop and there is very little material available online to use as a reference, I thought this might be the best place to ask.
I have successfully configured a few computers in a multi-node configuration. During setup I had to change many Hadoop files. Now I am wondering: can I use each computer as a single-node configuration without changing any settings or Hadoop files?
You can run each node as a separate single-node instance, but you will certainly have to modify the configuration files and restart all the daemons.
You can do that. Follow the steps below (a sample snippet follows the list):
Remove the IP or hostname from the masters file.
Remove the IPs or hostnames from the slaves file.
Change the IP address in the fs.defaultFS property in core-site.xml.
Change the Resource Manager IP as well (in yarn-site.xml).
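A minimal sketch of the single-node values; port 9000 is an assumption, so reuse whatever port your cluster configuration already used:

<!-- core-site.xml (inside the <configuration> element): point HDFS at the local machine -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- yarn-site.xml (inside the <configuration> element): run the ResourceManager locally -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>localhost</value>
</property>

After editing, restart the daemons on that machine (for example with stop-dfs.sh/start-dfs.sh and stop-yarn.sh/start-yarn.sh).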

Creating EC2 instances from an AMI with a different hostname for Splunk

I am trialling using Splunk to log messages from IIS across our deployment. I notice that when I spin up a new EC2 instance from a custom AMI/Image it has the same PC 'hostname' as the parent image it was created from.
If I have a splunk forwarder setup on this new server it will forward data under the same hostname as the original image, making a distinction for reporting impossible.
Does anyone know of any way I can either dynamically set the hostname when creating an EC2 instance, OR configure Splunk so that I can specify a hostname for new forwarders?
Many Thanks for any help you can give!
If you are building the AMI, just bake in a simple startup script that sets the machine hostname dynamically.
If using a prebuilt AMI, connect to the machine once it's alive and set the host name (same script).
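A rough sketch of such a startup script for a systemd-based Linux AMI; the web- naming scheme is just an example:

#!/bin/bash
# Derive a unique hostname from the EC2 instance ID exposed by the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
hostnamectl set-hostname "web-${INSTANCE_ID}"

Run it as user data or from a boot-time unit so every instance launched from the AMI names itself before Splunk starts.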
OR
Via Splunk: the hostname is configured in the files below. Just update these, or re-run the Splunk setup after you've set the hostname.
$SPLUNK_HOME/etc/system/local/inputs.conf
$SPLUNK_HOME/etc/system/local/server.conf
The script idea above also applies here (guessing you are baking the AMI with Splunk already installed).
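For reference, a sketch of the relevant settings in those two files; the hostname value is a placeholder:

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = web-i-0123456789abcdef0

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = web-i-0123456789abcdef0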
Splunk has various "stale" configuration settings that should not be shared across multiple instances of Splunk Enterprise or the Universal Forwarder.
You can clean up this stale data using built-in Splunk commands:
./splunk clone-prep-clear-config
See: http://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/Integrateauniversalforwarderontoasystemimage
