I am trialling Splunk to log messages from IIS across our deployment. I notice that when I spin up a new EC2 instance from a custom AMI/image, it has the same hostname as the parent image it was created from.
If I have a Splunk forwarder set up on this new server, it forwards data under the same hostname as the original image, making it impossible to distinguish instances in reporting.
Does anyone know of any way that I can either dynamically set the hostname when creating an EC2 instance, OR configure Splunk so that I can specify a hostname for new forwarders?
Many thanks for any help you can give!
If you are building the AMI, just bake in a simple startup script that sets the machine hostname dynamically.
If using a prebuilt AMI, connect to the machine once it's alive and set the hostname with the same script.
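Something like the following, assuming a Linux instance that can reach the EC2 instance metadata service (on a Windows/IIS host the same idea would be a PowerShell user-data script):

#!/bin/bash
# Hypothetical first-boot script: derive a unique hostname from the EC2
# instance ID (fetched from instance metadata) and apply it persistently.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
hostnamectl set-hostname "web-${INSTANCE_ID}"   # "web-" prefix is just an example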
OR
Via Splunk: the hostname is configured in the files below. Just update these, or run the Splunk setup again after you've set the machine hostname.
$SPLUNK_HOME/etc/system/local/inputs.conf
$SPLUNK_HOME/etc/system/local/server.conf
The script idea above also applies to this (guessing you are baking the AMI with Splunk already in there).
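If you'd rather not edit the files by hand, the Splunk CLI can update both settings; a sketch, with "web-01" as a placeholder hostname:

$SPLUNK_HOME/bin/splunk set servername web-01         # updates serverName in server.conf
$SPLUNK_HOME/bin/splunk set default-hostname web-01   # updates host in inputs.conf
$SPLUNK_HOME/bin/splunk restart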
Splunk has various "stale" configuration settings that should not be shared across multiple instances of Splunk Enterprise or the Universal Forwarder.
You can clean up this stale data using a built-in Splunk command:
./splunk clone-prep-clear-config
See: http://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/Integrateauniversalforwarderontoasystemimage
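Per the linked documentation, the command is run on the stopped source instance before the image is captured; roughly:

$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clone-prep-clear-config
# Now capture the AMI; each clone regenerates its identity on first start.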
I have a Packer EC2 build where part of the provisioning entails updating the /etc/hosts file for the instance. Among these entries is one for the current running machine, written in the ip-00-00-00 format.
If you do this in Packer, the AMI is saved, and launching it again results in a new hostname/IP being assigned, so the old hostname entry is stale. The hostname is used by an internal application which relies on its hosts entry, as well as by the Oracle client. For the Oracle client, an ORACLE_HOSTNAME environment variable entry can be added.
So how do you manage such a process, where you're building an AMI that requires dynamic changes to its hosts file?
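One approach that seems workable is to have Packer bake in a first-boot script that rewrites the stale entry from the instance's current metadata. A sketch; the metadata URL is standard, but the sed pattern, profile.d path, and naming are assumptions:

#!/bin/bash
# Rebuild this machine's /etc/hosts entry from its current private IP.
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
HOSTNAME="ip-$(echo "$PRIVATE_IP" | tr '.' '-')"
sed -i '/ ip-/d' /etc/hosts                      # drop the stale build-time entry
echo "$PRIVATE_IP $HOSTNAME" >> /etc/hosts
# Persist ORACLE_HOSTNAME for login shells (assumed location).
echo "export ORACLE_HOSTNAME=$HOSTNAME" > /etc/profile.d/oracle_hostname.sh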
I know there are several questions similar to this, but as far as I can see there isn't an answer for a setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network which I can run multiple docker machines/containers on, one for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access it by publishing the ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just haven't been able to find a single tutorial or set of instructions to get this sort of setup running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the local host on the dev box and setting up config rules to proxy to the local machines. I'm unsure how well this would work (it works for web servers, but what if I want to ssh or bash into one of those machines, or connect to a MySQL container one of them is running?).
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way that I resolved this is by using a NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me able to share multiple docker machines using the same port (80) on different IPs.
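For illustration, the core of that NAT approach looks roughly like this; the addresses, interface name, and container IP are all placeholders for your own values:

# Give the host a second LAN address to represent one "machine".
ip addr add 192.168.0.201/24 dev eth0
# DNAT port 80 on that address to the container's internal IP and port.
iptables -t nat -A PREROUTING -d 192.168.0.201 -p tcp --dport 80 \
  -j DNAT --to-destination 172.17.0.2:80

Repeat with another host address and container IP for each additional project.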
I have a very basic question but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by separating the one machine that hosts both the server and the database:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which likewise is never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly, I am very confused about the way this should work in practice:
Do all deployed instances run using the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized for every case that I cannot seem to figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an AutoScaling Group with a minimum size of 1 that is configured to use an AMI based on your NodeJS server. The AutoScaling Group would add/remove instances to the ELB as instances are created and deleted (see the sketch below).
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume, you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the AutoScaling Group.
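To make the AutoScaling part concrete, here is a hedged AWS CLI sketch; every name and ID is a placeholder, and your AMI must boot straight into a working node server:

# Launch template based on the baked NodeJS AMI.
aws ec2 create-launch-template \
  --launch-template-name node-server \
  --launch-template-data '{"ImageId":"ami-12345678","InstanceType":"t3.micro"}'
# AutoScaling Group of 1-4 instances, registered with an existing ELB.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name node-asg \
  --launch-template LaunchTemplateName=node-server \
  --min-size 1 --max-size 4 \
  --vpc-zone-identifier subnet-aaaa1111 \
  --load-balancer-names my-elb

A scaling policy (e.g. on average CPU) then handles the "deploy and terminate on demand" part.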
How can I connect two containers on different host machines in Docker? I need a Node.js application on one host to use data from MongoDB on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Because of DNS discovery, you can predict before the deploy what address each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers on different hosts by creating an overlay network:
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
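A minimal sketch of that approach using swarm mode; the token, IPs, and image names are placeholders:

docker swarm init                                     # on the MongoDB host (manager)
docker swarm join --token <token> <manager-ip>:2377   # on the app host
docker network create -d overlay --attachable appnet  # on the manager
docker run -d --name mongo --network appnet mongo     # MongoDB host
docker run -d --name app --network appnet \
  -e MONGO_URL=mongodb://mongo:27017/mydb my-node-app # app host

On the overlay network, the app container can reach MongoDB by the container name "mongo".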
It doesn't matter what machine the other container is on; all you need to do is ensure that the port is exposed on that machine, and then point the second container on the first machine at the IP of the second machine.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (find it with ifconfig)
Machine 2: web server on port 80, IP 172.25.8.11 -> point its DB connection to 172.25.8.10:5432
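As a sketch of the same idea with Docker commands (images and addresses are placeholders):

# Machine 1 (172.25.8.10): publish Postgres on the host.
docker run -d -p 5432:5432 postgres
# Machine 2 (172.25.8.11): point the web container at machine 1's host IP.
docker run -d -p 80:80 -e DB_HOST=172.25.8.10 -e DB_PORT=5432 my-web-app

The -e variables are assumptions about how the web image reads its DB settings.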
We are looking at moving around 100 websites that we have on a dedicated web server at our current hosting company, and hosting these sites on an EC2 Windows 2012 server.
I've looked at the types of EC2 instances available. Am I better going for an m1.small (or a t1.micro with auto scaling)? With regard to auto scaling, how does it work: if I upload a file to the master instance, when are the other instances updated? Is it when the instances are auto scaled again?
Also, I will be needing to host a MailEnable (mail server) application. Any thoughts on best practice for this? Am I better off hosting one server for everything, or splitting it across instances?
When you are working with EC2, you need to start thinking about how your applications are designed and deployed differently.
Autoscaling works best when your instances follow a shared-nothing architecture. The instances themselves should never store persistent data. They should also be able to set themselves up automatically at launch.
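For example, a hedged user-data sketch in which each new instance pulls the current site content at boot; the bucket name and web root are placeholders:

#!/bin/bash
# Sync the latest site content from S3 into the web root at launch.
aws s3 sync s3://my-site-content /var/www/html

On a Windows 2012 host the same idea would be a PowerShell user-data script syncing into the IIS web root.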
Some applications are not designed to work in this environment: they require local file storage or have other constraints that tie them to a single instance.
You probably won't be using micro instances. They are mostly designed for very specific low-utilization workloads.
You can run a mail server on EC2, but you will have to use an Elastic IP and get the instances that send mail whitelisted. By default, EC2 instances are on the Spamhaus block list.