Architecting the Logstash configuration file in Elastic Stack - logstash

Hi, I need your advice on the best way to architect a Logstash configuration file. I have three environments (sit, uat, and prod) and a few hosts, such as mq and webfitas. Each environment and host has a few logs, such as access.log, server.log, and eai_new.log (for sit, uat, and prod). On the mq host I have server.log and mq.log, and on the webfitas host I have fitas.log. I need advice on how to architect this in logstash.conf. I know the format of logstash.conf, but I'm not sure whether I should create a separate index for each environment and log type. I will display the data in charts later on in Kibana. Any advice is appreciated.
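One common approach is to keep a single pipeline, attach environment and log-type fields to each event, and let the Elasticsearch output build one index per environment and log type from those fields. A minimal logstash.conf sketch (paths, field names, and the index pattern are assumptions to adapt):

    input {
      file {
        path => "/logs/sit/app/access.log"
        add_field => { "environment" => "sit" "log_type" => "access" }
      }
      file {
        path => "/logs/sit/mq/mq.log"
        add_field => { "environment" => "sit" "log_type" => "mq" }
      }
      # ...repeat for uat/prod and the remaining logs (server.log,
      # eai_new.log, fitas.log), or set these fields in the shipper.
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "logs-%{environment}-%{log_type}-%{+YYYY.MM.dd}"
      }
    }

Per-environment indices make it easy to scope Kibana charts to sit, uat, or prod with an index pattern such as logs-prod-*, while the daily suffix keeps retention manageable.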

Related

Bridge to Kubernetes doesn't add entries in /etc/hosts

I need help with the Bridge to Kubernetes setup in my Linux (WSL) environment.
The debug starts as expected but it doesn't change my /etc/hosts, hence I can't connect to the other services in the cluster.
I believe the issue can be related to not having enough permissions, and I can't find endpointManager running in Linux.
https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes#additional-configuration
Any idea what this could be related to?

How to read multiple servers' log files in Graylog?

In our application we want to read logs from two different servers, i.e. Apache Tomcat and JBoss, and monitor those logs. I have tried to search online for how to configure this, but I was not able to clearly understand how to implement it in Graylog. Please help. Thank you.
You can send logs from an arbitrary number of applications and systems to Graylog (even on the same input).
Simply configure your applications and systems to send logs to Graylog and create an appropriate input for them.
See http://docs.graylog.org/en/2.1/pages/sending_data.html for some hints.
Hope you were able to send your logs to the Graylog server. The article "Centralized logging using Graylog" will help newbies get started with Graylog; it explains use cases like integrating Apache, Nginx, and MySQL slow-query logs into Graylog, and covers the various transports (sending logs via syslog, the Graylog Apache module, Filebeat, etc.) that most articles miss out on explaining in detail.
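For the Tomcat/JBoss case above, one lightweight option is Filebeat pointed at a Graylog Beats input. A minimal filebeat.yml sketch (paths, hostname, and port are assumptions; the port must match the Beats input you create in Graylog):

    filebeat.inputs:
      - type: log
        paths:
          - /opt/tomcat/logs/catalina.out
      - type: log
        paths:
          - /opt/jboss/standalone/log/server.log
    # Graylog's Beats input speaks the same protocol as Logstash,
    # so Filebeat ships to it via the logstash output.
    output.logstash:
      hosts: ["graylog.example.com:5044"]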

JHipster app logging to remote ELK (Elastic Stack)

I've been asked to configure an ELK stack in order to manage logs from several applications of a certain client. I have the given stack hosted and working on a Red Hat 7 server (followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (followed this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me and maybe I don't fully understand the way it works. In addition, the most important application of the client is a JHipster-managed application, another tool I am not familiar with.
Up until now, all I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't, and would rather avoid in order to keep the configuration I've already made), so that the Kibana deployed through that method already has a dashboard tuned for displaying the information that the application will send with its native configuration, activated in application.yml with logstash: enabled: true.
So, my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored? Can I expect a given humanly readable format? Is there any other way of testing that the configuration is working, since I don't have any traffic going through the test instance into the VM?
Since that JHipster app is not the only one I care about, I want other dashboards and inputs to be displayed from other applications, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The exports from Kibana (in JSON format) are stored in the repository, along with the load.sh script. The script adds the configuration via the API. As you can infer, any dashboard you already have is not affected by this, so you can keep your existing configuration.
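For reference, the import boils down to pushing each exported JSON document into Kibana's backing index over the Elasticsearch API. A hypothetical one-liner in the spirit of load.sh (host, document ID, and file name are placeholders; check the actual script in the repository):

    curl -XPUT 'http://localhost:9200/.kibana/dashboard/jhipster-dashboard' \
         -H 'Content-Type: application/json' \
         -d @jhipster-dashboard.json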

Creating EC2 Instances from an AMI with different hostname for Splunk

I am trialling Splunk to log messages from IIS across our deployment. I notice that when I spin up a new EC2 instance from a custom AMI/image, it has the same 'hostname' as the parent image it was created from.
If I have a Splunk forwarder set up on this new server, it will forward data under the same hostname as the original image, making any distinction for reporting impossible.
Does anyone know of any way that I can either dynamically set the hostname when creating an EC2 instance, or configure Splunk so that I can specify a hostname for new forwarders?
Many Thanks for any help you can give!
If you are building the AMI, just bake in a simple startup script that sets the machine hostname dynamically.
If using a prebuilt AMI, connect to the machine once it's alive and set the host name (same script).
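For example, a hypothetical first-boot PowerShell sketch for a Windows/IIS instance (the naming scheme is just an illustration; the instance metadata endpoint is standard on EC2):

    # Derive a unique hostname from the EC2 instance ID and apply it.
    $instanceId = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'
    # Windows computer names are limited to 15 characters, so keep a short suffix.
    $suffix = $instanceId.Substring($instanceId.Length - 8)
    Rename-Computer -NewName "iis-$suffix" -Restart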
OR
Via Splunk: the hostname is configured in the files below. Just update these, or rerun the Splunk setup after you've set the machine hostname.
$SPLUNK_HOME/etc/system/local/inputs.conf
$SPLUNK_HOME/etc/system/local/server.conf
The script idea above also applies to this (guessing you are baking the AMI with Splunk already in there).
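For reference, the relevant settings in those files look roughly like this (the hostname value is a placeholder):

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [default]
    host = iis-0abc1234

    # $SPLUNK_HOME/etc/system/local/server.conf
    [general]
    serverName = iis-0abc1234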
Splunk has various "stale" configuration that should not be shared across multiple instances of Splunk Enterprise or the Universal Forwarder.
You can clean up this stale data using built-in Splunk commands.
./splunk clone-prep-clear-config
See: http://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/Integrateauniversalforwarderontoasystemimage

logstash for Windows Server 2012 / IIS8 access logs

I'm hoping to find a way to use logstash/ES/Kibana to centralize our Windows Server 2012 / IIS8 logs.
It would be great not to have to install Java on our production servers just to run Logstash as the shipper. I'm wondering how other Windows/IIS sysadmins using Logstash have addressed this issue?
E.g., are there other, lighter-weight clients that Logstash can consume from?
If not, I'll probably just write one in Python that reads the logs and posts them to the Logstash indexer.
As you say, you can write a program to send the logs to the Logstash indexer.
For example, the Logstash indexer can use the TCP input plugin to listen on a port, and your program sends the logs to that port. This way you don't need to install Java on the Windows servers.
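On the indexer side, that listener is just a TCP input; a minimal sketch (port and codec are assumptions):

    input {
      tcp {
        port  => 5000
        codec => json_lines   # one JSON event per line
      }
    }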
As Bel mentioned, you can use the TCP or UDP input plugins for your architectural need, and you can also configure Redis, RabbitMQ, or ZeroMQ (well-supported plugins) and send all your logs to a queue server, from where your Logstash indexer will pick up all the logs and process them. Let me know if you're facing any difficulty setting up any of the above-mentioned steps; I can give you an example.
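Along the lines of the Python shipper mentioned in the question, a hypothetical minimal version that tails an IIS log and forwards new lines to the TCP input above (host, port, and log path are assumptions):

    import json
    import socket
    import time

    LOGSTASH_ADDR = ("logstash.example.com", 5000)  # hypothetical indexer
    LOG_PATH = r"C:\inetpub\logs\LogFiles\W3SVC1\current.log"  # placeholder

    def ship():
        sock = socket.create_connection(LOGSTASH_ADDR)
        with open(LOG_PATH, "r") as f:
            f.seek(0, 2)  # start at the end of the file, like tail -f
            while True:
                line = f.readline()
                if not line:
                    time.sleep(1)  # wait for IIS to append more lines
                    continue
                event = {"message": line.rstrip(), "type": "iis-access"}
                sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

    if __name__ == "__main__":
        ship()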
