ELK for IBM Integration Bus 10 (IIB) - logstash

We would like to use ELK for monitoring our IBM Integration Bus.
We would like to perform two things:
Get the IIB log (the default broker log) from several Linux servers into Logstash (is there any tutorial for doing that? A grok pattern?)
Write messages that go through the IIB to Logstash and then view them in Kibana (any grok?)
Grok patterns and how-to explanations would be much appreciated.

This tutorial from DigitalOcean would be helpful:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
Install Logstash and Kibana on a separate server
Install and configure Filebeat agents on the IIB servers to ship the logs
Monitor the logs and define filters in Kibana
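For the grok part of the original question: on Linux the broker log goes through syslog (typically /var/log/messages), so a standard syslog pattern covers most of it. Below is a minimal Logstash filter sketch; the BIP message-number capture is an assumption about the line format and will need tuning against your actual entries.

# Sketch of a Logstash filter for syslog-style broker log lines, e.g.
#   Oct  3 10:15:01 myhost IIB[12345]: BIP2111E: ... message text ...
filter {
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
    }
  }
  # Pull a BIP message number out of the payload if one is present (an assumption
  # about the broker log format -- adjust to what /var/log/messages actually shows).
  grok {
    match => { "syslog_message" => "(?<bip_code>BIP[0-9]{4}[EWI])" }
    tag_on_failure => []
  }
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}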

Below is what I think might help you:
You'll have to use Filebeat to ship the logs from /var/log/messages to ELK. But this file contains general system log entries along with the IIB-related ones. A more effective approach would be to create a centralized logging framework (it could be an IIB flow) that logs whatever interests you while processing data through the IIB flows and writes it locally on the server. Then use Filebeat to ship this IIB-specific log file to ELK.
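A minimal filebeat.yml sketch for that; the paths and the Logstash host are placeholders (newer Filebeat releases spell the top-level key filebeat.inputs instead of filebeat.prospectors):

# filebeat.yml -- sketch only; adjust paths and hosts to your environment
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/messages            # default syslog/broker output
      - /var/mqsi/logs/iib-app.log   # hypothetical IIB-specific log written by your flow
    fields:
      log_type: iib

output.logstash:
  hosts: ["logstash.example.com:5044"]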
You can enable terminal events on a flow and send these events, with the message payload, to an MQ queue or Kafka. Then make Logstash read from IBM MQ or Kafka and load Elasticsearch.
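A sketch of the Logstash side of that second option, assuming the events land on a Kafka topic; the topic and host names are made up, older versions of the Kafka input use zk_connect/topic_id instead of these settings, and reading directly from IBM MQ would need a separate JMS/MQ input plugin:

input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics            => ["iib.flow.events"]   # hypothetical topic your flow publishes to
    codec             => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "iib-events-%{+YYYY.MM.dd}"
  }
}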

Related

How to read multiple servers' log files in Graylog?

In our application we want to read logs from two different servers, i.e. Apache Tomcat and JBoss, and want to monitor the logs. I have tried to search online for how to configure this but am not able to understand clearly how I can implement it in Graylog. Please help. Thank you.
You can send logs from an arbitrary number of applications and systems to Graylog (even on the same input).
Simply configure your applications and systems to send logs to Graylog and create an appropriate input for them.
See http://docs.graylog.org/en/2.1/pages/sending_data.html for some hints.
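For plain syslog sources, the forwarding side can be as small as one rsyslog rule pointing at a Graylog syslog TCP input (host and port are placeholders):

# /etc/rsyslog.d/graylog.conf -- forward everything to a Graylog syslog TCP input
*.* @@graylog.example.com:1514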
Hope you were able to send your logs to the Graylog server. The article on centralized logging using Graylog will help newbies get started with Graylog; it explains use cases like integrating Apache, nginx and MySQL slow-query logs into Graylog, and covers the various ways of sending logs (syslog, the Graylog Apache module, Filebeat, etc.) that most articles miss out on explaining in detail.

Logstash vs Rsyslog for log file aggregation

I am working on a solution for centralized log file aggregation from our CentOS 6.x servers. After installing the Elasticsearch/Logstash/Kibana (ELK) stack I came across the Rsyslog omelasticsearch plugin, which can send messages from Rsyslog to Elasticsearch in Logstash format, and I started asking myself why I need Logstash at all.
Logstash has a lot of different input plugins including the one accepting Rsyslog messages. Is there a reason why I would use Logstash for my use case where I need to gather the content of logs files from multiple servers? Also, is there a benefit of sending messages from Rsyslog to Logstash instead of sending them directly to Elasticsearch?
I would use Logstash in the middle if there's something I need from it that rsyslog doesn't have. For example, getting GeoIP from an IP address.
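For example, a geoip filter sketch (the source field name is an assumption):

filter {
  geoip {
    source => "client_ip"   # field holding the IP address to look up
    target => "geoip"
  }
}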
If, on the other hand, I just needed to get syslog or file contents indexed in Elasticsearch, I'd use rsyslog directly. It can do buffering (disk + memory) and filtering, you can choose what the document will look like (you can put the textual severity instead of the number, for example), and it can parse unstructured data. But the main advantage is performance, which rsyslog is focused on. Here's a presentation with some numbers (and tips and tricks) on Logstash, rsyslog and Elasticsearch:
http://blog.sematext.com/2015/05/18/tuning-elasticsearch-indexing-pipeline-for-logs/
I would recommend Logstash. It is easier to set up, there are more examples, and the components are tested to fit together.
Also, there are some benefits; in Logstash you can filter and modify your logs (a sketch of these filters follows at the end of this answer):
You can extend logs with useful data: server name, timestamp, ...
Cast types, e.g. string to int (useful for a correct Elasticsearch index)
Filter out logs by some rules
Moreover, you can set the batch size to optimize writes to Elasticsearch.
Another feature: if something goes wrong and there are more log events per second than Elasticsearch can process, you can configure Logstash to queue the events or to drop the ones that cannot be saved.
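A minimal sketch of those filters; the field names, values and the dropped level are made up:

filter {
  mutate {
    add_field => { "server_name" => "app-01" }     # extend with useful data
    convert   => { "response_time" => "integer" }  # cast string to int for a clean index
  }
  if [loglevel] == "DEBUG" {
    drop { }                                       # filter out logs by some rule
  }
}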
If you go straight from the server to Elasticsearch, you can get the basic documents in (assuming the source is JSON, etc.). For me, the power of Logstash is adding value to the logs by applying business logic to modify and extend them.
Here's one example: syslog provides a priority level (0-7). I don't want to have a pie chart where the values are 0-7, so I make a new field that contains the pretty names ("emerg", "debug", etc) that can be used for display.
Just one example...
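One hedged way to do that in Logstash is the translate filter from the logstash-filter-translate plugin; the input field name is an assumption, and older plugin versions take the dictionary as a flat array rather than a hash:

filter {
  translate {
    field       => "syslog_severity_code"    # numeric 0-7, e.g. from the syslog input
    destination => "syslog_severity_name"
    dictionary  => {
      "0" => "emerg"
      "1" => "alert"
      "2" => "crit"
      "3" => "err"
      "4" => "warning"
      "5" => "notice"
      "6" => "info"
      "7" => "debug"
    }
  }
}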
Neither is a viable option if you really want to rely on the system to operate under load and be highly available.
We found that using rsyslog to send to a centralized location, buffering it in Redis or Kafka, and then using Logstash to do its magic and ship to Elasticsearch is the best option.
Read our blog about it here - http://logz.io/blog/deploy-elk-production/
(Disclaimer - I am the VP product for logz.io and we offer ELK as a service)
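A sketch of the Logstash side of such a pipeline, reading from a Redis list used as a buffer; the host, key and index names are placeholders, and rsyslog would push into the list with a module such as omhiredis:

input {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "rsyslog-buffer"   # hypothetical list the shippers push into
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}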

How to get send/receive bytes and network bandwidth from different servers to parse with the ELK stack (Elasticsearch, Logstash, Kibana)

Can I get the send and receive bytes from Unix servers, for example a CentOS server? I want to get the inbound and outbound bytes and then parse them with Logstash so that I'm able to see a histogram in Kibana. Thanks in advance. I am new here.
There are lots of ways to make this happen.
You could run a script to gather the data on the machine, write it to a log file, use logstash-forwarder to send the logs to a centralized logstash, which then inserts into elasticsearch.
You could use some existing program to ship to Logstash (collectd, etc.); a sketch of that option follows below.
Since you won't really need any of the features of logstash, you could write to elasticsearch directly via one of the client libraries (python, etc).
Unless you're limited to ELK, you could also use some "normal" network management system (snmp polling, etc).
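A hedged sketch of the collectd option: run collectd with its interface plugin on each server and let Logstash receive the metrics through the collectd codec. The port and index name are assumptions, and the logstash-codec-collectd plugin has to be installed:

# collectd's network plugin sends to UDP 25826 by default
input {
  udp {
    port  => 25826
    codec => collectd { }
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "metrics-%{+YYYY.MM.dd}"
  }
}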

logstash for Windows Server 2012 / IIS8 access logs

I'm hoping to find a way to use logstash/ES/Kibana to centralize our Windows Server 2012 / IIS8 logs.
It would be great not to have to install Java on our production servers just to get Logstash to serve as the shipper. I'm wondering how other Windows/IIS sysadmins using Logstash have addressed this issue?
E.g., are there other, lighter-weight clients that Logstash can consume from?
If not, I'll probably just write one in Python that reads and posts to the logstash indexer.
As you say, you need to write a program to send the logs to the Logstash indexer.
For example:
The Logstash indexer can use the TCP input plugin to listen on a port. Your program sends the logs to that port, so there is no need to install Java on the Windows servers.
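A minimal sketch of that indexer-side configuration; the port and the assumption that the shipper sends one JSON object per line are mine:

input {
  tcp {
    port  => 5140
    codec => json_lines   # shipper writes one JSON document per line
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "iis-%{+YYYY.MM.dd}"
  }
}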
As Bel mentioned, you can use the TCP or UDP input plugins for your architectural need, and you can also configure Redis, RabbitMQ or ZeroMQ (well-supported plugins) and send all your logs to a queue server, from where your Logstash indexer will pick them up and process them. Let me know if you're facing any difficulty setting up any of the above steps; I can give you an example.

distributed logging: JMS and log4j?

Been doing some searching for a solution to this problem: I need log entries from apps running on several machines to be sent to & aggregated on a remote server. Requirements:
logging in the app needs to be asynchronous (can't wait for log entry to traverse network)
logging in the app needs to be queued; if the network fails, log entries need to be queued locally and sent to the centralized server when the network becomes available again
I'm looking at using log4j and a JMSAppender. Assuming that's a suitable solution, are there any examples available? What process would be running on the centralized server to receive log entries in this scenario?
Thanks.
One simple setup that comes to mind is to use Apache ActiveMQ.
It is an open-source messaging broker (JMS compatible) that can cluster queues among several physical machines, and the ActiveMQ installation is rather lightweight. You simply install one ActiveMQ on each of your application machines; on the central logging server you would have another ActiveMQ. Your application would use a JMS appender (read more here), and you could actually just use the included Apache Camel to read from the queue and write the log to a file or database without needing to write an application for that task.
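A hedged log4j 1.x sketch of that appender side, assuming ActiveMQ's JNDI initial context. Note that the stock JMSAppender publishes to a topic rather than a queue, so the binding name here is illustrative and will not exactly match the LogQueue route below; all names depend on your broker setup:

# log4j.properties -- sketch only
log4j.rootLogger=INFO, jms

log4j.appender.jms=org.apache.log4j.net.JMSAppender
log4j.appender.jms.InitialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory
log4j.appender.jms.ProviderURL=tcp://logging-server:61616
log4j.appender.jms.TopicBindingName=dynamicTopics/LogTopic
log4j.appender.jms.TopicConnectionFactoryBindingName=ConnectionFactory

To satisfy the asynchronous requirement, this appender would typically be wrapped in an AsyncAppender, which in log4j 1.x means using the XML configuration format instead of properties.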
Reading the queue could be as simple as adding something like the following to camel.xml in the ActiveMQ conf/ directory and importing camel.xml in the activemq.xml configuration.
<route>
  <from uri="activemq:queue:LogQueue"/>
  <to uri="file:target/folder/?fileName=logfile.log&amp;fileExist=Append"/>
</route>
You could use a myriad of other frameworks, JMS servers and technologies, but I think this is a rather easy approach to achieve with very low cost and high stability.
