I'm hoping to find a way to use logstash/ES/Kibana to centralize our Windows Server 2012 / IIS8 logs.
It would be great not to have to install Java on our production servers just to run logstash as the shipper. I'm wondering how other Windows/IIS sysadmins using logstash have addressed this issue?
E.g., are there other, lighter-weight clients whose output logstash can consume?
If not, I'll probably just write one in Python that reads and posts to the logstash indexer.
As you say, you need to write a program to send the logs to the logstash indexer.
For example,
The logstash indexer can use the TCP input plugin to listen on a port. Your program then sends the logs to that port, so you don't need to install Java on the shipping servers.
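For illustration, here is a minimal sketch of such a shipper in Python, assuming a logstash tcp input listening on port 5000; the host, port, and log path are placeholders:

```python
# Minimal sketch of a lightweight shipper (no Java needed on the server).
# It tails a log file and sends each new line to a logstash tcp input,
# e.g.  input { tcp { port => 5000 } }  on the indexer side.
import socket
import time

LOGSTASH_HOST = "logstash.example.com"
LOGSTASH_PORT = 5000
LOG_PATH = r"C:\inetpub\logs\LogFiles\W3SVC1\current.log"  # hypothetical IIS log

def follow(path):
    """Yield lines appended to the file (a very naive 'tail -f')."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

with socket.create_connection((LOGSTASH_HOST, LOGSTASH_PORT)) as sock:
    for line in follow(LOG_PATH):
        sock.sendall(line.encode("utf-8"))
```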
As Bel mentioned, you can use the TCP or UDP input plugins for your architecture. You can also configure redis, RabbitMQ, or ZeroMQ (all well-supported plugins) and send all your logs to a queue server, from where your logstash indexer will pick up the logs and process them. Let me know if you're facing any difficulty setting up any of the above steps; I can give you an example.
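For the queue approach, a minimal sketch with Redis might look like this, assuming the redis-py package on the shipping side and a logstash redis input configured with data_type => "list" and key => "logstash" (host and field names are placeholders):

```python
# Sketch: push JSON events onto a Redis list that a logstash redis input reads
import json

import redis  # third-party redis-py package, assumed to be installed

r = redis.Redis(host="redis.example.com", port=6379, db=0)

event = {"message": "GET /index.html 200", "host": "web01", "type": "iis"}
r.rpush("logstash", json.dumps(event))  # logstash pops events from this list
```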
Related
I'm working on a Node application, and my main goal is to send the backend logs (error, info) to logstash so that I can analyze which API is breaking and why. I'm new to logstash and have read some basics of logstash and the Elastic Stack. I want to achieve the following:
Integrate logstash to maintain the logs.
Read the logs to analyze the breaking changes.
I don't want to integrate Elasticsearch and Kibana. I tried winston-logstash, but it isn't working, and the library's source code isn't maintained either. If anyone knows how to implement the above in a Node.js application, please let me know.
If your Node.js app runs as a Docker container, you can use the gelf logging driver: just log to console/stdout in Node.js and the output will get routed to logstash.
Keep in mind that Logstash is really just for transformation/enrichment/filtering; you still probably want to output the log events (from Logstash) to an underlying storage solution, e.g. Elasticsearch.
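As a rough sketch, assuming docker-compose and a logstash gelf input listening on UDP port 12201 (service, image, and host names below are placeholders), the logging driver can be set per service:

```yaml
# docker-compose.yml (sketch): route the container's stdout/stderr to logstash
services:
  app:
    image: my-node-app                                    # hypothetical image
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.com:12201"  # logstash gelf input
        tag: "my-node-app"
```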
We would like to use ELK for monitoring our IBM Integration Bus.
We would like to perform two things:
Get the IIB log (the default broker log) from several Linux servers into logstash (is there any tutorial for that? A grok pattern?)
Write the messages that go through IIB to logstash and then view them in Kibana (any grok?)
Groks and how-to explanations would be much appreciated.
This tutorial from DigitalOcean would be helpful:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
Install logstash and kibana on a separate server.
Install and configure Filebeat agents on the IIB servers to transfer the logs (see the sketch below).
Monitor the logs and define filters in Kibana.
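A minimal filebeat.yml for the Filebeat step might look like the following, assuming a recent Filebeat and a logstash beats input on port 5044; paths and hostnames are placeholders:

```yaml
# filebeat.yml (sketch): ship IIB log files to logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/mqsi/*.log                # hypothetical IIB log location
output.logstash:
  hosts: ["logstash.example.com:5044"]     # logstash side: input { beats { port => 5044 } }
```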
Below is what I think might help you:
You can use filebeat to ship /var/log/messages to ELK, but that file mixes general system messages with the IIB-related entries. A more effective approach is to create a centralized logging framework (it could be an IIB flow) that logs whatever interests you while data is processed through your IIB flows and writes it locally on the server; then use filebeat to ship this IIB-specific log file to ELK.
Alternatively, you can enable terminal events on a flow, send these events with the message payload to an MQ queue or Kafka, and then have Logstash read from IBM MQ or Kafka and load Elasticsearch (see the sketch below).
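A rough sketch of the logstash side for the Kafka option, with a hypothetical broker, topic, and index name, plus an example grok for syslog-style lines (adjust the pattern to your actual IIB log format):

```
input {
  kafka {
    bootstrap_servers => "kafka.example.com:9092"   # hypothetical broker
    topics => ["iib-events"]                        # hypothetical topic
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.com:9200"]
    index => "iib-%{+YYYY.MM.dd}"
  }
}
```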
We have an application deployed on 9 servers. The application logs to a file, for example myapp.log, so each server has its own copy of myapp.log. Traffic only hits one of those 9 servers at a time and only updates the myapp.log on that server, so for troubleshooting I need to log into all 9 servers and check each one's myapp.log, which is extremely time consuming. I am wondering if there is a better way to do this, or a standard process for it? Thanks!
syslog is the default standard for logging on Linux.
Write your logs to syslog, then configure the syslog daemon on all 9 servers to forward the logs to the machine where you want to read them.
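For example, with rsyslog (the default on CentOS/RHEL and many other distros), a small forwarding rule on each of the 9 servers could look like this; the hostname is a placeholder:

```
# /etc/rsyslog.d/50-forward.conf (sketch)
# "@@" forwards over TCP; a single "@" would use UDP instead
*.* @@loghost.example.com:514
```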
In our application we want to read logs from two different servers, i.e. Apache Tomcat and JBoss, and monitor them. I have searched online for how to configure this, but I can't clearly understand how to implement it in Graylog. Please help. Thank you.
You can send logs from an arbitrary number of applications and systems to Graylog (even on the same input).
Simply configure your applications and systems to send logs to Graylog and create an appropriate input for them.
See http://docs.graylog.org/en/2.1/pages/sending_data.html for some hints.
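For example, from a Python process you could send log records to a Graylog GELF UDP input using the third-party graypy package (a sketch; host, port, and logger name are placeholders, and older graypy versions name the handler GELFHandler instead of GELFUDPHandler):

```python
# Sketch: forward application logs to a Graylog GELF UDP input on port 12201
import logging

import graypy  # third-party package, assumed to be installed

logger = logging.getLogger("app-logs")        # hypothetical logger name
logger.setLevel(logging.INFO)
logger.addHandler(graypy.GELFUDPHandler("graylog.example.com", 12201))

logger.info("application started")
```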
Hope you were able to send your logs to the Graylog server. "Centralized logging using Graylog" will help newcomers get started with Graylog; the article explains use cases like integrating Apache, nginx, and MySQL slow-query logs into Graylog. It covers various approaches, such as sending logs via syslog, the Graylog Apache module, filebeat, etc., which most articles skip over.
Can I get the sent and received bytes from Unix servers, for example a CentOS server? I want to get the inbound and outbound bytes and then pass them to logstash so that I can see a histogram in Kibana. Thanks in advance. I am new here.
There are lots of ways to make this happen.
You could run a script to gather the data on the machine, write it to a log file, use logstash-forwarder to send the logs to a centralized logstash, which then inserts into elasticsearch.
You could use some existing program to ship to logstash (collectd, etc).
Since you won't really need any of the features of logstash, you could write to elasticsearch directly via one of the client libraries (Python, etc.); see the sketch after this list.
Unless you're limited to ELK, you could also use some "normal" network management system (snmp polling, etc).
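As a sketch of the "write to Elasticsearch directly" option, assuming the psutil and elasticsearch-py (8.x) packages are installed; the host, index name, and interval are placeholders:

```python
# Sketch: sample interface byte counters and index them directly into Elasticsearch
import datetime
import time

import psutil
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.com:9200")

while True:
    counters = psutil.net_io_counters()        # totals across all interfaces
    doc = {
        "@timestamp": datetime.datetime.utcnow().isoformat(),
        "host": "centos01",                    # hypothetical host name
        "bytes_sent": counters.bytes_sent,
        "bytes_recv": counters.bytes_recv,
    }
    es.index(index="netstats", document=doc)   # chart bytes over time in Kibana
    time.sleep(60)                             # one sample per minute
```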