Masterless Puppet agent reporting on Logstash

I am a beginner with Logstash. I have a setup in which I run masterless Puppet; the Puppet agent on each node generates reports in YAML format.
I want to set up centralized reporting and alerting (using Nagios and a Logstash filter). Does Logstash accept logs in YAML format? Has anyone explored using Logstash for Puppet reports?

Having a quick look around, it seems you can enable reporting on Masterless Puppet as explained here: https://groups.google.com/forum/#!topic/puppet-users/Z8HncQqEHbc
As for reporting, I do not know much about Nagios, but for Logstash I am currently looking into the same integration for our systems. There is a Puppet module made by the Logstash team: search GitHub for "puppet-logstash-reporter" by "Logstash" (can't post more than 2 links yet). It uses Logstash's TCP input.
For Nagios, a plugin has been mentioned on a Twitter feed about the same question (https://twitter.com/mitchellh/status/281934743887564800). I haven't used it so cannot comment on it.
Finally, I do not believe Logstash understands YAML natively. You could probably parse it with a grok filter, but it would be easier to use Logstash's JSON-reading ability, described in the "inputs" section of the Logstash docs. (Would link but restricted at the moment.)
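To make that concrete, here is a minimal pipeline sketch assuming the reports arrive as JSON over TCP (the approach the puppet-logstash-reporter module takes). The port and the [status] field check are illustrative assumptions, not confirmed details of that module:
input {
  tcp {
    port  => 5959        # illustrative port; match whatever your reporter sends to
    codec => "json"      # parse each incoming event as JSON
  }
}
filter {
  # Tag failed Puppet runs so an alerting output (e.g. the nagios output plugin)
  # can pick them up downstream.
  if [status] == "failed" {
    mutate { add_tag => ["puppet_failure"] }
  }
}
output {
  elasticsearch { }      # or the nagios output plugin for alerting
}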
Hope this helps. I am also new to a few of these technologies but learning quickly so thought I'd pass on what I've found :)

Related

How to translate Okta System Log records into Elastic Common Schema version 1.5 using logstash pipeline configuration

I have an Okta instance from which I pull system logs using the logstash-input-okta_system_log plugin for Elastic Logstash.
The plugin works just fine. What I want is to translate the logs into Elastic Common Schema using a Logstash pipeline configuration. I can do that, but to be frank it is a daunting task: mapping, mutating, and renaming the fields.
Now I am wondering if anyone has done this before and willing to share their filters?
I am not 100% sure whether asking for this goes against the Stack Overflow spirit; I expect some people will take issue with it.
I have started working on it, if this is not something someone has done before I will post my solution as an answer for people looking for the same thing in the future.
I haven't found anything searching the Internet. Looking forward to hearing from someone who has already done this.
filter {
  mutate {
    rename => { "displayMessage" => "message" }
    ...
  }
}
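For what it's worth, a sketch of the kind of mapping involved, assuming the standard Okta System Log field names (eventType, client.ipAddress, outcome.result, published); the ECS target fields shown are illustrative and nowhere near a complete mapping:
filter {
  mutate {
    rename => {
      "displayMessage"      => "message"
      "eventType"           => "[event][action]"
      "[client][ipAddress]" => "[source][ip]"
      "[outcome][result]"   => "[event][outcome]"
    }
    # Note: ECS expects lowercase values for [event][outcome];
    # a further mutate { lowercase => ... } may be needed.
  }
  date {
    match  => ["published", "ISO8601"]   # Okta publishes ISO 8601 timestamps
    target => "@timestamp"
  }
}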
For anyone interested, Elastic is releasing new Filebeat modules in a few weeks, including one for Okta, which reads Okta system logs via the API and does the mapping to ECS.
That's what I will be using.
Details are in the documentation, which is not yet final: https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-module-okta.html

Telegraf input plugin: How to determine from which service to take inputs

I'm trying to use the TICK stack. What I'm really confused about, even after reading so much on Google, is how to simply set up an input plugin to monitor, for example, my Apache server or some other remote server.
It may be a simple config, but for me telegraf.conf itself didn't really help.
In short:
How to point at the Apache server as a source for gathering metrics in telegraf.conf
How to be sure that metrics (input plugins) are somehow linked with the source in telegraf.conf
That's what Input Plugins are for.
When you have a service running on your machine, all you need to do is tell Telegraf to collect data from that service using the corresponding input (given that Telegraf has a plugin for it).
Suppose, for instance, I want to collect data from Postfix. I will first check whether there is already a Telegraf plugin capable of doing it. There is, in fact, a Telegraf plugin that can collect Postfix data: the Postfix Telegraf plugin.
Now if I want to use this plugin, all I need to do is add [[inputs.postfix]] to the telegraf.conf file and customize it using the options available on the Postfix plugin page.
Similarly, there is an Apache input plugin as well, which can be used simply by adding [[inputs.apache]] to the telegraf.conf file and customizing the options based on your requirements (given on the plugin's page).
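For illustration, a minimal telegraf.conf excerpt along those lines; the URL and queue directory are common defaults, so adjust them for your environment:
# telegraf.conf (excerpt)
[[inputs.apache]]
  ## Apache exposes metrics via mod_status; the plugin scrapes that endpoint.
  ## mod_status must be enabled, and "?auto" requests the machine-readable form.
  urls = ["http://localhost/server-status?auto"]

[[inputs.postfix]]
  ## The Postfix plugin reads queue metrics from the local spool directory.
  queue_directory = "/var/spool/postfix"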

ELK: logstash dashboard

I am playing with the ELK stack and various Beats. I realized there are cool default dashboards for Metricbeat and Heartbeat, but I couldn't find anything for Logstash.
So I was wondering: Is there an example of a dashboard for Logstash in Kibana?
Logstash doesn't actually ship with any dashboard. It doesn't work like Heartbeat or Metricbeat, which each focus on one task.
Logstash is just a powerful instrument for capturing and modifying data on the fly. It has many different plugins and can be used independently of Elastic, for example to capture data, parse it, create fields from the raw data, and send it to a back end, which could be Elasticsearch, Hive, SQL, or just e-mail.
So no, there is no built-in dashboard, but you can create your own in Kibana from the data coming out of Logstash.
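To make that capture/parse/ship shape concrete, a minimal pipeline sketch; the file path, grok pattern, and host are placeholders:
input {
  file { path => "/var/log/apache2/access.log" }             # capture raw lines
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }  # parse into named fields
}
output {
  elasticsearch { hosts => ["localhost:9200"] }              # ship to a back end
}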

(rsyslog) filter out facility?

Trying to remotely log all syslog messages except cron. I've tried the following statement, which seems to work, but I'm not sure if it is officially supported, because I can't find any documentation on how to do this.
*.*;cron.!* @remotehost.com:514
The RHEL docs say that filtering via rsyslog is supported: https://access.redhat.com/solutions/54363. The exact filter syntax is described in the rsyslog docs, and it does seem to support ! to filter out priorities.
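For comparison, a sketch of the more commonly documented way to say "everything except cron", using the none priority keyword; the hostname is a placeholder:
# /etc/rsyslog.conf -- forward everything except cron to a remote host
*.*;cron.none    @remotehost.com:514    # single @ = UDP, use @@ for TCP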

log4j Log Indexing using Solr

We are finding it very hard to monitor the logs spread over a cluster of four managed servers, so I am trying to build a simple log4j appender which uses the SolrJ API to store the logs in the Solr server. The idea is to leverage Solr's REST interface to build a better GUI which could help us
search the logs and display the previous and next 50 lines or so, and
tail the logs
Being awful at front ends, I am trying to cook up something with GWT (a prototype version). I am planning to host the project on Google Code under the ASL.
I would greatly appreciate it if you could share some insights on:
Whether it makes sense to create a project like this?
Is using Solr for this overkill?
Any suggestions on a web framework/tool which would help me build a tab-based front end for tailing.
You can use a combination of Logstash (for shipping and filtering logs) + Elasticsearch (for indexing and storage) + Kibana (for a pretty GUI).
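As a sketch of the Logstash side of that stack, assuming the application ships events with log4j's SocketAppender to Logstash's log4j input plugin; the port shown is that plugin's conventional default, and the host is a placeholder:
input {
  log4j {
    port => 4560            # log4j SocketAppender connects here
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }   # index for searching/tailing in Kibana
}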
The Loggly folks have also built Logstash, which can be backed by quite a few things, including Lucene via Elasticsearch. It can also forward to Graylog.
Totally doable. Many folks have rolled their own. A couple of useful links: there is an online service, www.loggly.com, that does this. They are actually based on Solr as the core storage engine! Obviously they have built a proprietary interface.
Another option is http://www.graylog2.org/. It is open source. Not backed by Solr, but still very cool!
