I'm trying to forward all syslog messages except cron to a remote host. I've tried the following statement, which seems to work, but I'm not sure whether it is officially supported, because I can't find any documentation on how to do this.
*.*;cron.!* @remotehost.com:514
The RHEL docs say that filtering via rsyslog is supported: https://access.redhat.com/solutions/54363. The exact filter syntax is described in the rsyslog documentation, and it does appear to support ! for excluding priorities.
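For reference, the variant I know is documented would use cron.none instead (remotehost.com is a placeholder; a single @ forwards over UDP, @@ over TCP):

*.*;cron.none    @remotehost.com:514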
I have an Okta instance from which I pull system logs using the
logstash-input-okta_system_log plugin for Elastic Logstash.
The plugin works just fine. What I want is to translate the logs into Elastic Common Schema (ECS) using a Logstash pipeline configuration. I can do that, but frankly it is a daunting task to map, mutate, and rename all the fields.
Now I am wondering if anyone has done this before and willing to share their filters?
I am not 100% sure whether asking for this goes against the Stack Overflow spirit, and I am sure some people will take issue with it.
I have started working on it; if no one has done this before, I will post my solution as an answer for people looking for the same thing in the future.
I haven't found anything searching the Internet. Looking forward to hearing from someone who has already done this.
filter {
  mutate {
    rename => { "displayMessage" => "message" }
    # ...
  }
}
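To give an idea of the direction, here is a rough, untested sketch of the kind of mapping I mean. The Okta field names on the left come from the system log API; the ECS targets on the right are my own guesses, not a vetted mapping:

filter {
  mutate {
    rename => {
      "displayMessage"       => "message"
      "eventType"            => "[event][action]"
      "[outcome][result]"    => "[event][outcome]"
      "[actor][alternateId]" => "[user][name]"
      "[client][ipAddress]"  => "[source][ip]"
    }
  }
  # Okta's "published" field carries the event timestamp
  date {
    match  => ["published", "ISO8601"]
    target => "@timestamp"
  }
}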
For anyone interested, Elastic is releasing new Filebeat modules in a few weeks, including one for Okta, which reads Okta system logs via the API and does the mapping to ECS.
That is what I will be using.
Details will be in the documentation once it is released: https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-module-okta.html
I am new to the Solr world. I have a core indexed with Solr 4.6, and I want Solr 5.2 to pick up the existing core for searching. I have spent hours trying to figure out the auto-discovery feature, but I could not find documentation that covers this.
What I have tried so far:
1) The quick start guide only tells you how to create a new core in 5.2.
2) "Upgrading a Solr 4.x Cluster to Solr 5.0" in the official 5.2 documentation touches the topic but does not provide useful hints (what is ZK_HOST, and why do I need it anyway?). Plus, I don't want to set it up as a service without knowing it will work from the command line.
I believe there must be a command-line option to set the core location and let Solr find it. Could you share some useful hints?
Thanks
The way that you manage your Solr service changed quite a bit between 4.6 and 5.2.
First off, you now have a solr script in the bin directory to manage the instance. You also have a solr.in.sh file for configuring your Solr instance, and all the Solr settings now go in there: port, JVM parameters, and so on.
Anyway, down to your question. Core auto-discovery scans your SOLR_HOME directory (specified in solr.in.sh). In that directory it expects to find subdirectories containing core.properties files, and Solr will try to attach any core it finds there.
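As a rough illustration (the SOLR_HOME path and the core name "mycore" are just examples), the layout auto-discovery expects looks like this, and you can point the 5.2 scripts at it from the command line with the -s option:

/var/solr/data/              <- SOLR_HOME
    solr.xml
    mycore/
        core.properties      <- can be just "name=mycore"; an empty file also works (the directory name is used)
        conf/solrconfig.xml
        conf/schema.xml
        data/

bin/solr start -s /var/solr/data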
Other parameters like ZK_HOST are related to zookeeper. If you are running Solr Classic (with basic replication), you don't need to worry about that. However, if you are moving to SolrCloud, you will need to learn a bit about zookeeper.
I am a beginner with Logstash. I have a setup in which I run masterless Puppet, and the Puppet agent on each node generates reports in YAML format.
I want to be able to do centralized reporting and alerting (using Nagios and a Logstash filter). Does Logstash accept logs in YAML format? Has anyone explored using Logstash for Puppet reports?
Having a quick look around, it seems you can enable reporting on Masterless Puppet as explained here: https://groups.google.com/forum/#!topic/puppet-users/Z8HncQqEHbc
As for reporting, I do not know much about Nagios, but for Logstash I am currently looking into the same integration for our systems. There is a Puppet module made by the Logstash team: search GitHub for "puppet-logstash-reporter" by "logstash" (I can't post more than 2 links yet). It uses the TCP input method for Logstash.
For Nagios, a plugin has been mentioned on a Twitter feed about the same question (https://twitter.com/mitchellh/status/281934743887564800). I haven't used it so cannot comment on it.
Finally, I do not believe Logstash understands YAML. I am sure you could parse it with a grok filter, but it would be easier to use Logstash's JSON support when reading from a file, as described in the "inputs" section of the Logstash docs. (I would link, but I'm restricted at the moment.)
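As a rough, untested sketch of what I mean (the report path is an example, and it assumes the reports have first been converted to one JSON document per line):

input {
  file {
    path  => "/var/lib/puppet/reports/*/*.json"
    codec => "json"
  }
}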
Hope this helps. I am also new to a few of these technologies but learning quickly so thought I'd pass on what I've found :)
We are finding it very hard to monitor logs spread over a cluster of four managed servers, so I am trying to build a simple log4j appender which uses the SolrJ API to store the logs in a Solr server; a rough sketch of what I have in mind follows the list below. The idea is to leverage Solr's REST-like interface to build a better GUI which could help us
search the logs and display the previous and the next 50 lines or so, and
tail the logs.
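This is the kind of appender I mean, as an untested sketch (the Solr URL and field names are placeholders, and a real version would need buffering or batched writes):

import java.io.IOException;
import java.util.UUID;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrAppender extends AppenderSkeleton {
    // Placeholder URL: a "logs" core on a local Solr instance
    private final HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/logs");

    @Override
    protected void append(LoggingEvent event) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", UUID.randomUUID().toString());
        doc.addField("timestamp", event.getTimeStamp());
        doc.addField("level", event.getLevel().toString());
        doc.addField("logger", event.getLoggerName());
        doc.addField("message", event.getRenderedMessage());
        try {
            solr.add(doc);   // should be batched/async in real use
        } catch (SolrServerException | IOException e) {
            errorHandler.error("Failed to index log event in Solr", e, 0);
        }
    }

    @Override
    public void close() {
        try {
            solr.commit();
        } catch (Exception e) {
            // best effort on shutdown
        }
        solr.shutdown();
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }
}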
Being awful on front ends, I am trying to cook up something with GWT (a prototype version). I am planning to host the project on Google Code under the ASL.
I would greatly appreciate any insights on:
Whether it makes sense to create a project like this.
Whether using Solr for this is overkill.
Any suggestions on a web framework/tool which would help me build a tab-based front end for tailing.
You can use a combination of logstash (for shipping and filtering logs) + elasticsearch (for indexing and storage) + kibana (for a pretty GUI).
The Loggly folks have also built Logstash, which can be backed by quite a few things, including Lucene via Elasticsearch. It can also forward to Graylog.
It is a totally doable thing; many folks have rolled their own. A couple of useful links: there is an online service, www.loggly.com, that does this. They are actually based on Solr as the core storage engine! Obviously they have built a proprietary interface on top of it.
Another option is http://www.graylog2.org/. It is open source. Not backed by Solr, but still very cool!
My company is looking at advanced search and reporting solutions, and we are considering (among other options) creating something akin to JIRA's JQL for maximum flexibility.
My googling leads me to believe Atlassian built JQL from scratch, at least as a language with its own syntax and parser, but I thought I'd try SO before concluding. Does anyone know, at a high level, how they did it? Was it based on one or more open source projects?
(Kudos to Atlassian either way - JQL is gorgeous!)
I think they did it from scratch. The underlying architecture is crisp but quite complex. It took me a good few hours to get it, just reading the source and minimal user docs.
Atlassian built JQL on top of Apache Lucene. You might want to take a look at Elasticsearch or Solr, which are open source alternatives, also built on Lucene.
I have been using Jira for a year and I noticed "Apache Lucene" in its installation directory; before that, I had a job where I had to learn Apache Solr. In conclusion, Jira uses Apache Lucene as its search library, which is the same library Solr is built on.
For more info, read this:
http://www.lucenetutorial.com/lucene-vs-solr.html