I need to send my log4j logs to Splunk. I found several solutions:
Use the REST API (e.g. curl -k -u admin:changeme -d "name=/tmp/myfile.log" -d "sourcetype=syslog" https://localhost:8089/servicesNS/admin/search/data/inputs/monitor)
Install Splunk Universal Forwarder
Use log4j appender
such as:
Syslog appender
log4j.appender.splunk=org.apache.log4j.net.SyslogAppender
log4j.appender.splunk.SyslogHost=localhost:8089
log4j.appender.splunk.layout=org.apache.log4j.PatternLayout
log4j.appender.splunk.facility=LOCAL2
log4j.appender.splunk.layout.ConversionPattern=[%p] %t: %m%n
It seems to me that the 3rd solution wouldn't work if the Splunk server and the log file are located on separate machines.
The 2nd solution requires installing additional software.
Can anyone propose any other solution?
PS: I tried some open-source Java libraries, but without success.
Check out this great project from one of our community developers, @damiendallimore: https://github.com/damiendallimore/SplunkJavaLogging
It provides a number of options for logging directly to Splunk.
It also uses the Splunk Java SDK: http://dev.splunk.com/view/java-sdk/SP-CAAAECN
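If you stay with the SyslogAppender route from the question, note that 8089 is Splunk's management (REST) port, not a data input. SyslogAppender emits UDP syslog, so the Splunk side needs a UDP data input, and this works fine even when Splunk runs on a different machine. A sketch (the host name, port 514, and facility are illustrative; match them to your actual UDP input):

```
# log4j.properties sketch: point SyslogAppender at a Splunk UDP data input
# (Settings > Data inputs > UDP), not the management port 8089.
log4j.appender.splunk=org.apache.log4j.net.SyslogAppender
log4j.appender.splunk.SyslogHost=splunk-server.example.com:514
log4j.appender.splunk.facility=LOCAL2
log4j.appender.splunk.layout=org.apache.log4j.PatternLayout
log4j.appender.splunk.layout.ConversionPattern=[%p] %t: %m%n
```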
In our application we want to read logs from 2 different servers, i.e. Apache Tomcat and JBoss, and monitor those logs. I have searched online for how to configure this, but I can't clearly understand how to implement it in Graylog. Please help. Thank you.
You can send logs from an arbitrary number of applications and systems to Graylog (even on the same input).
Simply configure your applications and systems to send logs to Graylog and create an appropriate input for them.
See http://docs.graylog.org/en/2.1/pages/sending_data.html for some hints.
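To make "configure your applications to send logs to Graylog" concrete, here is a minimal sketch that emits a GELF 1.1 message over UDP. The host name and port are assumptions about your setup (12201 is Graylog's conventional GELF UDP port), and a real application would use a JSON library or an existing GELF appender rather than building the payload by hand:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class GelfSender {

    // Build a minimal GELF 1.1 payload by hand; "version", "host",
    // "short_message", and "level" are the fields the GELF spec expects.
    static String buildGelfMessage(String host, String shortMessage, int level) {
        return String.format(
            "{\"version\":\"1.1\",\"host\":\"%s\",\"short_message\":\"%s\",\"level\":%d}",
            host, shortMessage, level);
    }

    // Send one GELF message over UDP to a Graylog GELF UDP input.
    static void send(String graylogHost, int port, String payload) throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(graylogHost), port));
        }
    }
}
```

Both Tomcat and JBoss can send to the same GELF UDP input this way, since Graylog accepts any number of sources per input.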
Hope you were able to send your logs to the Graylog server. Centralized logging with Graylog will help newbies get started, and the article explains use cases like integrating Apache, Nginx, and MySQL slow-query logs into Graylog. It covers various transports (sending logs via syslog, the Graylog Apache module, Filebeat, etc.), which most articles don't explain in detail.
Our server is hosted on Solaris, but we are not able to install Filebeat to forward the logs to the desired port, as Filebeat is not supported on Solaris. Can someone suggest a way to solve this problem? Please note we have been told not to install Logstash on the host machine.
Any advice would be much appreciated.
Filebeat can easily be compiled to run on Solaris 11/amd64, but that is not an officially supported platform based on Elastic's support matrix. All of the Filebeat project's tests pass on Solaris.
It may be possible to compile Filebeat for Solaris/sparc using gccgo. Filebeat is written in Go, and the Go compiler supports Solaris/amd64 but not sparc, which is why the gccgo compiler would be needed for sparc.
There is a filebeat-solaris-amd64 binary generated by Elastic's Jenkins server and published to S3 if you want to do a quick test, but otherwise I would recommend compiling it yourself from a release tag if you are going to be using it.
I wrote a script to unzip and start logstash as service. How do I check it is actually up and healthy? Is there a status port? I'm using version 1.4.2.
There's no such status information available from Logstash itself; that's something they're promising to fix for version 2.0.
Depending on what you're using it for, "healthy" can mean different things.
If you're using it as a shipper, you can check the sincedb file against the actual files themselves to see if it's up to date.
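A sketch of that sincedb check: the 1.x file input writes one line per tracked file of the form `inode major_device minor_device byte_offset` (field layout assumed from the 1.x file input), so comparing the recorded offset against the file's current size tells you how far behind the shipper is:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SincedbCheck {

    // One sincedb line looks like: "<inode> <major_dev> <minor_dev> <byte_offset>".
    // Returns how many bytes of the file have not yet been shipped.
    static long bytesBehind(String sincedbLine, long currentFileSize) {
        String[] fields = sincedbLine.trim().split("\\s+");
        long shippedOffset = Long.parseLong(fields[3]);
        return currentFileSize - shippedOffset;
    }

    // Convenience wrapper comparing a sincedb line against the live log file.
    static long bytesBehind(String sincedbLine, Path logFile) throws IOException {
        return bytesBehind(sincedbLine, Files.size(logFile));
    }
}
```

A result of 0 means the shipper is fully caught up; a persistently growing value suggests it is stuck or down.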
I am going to write a J2EE application that will be deployed in Tomcat.
The requirement is that the server and the application must send snmp trap to external NMS.
The details of my application are:
J2EE application
Deployed in Tomcat v7
The server is Red Hat Linux 6.2
We need to send traps for all 3 of the above (the application, Tomcat, and the Linux server).
Can we write our own agent using snmp4j for this requirement, and how will the SNMP agent know when to send a trap to the NMS?
Thanks in advance for support.
Yes, you can. For that you need to extend the logging framework. For instance, you can use the logback framework and extend it with a custom appender, in which you write the SNMP-agent code and forward the log event as a trap. Moreover, logback has a nice, easy way to format messages and to drop log events that aren't necessary, among other features. And you can switch Tomcat's logging to logback in a few simple steps. However, I'm not sure you can really send a trap for arbitrary issues on the Linux server; I believe that would be a tedious task. You might look for a syslog server with a monitoring feature instead.
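A sketch of such a custom appender, assuming logback-classic and SNMP4J are on the classpath. The OID, community string, and NMS address below are placeholders, not a registered enterprise MIB, and error handling is minimal:

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.*;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class SnmpTrapAppender extends AppenderBase<ILoggingEvent> {

    private static final OID TRAP_OID = new OID("1.3.6.1.4.1.99999.1.1"); // placeholder

    private Snmp snmp;
    private CommunityTarget target;

    @Override
    public void start() {
        try {
            snmp = new Snmp(new DefaultUdpTransportMapping());
            target = new CommunityTarget();
            target.setCommunity(new OctetString("public"));
            target.setAddress(GenericAddress.parse("udp:nms.example.com/162")); // placeholder NMS
            target.setVersion(SnmpConstants.version2c);
            super.start();
        } catch (Exception e) {
            addError("could not initialise SNMP transport", e);
        }
    }

    @Override
    protected void append(ILoggingEvent event) {
        PDU pdu = new PDU();
        pdu.setType(PDU.TRAP);
        // v2c traps must carry sysUpTime and snmpTrapOID as the first varbinds.
        pdu.add(new VariableBinding(SnmpConstants.sysUpTime, new TimeTicks(0)));
        pdu.add(new VariableBinding(SnmpConstants.snmpTrapOID, TRAP_OID));
        pdu.add(new VariableBinding(new OID(TRAP_OID).append(1),
                new OctetString(event.getFormattedMessage())));
        try {
            snmp.send(pdu, target); // traps are unacknowledged; no response expected
        } catch (Exception e) {
            addError("trap send failed", e);
        }
    }
}
```

You would then register this class as an appender in logback.xml, and the "when to send" question answers itself: a trap goes out whenever something logs at a level the appender (or a logback filter in front of it) lets through.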
I'm hoping to find a way to use logstash/ES/Kibana to centralize our Windows Server 2012 / IIS8 logs.
It would be great to not have to install Java on our production servers to get logstash to serve just as the shipper. I'm wondering how other windows/IIS sysadmins using logstash have addressed this issue?
E.g., are there other, lighter-weight clients whose output Logstash can consume?
If not, I'll probably just write one in Python that reads and posts to the logstash indexer.
As you say, you need to write a program to send the logs to the Logstash indexer.
For example, the Logstash indexer can use the TCP input plugin to listen on a port, and your program sends the logs to that port. This way you don't need to install any Java program on the shipping machine.
As Bel mentioned, you can use the TCP or UDP input plugins for your architecture. You can also configure Redis, RabbitMQ, or ZeroMQ (all well-supported plugins) and send all your logs to a queue server, from which your Logstash indexer will pick them up and process them. Let me know if you face any difficulty setting up any of the above; I can give you an example.
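To make the TCP approach concrete, here is a minimal indexer-side input sketch; the port number is illustrative, and the codec choice assumes your shipper writes newline-delimited JSON:

```
input {
  tcp {
    port  => 5140
    codec => "json_lines"
  }
}
```

Any lightweight shipper, in Python or otherwise, can then simply open a TCP connection to that port and write one JSON object per line.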