How to read multiple servers log files in graylog? - graylog2

In our application we want to read and monitor logs from two different servers, i.e. Apache Tomcat and JBoss. I have searched online for how to configure this, but I'm not able to understand clearly how to implement it in Graylog. Please help. Thank you.

You can send logs from an arbitrary number of applications and systems to Graylog (even on the same input).
Simply configure your applications and systems to send logs to Graylog and create an appropriate input for them.
See http://docs.graylog.org/en/2.1/pages/sending_data.html for some hints.

Hope you were able to send your logs to your Graylog server. The article Centralized logging using Graylog will help newbies get started with Graylog; it explains use cases like integrating Apache, Nginx, and MySQL slow-query logs into Graylog, and covers the various ways of sending logs (via syslog, the Graylog Apache module, Filebeat, etc.) that most articles miss explaining in detail.
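If you want to push a test message to Graylog programmatically, a minimal sketch in Python using GELF over UDP may help. The hostname `graylog.example.com` and port 12201 are placeholders (you need a GELF UDP input configured on that port); Graylog's GELF UDP input accepts zlib-compressed JSON payloads:

```python
import json
import socket
import time
import zlib

def build_gelf(host, short_message, level=6):
    """Build a minimal GELF 1.1 payload as zlib-compressed JSON."""
    record = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,  # syslog severity: 6 = informational
    }
    return zlib.compress(json.dumps(record).encode("utf-8"))

def send_gelf_udp(payload, server="graylog.example.com", port=12201):
    """Fire-and-forget the payload at a Graylog GELF UDP input."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (server, port))
```

In practice you would point Tomcat and JBoss at Graylog via their own logging configuration (or a syslog/Filebeat shipper) rather than hand-rolling messages; this is just to verify the input works.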

Related

How exactly does the Nagios server communicate with remote nodes, i.e. which protocol does it use in agent and agentless settings?

I installed Nagios Core and NCPA on a Mac and implemented a few checks via custom plugins to understand how to use it. I am trying to understand the following:
Which protocol does the Nagios server actually use to communicate with the NCPA agent, and how exactly does NCPA return the result back to Nagios? Does it SSH into the Nagios server and write a file that the server processes?
From an application-monitoring standpoint, how can it be leveraged? Is it just to monitor that an application is up and running (I read it's not just for that and can do more, but couldn't find anywhere that shows how it's actually implemented), or is there a RESTful API as well that we can invoke from within our application to send custom notifications to the Nagios server? I understand this might require some configuration on the Nagios server end as well.
I came across PagerDuty and Sematext articles, i.e. PagerDuty Integration and Sematext Nagios Alert Integration, where they have integrated their solutions with Nagios. I am trying to do something similar: adding integration support for Nagios so that a user can use our application's UI to configure alerts/notifications. For example, if a condition is met, alert or notify the Nagios server to show a notification on its dashboard.
Can we generate an alert from within a Spark Streaming application based on a variable, e.g. if its value is above a threshold or some condition is met, send an alert to the Nagios server to display as a notification on the Nagios dashboard? I came across a link about monitoring the status of a Spark application, but didn't find anything about alerting from within a Spark application.
I tried looking for answers to the above questions but couldn't find anything useful or complete online. I would really appreciate it if someone could help me understand the above.
Nagios is highly configurable, and can communicate across many protocols. NCPA can return JSON or XML data. The most common agentless protocol is probably SNMP. If you can read Python, look directly at the /usr/local/nagios/libexec/check_ncpa.py file to see what's up.
Nagios can check whether a system is running a service, how many resources it is consuming, etc. There is a RESTful API.
Nagios offers an application with a more advanced graphical interface called Nagios XI. Perhaps that is what you are after.
I bet you probably could, yeah. It might take some development work to get the systems to communicate though.
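On alerting from inside an application: one well-documented mechanism is Nagios's external command file, which accepts passive check results. A minimal sketch follows; note the command-file path below is the common compile-time default and may differ on your install, and writing to it only works from the Nagios host itself (remote submitters typically go through NSCA instead). The host/service names are placeholders:

```python
import time

COMMAND_FILE = "/usr/local/nagios/var/rw/nagios.cmd"  # default path; adjust for your install

def passive_check_line(host, service, return_code, output):
    """Format a PROCESS_SERVICE_CHECK_RESULT external command for Nagios.

    return_code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    """
    return "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s\n" % (
        int(time.time()), host, service, return_code, output)

def submit(host, service, return_code, output):
    """Append the command to Nagios's external command file (a FIFO)."""
    with open(COMMAND_FILE, "a") as fifo:
        fifo.write(passive_check_line(host, service, return_code, output))
```

A Spark Streaming job could call `submit("spark01", "threshold-check", 2, "value above threshold")` when a condition trips, provided Nagios has a matching passive service defined.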

Implementing server logs with splunk

Folks!
I'm trying to get server logs into my Splunk Cloud instance. Can you please explain how to implement this? I have set up Splunk with a universal forwarder and my client-side logs are working fine, but how do I send the server-side logs? I know about the log4j.properties file, but not what to write in it (and in the other files) to get server logs showing on the Splunk side as well.
If you could help in simple terms, that would be helpful.
Thank you so much!
I'm not sure I totally understand your question. Anyway, I think our Java logging libraries may be helpful for you; we support Log4j.
You can collect server logs the same way you are collecting client logs: add the server log file paths to the forwarder's inputs.conf.
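A minimal sketch of what that inputs.conf might look like on the server's universal forwarder; the paths, sourcetypes, and index name below are placeholders for your actual log locations:

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the server-side forwarder

[monitor:///opt/myapp/logs/server.log]
sourcetype = myapp:server
index = main

[monitor:///var/log/myapp/*.log]
sourcetype = myapp:misc
index = main
```

After editing, restart the forwarder so it picks up the new monitor stanzas.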

how to get send, receive byte and network bandwidth from different servers to parse with elk stack (elasticsearch logstash kibana)

Can I get the send and receive bytes from Unix servers, for example a CentOS server? I want to get the inbound and outbound bytes and then parse them into logstash so that I'm able to see a histogram in Kibana. Thanks in advance. I am new here.
There are lots of ways to make this happen.
You could run a script to gather the data on the machine, write it to a log file, use logstash-forwarder to send the logs to a centralized logstash, which then inserts into elasticsearch.
You could use some existing program to ship to logstash (collectd, etc).
Since you won't really need any of the features of logstash, you could write to elasticsearch directly via one of the client libraries (python, etc).
Unless you're limited to ELK, you could also use some "normal" network management system (snmp polling, etc).
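As an illustration of the first option, here is a small Python sketch that reads interface counters from Linux's /proc/net/dev and emits a JSON line you could append to a file watched by logstash-forwarder. The interface name `eth0` is an assumption; adjust for your system:

```python
import json
import time

def read_net_bytes(path="/proc/net/dev", iface="eth0"):
    """Parse /proc/net/dev and return (rx_bytes, tx_bytes) for one interface."""
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            name, data = line.split(":", 1)
            if name.strip() == iface:
                fields = data.split()
                # field 0 is receive bytes, field 8 is transmit bytes
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %s not found" % iface)

def log_sample(iface="eth0"):
    """Emit one JSON line suitable for shipping to logstash."""
    rx, tx = read_net_bytes(iface=iface)
    return json.dumps({"@timestamp": time.time(), "iface": iface,
                       "rx_bytes": rx, "tx_bytes": tx})
```

The counters are cumulative since boot, so to get bandwidth you would sample periodically (e.g. from cron) and take deltas, either in the script or later in Kibana.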

logstash for Windows Server 2012 / IIS8 access logs

I'm hoping to find a way to use logstash/ES/Kibana to centralize our Windows Server 2012 / IIS8 logs.
It would be great to not have to install Java on our production servers just to have logstash act as the shipper. I'm wondering how other Windows/IIS sysadmins using logstash have addressed this issue?
E.g., are there other, lighter-weight clients that logstash can consume from?
If not, I'll probably just write one in Python that reads and posts to the logstash indexer.
As you say, you would need to write a program to send the logs to the logstash indexer.
For example, the logstash indexer can use the TCP input plugin to listen on a port, and your program sends the logs to that port. This way you don't need to install Java on the servers.
As Bel mentioned, you can use the TCP or UDP input plugins for your architectural needs, and you can also configure Redis, RabbitMQ, or ZeroMQ (well-supported plugins) and send all your logs to a queue server, from where your logstash indexer will pick them up and process them. Let me know if you're facing any difficulty setting up any of the above steps; I can give you an example.
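For illustration, a minimal indexer config along those lines. Port 5000 and the elasticsearch address are arbitrary placeholders, and the `hosts` option reflects the newer elasticsearch output plugin syntax (older logstash versions used `host`):

```
# logstash indexer: accept newline-delimited JSON over TCP,
# write events into elasticsearch
input {
  tcp {
    port  => 5000
    codec => json_lines   # one JSON object per line
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

Your Windows-side shipper then only needs to open a TCP connection to port 5000 and write one JSON object per line, which is easy to do from a small Python or PowerShell script.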

how to send linux log files to central server and look at them through web interface?

I have a couple of linux servers and logrotate and rsyslog are taking care of all log files. Now I was wondering whether the following is possible:
keep log files locally (present)
send log events to a centralized server (should be possible with logrotate, right?)
make log events on centralized server browse & searchable
So here are my questions:
How do I set up logrotate and rsyslog (on 'client' and 'server') to accomplish this configuration?
does someone know of a good (opensource) web interface that would work with this setup?
EDIT:
It seems like what I want to accomplish exists for syslog-ng: http://www.debianhelp.co.uk/syslog-ng.htm
There are tons of log analysers with a web interface, for example:
http://www.xpolog.com/
http://www.splunk.com
etc.
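For the rsyslog half, forwarding while keeping files local needs only a couple of lines. A minimal sketch; `central.example.com` and port 514 are placeholders, and the server-side syntax below is the newer (v7+) style, while older rsyslog versions use the `$ModLoad imtcp` / `$InputTCPServerRun 514` directives instead:

```
# /etc/rsyslog.conf on each client: forward everything over TCP
# (a single @ would forward over UDP instead)
*.*  @@central.example.com:514

# /etc/rsyslog.conf on the central server: accept TCP syslog
module(load="imtcp")
input(type="imtcp" port="514")
```

The client's existing local file rules keep working, so logs stay on disk locally (and under logrotate) while a copy goes to the central server for the web interface to index.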
