Not able to skip reading OIDs for a down host in the Logstash SNMP config file - logstash

I am using Logstash version 7.6.2. We are fetching SNMP data using a walk in the Logstash conf file, and I have used the multiple-hosts feature. It works fine, but if any one host is down, Logstash still tries to read every OID for that host before moving on to the next host, which takes a long time. Is there any way to skip reading the OIDs of a down host, so that the data arrives without any lag?
What I expect is that when I run this Logstash config, it skips reading all the OIDs of the down host, so that inserting the data does not take too long and there is no lag in the insertion.
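For reference, a minimal sketch (not the poster's actual config) of the kind of multi-host walk setup described, using the logstash-input-snmp plugin; lowering the per-host timeout and retries makes an unreachable host fail fast instead of timing out on every OID. Addresses, community, OID subtree, and interval are placeholders:
input {
  snmp {
    walk  => ["1.3.6.1.2.1.2.2"]    # placeholder OID subtree
    hosts => [
      {host => "udp:10.0.0.1/161" community => "public" version => "2c" retries => 1 timeout => 1000},
      {host => "udp:10.0.0.2/161" community => "public" version => "2c" retries => 1 timeout => 1000}
    ]
    interval => 60                  # polling interval in seconds
  }
}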

Related

Speed up scapy execution - packet sniffing

I'm developing an application that monitors some data in real time.
The application collects data from the network, parses the packets relevant to my protocol, and stores them in the database.
When I start the application everything seems to be OK, but lags start to appear a few seconds later.
Checking my database, it seems that some data is not saved while other data is stored (I'm using a packet player to inject packets on my PC; verifying with Wireshark, all the data is there). The data is stored in several tables, and all of the tables have the same issue, so I suspect scapy.
Checking the Wireshark statistics, I see about 200 packets per second.
Is there some way to improve its performance?
I'm using the command sniff(iface="Working", filter="port 52000", prn=my_parsing_func, store=False)
PS - I'm using Windows 10 and Python 3.7.4
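One common pattern (not from the original post) is to keep the sniff callback as small as possible and hand packets to a worker thread over a queue, so that parsing and database writes never block the capture loop. A rough sketch, with the poster's my_parsing_func left as a stub:
import queue
import threading

from scapy.all import sniff

packet_queue = queue.Queue()

def my_parsing_func(pkt):
    # placeholder for the poster's own parsing / database insert
    pass

def worker():
    # parse and store packets outside of the capture callback
    while True:
        pkt = packet_queue.get()
        my_parsing_func(pkt)
        packet_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# the callback only enqueues, so the sniffer is never blocked by parsing
sniff(iface="Working", filter="port 52000",
      prn=packet_queue.put, store=False)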

Logstash log processing from multiple source

I am new to the ELK stack. Let me explain what I am trying to do. I have an application that runs separately for different users, i.e. 5 different users will have 5 independent instances of the same application. I am using Filebeat to send the logs from the application to Logstash, where they are processed before being sent to Elasticsearch. What I want is to let each user view the logs of their own application instance only. What I tried is creating a Logstash pipeline for each user on a different port, which processes the logs and sends them to Elasticsearch under a different index name.
Can you suggest whether this is best practice or whether I am doing it wrong? Is there a better way to do it without having a separate pipeline and port for each individual user? I think the way I am doing it is wrong, and it will become harder to manage as the number of instances grows.
Thank you
I would suggest that if there is no parsing, validation, or enrichment involved, you skip Logstash altogether. You can pass the Filebeat logs straight to ES. From there you have two options: Filebeat can send an additional parameter (any fixed string) along with the scanned message to ES, or you can store the metadata (like the source IP) that Filebeat sends along with the message. This string can then be used to identify the source of the log message, and in Kibana you can configure dashboards based on that fixed string / user / metadata. This simplifies the process and avoids unnecessary hops.
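A rough sketch of what that could look like on the Filebeat side in recent Filebeat versions; the log path and the app_user field name are assumptions, and the fixed string identifies this user's instance so Kibana can filter on it:
filebeat.inputs:
  - type: log
    paths:
      - /opt/myapp/logs/*.log      # path is an assumption
    fields:
      app_user: user1              # fixed string identifying this instance
    fields_under_root: true

output.elasticsearch:
  hosts: ["http://localhost:9200"]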

How to install logstash-forwarder for multiple logstash server?

Currently we are working on forwarding logs to 2 different Logstash servers, and we cannot figure out how to do this with a logstash-forwarder installation on a single machine. Is it possible for logstash-forwarder to forward logs to multiple Logstash servers?
Otherwise, how can we do it with Filebeat?
In the LSF config, you can specify a list of hosts, but it will pick one at random and only switch to another in case of failure.
FB has the same system, but it allows you to also load balance across the list of hosts.
AFAIK, neither allows you to send events to multiple logstash instances.
Logstash, on the other hand, will send events to all of its outputs, so you could have FB send to a single LS, and have that LS output to your other LS instances. Note that if one output is unavailable, the system will block.
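For the Filebeat load-balancing mentioned above, the relevant output settings look roughly like this (host names and port are placeholders):
output.logstash:
  hosts: ["logstash-a.example.com:5044", "logstash-b.example.com:5044"]
  loadbalance: true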

How to collect spring batch remote partitioned application logs from all servers to one server using Rsyslog?

I have a use case where my daily batch-processing application (a Spring Batch Java application using remote partitioning) is deployed to 4 servers, and the application writes its log to a file in a per-batch daily folder,
e.g. the batch with batch id 2014-07-15 stores its log in /var/log/myapp/2014-07-15/batch.log.
I want to collect the logs from all servers into one single log file on the master (the master is one of the 4 servers).
I am trying to use Rsyslog for this purpose, but if there is a better solution, kindly suggest it.
1) How do I read logs from the /var/log/myapp/2014-07-15/batch.log file, whose path is dynamic, and send them to the master?
2) How do I collect the logs coming from all servers of the myapp application and store them in the log file /var/log/myapp/2014-07-15/batch.log on the master server (the log file path will be the same on all servers for a given batch)?
I have referred to the documentation and guides here:
http://www.rsyslog.com/guides-for-rsyslog/
but I can't understand how to send a log read from a file to another server, or how to use dynamic paths, though I did find
$template DynFile,"/var/log/%HOSTNAME%/%programname%.log"
I am new to rsyslog, so it is a bit difficult to put all this information together to achieve my use case. It would be a great help if someone could guide me.
Something like this?
# On the master server
$ModLoad imtcp
$InputTCPServerRun 10514
$template DynFile,"/var/log/%$now%/batch.log"
# %$now% is the current date stamp in the format YYYY-MM-DD
# referred to this - http://ftp.ics.uci.edu/pub/centos0/ics-custom-build/BUILD/rsyslog-3.19.7/doc/property_replacer.html
if $syslogtag == 'myapp' then ?DynFile
# On the slave machines
$template DynFile,"/var/log/%$now%/batch.log"
module(load="imfile" PollingInterval="10")
input(type="imfile" File=?DynFile Tag="myapp" StateFile="/var/spool/rsyslog/statefile1")
*.* @@[SERVER_IP]:10514
Also, what should the StateFile configuration be? There will be a different batch log file for each batch, so each of them should use a different StateFile. How do I configure a dynamic StateFile?
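On the StateFile point, one possible direction (an assumption on my part, as it needs a reasonably recent rsyslog 8.x imfile): let imfile watch a wildcard path, in which case rsyslog keeps state per matched file automatically and no per-batch StateFile has to be configured by hand. A sketch under that assumption:
module(load="imfile")
input(type="imfile"
      File="/var/log/myapp/*/batch.log"   # wildcard; rsyslog tracks state per matched file
      Tag="myapp")
if $syslogtag == 'myapp' then @@[SERVER_IP]:10514   # forward the tagged file messages to the master over TCP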
You might want to look at the Logstash project (aka the ELK stack). It makes managing your logs much more enjoyable and provides some really useful search features.

Suggestion on Logstash Setup

I've implemented Logstash (in testing) with the architecture described below.
Component breakdown
Rsyslog client: syslog is installed by default in all Linux distros; we just need to configure rsyslog to send logs to the remote server (see the one-line forwarding rule after this breakdown).
Logstash: Logstash will receive logs from the syslog clients and store them in Redis.
Redis: Redis works as the broker; the broker holds log data sent by the agents before Logstash indexes it. Having a broker enhances the performance of the Logstash server: Redis acts as a buffer for log data until Logstash indexes and stores it, and since it lives in RAM it is very fast.
Logstash: yes, two instances of Logstash; the first one is the syslog server, the second reads data from Redis and sends it out to Elasticsearch.
Elasticsearch: The main objective of a central log server is to collect all logs in one place, and it should also provide some meaningful data for analysis; for example, you should be able to search all log data for a particular application within a specified time period. Hence our log server must have searching and indexing capabilities. To achieve this, we install another open-source tool called Elasticsearch. Elasticsearch builds an index and then searches that index to make queries fast; it is a kind of search engine for text data.
Kibana: Kibana is a user-friendly way to view, search and visualize your log data.
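For the rsyslog client piece, pointing every client at the Logstash syslog input is typically a single forwarding rule (server name and port are placeholders; @@ means TCP, a single @ would be UDP):
*.* @@logstash.example.com:5514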
But I'm a little bit confused about Redis: in this scenario I'll be running 3 Java processes on the Logstash server plus one Redis, and this will take a huge amount of RAM.
Question
Can I use only one Logstash and Elasticsearch? Or what would be the best way?
I am actually in the midst of setting up logstash, redis, elasticsearch, kibana (aka ELK architecture) at my company.
I have the processes split between virtual machines. While you could put them on the same machine, what happens if a machine dies? Then you are left with your indexer and cluster down at the same time.
You also have the problem of not being able to properly replicate your shards on the Elasticsearch. Since you only have one server, the shards won't be replicated and your cluster health will always be yellow. You need to add enough servers to avoid the split-brain scenario.
Why keep Redis?
One key point is that Redis can talk to multiple Logstash indexers, which makes the indexing transparent to your shippers: if one indexer goes down, the alternates pick up the load. This makes your setup highly available.
It's not just a matter of shipping logs and having them indexed and searchable. While your setup will likely work in a very small, rare situation, what people are doing with ELK setups involves hundreds of servers, even thousands, so the ELK architecture is meant to scale. All of these servers will also need to be remotely managed by something like Puppet.
Finally, if you have not read it yet, I suggest you read The Logstash Book by James Turnbull.
The following are some more recommended books that have helped me so far:
Pro Puppet, Second Edition
Elasticsearch Cookbook, Second Edition
Redis Cookbook
Redis in Action
Mastering Elasticsearch
ElasticSearch Server
Elasticsearch: The Definitive Guide
Puppet Types and Providers
Puppet 3 Cookbook
You can use only one Logstash and one Elasticsearch if you put all the instances on one machine. Logstash can read the syslog file directly using the file input plugin.
Otherwise, you have to use two Logstash instances and Redis. This is because Logstash does not have any buffering mechanism of its own, so it needs Redis as a broker to buffer the log events. Redis does not keep accumulating RAM: when Logstash reads a log event from it, that memory is released. If Redis does use a lot of RAM, you have to add Logstash workers to process the logs faster.
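A rough sketch of that two-instance layout on recent Logstash versions, assuming Redis runs locally and using an arbitrary list key: the shipper buffers events into a Redis list, and the indexer drains it into Elasticsearch.
# shipper instance: receive syslog, buffer into Redis
input {
  syslog {
    port => 5514
  }
}
output {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}

# indexer instance: read from Redis, write to Elasticsearch
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}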
You should only be running one instance of Logstash. Logstash by design can have multiple input channels and output channels.
input {
  # input instances
  syslog {
    # add other settings accordingly
    type => "syslog"
  }
  redis {
    # add other settings accordingly
    type => "redis"
  }
}
filter {
  # add other settings accordingly
}
output {
  # output instances
  if [type] == "syslog" {
    redis {
      # add other settings accordingly
    }
  }
  else if [type] == "redis" {
    elasticsearch {
      # add other settings accordingly
    }
  }
}
