I want to build a high-availability ELK monitoring system with Redis, but I'm a little confused about how to make Redis HA.
Redis Sentinel provides high availability for Redis.
But I can't find any configuration for this in the documentation: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html
So can I use it for Logstash as input and output? Does anyone have experience with this?
The Logstash redis input plugin supports only one host in its host option.
I think you have 2 ways to get HA:
1) Keep using Redis. You can create a DNS A record (or edit your hosts file) that points to multiple Redis servers, then put that record in the host option (see the sketch after this list).
2) Moving from Redis to Kafka:
https://www.elastic.co/blog/logstash-kafka-intro
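For option 1, here is a minimal sketch of the Logstash side; "redis-broker" is a hypothetical DNS name that resolves to your Redis servers:
input {
  redis {
    # hypothetical A record pointing at several Redis hosts
    host => "redis-broker"
    data_type => "list"
    key => "logstash"
  }
}
Keep in mind this gives you fairly crude failover: each connection resolves to a single address, so a dead node is only skipped when the client reconnects.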
I have set up Redis on three separate instances and configured them so that one instance is the master and two are replicas of the master. I have used sentinels to make sure the setup is highly available. I have a Node.js application which needs to use Redis. How do I achieve read/write splitting in my application, given that if my Redis master goes down, one of my read replicas becomes the master and writes need to go to it?
As far as I know, ioredis is the only Node Redis client that supports sentinels.
"ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost."
I am aggregating my logs using the ELK stack. Now I would like to show metrics and create alerts with it too, like current CPU usage, number of requests handled, number of DB queries, etc.
I can collect the metrics using Telegraf or StatsD, but how do I plug them into Logstash? There is no Logstash input for either of these two.
Does this approach even make sense, or should I aggregate time-series data in a different system? I would like to have everything under one hood.
I can give you some insight on how to accomplish this with Telegraf:
Option 1: Telegraf output TCP into Logstash. This is what I do personally, because I like to have all of my data go through Logstash for tagging and mutations.
Telegraf output config:
[[outputs.socket_writer]]
  ## URL to connect to
  address = "tcp://$LOGSTASH_IP:8094"
Logstash input config:
tcp {
  port => 8094
}
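A note on the wire format (my assumption from Telegraf defaults, not something the setup above requires): socket_writer emits influx line protocol unless configured otherwise, which the tcp input will read as plain lines. If you'd rather have structured events, one option is to serialize as JSON in Telegraf and decode it in Logstash:
Telegraf output config:
[[outputs.socket_writer]]
  address = "tcp://$LOGSTASH_IP:8094"
  ## serialize each metric as JSON instead of influx line protocol
  data_format = "json"
Logstash input config:
tcp {
  port => 8094
  codec => json_lines
}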
Option 2: Telegraf directly to Elasticsearch. The docs for this are good and should tell you what to do!
From an architectural standpoint, inserting metrics into the ELK stack may or may not be the right thing to do; it depends on your use case. I switched to using Telegraf/InfluxDB because I had a lot of metrics and my consumers preferred the Influx query syntax for time-series data, plus some other Influx features such as rollups.
But there is something to be said for reducing complexity by having all of your data "under one hood". Elastic is also pushing to make the stack more suitable for time-series data with Timelion, and there were a few talks at Elasticon about storing time-series data in Elasticsearch. Here's one. I would say that storing your metrics in ELK is a completely reasonable thing to do. :)
Let me know if this helps.
Here are various options for storing metrics from StatsD in ES:
Using the statsd module of Metricbeat. The metrics can be sent to Metricbeat in StatsD format; Metricbeat then ships them to ES.
Example Metricbeat configuration:
metricbeat.modules:
- module: statsd
  host: "localhost"
  port: 8125
  enabled: true
  period: 2s
Elasticsearch as a StatsD backend. The following project allows saving metrics from StatsD to ES:
https://github.com/markkimsal/statsd-elasticsearch-backend
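A minimal sketch of the statsd daemon's config.js for this setup; registering the backend is the documented part, and the backend's own host/index settings (which I won't guess at here) are described in the project README:
{
  port: 8125,
  backends: [ "statsd-elasticsearch-backend" ]
}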
I'm setting up a simple 1 master - N slaves Redis cluster (low write rate, high read count). How to set this up is well documented on the Redis website; however, there is no information (or I missed it) about how the clients (Node.js servers in my case) handle the cluster. Do my servers need to keep two Redis connections open: one to the master (for writes) and one to a slave load balancer (for reads)? Does the Redis driver handle this automatically, sending reads to slaves and writes to the master?
The only approach I found was the thunk-redis library. It supports connecting to a Redis master-slave setup without having a cluster configured or using a sentinel.
You simply add multiple IP addresses to the client:
const client = redis.createClient(['127.0.0.1:6379', '127.0.0.1:6380'], {onlyMaster: false});
You don't need to connect to a particular instance; every instance in a Redis cluster has information about the cluster, so your client can connect to any of them. If you try to update a key that lives on a different master than the one you connected to, the Redis client takes care of it by following the redirection provided by the server.
To answer your second question: you can enable reads from a slave with the READONLY command (see the sketch below).
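For example, with ioredis (an assumption; the question doesn't name a client), cluster mode exposes this via the scaleReads option, and the client issues READONLY on the replica connections for you:
const Redis = require('ioredis');

// one or more seed nodes; the client discovers the rest of the cluster
const cluster = new Redis.Cluster(
  [{ host: '127.0.0.1', port: 6379 }],
  { scaleReads: 'slave' }  // route read commands to replicas
);

cluster.set('foo', 'bar');            // always goes to the key's master
cluster.get('foo').then(console.log); // may be served by a replica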
I'm currently attempting to use the logback-logstash-encoder to write my logs to two different logstash instances. Both of these instances will be writing to the same Elasticsearch instance.
I'm struggling to find a way to load balance between the two logstash instances.
After reading both the logback documentation and the log4j2 documentation, it's clear that the TcpAppender that logback-logstash uses does not support 'load-balanced' URLs (i.e. url1, url2). In log4j2, I can approximate this behavior with the FailoverAppender.
Is there similar functionality in logback? Or will I need to stand up another service to load-balance for logback?
AFAIK there is no support for load balancing within the TcpAppenders. However, you can achieve load balancing by adding a hardware/software load balancer in front of the logstash boxes. If a logstash process/box dies, the TCP connection gets reset, and the next log event should cause the connection to be re-established.
This approach requires adding a load balancer to your setup. You would use the load balancer's address in your TcpAppender, and the balancer handles the availability of logstash for you (see the sketch below).
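For example, a minimal logback appender pointed at the balancer rather than at an individual logstash box; the address is a placeholder:
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <!-- placeholder: the load balancer's address, not a logstash instance -->
  <destination>logstash-lb.internal:5044</destination>
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>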
It depends on what you want to achieve. Is it not losing any log events? Then the better approach is probably to write the logs to files and ship the files with logstash-forwarder to your logstash boxes. If you're concerned about connection timeouts and slowing down your application, then either the async appender or UDP could be your choice.
HTH, Mark
I've implemented Logstash (in testing) with the below-mentioned architecture.
Component Breakdown
Rsyslog client: syslog is installed by default in all Linux distros; we just need to configure rsyslog to send logs to the remote server (see the sketch after this list).
Logstash: Logstash will receive logs from the rsyslog clients and store them in Redis.
Redis: Redis will work as the broker. The broker holds log data sent by agents before Logstash indexes it; having a broker enhances the performance of the Logstash server because Redis acts as a buffer for log data until Logstash indexes and stores it. As it lives in RAM, it is very fast.
Logstash: yes, two instances of Logstash, the first for the syslog server, the second to read data from Redis and send it out to Elasticsearch.
Elasticsearch: The main objective of a central log server is to collect all logs in one place, and it should provide some meaningful data for analysis, e.g. you should be able to search all log data for a particular application over a specified time period. Hence there must be search and indexing capability on our log server. To achieve this, we install another open-source tool called Elasticsearch. Elasticsearch builds an index and then searches that index to make lookups fast; it is a kind of search engine for text data.
Kibana: Kibana is a user-friendly way to view, search and visualize your log data.
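As a concrete example of the rsyslog client piece, a one-line forwarding rule like this on each client is enough (the hostname and port are placeholders):
# /etc/rsyslog.d/90-forward.conf -- @@ forwards over TCP, a single @ would use UDP
*.* @@logstash.example.com:5514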
But I'm a little bit confused about Redis: with this scenario I'll be running three Java processes on the Logstash server plus one Redis, and this will take a huge amount of RAM.
Question
Can I use only one Logstash and Elasticsearch? Or what would be the best way?
I am actually in the midst of setting up logstash, redis, elasticsearch, kibana (aka ELK architecture) at my company.
I have the processes split between virtual machines. While you could put them all on the same machine, what happens if that machine dies? Then your indexer and your cluster are down at the same time.
You also have the problem of not being able to properly replicate your shards in Elasticsearch. Since you only have one server, the shards won't be replicated and your cluster health will always be yellow. You need to add enough servers to avoid the split-brain scenario (see the sketch below).
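For example, with the zen discovery that Elasticsearch used at the time, a three-node cluster would set a majority quorum (a sketch, adjust to your number of master-eligible nodes):
# elasticsearch.yml
discovery.zen.minimum_master_nodes: 2   # (master-eligible nodes / 2) + 1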
Why keep Redis?
Since Redis can talk to multiple Logstash indexers, one key point is that this makes indexing transparent to your shippers: if one indexer goes down, the alternates pick up the load. This makes your setup highly available.
It's not just a matter of shipping logs and having them indexed and searchable. While your setup will likely work in a very small, rare situation, the things people are doing with ELK setups span hundreds, even thousands, of servers, so the ELK architecture is meant to scale. All of those servers will also need to be remotely managed by a configuration management tool such as Puppet.
Finally, if you have not read it yet, I suggest you read The Logstash Book by James Turnbull.
The following are some more recommended books that have helped me so far:
Pro Puppet, Second Edition
Elasticsearch Cookbook, Second Edition
Redis Cookbook
Redis in Action
Mastering Elasticsearch
ElasticSearch Server
Elasticsearch: The Definitive Guide
Puppet Types and Providers
Puppet 3 Cookbook
You can use only one Logstash and one Elasticsearch if you put all the instances on one machine. Logstash can read the syslog file directly using the file input plugin.
Otherwise, you have to use two Logstash instances and Redis. This is because Logstash has no buffering mechanism of its own, so it needs Redis as a broker to buffer log events. Redis does not keep using more and more RAM: when Logstash reads a log event from it, the memory is released. If Redis does use a lot of RAM, you should add Logstash workers to process the logs faster.
You should only be running one instance of Logstash; Logstash by design supports multiple input and output channels.
input {
  # input instances
  syslog {
    # add other settings accordingly
    type => "syslog"
  }
  redis {
    # add other settings accordingly
    # a tag is used here instead of type: an input's type setting
    # does not override a type already present on the event, so
    # events read back from redis would keep type "syslog" and be
    # written to redis again, forever
    tags => ["from_redis"]
  }
}
filter {
  # add other settings accordingly
}
output {
  # output instances
  if "from_redis" in [tags] {
    elasticsearch {
      # add other settings accordingly
    }
  }
  else {
    redis {
      # add other settings accordingly
    }
  }
}