Really big retrieval lag for Logstash Kafka inputs producing data irregularly

I'm using Logstash 2.4 with the Kafka input plugin 5.1.6. In my config I created a field called lag_seconds in order to monitor how much time it takes Logstash to process logs:
ruby {
  code => "event['lag_seconds'] = (Time.now.to_f - event['@timestamp'].to_f)"
}
I listen to several Kafka topics from a single Logstash instance, and for the topics that produce logs regularly everything is OK and the lag is small (several seconds). However, for the topics that produce a small amount of logs irregularly I get really big lags, sometimes tens of thousands of seconds.
My configuration for the Kafka input is the following:
input {
  kafka {
    bootstrap_servers => "kafka-broker1:6667,kafka-broker2:6667,kafka-broker3:6667"
    add_field => { "environment" => "my_env" }
    topics_pattern => "a|lot|of|topics|like|60|of|them"
    decorate_events => true
    group_id => "mygroup1"
    codec => json
    consumer_threads => 10
    auto_offset_reset => "latest"
    auto_commit_interval_ms => "5000"
  }
}
The Logstash instance seems healthy, as logs from the other topics are being retrieved regularly. I've checked, and if I connect to Kafka using its console consumer the delayed logs are there. I also thought it might be a problem with too many topics being served by a single Logstash instance, so I extracted those small topics to separate Logstash instances, but the lag is exactly the same, so that's not the issue.
Any ideas? I suspect that logstash might be using some exponential delay for log retrieval, but have no idea how to confirm and fix that.

Some information is still missing:
Kafka client version?
What's the content of @timestamp?
What's the order of the filters? Is ruby the last one?
"the delayed logs are there" -- does 'there' mean in Kafka?
Timestamp
If we didn't use the date filter to change this field, @timestamp should be the time at which the log entry was read.
In this case the lag is up to tens of thousands of seconds, so I guess the date filter is used and @timestamp here is the time when the log was generated.
Wait Before Fetch
When the Kafka input plugin consumes messages, the broker (and the client-side poll) may wait some time before returning data. This can be configured by two options:
fetch_max_wait_ms
poll_timeout_ms
You may check them in your config file.
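For illustration, here is a hedged sketch of where these options would sit in the Kafka input (the values are examples only, not recommendations):
input {
  kafka {
    bootstrap_servers => "kafka-broker1:6667"
    topics_pattern => "my|topics"
    fetch_max_wait_ms => "500"  # max time the broker blocks a fetch request waiting for data
    poll_timeout_ms => 100      # max time the client-side poll blocks before returning to Logstash
  }
}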
Wait Before Filter
Logstash handles input logs in batches to improve performance, so if not enough logs arrive it will wait some time before dispatching a partial batch.
pipeline.batch.delay
You may check it in logstash.yml.
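A minimal logstash.yml sketch (Logstash 5.x and later; the values are illustrative, not recommendations):
# logstash.yml
pipeline.batch.size: 125   # max events a worker collects before running filters and outputs
pipeline.batch.delay: 50   # milliseconds to wait for more events before flushing an undersized batch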
Metric
Logstash can generate metric information about itself; combined with Elasticsearch and Kibana this can be very handy, so I suggest you give it a try.
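For example, assuming Logstash 5.x or later (which exposes a monitoring API on port 9600), you can poll the node stats endpoint to see per-plugin event counts and timings:
curl -s 'http://localhost:9600/_node/stats?pretty'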
Ref
Kafka Input
Logstash Config
ELK Monitoring

Related

communication filebeat --> logstash: messages wrong order

I'm trying to feed a CSV data file to Logstash using Filebeat. Unfortunately the messages arrive out of order. Is there any way to correct this?
Could this be caused by TCP or by the pipeline? Logstash started logstash.javapipeline with pipeline_id=>"main", "pipeline.workers"=>8.
I tried:
filebeat - output to console - pass
filebeat - output to logstash (localhost) - logstash w/o filter; output to stdout - fail (wrong order of messages)
By default, order is not guaranteed in Logstash, as the events in a batch can be reordered during filter processing and some events can be processed faster than others.
If you need to order your events you will have to change the number of pipeline.workers to 1, which means that only 1 CPU will be used to process your messages.
Also, set pipeline.ordered to auto in logstash.yml.
Setting pipeline.workers to 1 will make Logstash process the events in the order they are received, but since it will use only 1 CPU, it can impact performance if you have a high rate of events per second.
This is the part of the documentation about ordering events.
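As a sketch, the logstash.yml settings described above would look something like this:
# logstash.yml
pipeline.workers: 1    # single worker, so batches are filtered in arrival order
pipeline.ordered: auto # enforces event ordering when workers == 1 (available since Logstash 7.7)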

Logstash input event size calculation

I want to monitor Logstash event size per minute from a particular event source.
I am collecting events from multiple applications and want to track how much data each application is sending to Logstash, in bytes.
I am able to count the number of events per application but am now stuck on the size/volume metric.
Is there any way we can achieve this in Logstash?
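One possible sketch (an assumption on my part, not a documented recipe): a ruby filter that records the serialized size of each event in a field, which you could then sum per application downstream. The field name event_bytes is just an example, and this uses the newer Event API (Logstash 5.x+):
ruby {
  # store the JSON-serialized size of the event, in bytes
  code => "event.set('event_bytes', event.to_json.bytesize)"
}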

Best approach to check if Spark streaming jobs are hanging

I have a Spark streaming application which basically gets a trigger message from Kafka that kicks off batch processing which could potentially take up to 2 hours.
There were incidents where some of the jobs hung indefinitely and didn't complete within the usual time, and currently there is no way to figure out the status of the job without checking the Spark UI manually. I want a way to tell whether the currently running Spark jobs are hanging or not. So basically, if a job is hanging for more than 30 minutes, I want to notify the users so they can take action. What options do I have?
I see I can use metrics from the driver and executors. If I were to choose the most important one, it would be the last received batch records. When StreamingMetrics.streaming.lastReceivedBatch_records == 0 it probably means that the Spark streaming job has been stopped or failed.
But in my scenario, I will receive only one streaming trigger event, which then kicks off processing that may take up to 2 hours, so I won't be able to rely on the records-received metric.
Is there a better way? TIA
YARN provides a REST API to check the status of an application, and the status of cluster resource utilization as well.
An API call will give a list of running applications along with their start times and other details. You can have a simple REST client that triggers maybe once every 30 minutes or so, checks whether the job has been running for more than 2 hours, and if so sends a simple mail alert.
Here is the API documentation:
https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_API
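For instance (the ResourceManager host is a placeholder), the Cluster Applications API can be filtered to running applications, and the startedTime field of each entry compared against your threshold:
curl -s 'http://<resourcemanager-host>:8088/ws/v1/cluster/apps?states=RUNNING'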
Maybe a simple solution like this:
At the start of the processing, launch a waiting thread.
val TWO_HOURS = 2 * 60 * 60 * 1000L

val t = new Thread(new Runnable {
  override def run(): Unit = {
    try {
      Thread.sleep(TWO_HOURS)
      // still here after two hours: send an email that the job didn't end
    } catch {
      // interrupted because processing finished in time: do nothing
      case _: InterruptedException =>
    }
  }
})
t.start()
And in the place where you can tell that batch processing has ended:
t.interrupt()
If processing is done within 2 hours, the waiting thread is interrupted and the e-mail is not sent. If processing is not done, the e-mail will be sent.
Let me draw your attention towards Streaming Query listeners. These are quite amazing lightweight things that can monitor your streaming query progress.
In an application that has multiple queries, you can figure out which queries are lagging or have stopped due to some exception.
Please find below sample code to understand its implementation. I hope that you can use this and convert this piece to better suit your needs. Thanks!
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener.{QueryStartedEvent, QueryProgressEvent, QueryTerminatedEvent}

var recordsReadCount = 0L  // running total of input rows for the monitored query

spark.streams.addListener(new StreamingQueryListener() {
  override def onQueryStarted(event: QueryStartedEvent): Unit = {
    // logger message to show that the query has started
  }
  override def onQueryProgress(event: QueryProgressEvent): Unit = synchronized {
    if (event.progress.name.equalsIgnoreCase("QueryName")) {
      recordsReadCount = recordsReadCount + event.progress.numInputRows
      // logger messages to show continuous progress
    }
  }
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = synchronized {
    // logger message to show the reason of termination
  }
})
I'm using Kubernetes currently with the Google Spark Operator. [1]
Some of my streaming jobs hang while using Spark 2.4.3: a few tasks fail, then the current batch job never progresses.
I have set a timeout using a StreamingProgressListener so that a thread signals when no new batch is submitted for a long time. The signal is then forwarded to a Pushover client that sends a notification to an Android device. Then System.exit(1) is called. The Spark Operator will eventually restart the job.
[1] https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
One way is to monitor the output of the Spark job that was kicked off. For example:
If it writes to HDFS, monitor the HDFS output directory for last modified file timestamp or file count generated
If it writes to a Database, you could have a query to check the timestamp of the last record inserted into your job output table.
If it writes to Kafka, you could use Kafka GetOffsetShell to get the output topic's current offset.
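For the Kafka case, a sketch of the GetOffsetShell invocation (broker and topic are placeholders; --time -1 means 'latest offset'), as used on older Kafka distributions:
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker:9092> --topic <output-topic> --time -1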
Utilize TaskContext.
This provides contextual information for a task, and supports adding listeners for task completion/failure (see addTaskCompletionListener).
More detailed information such as the task 'attemptNumber' or 'taskMetrics' is available as well.
This information can be used by your application during runtime to determine if there is a 'hang' (depending on the problem).
More information about what is 'hanging' would be useful in providing a more specific solution.
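A minimal sketch of the idea (Spark 2.4+; the RDD name and the timing logic are illustrative, not part of the original answer):
import org.apache.spark.TaskContext

// inside any RDD operation, register a per-task completion listener
rdd.mapPartitions { iter =>
  val start = System.currentTimeMillis()
  TaskContext.get().addTaskCompletionListener[Unit] { ctx =>
    // fires when the task finishes, whether it succeeded or failed
    println(s"task ${ctx.taskAttemptId()} (attempt ${ctx.attemptNumber()}) " +
      s"ran for ${System.currentTimeMillis() - start} ms")
  }
  iter
}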
I had a similar scenario to deal with about a year ago and this is what I did -
As soon as Kafka receives the message, the Spark streaming job picks up the event and starts processing.
The Spark streaming job sends an alert email to the support group saying "Event received and Spark transformation STARTED". The start timestamp is stored.
After the Spark processing/transformations are done, it sends an alert email to the support group saying "Spark transformation ENDED successfully". The end timestamp is stored.
The above two steps will help the support group notice when the Spark processing success email is not received after the start email, and they can investigate by looking at the Spark UI for job failure or delayed processing (maybe the job is hung for a long time due to resource unavailability).
Finally, store the event id or details in an HDFS file along with the start and end timestamps, and save this file to the HDFS path that some Hive log table points to. This will be helpful as a future reference for how the Spark code performs over time, and it can be fine-tuned if required.
Hope this is helpful.

Stream Analytics too slow (time slippage between two streams)?

Here's my stream analytics topology
EventHubSource => Job A (HoppingWindow every second) => EventHubA
EventHubSource => Job B (HoppingWindow every second) => EventHubB
Each job has a different consumer group in EventHubSource.
Each job is embarrassingly parallel and consumes only 14% SU resources.
When testing Job A and Job B individually, the difference between the windowEnd and the original event time is just a few milliseconds (~300), which is OK (latency from my producer + Event Hub + Stream Analytics processing time).
But when I join both streams in a new Job C like this:
EventHubA
\
=> Job C (Join Datediff = 0 and timestamp by windowEnd)
/
EventHubB
This produces some output, but the problems comes here:
The real events are multiple minutes apart even if they were pushed at the same time by Job A and B (same windowEnd)
When I inspect the data coming out of EventHub A and B, the difference between the windowEnd and the real event timestamp ranges between 39 and 44 minutes, for all of them. But when testing as mentioned above, it was only 300ms.
The worst part here is that when I run it in prod, it only emits some dozen events and stops, even if the input count is still in the thousands.
I've been working on this for weeks, and every time I'm dealing with some cryptic behavior from ASA. My topology is quite simple and I'm only using simple hopping windows with a 1 s hop; this shouldn't take weeks of tweaking and trial and error without even understanding what's happening.
For people who have used both ASA and AWS Kinesis Analytics: did you find Kinesis Analytics simpler to work with? What annoys me here in ASA is the unpredictable behavior and issues without error messages (I activated Log Analytics and no error was there...).
Sorry to hear you encountered some issues with ASA. I see you have a 1-second hopping window, but what is the total size of the window and what is your approximate throughput?
Regarding the delay: looking at your question, I think your ASA job may not have enough CPU resources, and then the event processing is delayed. Unfortunately this is not visible in the current SU% metric, but we plan to show metrics for both CPU and memory in the future.
To confirm this is the root cause, you can check the number of backlogged events in the job diagram. If there are lot of events backlogged, you may need to increase the number of SUs for this job.
You also mentioned the job stops after a dozen output, do you see an error message in the logs?

Is logstash input stage multithreaded?

I was looking at the Logstash pipeline.workers option, whose documentation states:
-w, --pipeline.workers COUNT
Sets the number of pipeline workers to run. This option sets the number of workers that will, in parallel, execute the filter and output stages of the pipeline. If you find that events are backing up, or that the CPU is not saturated, consider increasing this number to better utilize machine processing power. The default is the number of the host’s CPU cores.
I was wondering if logstash input stage also uses all the cores of my machine:
input {
  kafka {
    bootstrap_servers => "kfk1:9092,kfk2:9092"
    topics => ["mytopic"]
    group_id => "mygroup"
    key_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    codec => avro {
      schema_uri => "/apps/schema/rocana3.schema"
    }
  }
}
Does this input > kafka > codec > avro also utilize all the cores of my machine, or is this a single-threaded stage?
Logstash input pipelining has a few quirks in it. It can be multithreaded, but it takes some configuration. There are two ways to do it:
Some input plugins have a workers parameter, though not many do.
Each input {} block will run on its own thread.
So, if you're running the file {} input plugin, which lacks a worker config option, each file you define will be serviced by one and only one thread.
Codecs run in the context of the plugin that calls them, which is typically single-threaded per invocation.
Where most Logstash deployments I've run into end up using many cores is in the filter {} stage of the pipeline, not the input. That is why Logstash provides a way to configure the number of pipeline workers. For an input, or set of inputs, pulling thousands of events a second, you can load up a box pretty far solely on the filter {} to output {} pipeline.
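In the specific case of the kafka input above, a sketch of one way to get input-side parallelism (the thread count is illustrative and should not exceed the topic's partition count) is the plugin's own consumer_threads option, or alternatively defining several input {} blocks:
input {
  kafka {
    bootstrap_servers => "kfk1:9092,kfk2:9092"
    topics => ["mytopic"]
    group_id => "mygroup"
    consumer_threads => 4  # one consumer thread per topic partition is a common rule of thumb
    codec => avro {
      schema_uri => "/apps/schema/rocana3.schema"
    }
  }
}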
