Is the Logstash input stage multithreaded?

I was looking at the Logstash pipeline.workers option, whose documentation states:
-w, --pipeline.workers COUNT
Sets the number of pipeline workers to run. This option sets the number of workers that will, in parallel, execute the filter and output stages of the pipeline. If you find that events are backing up, or that the CPU is not saturated, consider increasing this number to better utilize machine processing power. The default is the number of the host’s CPU cores.
I was wondering whether the Logstash input stage also uses all the cores of my machine:
input {
  kafka {
    bootstrap_servers => "kfk1:9092,kfk2:9092"
    topics => ["mytopic"]
    group_id => "mygroup"
    key_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    codec => avro {
      schema_uri => "/apps/schema/rocana3.schema"
    }
  }
}
Does this input > kafka > codec > avro chain also utilize all the cores of my machine, or is this a single-threaded stage?

Logstash input pipelining has a few quirks. It can be multithreaded, but that takes some configuration. There are two ways to get it:
Some input plugins take a workers parameter; not many do.
Each input {} block runs on its own thread.
So, if you're running the file {} input plugin, which lacks a workers option, each file you define will be serviced by one and only one thread.
Codecs run in the context of the plugin that calls them, and that plugin is typically single-threaded per invocation.
Where most Logstash deployments I've run into end up using many cores is the filter {} stage of the pipeline, not the input; that is why Logstash provides a way to configure the number of pipeline workers. For an input, or set of inputs, pulling thousands of events a second, you can load up a box pretty far on the filter {} to output {} pipeline alone.
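For illustration, a minimal sketch (topic names are hypothetical): each input {} block below gets its own thread, while -w (pipeline.workers) only affects the filter and output stages.
input {
  kafka {
    bootstrap_servers => "kfk1:9092,kfk2:9092"
    topics => ["topic_a"]          # hypothetical topic; consumed on its own input thread
    group_id => "mygroup"
  }
}
input {
  kafka {
    bootstrap_servers => "kfk1:9092,kfk2:9092"
    topics => ["topic_b"]          # hypothetical topic; consumed on a second input thread
    group_id => "mygroup"
  }
}
# Filter/output parallelism is set separately, e.g.:
#   bin/logstash -w 8 -f pipeline.conf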

Related

Dataflow exceeds number_of_worker_harness_threads

I deployed a Dataflow job with the parameter --number_of_worker_harness_threads=5 (streaming mode).
Next I sent 20 Pub/Sub messages, each triggering the load of a big CSV file from GCS and the start of its processing.
In the logs I see that the job took 10 messages and processed them in parallel on 6-8 threads (I checked several times; sometimes it was 6, sometimes 8).
Nevertheless, it was always more than 5.
Any idea how this works? It does not seem to be the expected behavior.
Judging from the flag name, you are using Beam Python SDK.
For Python streaming, the total number of threads running DoFns on one worker VM in the current implementation may be up to the value provided in --number_of_worker_harness_threads times the number of SDK processes running on the worker, which by default is the number of vCPU cores. There is a way to limit the number of processes to 1 regardless of the number of vCPUs: set --experiments=no_use_multiple_sdk_containers.
For example, if you are using --machine_type=n1-standard-2 and --number_of_worker_harness_threads=5, you may have up to 10 DoFn instances in different threads running concurrently on the same machine.
If --number_of_worker_harness_threads is not specified, up to 12 threads per process are used. See also: https://cloud.google.com/dataflow/docs/resources/faq#how_many_instances_of_dofn_should_i_expect_dataflow_to_spin_up_
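As a concrete sketch of that arithmetic (the script name is a placeholder, and required flags such as --project, --region and --temp_location are omitted):
# n1-standard-2 has 2 vCPUs => 2 SDK processes by default,
# so up to 2 * 5 = 10 DoFn threads may run concurrently per worker VM.
python my_pipeline.py \
  --runner=DataflowRunner \
  --streaming \
  --machine_type=n1-standard-2 \
  --number_of_worker_harness_threads=5
# Adding --experiments=no_use_multiple_sdk_containers forces a single SDK
# process, capping the total at 5 DoFn threads per VM.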

Slurm: can I create a sub-queue using a subset of resources in a single node?

I have a use case with slurm and I wonder if there is a way to handle it.
Constraints:
I would like to run several jobs (say 60 jobs).
Each one takes a few hours, e.g. 3h/job.
In the cluster managed by Slurm, I use a queue with 2 nodes with 4 GPUs each (so I can restrict my batch script to one node).
Each job takes 1 GPU.
Problem: if I put everything in the queue, I will block 4 GPUs even if I specify only 1 node.
Desired solution: avoid blocking a whole machine by taking, say, only 2 GPUs.
How can I put the jobs in the queue without them taking all 4 GPUs?
Could I create a kind of sub-queue limited to a subset of a node's resources, for example?
You can use the Slurm consumable trackable resources plug-in (cons_tres, enabled in your slurm.conf file; more info here: https://slurm.schedmd.com/cons_res.html#using_cons_tres) to:
Specify --gpus-per-task=X
-or-
Bind a specific number of GPUs to the task with --gpus=X
-or-
Bind the task to a specific GPU by its ID with --gpu-bind=GPUID
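For example, a minimal batch script under these assumptions (partition name, resource counts and job command are hypothetical); submitting the 60 jobs this way lets Slurm pack them onto individual GPUs instead of reserving whole nodes:
#!/bin/bash
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=1        # reserve exactly one GPU, not the whole node
#SBATCH --cpus-per-task=4        # adjust to the job's needs
#SBATCH --time=04:00:00          # each job takes about 3h

srun ./my_job.sh                 # hypothetical job command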

Does the Spark config spark.streaming.receiver.maxRate have any effect in a Kafka Beam pipeline?

I was wondering if somebody has any experience with rate limiting the Beam KafkaIO component when the runner is the SparkRunner. The versions I am using are Beam 2.29, Spark 3.2.0, and Kafka client 2.5.0.
I have the Beam parameter maxRecordsPerBatch set to a large number, 100000000, but even when the pipeline stops for 45 minutes this value is never hit. When there is a high burst of data above the normal, the Kafka lag increases until it eventually catches up. In the Spark UI I see that the parameter batchIntervalMillis=300000 (5 min) is not reached; batches take a maximum of 3 min. It looks like KafkaIO stops reading at some point, even when the lag is very large. My Kafka parameters --fetchMaxWaitMs=1000 and --maxPollRecords=5000 should be able to bring plenty of data, especially because KafkaIO creates one consumer per partition. In my system there are multiple topics with a total of 992 partitions, and my spark.default.parallelism=600. Some partitions have very little data, while others have a large amount. Topics are per region, and when a region goes down the data is sent through another region/topic. That is when the lag happens.
Do the configuration values spark.streaming.receiver.maxRate and spark.streaming.receiver.maxRatePerPartition, plus spark.streaming.backpressure.enabled, play any role at all?
From what I have seen, it looks like Beam controls the whole read from Kafka with the KafkaIO operator. This component creates its own consumers, therefore the rate of the consumer can only be set using consumer configs, which include fetchMaxWaitMs and maxPollRecords.
The only way those Spark parameters could have any effect is on the rest of the pipeline after the IO source, but I am not sure.
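For what it's worth, a sketch of how those consumer properties can be passed to KafkaIO in Beam 2.29 (broker list and topic are placeholders); the spark.streaming.receiver.* settings never come into play because KafkaIO builds its own consumers:
import com.google.common.collect.ImmutableMap;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

// Sketch: KafkaIO creates one consumer per partition; its read rate is tuned
// through standard Kafka consumer properties, not spark.streaming.receiver.*.
Pipeline pipeline = Pipeline.create();
PCollection<KafkaRecord<byte[], byte[]>> records =
    pipeline.apply(
        KafkaIO.<byte[], byte[]>read()
            .withBootstrapServers("broker1:9092,broker2:9092")   // placeholder
            .withTopic("mytopic")                                 // placeholder
            .withKeyDeserializer(ByteArrayDeserializer.class)
            .withValueDeserializer(ByteArrayDeserializer.class)
            .withConsumerConfigUpdates(ImmutableMap.<String, Object>of(
                ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 1000,
                ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5000)));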
So I finally figured out how it all works. First I want to state that the Spark configuration values spark.streaming.receiver.maxRate, spark.streaming.receiver.maxRatePerPartition, and spark.streaming.backpressure.enabled play no role in Beam, because they only work if you are using the source operators from Spark to read from Kafka. Since Beam has its own operator, KafkaIO, they do not apply.
Beam has a set of parameters, defined in the class SparkPipelineOptions, that the SparkRunner uses to set up reading from Kafka. Those parameters are:
#Description("Minimum time to spend on read, for each micro-batch.")
#Default.Long(200)
Long getMinReadTimeMillis();
#Description(
"A value between 0-1 to describe the percentage of a micro-batch dedicated to reading from UnboundedSource.")
#Default.Double(0.1)
Double getReadTimePercentage();
Beam creates a SourceDStream object that it passes to Spark to use as the source for reading from Kafka. In this class, the method boundReadDuration returns the larger of two values: proportionalDuration and lowerBoundDuration. The first is calculated by multiplying batchIntervalMillis by readTimePercentage; the second is simply the value in millis from minReadTimeMillis. Below is the code from SourceDStream. The duration returned by this function is used for reading from Kafka alone; the rest of the time is allocated to the other tasks in the pipeline.
Last but not least, the parameter maxRecordsPerBatch also controls how many records are processed during a batch; the pipeline will not process more than that many records in a single batch.
private Duration boundReadDuration(double readTimePercentage, long minReadTimeMillis) {
  long batchDurationMillis = ssc().graph().batchDuration().milliseconds();
  Duration proportionalDuration = new Duration(Math.round(batchDurationMillis * readTimePercentage));
  Duration lowerBoundDuration = new Duration(minReadTimeMillis);
  Duration readDuration =
      proportionalDuration.isLongerThan(lowerBoundDuration) ? proportionalDuration : lowerBoundDuration;
  LOG.info("Read duration set to: " + readDuration);
  return readDuration;
}
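Assuming the options above, they can be set like any other pipeline option when launching the job; a sketch with purely illustrative values:
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Sketch: give each 5-minute micro-batch a larger read window.
// Values are illustrative, not recommendations.
String[] args = {
    "--runner=SparkRunner",
    "--batchIntervalMillis=300000",   // 5 min micro-batches, as in the question
    "--readTimePercentage=0.3",       // up to 30% of each batch spent reading
    "--minReadTimeMillis=1000",
    "--maxRecordsPerBatch=100000000"
};
SparkPipelineOptions options =
    PipelineOptionsFactory.fromArgs(args).withValidation().as(SparkPipelineOptions.class);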

Really big retrieval lag for Logstash Kafka inputs producing data irregularly

I'm using Logstash 2.4 with the Kafka input 5.1.6. In my config I create a field called lag_seconds in order to monitor how much time it takes Logstash to process logs:
ruby {
  code => "event['lag_seconds'] = (Time.now.to_f - event['@timestamp'].to_f)"
}
I listen to several Kafka topics from a single Logstash instance, and for the topics that produce logs regularly everything is OK and the lag is small (several seconds). However, for the topics that produce a small amount of logs irregularly I get really big lags, sometimes tens of thousands of seconds.
My configuration for the Kafka input is the following:
input {
  kafka {
    bootstrap_servers => "kafka-broker1:6667,kafka-broker2:6667,kafka-broker3:6667"
    add_field => { "environment" => "my_env" }
    topics_pattern => "a|lot|of|topics|like|60|of|them"
    decorate_events => true
    group_id => "mygroup1"
    codec => json
    consumer_threads => 10
    auto_offset_reset => "latest"
    auto_commit_interval_ms => "5000"
  }
}
The Logstash instance seems healthy, as logs from other topics are being retrieved regularly. I've checked, and if I connect to Kafka using its console consumer the delayed logs are there. I've also thought that it might be a problem with too many topics being served by a single Logstash instance and extracted those small topics to separate Logstash instances, but the lag is exactly the same, so that's not the issue.
Any ideas? I suspect that Logstash might be using some exponential delay for log retrieval, but I have no idea how to confirm and fix that.
Some information is still missing:
Kafka client version?
What's the content of @timestamp?
What's the order of the filters? Is ruby the last one?
"the delayed logs are there" -- does 'there' mean in Kafka?
Timestamp
If the date filter isn't used to change this field, @timestamp should be the time at which the log entry was read.
In this case, the lag runs to tens of thousands of seconds, so I guess the date filter is used and the timestamp here is the time when the log was generated.
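For reference, a date filter of this kind typically looks like the sketch below (the source field name and format are hypothetical); it overwrites @timestamp with the time embedded in the event, which is what makes lag_seconds measure the delay from log generation rather than from when Logstash read the line.
filter {
  date {
    match => ["log_time", "ISO8601"]   # hypothetical field carrying the event's own timestamp
    target => "@timestamp"             # the default target
  }
}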
Wait Before Fetch
When the Kafka input plugin consumes messages, it waits some time for the server to respond. This can be configured by two options:
fetch_max_wait_ms
poll_timeout_ms
You may check them in the config file (see the sketch below).
Wait Before Filter
Logstash handles input logs in batches to improve performance, so if not enough logs arrive, it waits for some time:
pipeline.batch.delay
You may check it in logstash.yml.
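A sketch of where those knobs live, with purely illustrative values:
# In the pipeline config, on the kafka input:
input {
  kafka {
    bootstrap_servers => "kafka-broker1:6667,kafka-broker2:6667,kafka-broker3:6667"
    topics_pattern => "a|lot|of|topics|like|60|of|them"
    fetch_max_wait_ms => "500"   # illustrative: max time the broker holds a fetch when data is scarce
    poll_timeout_ms => 100       # illustrative: how long each consumer poll blocks
  }
}
# In logstash.yml:
#   pipeline.batch.delay: 5      # illustrative: ms to wait before flushing an undersized batch to the filters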
Metric
Logstash itself can generate metric information; combined with Elasticsearch and Kibana, it is very handy to use. So I suggest you give it a try.

Logstash File Input Latency on Linux

Logstash is running.
How long does it take from adding a single line to a log file until Logstash recognizes the new line and starts to transform and output it?
With a simple Bash script I measured from 99 ms up to 800 ms, including a transformation. It's clear that the latency depends on the Logstash transformation, the disk, the OS, and the CPU. But how does Logstash recognize the file change? Is there an internal timer? Does Logstash poll the file?
Logstash's file input polls the files being watched at the interval set in the stat_interval parameter, which currently (Logstash 1.5) defaults to 1, i.e. every second.
In other words, assuming that
Logstash isn't behind on reading any of the log files monitored by a particular file input, and
the Logstash process isn't CPU-starved (it usually runs at priority 19, so heavy CPU usage by other processes could cause scheduling delays),
new events will on average get picked up within 500 ms and in the worst case within 1000 ms.
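For example, a minimal file input (the path is hypothetical); stat_interval is in seconds, and lowering it trades extra stat() calls for lower pickup latency:
input {
  file {
    path => "/var/log/myapp/*.log"   # hypothetical path
    stat_interval => 1               # poll for file changes every second (the default)
  }
}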
