I am currently using the VoltDB Kafka importer to import data from multiple Kafka topics, and I am facing a performance issue with the loader.
I read the VoltDB documentation but was unable to find how to fine-tune the importer.
How can I specify a specific partition of a topic?
My current setup:
A 6-node VoltDB cluster, with Kafka importers running on the nodes and a custom procedure for the insert.
Kafka importer config
Host: 172.x.x.x:9092
Topic: mytopic_1,mytopic_2,...mytopic_10
Procedure: tinsert
CREATE PROCEDURE tinsert AS INSERT INTO tinsert (sensor_id, column2, column3, received_time) VALUES (?, ?, ?, NOW());
The table is partitioned and the partition key is sensor_id.
The problem is that the importer is not pulling data as fast as it is generated.
The message publication rate is 10,000 records per second.
Any help would be appreciated.
There are a few things you could adjust that would affect the rate at which the Kafka importer can ingest data into VoltDB.
The number of partitions for the topic in Kafka. VoltDB will run a consumer thread for each partition. More partitions = more threads.
The VoltDB importer retrieves a batch of records from Kafka from its current offset, then calls the procedure for each record. It waits for the procedure callbacks to return so it knows everything was processed. Then it advances the offset in Kafka and retrieves another batch. This process may be limiting the rate it can handle. If you set the property commit.policy=10, then it would just advance the offset to whatever it has read every 10 milliseconds. That may allow faster data flow, at the risk of potentially having a small gap if there is a failure and restart (e.g. the offset advanced beyond records that were read but not inserted).
For the configuration options, see: https://docs.voltdb.com/UsingVoltDB/exportimportkafka.php
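For illustration only, the importer section of the deployment file would look roughly like the sketch below. I am writing this from memory, so double-check the property names against the page above; the broker address, topic, and commit interval are placeholders, not a tested configuration.

<import>
  <configuration type="kafka" enabled="true">
    <!-- placeholder broker address -->
    <property name="brokers">172.x.x.x:9092</property>
    <!-- one configuration per topic keeps the tuning independent -->
    <property name="topics">mytopic_1</property>
    <property name="procedure">tinsert</property>
    <!-- advance the Kafka offset on a timer (milliseconds) rather than
         waiting for every procedure callback; see the caveat above -->
    <property name="commit.policy">10</property>
  </configuration>
</import>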
The performance / scale of the procedure being called on the cluster. If the table is partitioned, and the procedure is only inserting, it probably isn't the limiting factor.
Disclosure: I work at VoltDB
Related
I am using Kafka sink connector to write data from Kafka to s3. The output data is partitioned into hourly buckets - year=yyyy/month=MM/day=dd/hour=hh. This data is used by a batch job downstream. So, before starting the downstream job, I need to be sure that no additional data will arrive in a given partition once the processing for that partition has started.
What is the best way to design this? How can I mark a partition as complete, i.e. ensure that no additional data will be written to it once it is marked as complete?
EDIT: I am using RecordField as the timestamp.extractor. My Kafka messages are guaranteed to be sorted within partitions by the partitioning field.
It depends on which timestamp extractor you are using in the sink config.
You would have to guarantee that no record can have a timestamp earlier than the time you consume it.
AFAIK, the only way that is possible is by using the wallclock timestamp extractor. Otherwise, you are consuming the Kafka record timestamp, or some timestamp within each message, either of which can be set on the producer end to some event time in the past.
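For reference, the partitioning-related part of a Confluent S3 sink configuration using the wallclock extractor might look roughly like the following. This is a sketch from memory, not a verified config; the bucket name, flush size, and durations are placeholders.

{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "s3.bucket.name": "my-bucket",
  "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
  "partition.duration.ms": "3600000",
  "timestamp.extractor": "Wallclock",
  "locale": "en-US",
  "timezone": "UTC",
  "flush.size": "10000",
  "rotate.interval.ms": "600000"
}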
I have a question regarding the use of Cassandra for temporary data (data that is written once to the database, read once from the database, and then deleted).
We are using Cassandra to exchange data between processes that run on different machines / different containers. Process1 writes some data to Cassandra, and Process2 reads this data. After that, the data can be deleted.
As we learned that Cassandra doesn't handle frequent writes and deletes in the same table very well (because of tombstones and the resulting performance issues), we are creating temporary tables for this:
Process1: Create table, write data to table.
Process2: Read data from table, drop table.
But doing this at a very high rate (500-1000 table creates and drops per hour), we are facing problems with schema synchronization between our nodes (we have a cluster with 6 nodes).
The Cassandra cluster got very slow, we got a lot of timeout warnings and errors about schema disagreement between the nodes, the CPU load on the cluster nodes grew to 100%, and then the cluster was dead :-).
Is Cassandra the right database for this use case?
Is it a problem with how we configured our cluster?
Would it be a better solution to create temporary keyspaces for this?
Does anyone have experience with handling such a use case with Cassandra?
You don't need any database here. Your use case is to enable your applications to handshake with each other to share data asynchronously. There are two possible solutions:
1) For batch-based writes and reads, consider something like HDFS for intermediate storage. Process 1 writes data files into HDFS directories and Process 2 reads them from HDFS.
2) For a message-based system, consider something like Kafka. Process 1 processes the data stream and writes into Kafka topics, and Process 2's consumers read the data from those topics. Kafka provides acknowledgement (offset commit) semantics (see the sketch below).
Continuously creating and dropping a large number of tables in Cassandra is not a good practice and is never recommended.
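To make option 2 concrete, here is a minimal, hypothetical sketch of that handshake using the plain Kafka Java clients. The broker address, topic name, and serializers are assumptions, and error handling and retry logic are omitted.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HandshakeSketch {

    // Process 1: publish each piece of intermediate data to a topic.
    static void produce(String payload) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("intermediate-data", payload));  // hypothetical topic
        }
    }

    // Process 2: read the data once; committing the offset takes the place of the delete.
    static void consume() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "process2");
        props.put("enable.auto.commit", "false");  // commit only after processing succeeded
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("intermediate-data"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                process(record.value());  // whatever Process 2 does with the data
            }
            consumer.commitSync();  // acknowledge: this data will not be redelivered to the group
        }
    }

    static void process(String value) { /* application logic */ }
}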
I have a question regarding reading data with Spark Direct Streaming (Spark 1.6) from Kafka 0.9 and saving it in HBase.
I am trying to do updates on specific row keys in an HBase table as they are received from Kafka, and I need to ensure that the order of events is kept (data received at t0 is saved in HBase for sure before data received at t1).
The row key represents a UUID, which is also the key of the message in Kafka, so at the Kafka level I am sure that the events corresponding to a specific UUID are ordered at the partition level.
My problem begins when I start reading using Spark.
Using the direct stream approach, each executor reads from one partition. I am not doing any shuffling of the data (just parse and save), so my events won't get mixed up across the RDD, but I am worried that when the executor reads the partition, it won't maintain the order, and I will end up with incorrect data in HBase when I save it.
How can I ensure that the order is kept at the executor level, especially if I use multiple cores in one executor (which, from my understanding, results in multiple threads)?
I think I could also live with 1 core if that fixes the issue, along with turning off speculative execution, enabling Spark back-pressure, and keeping the maximum task retries at 1.
I have also thought about implementing a sort on the events at the Spark partition level using the Kafka offset.
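Something like the following is what I have in mind (a rough sketch only; Event is a hypothetical class I would populate while parsing, carrying the Kafka offset, the UUID row key, and the payload, and stream is the already-parsed JavaDStream<Event>):

// Rough sketch: enforce per-Kafka-partition ordering by offset before writing.
// With the direct stream, each RDD partition maps 1:1 to a Kafka partition,
// so sorting inside foreachPartition never mixes events from different partitions.
stream.foreachRDD(rdd ->
    rdd.foreachPartition(events -> {
        List<Event> buffer = new ArrayList<>();
        events.forEachRemaining(buffer::add);

        // Offsets are unique within a Kafka partition, so this gives a total order here.
        buffer.sort(Comparator.comparingLong(Event::getOffset));

        for (Event e : buffer) {
            // Apply the HBase Put for e here, strictly in this order, waiting for
            // each write to complete (no async buffering) so t0 lands before t1.
        }
    })
);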
Any advice?
Thanks a lot in advance!
I have a Spark Streaming job with a batch interval of 2 minutes (configurable).
This job reads from a Kafka topic, creates a Dataset, applies a schema on top of it, and inserts the records into a Hive table.
The Spark Job creates one file per batch interval in the Hive partition like below:
dataset.coalesce(1).write().mode(SaveMode.Append).insertInto(targetEntityName);
Now the data that comes in is not that big, and even if I increase the batch duration to 10 minutes or so, I might still end up with only 2-3 MB of data, which is far less than the HDFS block size.
This is the expected behaviour in Spark Streaming.
I am looking for efficient ways to do post-processing that merges all these small files into one big file.
If anyone's done it before, please share your ideas.
I would encourage you to not use Spark to stream data from Kafka to HDFS.
The Kafka Connect HDFS plugin by Confluent (or Apache Gobblin by LinkedIn) exists for this very purpose. Both offer Hive integration.
Find my comments about compaction of small files in this Github issue
If you need to write Spark code to process Kafka data into a schema, then you can still do that, and write into another topic in (preferably) Avro format, which Hive can easily read without a predefined table schema.
I personally have written a "compaction" process that actually grabs a bunch of hourly Avro data partitions from a Hive table, then converts into daily Parquet partitioned table for analytics. It's been working great so far.
If you want to batch the records before they land on HDFS, that's where Kafka Connect or Apache NiFi (mentioned in the link) can help, given that you have enough memory to store records before they are flushed to HDFS.
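To make the Connect suggestion concrete, the core of an HDFS sink configuration might look roughly like this. Written from memory, so verify the property names against the Confluent docs; the topic, URLs, and sizes are placeholders.

{
  "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
  "topics": "my-topic",
  "hdfs.url": "hdfs://namenode:8020",
  "flush.size": "100000",
  "rotate.interval.ms": "600000",
  "format.class": "io.confluent.connect.hdfs.parquet.ParquetFormat",
  "hive.integration": "true",
  "hive.metastore.uris": "thrift://metastore:9083",
  "schema.compatibility": "BACKWARD"
}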
I had exactly the same situation as you. I solved it as follows.
Let's assume that your newly arriving data is stored in a dataset: dataset1
1- Partition the table with a good partition key; in my case I found that I can partition using a combination of keys to get around 100 MB per partition.
2- Save using Spark core rather than Spark SQL:
a- Load the whole existing partition into memory (into a dataset: dataset2) when you want to save.
b- Then apply the dataset union function: dataset3 = dataset1.union(dataset2)
c- Make sure that the resulting dataset is partitioned as you wish, e.g. dataset3.repartition(1)
d- Save the resulting dataset in "Overwrite" mode to replace the existing files (a rough sketch of these steps follows below).
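A rough sketch of steps a-d in Java, assuming a Parquet-backed table; spark is the SparkSession, partKey identifies the affected partition, the paths and partition column are illustrative, and dataset1 is the incoming micro-batch:

// a- Load the existing data for the affected partition into memory.
Dataset<Row> dataset2 = spark.read()
        .parquet("/warehouse/mytable/part_key=" + partKey);  // illustrative path and partition column

// b- Merge the new micro-batch with the existing rows.
Dataset<Row> dataset3 = dataset1.union(dataset2);

// Materialize the merged data before overwriting, so the overwrite does not
// delete files that the union is still reading lazily.
dataset3 = dataset3.persist();
dataset3.count();

// c- and d- One output file, replacing the existing files for that partition.
dataset3.repartition(1)
        .write()
        .mode(SaveMode.Overwrite)
        .parquet("/warehouse/mytable/part_key=" + partKey);

dataset3.unpersist();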
If you need more details about any step please reach out.
I am new to Spark and the Spark Cassandra Connector. Our team is trying Spark for the first time, and we are using the Spark Cassandra Connector to connect to our Cassandra database.
I wrote a query that uses a heavy table of the database, and I saw that the Spark tasks did not start until the query had fetched all the records from the table.
It is taking more than 3 hours just to fetch all the records from the database.
To get the data from the DB we use:
CassandraJavaUtil.javaFunctions(sparkContextManager.getJavaSparkContext(SOURCE).sc())
.cassandraTable(keyspaceName, tableName);
Is there a way to tell Spark to start working even before all the data has finished downloading?
Is there an option to tell the spark-cassandra-connector to use more threads for the fetch?
thanks,
kokou.
If you look at the Spark UI, how many partitions is your table scan creating? I just did something like this, and I found that Spark was creating too many partitions for the scan and it was taking much longer as a result. The way I decreased the time on my job was by setting the configuration parameter spark.cassandra.input.split.size_in_mb to a value higher than the default. In my case it took a 20-minute job down to about four minutes. There are also a couple more Cassandra-read-specific Spark variables you can set, found here.
These Stack Overflow questions are what I referenced originally; I hope they help you out as well.
Iterate large Cassandra table in small chunks
Set number of tasks on Cassandra table scan
EDIT:
After doing some performance testing and fiddling with some Spark configuration parameters, I found that Spark was creating far too many table partitions when I wasn't giving the Spark executors enough memory. In my case, upping the memory by a gigabyte was enough to render the input split size parameter unnecessary. If you can't give the executors more memory, you may still need to set spark.cassandra.input.split.size_in_mb higher as a workaround.
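For reference, setting those knobs in code might look roughly like this; the values are illustrative rather than recommendations, and spark.cassandra.input.fetch.size_in_rows is mentioned only as another commonly tuned read setting:

import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
        .setAppName("cassandra-scan")
        // Larger input splits -> fewer, bigger Spark partitions for the table scan.
        .set("spark.cassandra.input.split.size_in_mb", "256")
        // Rows fetched per round trip to Cassandra; another knob for wide scans.
        .set("spark.cassandra.input.fetch.size_in_rows", "2000")
        // More executor memory is what made the split-size override unnecessary
        // in the tests described above.
        .set("spark.executor.memory", "4g");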