Read Batch Data from Azure EventHub on a daily basis - apache-spark

Hi Everyone,
I have a requirement to read streaming data from Azure Event Hubs and dump it to a blob location. For cost-optimization reasons I can't use Stream Analytics or Spark Streaming; I can only run a Spark batch job, so I need to work out how to read data from Azure Event Hubs as a batch (preferably the previous day's data) and dump it to blob storage. My Event Hub retains 4 days of data, and I need to make sure I avoid duplicates every time I read from it.
I'm planning to read the data from Event Hubs once a day using Spark. Is there a way I can maintain some kind of sequence or offset between reads so that I avoid duplicates?
Any help would be greatly appreciated.

The Azure client libraries for Event Hubs have an EventProcessor. This processor reads events from Event Hubs and supports a checkpoint store that persists information about which events have already been processed. Currently, there is one checkpoint store implementation, which persists checkpoint data to Azure Storage Blobs.
Here is the API documentation for the languages I know it is supported in. There are also samples in the GitHub repository and samples browser.
.NET documentation
Java documentation
Python documentation
TS/JS documentation
If you are looking for just transferring events into "a blob location", Event Hubs supports capture into Azure Storage Blobs.
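If the EventProcessor route fits, here is a minimal sketch in Scala against the Java client libraries (com.azure:azure-messaging-eventhubs plus com.azure:azure-messaging-eventhubs-checkpointstore-blob). The connection strings, names and container are placeholders, and what you do inside processEvent (writing each event's body to your blob location) is only indicated by a comment.

import java.util.function.Consumer
import com.azure.messaging.eventhubs.{EventHubClientBuilder, EventProcessorClientBuilder}
import com.azure.messaging.eventhubs.models.{ErrorContext, EventContext}
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore
import com.azure.storage.blob.BlobContainerClientBuilder

object EventProcessorSketch {
  def main(args: Array[String]): Unit = {
    // Blob container that holds the checkpoints (partition ownership + last processed offsets).
    val checkpointContainer = new BlobContainerClientBuilder()
      .connectionString("<storage-connection-string>")
      .containerName("eventhub-checkpoints")
      .buildAsyncClient()

    val processor = new EventProcessorClientBuilder()
      .connectionString("<event-hubs-connection-string>", "<event-hub-name>")
      .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
      .checkpointStore(new BlobCheckpointStore(checkpointContainer))
      .processEvent(new Consumer[EventContext] {
        override def accept(ctx: EventContext): Unit = {
          // Persist ctx.getEventData.getBodyAsString to your blob location here, then
          // record progress so a later run does not re-read the same events.
          ctx.updateCheckpoint()
        }
      })
      .processError(new Consumer[ErrorContext] {
        override def accept(err: ErrorContext): Unit =
          println(s"Error on partition ${err.getPartitionContext.getPartitionId}: ${err.getThrowable}")
      })
      .buildEventProcessorClient()

    processor.start() // runs in the background; call processor.stop() when done
  }
}

Because the checkpoint store remembers the last processed position per partition, starting the processor again the next day resumes from where it left off instead of producing duplicates.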

If your stream processing is only about dumping events to Azure Storage, then you should consider enabling Capture instead, where the service dumps events to the storage account of your choice as they arrive: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-overview

In brief, I've achieved this with Spark Structured Streaming + Trigger.Once.
import org.apache.spark.sql.streaming.Trigger

processedDf
  .writeStream
  .trigger(Trigger.Once)                                  // process everything available once, then stop
  .format("parquet")
  .option("checkpointLocation", "s3-path-to-checkpoint")  // checkpoint tracks which offsets were already written
  .start("s3-path-to-parquet-lake")
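For the purely batch side of the original question (read yesterday's events from Event Hubs in a plain Spark batch job), here is a minimal sketch assuming the azure-eventhubs-spark connector (com.microsoft.azure:azure-eventhubs-spark) is on the classpath; the connection string and output path are placeholders. Bounding the read by enqueued time gives a natural daily window inside the 4-day retention period, and overwriting one folder per day keeps re-runs idempotent, which addresses the duplicate concern without a streaming checkpoint.

import java.time.{Instant, LocalDate, ZoneOffset}
import org.apache.spark.eventhubs.{ConnectionStringBuilder, EventHubsConf, EventPosition}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("eventhub-daily-batch").getOrCreate()
import spark.implicits._

// Yesterday's window in UTC: [start of yesterday, start of today)
val yesterday: LocalDate = LocalDate.now(ZoneOffset.UTC).minusDays(1)
val windowStart: Instant = yesterday.atStartOfDay(ZoneOffset.UTC).toInstant
val windowEnd: Instant   = yesterday.plusDays(1).atStartOfDay(ZoneOffset.UTC).toInstant

val connectionString = ConnectionStringBuilder("<event-hubs-connection-string>")
  .setEventHubName("<event-hub-name>")
  .build

// Bound the read by enqueued time so each daily run only sees its own slice of the retention window.
val ehConf = EventHubsConf(connectionString)
  .setStartingPosition(EventPosition.fromEnqueuedTime(windowStart))
  .setEndingPosition(EventPosition.fromEnqueuedTime(windowEnd))

val raw = spark.read.format("eventhubs").options(ehConf.toMap).load()

raw
  .select($"body".cast("string").as("body"), $"enqueuedTime", $"partition", $"sequenceNumber")
  .write
  .mode("overwrite")                       // re-running the same day overwrites rather than duplicates
  .parquet(s"<blob-or-abfss-path>/ingest_date=$yesterday")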

Related

Azure solution to save stream to blob files as parquet

I have read about a few different Azure services - Event Hubs Capture, Azure Data Factory, Event Hubs, and more. I am trying to find several ways, using Azure services, to do the following:
Write data to some "endpoint" or place from my application (preferably service of azure)
The data would be batched and saved in files to BLOB
Eventually, the format should be parquet in the BLOB files
My questions are:
I read that Event Hubs Capture only saves files as Avro, so I might also need a second pipeline that copies from the original Avro blobs to the destination Parquet blobs. Is there an Azure service that can listen to my blob container, convert all files to Parquet, and save them again (I'm not sure from the documentation whether Data Factory can do this)?
What other alternatives would you consider (apart from Kafka, which I know about) for saving a stream of data as batches of Parquet files in blob storage?
Thank you!
For the least amount of effort, you can look into a combination of an Event Hub as your endpoint with Azure Stream Analytics connected to it. Stream Analytics can natively write Parquet to blob: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs#blob-storage-and-azure-data-lake-gen2
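For the Avro-to-Parquet "second pipeline" mentioned in the question, a Spark job is one simple option. A minimal sketch, assuming the spark-avro package is available, that the paths are placeholders, and that the Capture output uses the field names I have seen in its Avro schema (Body, EnqueuedTimeUtc, SequenceNumber):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("capture-avro-to-parquet").getOrCreate()
import spark.implicits._

// Event Hubs Capture writes one Avro file per partition per time window; the exact
// folder layout under the container follows the capture file name format you configured.
val captured = spark.read
  .format("avro")
  .load("wasbs://<capture-container>@<account>.blob.core.windows.net/<path-to-avro-files>")

captured
  .select($"Body".cast("string").as("body"), $"EnqueuedTimeUtc", $"SequenceNumber")
  .write
  .mode("append")
  .parquet("wasbs://<output-container>@<account>.blob.core.windows.net/parquet-lake")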

How to write into RabbitMQ from Azure databricks?

I am currently learning to use RabbitMQ. I am trying to publish a message to RabbitMQ from Azure Databricks using pyspark. Any idea how that would be achievable?
Unfortunately, RabbitMQ is not supported as a source in Azure Databricks.
Azure Databricks - Streaming Data Sources and Sinks
Structured Streaming has built-in support for a number of streaming data sources and sinks (for example, files and Kafka) and programmatic interfaces that allow you to specify arbitrary data writers.
Apache Kafka
Azure Event Hubs
Delta Lake Tables
Read and Write Streaming Avro Data with DataFrames
Write to Arbitrary Data Sinks
Optimized Azure Blob Storage File Source with Azure Queue Storage
As per my research, I have found a third-party tool named "Panoply" which can integrate Databricks and RabbitMQ.
Hope this helps.
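That said, the "Write to Arbitrary Data Sinks" option listed above means you can publish to RabbitMQ yourself by calling the RabbitMQ Java client from the executors. A minimal Scala sketch (the question used pyspark, but the approach is the same), assuming the com.rabbitmq:amqp-client library is attached to the cluster and that the host, credentials and queue name are placeholders:

import com.rabbitmq.client.ConnectionFactory
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

val spark = SparkSession.builder.appName("publish-to-rabbitmq").getOrCreate()

// Toy DataFrame standing in for whatever you actually want to publish.
val df: DataFrame = spark.range(10).selectExpr("cast(id as string) as message")

df.foreachPartition { (rows: Iterator[Row]) =>
  // One connection and channel per partition, opened on the executor.
  val factory = new ConnectionFactory()
  factory.setHost("<rabbitmq-host>")
  factory.setUsername("<user>")
  factory.setPassword("<password>")

  val connection = factory.newConnection()
  val channel = connection.createChannel()
  channel.queueDeclare("<queue-name>", true, false, false, null) // durable, non-exclusive, no auto-delete

  rows.foreach { row =>
    channel.basicPublish("", "<queue-name>", null, row.getAs[String]("message").getBytes("UTF-8"))
  }

  channel.close()
  connection.close()
}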

Batch processing with spark and azure

I work for an energy provider company. Currently, we generate 1 GB of data per day in the form of flat files. We have decided to use Azure Data Lake Store to store our data, and we want to do batch processing on it on a daily basis. My questions are: what is the best way to transfer the flat files into Azure Data Lake Store, and once the data is in Azure, is it a good idea to process it with HDInsight Spark (e.g. the DataFrame API or Spark SQL) and finally visualize it with Azure?
For a daily load from a local file system I would recommend using Azure Data Factory Version 2. You have to install integration runtimes on-premises (more than one for high availability), and you have to consider several security topics (local firewalls, network connectivity, etc.). Detailed documentation can be found here, and there are also some good tutorials available. With Azure Data Factory you can trigger your upload to Azure with a Get Metadata activity and use, for example, an Azure Databricks Notebook activity for further Spark processing.
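On the Spark side, the DataFrame API handles 1 GB/day of flat files comfortably once they land in the lake. A minimal sketch of such a daily batch, where the CSV layout, the paths and the column names (meter_id, consumption_kwh) are assumptions for illustration only:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder.appName("daily-flat-file-batch").getOrCreate()

// Read the flat files that Data Factory landed for one day.
val daily = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("adl://<account>.azuredatalakestore.net/raw/<year>/<month>/<day>/*.csv")

// Example aggregation; the curated output goes back to the lake for the visualization layer.
daily
  .groupBy("meter_id")
  .agg(sum("consumption_kwh").as("total_kwh"))
  .write
  .mode("overwrite")
  .parquet("adl://<account>.azuredatalakestore.net/curated/daily_consumption/<year>-<month>-<day>")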

Connect Azure Event Hubs with Data Lake Store

What is the best way to send data from Event Hubs to Data Lake Store?
I am assuming you want to ingest data from EventHubs to Data Lake Store on a regular basis. Like Nava said, you can use Azure Stream Analytics to get data from EventHub into Azure Storage Blobs. Thereafter you can use Azure Data Factory (ADF) to copy data on a scheduled basis from Blobs to Azure Data Lake Store. More details on using ADF are available here: https://azure.microsoft.com/en-us/documentation/articles/data-factory-azure-datalake-connector/. Hope this helps.
==
March 17, 2016 update.
Support for Azure Data Lake Store as an output for Azure Stream Analytics is now available: https://blogs.msdn.microsoft.com/streamanalytics/2016/03/14/integration-with-azure-data-lake-store/. This will be the best option for your scenario.
Sachin Sheth
Program Manager, Azure Data Lake
In addition to Nava's reply: you can query data in a Windows Azure Blob Storage container with ADLA/U-SQL as well. Or you can use the Blob Store to ADL Storage copy service (see https://azure.microsoft.com/en-us/documentation/articles/data-lake-store-copy-data-azure-storage-blob/).
One way would be to write a process that reads messages from the Event Hub via the Event Hubs API and writes them into Data Lake Store via the Data Lake SDK.
Another alternative would be to use Stream Analytics to get data from the Event Hub into a blob, and Azure Automation to run a PowerShell script that reads the data from the blob and writes it into a Data Lake Store.
Not taking credit for this, but sharing with the community:
It is also possible to archive the events (look under Properties > Archive); this leaves an Avro blob.
Then, using the AvroExtractor, you can convert the records into JSON as described in Anthony's blog:
http://anthonychu.ca/post/event-hubs-archive-azure-data-lake-analytics-usql/
One way would be to connect your Event Hub to Data Lake using the Event Hubs Capture functionality (Data Lake and Blob Storage are currently supported). Event Hubs will write to Data Lake at every N-minute interval or once a data size threshold is reached. This is done to optimize storage "write" operations, as they are expensive at high scale.
The data is stored in Avro format, so if you want to query it using U-SQL you'd have to use an Extractor class. Uri gave a good reference for it: https://anthonychu.ca/post/event-hubs-archive-azure-data-lake-analytics-usql/.

Where is Azure Event Hub messages stored?

I generated a SAS signature using this RedDog tool and successfully sent a message to Event Hubs using the Event Hubs API reference. I know it was successful because I got a 201 Created response from the endpoint.
This tiny success brought about a question that I have not been able to find an answer to:
I went to the Azure portal and could not see the messages I created anywhere. Further reading revealed that I needed to create a storage account; I stumbled on some C# examples (EventProcessorHost) which require the storage account credentials, etc.
My question is, are there any APIs I can use to persist the data? I do not want to use the C# tool.
Please correct me if my approach is wrong, but my aim is to be able to post telemetry to Event Hubs, persist the data, and perform some analytics operations on it. The telemetry data should be viewable on Azure.
You don't have direct access to the transient storage used for EventHub messages, but you could write a consumer that reads from the EventHub continuously and persist the messages to Azure Table or to Azure Blob.
The closest thing you will find to a way to automatically persist messages (as with Amazon Kinesis Firehose vs. Amazon Kinesis, which Event Hubs is basically equivalent to) would be to use Azure Stream Analytics configured to write the output either to Azure Blob or to Azure Table. This example shows how to set up a Stream Analytics job that passes the data through and stores it in SQL, but in the UI you can choose an output such as Azure Table instead. Or you can get an idea of the options from the output API.
Of course, you should be aware of the requirements around serialization that led to this question.
The Event Hub stores data for a maximum of 7 days, and that is in the Standard pricing tier. If you want to persist the data for longer in a storage account, you can use the Event Hubs Capture feature. You don't have to write a single line of code to achieve this; you can configure it through the portal or an ARM template. This is described in this document: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-overview
The Event Hub stores its transient data in Azure Storage. It doesn't give any more detail about that data storage. This is evident from this documentation: https://learn.microsoft.com/en-us/azure/event-hubs/configure-customer-managed-key
The storage account you need for EventProcessorHost is only used for checkpointing or maintaining the offset of the last read event in a partition.
