Probably a stupid question.
Generally, what is the best approach to have a program listening to an MQTT feed (done), placing messages onto a queue or service bus, and then having those automatically processed via Azure?
How would I process the messages on the queue? Is there a way for some Azure function/feature to automatically put them into a storage account and a database after some manipulation? Generally, what's the best approach? Ideally using C#.
Feed listens for data feeds (done)
Puts message onto queue or service bus (easily done)
Something on Azure will take that item and put it in a Storage Account and a Cosmos database. (stuck on best approach)
Thanks.
You just need to add the message onto a Service Bus queue or a Storage Account queue. Both provide bindings for Azure Functions, which would be the consumer. Also using Azure Functions, you can use output bindings and persist to a Storage Account (blob) or Cosmos DB.
Here are useful links:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb?tabs=csharp
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus
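For the C# side, a rough in-process Azure Functions sketch of that consumer is below. The queue, database, collection, blob container, and connection-setting names are placeholders I made up, and it assumes the Service Bus, Cosmos DB, and Blob storage binding extensions are installed:

using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueToStorageAndCosmos
{
    [FunctionName("QueueToStorageAndCosmos")]
    public static void Run(
        // Consumes messages from a Service Bus queue (names are illustrative).
        [ServiceBusTrigger("device-messages", Connection = "ServiceBusConnection")] string message,
        // Output bindings: one document to Cosmos DB, one blob with the raw payload.
        [CosmosDB("TelemetryDb", "Readings", ConnectionStringSetting = "CosmosConnection")] out dynamic document,
        [Blob("raw-messages/{rand-guid}.json", FileAccess.Write, Connection = "StorageConnection")] out string blobContent,
        ILogger log)
    {
        log.LogInformation("Processing message: {message}", message);

        // Any manipulation of the message happens here before persisting.
        blobContent = message;
        document = new { id = Guid.NewGuid().ToString(), payload = message };
    }
}

The function runs once per queue message, and the output bindings take care of the actual writes, so there is no explicit SDK code for blob storage or Cosmos DB.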
Related
We designed an Event Hub trigger which reads messages from the event hub and inserts them into Cosmos DB. During this process, if there are any unhandled exceptions or throttling from Cosmos, we move those messages to blob storage.
Is there any way we can move the messages back from blob to the event hub through the Azure portal? This would help the Azure admin move the messages back to the event hub in production.
No I don't think there is a way to move the message back in the portal UI.
I would use a combination of approaches here. First, I would use autoscaling of Cosmos DB to lower the risk of being throttled and to keep you from overprovisioning and thus overspending on Cosmos DB. Secondly, I would implement retry logic with exponential back-off in your trigger to further reduce the risk of throttling being a problem.
If you still get failed events, you might not have to push them to separate storage after all. Events remain in Event Hubs for the configured retention period (up to seven days on the Standard tier), so you can simply re-read the stream if you need to.
If that is not a good approach, I would push the failed messages to a queue (Storage queue or Service Bus queue) and have an Azure Function on a timer trigger process the queue and send the messages back to the Event Hub. Then it is fully automatic and the admin does not have to do anything.
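As a rough sketch of the exponential back-off mentioned above (the item type, attempt limit, and delays are assumptions, and note the Cosmos SDK also has its own configurable rate-limit retries on the client):

using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CosmosRetry
{
    // Retries a Cosmos DB upsert with exponential back-off when throttled (HTTP 429).
    public static async Task UpsertWithRetryAsync<T>(Container container, T item, int maxAttempts = 5)
    {
        var delay = TimeSpan.FromMilliseconds(500);

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await container.UpsertItemAsync(item);
                return;
            }
            catch (CosmosException ex) when (ex.StatusCode == (HttpStatusCode)429 && attempt < maxAttempts)
            {
                // Honor the server's retry hint if present, otherwise back off exponentially.
                await Task.Delay(ex.RetryAfter ?? delay);
                delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
            }
        }
    }
}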
Do we have any way we can move the messages back to event hub from blob through azure portal?
One workaround is to use Logic Apps, where you can create a workflow from Azure Blob Storage to the event hub. Here is a sample flow of my Logic App.
We have product data in an Azure event hub, coming from an external system. Our requirement is to send this data from the event hub to Azure Redis Cache.
Is there any out-of-the-box way or standard function in Azure to implement this?
Thanks,
Kuldeep
There is no out-of-the-box support, but it should be very easy to achieve this with an EventHub-triggered Azure Function that writes into Redis.
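A rough sketch of such a function using StackExchange.Redis is below; the event hub name, the "RedisConnection" app setting, and the key scheme are assumptions, and real code would derive the cache key from the product id in the payload:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using StackExchange.Redis;

public static class EventHubToRedis
{
    // Create the Redis connection once and reuse it across invocations.
    private static readonly Lazy<ConnectionMultiplexer> Redis = new Lazy<ConnectionMultiplexer>(
        () => ConnectionMultiplexer.Connect(Environment.GetEnvironmentVariable("RedisConnection")));

    [FunctionName("EventHubToRedis")]
    public static async Task Run(
        [EventHubTrigger("products", Connection = "EventHubConnection")] string[] messages,
        ILogger log)
    {
        IDatabase cache = Redis.Value.GetDatabase();

        foreach (var message in messages)
        {
            // Placeholder key; parse the message and use the real product id instead.
            await cache.StringSetAsync($"product:{message.GetHashCode()}", message);
        }

        log.LogInformation("Cached {count} events in Redis", messages.Length);
    }
}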
Currently I am using Azure Service Bus as a means to communicate and keep data consistent among the different services in my platform. However, let's say that one of my services (subscribers) goes down for an extended period of time and is unable to receive any events. Suddenly this service is in an inconsistent state.
Does Azure Service Bus have any type of "event sourcing" solution in place in order to replay my events? I understand that Azure Event Hubs has this feature where I can store events in an append-only fashion to Azure Blob Storage. However, the only thing I am finding for Azure Service Bus is the dead-letter queue, and my understanding is that this is only used when no subscribers are capable of processing an event.
Is this something that I will have to build myself?
All messages stored in a subscription will be delivered once the consumer is up and running again, unless they expire because of the subscription's DefaultMessageTimeToLive (TTL).
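If you want to control that explicitly, you can set the TTL when the subscription is created. A rough sketch using the Azure.Messaging.ServiceBus administration client (the topic and subscription names are made up):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

public static class SubscriptionSetup
{
    // Creates a subscription whose messages never expire, so a subscriber that is
    // down for a while can still catch up once it comes back online.
    public static async Task CreateDurableSubscriptionAsync(string connectionString)
    {
        var admin = new ServiceBusAdministrationClient(connectionString);

        var options = new CreateSubscriptionOptions("orders", "billing-service")
        {
            DefaultMessageTimeToLive = TimeSpan.MaxValue // effectively "never expire"
        };

        await admin.CreateSubscriptionAsync(options);
    }
}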
Is it possible for an Azure U-SQL script to put messages on an Azure Service Bus Queue or an Azure Event Hub? Please cite some documentation, if you can find it (since I can't find it).
As stated, this is not allowed.
A possible workaround would be to have the U-SQL script output a file with messages to blob storage and have an Azure Function pick those up and send them to an Azure Service Bus queue or an Azure Event Hub.
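A rough sketch of that function, assuming the U-SQL output is one message per line (the blob container, queue, and connection-setting names are made up):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobToServiceBus
{
    // Fires when U-SQL drops an output file into the container and forwards
    // each non-empty line to a Service Bus queue.
    [FunctionName("BlobToServiceBus")]
    public static void Run(
        [BlobTrigger("usql-output/{name}", Connection = "StorageConnection")] string blobContents,
        [ServiceBus("outgoing-messages", Connection = "ServiceBusConnection")] ICollector<string> queue,
        string name,
        ILogger log)
    {
        foreach (var line in blobContents.Split('\n'))
        {
            if (!string.IsNullOrWhiteSpace(line))
            {
                queue.Add(line.Trim());
            }
        }

        log.LogInformation("Forwarded messages from blob {name} to Service Bus", name);
    }
}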
I got my answer here.
U-SQL scripts cannot access any external services, including Azure services such as web apps (with only a few exceptions, like ADLS and WASB storage). This is to prevent an unintended DDoS attack, since U-SQL will automatically scale that request across potentially hundreds or thousands of nodes, all running over potentially millions of rows and making requests simultaneously. Please see Michael Rys' answer here for more information.
I'm simply trying to work out how best to retrieve messages as quickly as possible from an Azure Service Bus Queue.
I was shocked that there wasn't some way to properly subscribe to the queue for notifications and that I'm going to have to poll (unless I'm wrong, in which case the documentation is terrible).
I got long polling working, but checking for a single message every 60 seconds looks like it'll cost around £900 per month (again, unless I've misunderstood that). And if I add a redundant/second service to poll, it'll double.
So I'm wondering what the best/most cost efficient way of doing it is.
Essentially I just want to take a message from the queue, perform an API lookup on some internally held data (perhaps using hybrid services?) and then perhaps post a message back to a different queue with some additional information.
I looked at worker roles(?) -- is that something that could do it?
I should mention that I've been looking at doing this with node.js.
Check out these videos from Scott Hanselman and Mark Simms on Azure Queues.
It's C# but you get the idea.
https://channel9.msdn.com/Search?term=azure%20queues%20simms#ch9Search
Touches on:
Storage Queues vs. Service Bus Queues
Grabbing messages in bulk vs. one by one (chunky vs. chatty)
Dealing with poison messages (bad actors)
Misc implementation details
Much more stuff I can't remember now
As for your compute, you can either do a VM, a Worker Role (Cloud Services), App Service Webjobs, or Azure Functions.
The WebJobs SDK and Azure Functions both have a way to subscribe to queue events (notify on message).
(Listed from IaaS to PaaS to FaaS - Azure Functions - if such a thing exists).
Azure Functions already has sample code provided as templates to do all that with Node. Just make a new Function and follow the wizard.
If you need to touch data on-prem, you either need to look at integrating with a VNET that has site-to-site connectivity back to your premises, or Hybrid Connections (App Service only!). Azure Functions can't do that yet, but every other compute option is a go.
https://azure.microsoft.com/en-us/documentation/articles/web-sites-hybrid-connection-get-started/
(That tutorial is Windows only but you can pull data from any OS. The Hybrid Connection Manager has to live on a Windows box, but then it acts as a reverse proxy to any host on your network).
To work with an Azure Service Bus queue easily, the best option seems to be an Azure WebJob.
There is a ServiceBusTrigger that allows you to get messages from an Azure Service Bus queue.
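In C#, a WebJobs function along those lines looks roughly like the sketch below (the queue name is made up; the connection string comes from the standard WebJobs Service Bus setting):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Functions
{
    // The WebJobs SDK long-polls the queue for you and calls this method per message.
    public static void ProcessQueueMessage(
        [ServiceBusTrigger("myqueue")] string message,
        ILogger log)
    {
        log.LogInformation("Received Service Bus message: {message}", message);
    }
}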
For Node.js integration, you should have a look at Azure Functions. They are built on top of the WebJobs SDK and have Node.js integration:
Azure Functions NodeJS developer reference
Azure Functions Service Bus triggers and bindings for queues and topics
In the second article, there is an example of how to get messages from a queue using Azure Functions and Node.js:
module.exports = function (context, myQueueItem) {
    context.log('Node.js ServiceBus queue trigger function processed message', myQueueItem);
    context.done();
};