I have an issue when trying to create a new event hub in Azure.
I should be able to create a single instance with only one partition. However, the process does not let me proceed.
Two issues:
- Impossible to create a hub with one partition (a minimum of 2 is proposed)
- Impossible to create the hub at all (an error relating the ID and the charge)
Has anyone ever run into the same issue?
- New Event Hub: 2 partitions minimum
- ID problem on commit
I tried with an account that has a higher-tier subscription, and it worked. It seems the free version of Azure does not allow you to create an event hub...
Problem: I have an Azure Service Bus topic, Topic1, with subscriptions Subscription-A, Subscription-B, and Subscription-C.
Each subscription is listened to by an Azure Function, which uses the message to perform a unique action, such as charging a customer or updating another subsystem.
If execution in one of the functions results in an error, I would like to requeue the message into that specific subscription for execution after a delay, say 1 minute, not right away. [Currently, if the function errors, the message is retried right away a couple of times before being pushed to the dead-letter queue. This looks like the default behavior of Azure Service Bus and Functions.]
I cannot resend the message to Topic1, as it would then be read by all three subscriptions. I don't want this, because the other subscriptions/functions already processed the message successfully. The requeued message should be filtered out by the other subscriptions on this topic.
Any ideas or suggestions would be of great help.
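One common pattern for this (a sketch under assumptions, not a definitive answer): republish the failed message to Topic1 with an application property, say TargetSubscription (a hypothetical name, not an Azure default), set to the failing subscription's name, and give every subscription a SQL filter so it only accepts messages that either carry no such property (normal fan-out) or explicitly name it (a delayed retry). The rule each subscription would apply can be sketched as:

```java
import java.util.Map;

// Sketch of the subscription-rule logic: a subscription accepts a message
// when it carries no TargetSubscription property (normal fan-out) or when
// that property names this subscription (a retry aimed at it alone).
// "TargetSubscription" is a hypothetical property name for illustration.
public class RetryRouting {
    public static boolean subscriptionAccepts(String subscriptionName,
                                              Map<String, Object> applicationProperties) {
        Object target = applicationProperties.get("TargetSubscription");
        return target == null || subscriptionName.equals(target);
    }
}
```

In Service Bus itself this rule would be expressed as a SQL filter on each subscription, for example `TargetSubscription IS NULL OR TargetSubscription = 'Subscription-A'`, and the retried copy would be sent with a scheduled enqueue time about one minute in the future to get the delay.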
From the Microsoft Event Hubs Java SDK examples (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send), these are the steps that need to be taken to consume messages from an Event Hub via the Java SDK:
1. Create a storage account.
2. Create a new class called EventProcessorSample. Replace the placeholders with the values used when you created the event hub and storage account:
String consumerGroupName = "$Default";
String namespaceName = "----NamespaceName----";
String eventHubName = "----EventHubName----";
String sasKeyName = "----SharedAccessSignatureKeyName----";
String sasKey = "----SharedAccessSignatureKey----";
String storageConnectionString = "----AzureStorageConnectionString----";
String storageContainerName = "----StorageContainerName----";
String hostNamePrefix = "----HostNamePrefix----";
ConnectionStringBuilder eventHubConnectionString = new ConnectionStringBuilder()
.setNamespaceName(namespaceName)
.setEventHubName(eventHubName)
.setSasKeyName(sasKeyName)
.setSasKey(sasKey);
There are several things I don't understand about this flow:
A. Why is a storage account required? Why does it need to be created only when creating a consumer and not when creating the event hub itself?
B. What is 'hostNamePrefix' and why is it required?
C. More of a generalization of A, but I am failing to understand why this flow is so complicated and needs so much configuration. Event Hub is the default and only way of exporting metrics/monitoring data from Azure, which is a pretty straightforward flow: Azure -> Event Hub -> Java Application. Am I missing a simpler way or a simpler client option?
All your questions are around consuming events from Event Hub.
Why is a storage account required?
Read each event only once: whenever your application reads events from the event hub, you need to store the offset (an identifier for how much of the stream has already been read) somewhere. Storing this information is known as 'checkpointing', and the information is kept in the storage account.
Read the events from the beginning every time your app connects: in this case, your application will keep re-reading events from the very beginning whenever it starts.
So, the storage account is required to store the offset value while consuming events from the event hub, in case you want to read each event only once.
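The idea behind checkpointing can be illustrated with a minimal in-memory store (a sketch only; the real EventProcessorHost persists this per partition in the storage container):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of checkpointing: remember the last offset read per
// partition so a restarted consumer resumes where it left off instead of
// re-reading the stream from the beginning.
public class CheckpointStore {
    private final Map<String, Long> offsets = new HashMap<>();

    // Called after events are processed; records how far we got.
    public void checkpoint(String partitionId, long offset) {
        offsets.put(partitionId, offset);
    }

    // Called on (re)start; -1 means "no checkpoint yet, start from the beginning".
    public long resumeFrom(String partitionId) {
        return offsets.getOrDefault(partitionId, -1L);
    }
}
```

If this state lived only in the consumer's memory, a restart would lose it; putting it in a storage account is what lets a new process pick up where the old one stopped.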
Why does it need to be created only when creating a consumer and not when creating the event hub itself?
It depends on the scenario: whether you want to read events only once, or from the beginning every time your app starts. That's why a storage account is not required when creating the event hub itself.
What is 'hostNamePrefix' and why is it required?
As the name states, 'hostNamePrefix' is the name for your host. The host is the application instance that consumes the events, and it is good practice to use a GUID as the hostNamePrefix. The event hub needs the host name to manage the connection with each host. For example, if you have 32 partitions and you deploy 4 instances of the same application, 8 partitions will be assigned to each of the 4 instances; the host name is what lets the event hub track which partitions are connected to which host.
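The balancing described above can be sketched as a simple round-robin assignment (an illustration only; the real distribution is negotiated through leases in the storage account):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration of how partitions end up spread across hosts: with 32
// partitions and 4 host instances, each host owns 8 partitions.
public class PartitionBalancer {
    public static Map<String, List<Integer>> assign(int partitionCount, List<String> hostNames) {
        Map<String, List<Integer>> ownership = new HashMap<>();
        for (String host : hostNames) {
            ownership.put(host, new ArrayList<>());
        }
        for (int p = 0; p < partitionCount; p++) {
            // Round-robin: partition p goes to host p mod hostCount.
            ownership.get(hostNames.get(p % hostNames.size())).add(p);
        }
        return ownership;
    }
}
```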
I suggest you read this article on Event Hubs for a clearer picture of the event processor host.
I have an Azure Function scenario as follows:
1) If product = 123, then use Service Bus Topic1
2) If product = 456, then use Service Bus Topic2
I think there are 2 options to solve this:
OPTION 1: Deploy the same Azure Function 2 times (with 2 different names), each having a different input/output mapping.
OPTION 2: Have only 1 Azure Function, but specify the input/output mapping in Application Settings. From my understanding, Application Settings are key/value pairs. Is this correct? If not, how can I specify a complex value in this parameter?
What is the best way to do this?
What I am thinking about is deploying the same Azure Function 2 times with different settings, as follows:
Azure Function 1 with Application Settings "productid" = 123 and "sbTopic" = topic1
Azure Function 2 with Application Settings "productid" = 456 and "sbTopic" = topic2
I am wondering if there is a better way, such that the same Azure Function can run for any of my input/output mappings. If so, where and how do I specify my input (productid) and output (sbTopic) mapping?
EDIT 1: this is a Cosmos DB trigger. Whenever products arrive in Cosmos DB, I want to send them to the correct Service Bus topic.
EDIT 2: I have something similar as follows:
Cosmos DB Trigger --> Azure Function --> Service Bus Topic for id=123
I am debating if I should have as follows
Cosmos DB Trigger --> Azure Function1 --> Service Bus Topic for id=123
Cosmos DB Trigger --> Azure Function2 --> Service Bus Topic for id=456
Cosmos DB Trigger --> Azure Function3 --> Service Bus Topic for id=789
which means I would have 3 duplicated Azure Functions, etc.
or
Cosmos DB Trigger --> 1 Azure Function. Specify the mappings (product id and SB topic) in App Settings, and add logic in the function such that: if id=123, send the message to topic1; if id=456, send the message to topic2; etc.
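The single-function variant boils down to a lookup from product id to topic name. A sketch, assuming the whole mapping is stored in one app setting such as `SB_TOPIC_MAP=123:topic1;456:topic2;789:topic3` (a hypothetical format, not an Azure convention):

```java
import java.util.HashMap;
import java.util.Map;

// Parse a "productId:topicName;..." app setting into a routing table, so one
// function can pick the Service Bus topic for any product id.
public class TopicRouter {
    public static Map<String, String> parse(String setting) {
        Map<String, String> routes = new HashMap<>();
        for (String pair : setting.split(";")) {
            String[] kv = pair.split(":");
            routes.put(kv[0], kv[1]);
        }
        return routes;
    }

    public static String topicFor(Map<String, String> routes, String productId) {
        String topic = routes.get(productId);
        if (topic == null) {
            throw new IllegalArgumentException("No topic mapped for product " + productId);
        }
        return topic;
    }
}
```

This keeps one deployment: adding a product means editing one app setting rather than deploying another copy of the function.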
You should have your Pub/Sub model driven by the topic, based on subscription rules. The following screen snippet shows an example of this model:
In the above Pub/Sub model, the publisher fires a message to the topic with application properties carrying additional details such as productid, type, etc. The topic on the Service Bus can have multiple subscription entities with specific filters (rules).
The subscriber (in this example, the ServiceBusTrigger function) can be triggered on the TopicName and SubscriptionName configured in the app settings.
In other words, the subscription entity (its rules) decides which subscriber will consume the event message. Another advantage of this Pub/Sub model is continuous deployment: each environment, such as dev, stage, QA, pre-production, and production, has the same model (topicName, subscriptionName, rules, etc.), and only the connection string specifies which Azure subscription (environment) is the current one.
The option suggested by @RomanKiss makes sense, and it is the "canonical" use of Service Bus topics and subscriptions.
But if you need to do what you are asking for, you can achieve that with a single Azure Function and imperative bindings.
First, do NOT declare your output Service Bus binding in function.json.
Implement your Function like this:
public static void Run([CosmosDBTrigger("...", "...")] Product product, Binder binder)
{
    // Pick the target topic based on the product id.
    var topicName = product.Id == "123" ? "topic1" : "topic2";

    // Bind the Service Bus output imperatively at runtime.
    var attribute = new ServiceBusAttribute(topicName);
    var collector = binder.Bind<ICollector<string>>(attribute);
    collector.Add("Your message");
}
By the way, if you deploy your function two times, you will have to put them on different lease collections; otherwise they will compete with each other instead of each doing the processing. And each deployment will have to take care NOT to send messages for products that belong to the other deployment's topic.
I have a multi-instance application, and I'm using Event Hubs to publish messages and broadcast them to all the other instances. For the message to reach every instance, each instance needs to be in a separate consumer group; otherwise one instance will consume the message and the other instances won't get it. So my solution was that each instance first creates its own consumer group and then listens to the event hub. The problem is that, because of instance crashes, after a while I end up with a lot of consumer groups that are no longer in use.
My question is: is it possible to detect and get all consumer groups that are not in use, so I can delete them?
P.S.: I tried with topic/subscription as well; it works well, but I have the same problem; just replace 'consumer group' above with 'subscription' :).
I got the answer: no, you can't detect consumer groups which are not in use. On the other hand, you can detect that a subscription is not in use, because it has a last-accessed property ("LastAccessDate"); using this field I can see whether a subscription is in use and when it was last accessed. So I switched from Event Hubs to topic/subscription and it works :).
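The cleanup this answer describes can be sketched as filtering subscriptions by their last-accessed timestamp (a sketch only; in practice the timestamp would come from each subscription's description in the management API):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Given each subscription's last-accessed time, pick the ones idle longer
// than a threshold; those are the candidates for deletion.
public class StaleSubscriptions {
    public static List<String> findStale(Map<String, Instant> lastAccessed,
                                         Instant now, Duration maxIdle) {
        return lastAccessed.entrySet().stream()
                .filter(e -> Duration.between(e.getValue(), now).compareTo(maxIdle) > 0)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

A periodic job running this check against the namespace's subscription list would keep orphaned subscriptions from accumulating after instance crashes.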
The scenario I have in mind is this: Service Bus is used for instance-to-instance communication, so a Subscription is unique per service instance. The end result is that if an instance does not shut down gracefully, its subscription does not get deleted.
When a service instance "dies" and restarts, previous contents of the subscription are irrelevant and can be discarded.
So, is there a way to set a "time to live" for Service Bus Subscription or simulate something similar, without having to resort to some custom orphan detection mechanism?
Starting with Azure SDK 2.0 this works as expected.
Also, contrary to other reports, in my testing the subscription does not get deleted as long as there is a pending receiver listening to it.
var description = new SubscriptionDescription(topicPath, subscriptionId);
description.AutoDeleteOnIdle = TimeSpan.FromSeconds(600);
namespaceManager.CreateSubscription(description);
That exact feature is on the backlog for one of the next releases. That said, in Azure you could use the instance ID from the role environment to create the name of your subscription, and thus have a restarting instance reuse its subscription. The instance IDs are stable.
Edit: The feature is AutoDeleteOnIdle https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.subscriptiondescription
I had the exact same problem; a preview solving it was released at the beginning of 2013: http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.subscriptiondescription.autodeleteonidle.aspx
It's very easy to use (see the example below). Unfortunately, it seems that the subscription times out if no message is published for the AutoDeleteOnIdle period, even if you have a process awaiting messages (according to Azure Servicebus AutoDeleteOnIdle).
NamespaceManager manager = NamespaceManager.CreateFromConnectionString(serviceBusConnectionString);
if (!manager.SubscriptionExists(topic, subscriptionName))
{
    manager.CreateSubscription(new SubscriptionDescription(topic, subscriptionName)
    {
        AutoDeleteOnIdle = TimeSpan.FromDays(2)
    });
}