How to push the maximum-length message in Azure Service Bus

I want to push a message into Azure Service Bus, let's say of size 3 MB.
For this, I have written:
QueueInfo queueInfo = new QueueInfo("sq-jcibe-microservice-plm-qa");
long maxSizeInMegabytes = 5120;
queueInfo.setMaxSizeInMegabytes(maxSizeInMegabytes);
service.updateQueue(queueInfo);
service.sendQueueMessage("sq-jcibe-microservice-plm-qa", brokeredMessage);
I am getting the following exception.
com.microsoft.windowsazure.exception.ServiceException: com.sun.jersey.api.client.UniformInterfaceException: PUT https://sb-jcibe-microservice-qa.servicebus.windows.net/sq-jcibe-microservice-plm-qa?api-version=2013-07 returned a response status of 400 Bad Request
Response Body: <Error><Code>400</Code><Detail>SubCode=40000. For a Partitioned Queue, ordering is supported only if RequiresSession is set to true.
Parameter name: SupportOrdering. TrackingId:59bb3ae1-95f9-45e1-8896-d0f6a9ac2be8_G3, SystemTracker:sb-jcibe-microservice-qa.servicebus.windows.net:sq-jcibe-microservice-plm-qa, Timestamp:11/30/2016 4:52:22 PM</Detail></Error>
I am not understanding what it means or how I should resolve the problem. Please help me out with this scenario.

maxSizeInMegabytes refers to the maximum total size of the messages on a queue, not the individual message size.
Individual message size cannot exceed 256 KB for the standard tier and 1 MB for the premium tier (including headers in both cases).
If you wish to send messages larger than the maximum message size, you'll have to either implement a claim-check pattern (http://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html) or use a framework that does it for you. Implementing it yourself would mean something along the lines of storing the payload as a Storage blob, and the message would contain the URI.
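For illustration, here is a minimal sketch of the claim-check pattern using the modern JavaScript SDKs (@azure/storage-blob and @azure/service-bus) rather than the older Java SDK from the question; the container name, the connection-string environment variables, and the payloadUri property are placeholder assumptions:

import { BlobServiceClient } from "@azure/storage-blob";
import { ServiceBusClient } from "@azure/service-bus";

async function sendLargePayload(payload: Buffer): Promise<void> {
  // 1. Store the oversized payload as a blob; only a pointer travels on the bus.
  const blobService = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONN!);
  const container = blobService.getContainerClient("payload-container"); // placeholder
  const blob = container.getBlockBlobClient(`payload-${Date.now()}.bin`);
  await blob.uploadData(payload);

  // 2. Send a small "claim check" message that points at the blob.
  const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONN!);
  const sender = sbClient.createSender("sq-jcibe-microservice-plm-qa");
  try {
    // The receiver reads payloadUri and downloads the blob itself.
    await sender.sendMessages({ body: { payloadUri: blob.url } });
  } finally {
    await sender.close();
    await sbClient.close();
  }
}

The receiving side does the inverse: read payloadUri from the message body, download the blob, and optionally delete it once processed.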

Related

What do rotatePeriod, bufferLogSize, and syncTimeout mean exactly in the Winston Azure blob storage transport? An explanation with simple examples would be appreciated

In our project we are using the winston3-azureblob-transport NPM package to store application logs in blob storage.
However, due to an increase in users, we are getting the error "409 - ClientOtherError - BlockCountExceedsLimit|The committed block count cannot exceed the maximum limit of 50,000 blocks".
Could anyone tell us whether using rotatePeriod, bufferLogSize, and syncTimeout helps us stop the error "409 - ClientOtherError - BlockCountExceedsLimit|The committed block count cannot exceed the maximum limit of 50,000 blocks"?
Please also suggest any alternative solution; however, the Winston logger should not be replaced.
The error "The committed block count cannot exceed the maximum limit of 50,000 blocks" usually occurs when the maximum limits are exceeded.
Each block in a block blob can be a different size. Based on the Service version you are using, maximum blob size differs.
If you attempt to upload a block that is larger than maximum limit your service version is supporting, the service returns status code 409(ClientOtherError - BlockCountExceedsLimit). The service also returns additional information about the error in the response, including the maximum block size permitted in bytes.
rotatePeriod: a moment format, e.g. YYYY-MM-DD, which generates a new blob per period (blobName.2022.03.29), resetting the block count.
bufferLogSize: the minimum number of logs to buffer before syncing to the blob; set it to 1 to sync on every log.
syncTimeout: the maximum time between two syncs; set it to zero to disable the timer.
For more detail, please refer to this link:
GitHub - agmoss/winston-azure-blob: NEW winston transport for azure blob storage by Andrew Moss (agmoss)
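For illustration, a minimal sketch of wiring the transport with these three options. The option names rotatePeriod, bufferLogSize, and syncTimeout are the ones discussed above, but the export name and the shape of the account credentials are assumptions; verify them against the package README:

import * as winston from "winston";
// Hypothetical import; check the package README for the actual export name.
import { AzureBlob } from "winston-azure-blob";

const logger = winston.createLogger({
  transports: [
    new AzureBlob({
      account: { name: "mystorageaccount", key: process.env.STORAGE_KEY! }, // assumed shape
      containerName: "app-logs",
      blobName: "app",
      rotatePeriod: "YYYY-MM-DD", // a fresh blob every day resets the block count
      bufferLogSize: 100,         // commit one block per ~100 log lines, not per line
      syncTimeout: 5000,          // but flush at least every 5 seconds
    }),
  ],
});

logger.info("hello from winston");

At roughly 100 lines per committed block, a daily blob would need about 5,000,000 log lines in one day to hit the 50,000-block limit.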

Runtime errors for an Azure Stream Analytics job

I tried to modify an existing Azure Stream Analytics job by adding one more temporary result set.
But when I run the SA job, it throws a runtime error and the watermark delay keeps increasing.
Below is the existing SAQL in the Stream Analytics job:
-- Reading from Event Hub
WITH INPUTDATASET AS (
    SELECT
        udf.udf01([signals2]) AS flat
    FROM [signals2] PARTITION BY PartitionId
    WHERE [signals2].ABC IS NOT NULL
),
INPUT1 AS (
    SELECT
        ID,
        SIG1,
        SIG2
    FROM [signals2] AS input
    WHERE GetArrayLength(input.XYZ) >= 1
)
-- Dump the data from the above result sets into Cosmos DB
I tried to add the below temporary result set to the SAQL:
INPUT2 AS (
    SELECT
        ID,
        SIG3,
        SIG4
    FROM [signals2] AS input
    WHERE GetArrayLength(input.XYZ) = 0
)
Now when I start the SA job, it throws a runtime error.
When I fetch the logs, below are the error logs I received.
TimeGenerated,Resource,"Region_s",OperationName,"properties_s",Level
"2020-01-01T01:10:10.085Z",SAJOB01,"Japan West","Diagnostic: Diagnostic Error","{""Error"":null,""Message"":""First Occurred: 01\/01\/2020 01:10:10 | Resource Name: signals2 | Message: Maximum Event Hub receivers exceeded. Only 5 receivers per partition are allowed.\r\nPlease use dedicated consumer group(s) for this input. If there are multiple queries using same input, share your input using WITH clause. \r\n "",""Type"":""DiagnosticMessage"",""Correlation ID"":""xxxx""}",Error
"2020-01-01T01:10:10.754Z",SAJOB01,"Japan West","Receive Events: ","{""Error"":null,""Message"":""We cannot connect to Event Hub partition [25] because the maximum number of allowed receivers per partition in a consumer group has been reached. Ensure that other Stream Analytics jobs or Service Bus Explorer are not using the same consumer group. The following information may be helpful in identifying the connected receivers: Exceeded the maximum number of allowed receivers per partition in a consumer group which is 5. List of connected receivers - AzureStreamAnalytics_xxxx_25, AzureStreamAnalytics_xxxx_25, AzureStreamAnalytics_zzz_25, AzureStreamAnalytics_xxx_25, AzureStreamAnalytics_xxx_25. TrackingId:xxx_B7S2, SystemTracker:eventhub001-ns:eventhub:ehub01~26|consumergrp01, Timestamp:2020-01-01T01:10:10 Reference:xxx, TrackingId:xxx_B7S2, SystemTracker:eventhub001-ns:eventhub:ehub01~26|consumergrp01, Timestamp:2020-01-01T01:10:10, referenceId: xxx_B7S2"",""Type"":""EventHubBasedInputQuotaExceededError"",""Correlation ID"":""xxxx""}",Error...
For the SA job, the input signals2 has a dedicated consumer group (consumergrp01).
There are 3 readers on a partition for this consumer group, but it still throws the error that the maximum number of Event Hub receivers was exceeded. Why is that?
Message: Maximum Event Hub receivers exceeded. Only 5 receivers per partition are allowed.
I think the error message is clear on the root cause of the runtime error. Please refer to the statement in this doc:
There can be at most 5 concurrent readers on a partition per consumer group; however, it is recommended that there is only one active receiver on a partition per consumer group. Within a single partition, each reader receives all of the messages. If you have multiple readers on the same partition, then you process duplicate messages. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.
You could follow the suggestion in the error message: if there are multiple queries using the same input, share your input using the WITH clause. Every SELECT that reads [signals2] directly opens its own Event Hub receiver, so read the input once in a single WITH step and derive INPUT1, INPUT2, and any other temporary result sets from that shared step instead of from [signals2].

Stream Analytics query hits size limit

I'm new to Azure Stream Analytics. I have an Event Hub as the input source and now I'm trying to execute a simple query on this stream. An example query is like this:
SELECT
    COUNT(*)
INTO [output1]
FROM [input1] TIMESTAMP BY Time
GROUP BY TumblingWindow(second, 10)
So I want to count the events which arrived within a certain time frame.
When executing this query, I always get the following error:
Request exceeded maximum allowed size limit
I already narrowed down the checked time window, and I'm certain that the number of events within this time frame is not very big (at most several hundred), so I'm not sure how to avoid this error.
Do you have a hint?
Thanks!
Request exceeded maximum allowed size limit
This error (I believe it should be more explicit) indicates that you violated the Azure Stream Analytics resource and object limits.
It's not just about quantity, it's also about size. Please check your source inputs' size, or try to reduce the window size and test again.
1. Does the record size of the source query mean that one event can only be 64 KB, or does this parameter mean 64 K events?
It means the size of one event should be below 64 KB.
2. Is there a possibility to use Stream Analytics to select only certain subfields of the event, or is the only way to reduce the event size before it is sent to the event hub?
As far as I know, ASA only collects data for processing, so the size all depends on the source side and your query SQL. Since you need to use COUNT, I'm afraid you have to do something on the Event Hub side. Please refer to my thoughts:
Use an Event Hub trigger for an Azure Function: when an event streams into the event hub, trigger the function, pick only the key-values you need, and save the trimmed event into another event hub (just in order to reduce the size of the source events). Since you only need to COUNT records, I think this works for you; see the sketch below.
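As a rough sketch, using the Azure Functions Node.js v4 programming model; the event hub names, the connection app-setting names, and the Time/ID field list are placeholder assumptions:

import { app, output } from "@azure/functions";

// Forward trimmed events to a second, smaller event hub that ASA reads from.
const trimmedOutput = output.eventHub({
  eventHubName: "trimmed-events",           // placeholder
  connection: "EVENTHUB_OUTPUT_CONNECTION", // app setting name, placeholder
});

app.eventHub("trimEvents", {
  eventHubName: "raw-events",               // placeholder
  connection: "EVENTHUB_INPUT_CONNECTION",  // app setting name, placeholder
  cardinality: "many",
  return: trimmedOutput,
  handler: (messages: unknown) => {
    const events = messages as Array<Record<string, unknown>>;
    // Keep only the fields the ASA query actually needs, dropping the bulk
    // of each event so it stays well under the 64 KB record limit.
    return events.map((m) => ({ Time: m.Time, ID: m.ID }));
  },
});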

Overcoming Azure Vision Read API Transactions-Per-Second (TPS) limit

I am working on a system where we are calling the Vision Read API to extract the contents of raster PDFs. Files are of different sizes, ranging from one page to several hundred pages.
Files are stored in Azure Blob storage, and there will be a function to push files to the Read API once all files are uploaded. There could be hundreds of files.
Therefore, when the process starts, a large number of documents are expected to be sent for text extraction per second. But the Vision API has a limit of 10 transactions per second, including Read.
I am wondering what the best approach would be: some type of throttling, or a queue?
Is there any integration available (say, with a queue) from which the Read API will pull documents, and is there any type of push notification available to signal completion of a read operation? How can I prevent timeouts due to exceeding the 10 TPS limit?
Per my understanding, there are 2 key points you want to know:
1. How to overcome the 10 TPS limit while you have a lot of files to read.
2. The best approach to get the Read operation status and result.
Your question is a bit broad; maybe I can provide you with some suggestions:
For Q1: generally, if you reach the TPS limit, you will get an HTTP 429 response, and you must wait for some time before calling the API again, or else the next call will be refused. Usually we retry the operation using something like an exponential back-off retry policy to handle the 429 error:
1) Check the HTTP response code in your code.
2) When the HTTP response code is 429, retry the operation after N seconds, where you can define N yourself, such as 10 seconds.
For example, the following is a 429 response. You could set your wait time to (26 + n) seconds, where you define n yourself, such as n = 5:
{
    "error": {
        "statusCode": 429,
        "message": "Rate limit is exceeded. Try again in 26 seconds."
    }
}
3) If the retry succeeds, continue with the next operation.
4) If the retry fails with 429 again, retry after N*N seconds.
5) If that fails with 429 too, retry after N*N*N seconds, and so on.
6) Always wait for the current operation to succeed before moving on; the waiting time grows exponentially. A sketch of this policy follows.
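A minimal sketch of that policy, assuming Node 18+ (global fetch); the subscription-key header follows the Cognitive Services REST convention, but the URL and environment variable are placeholders:

// Retry a Vision call with exponential back-off on HTTP 429 (N, N*N, N*N*N, ... seconds).
async function callWithBackoff(url: string, body: Buffer, maxRetries = 4): Promise<Response> {
  const N = 10; // the initial wait from step 2, in seconds
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.VISION_KEY!, // placeholder env var
        "Content-Type": "application/octet-stream",
      },
      body,
    });
    if (response.status !== 429) return response; // success or a non-throttling error
    if (attempt >= maxRetries) throw new Error("Still throttled after retries");
    // Wait N^(attempt+1) seconds; alternatively, parse the suggested wait
    // ("Try again in 26 seconds") out of the 429 response body.
    await new Promise((r) => setTimeout(r, Math.pow(N, attempt + 1) * 1000));
  }
}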
For Q2: as we know, we can use the Get Read Result API to get the Read operation status/result.
There is no push notification for completion, so if you want the result you should poll each operation at an interval, e.g. send a check request every 10 seconds. You can use an Azure Function or an Azure Automation runbook to create asynchronous tasks that check the Read operation status and, once it is done, handle the result based on your requirements.
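For illustration, a minimal polling sketch against the v3.2 Read REST API, again assuming Node 18+ fetch; the endpoint and key environment variables are placeholders:

// Submit a document to the async Read API, then poll until the analysis finishes.
async function readDocument(documentBytes: Buffer): Promise<unknown> {
  const endpoint = process.env.VISION_ENDPOINT!; // e.g. https://<resource>.cognitiveservices.azure.com
  const key = process.env.VISION_KEY!;

  // 1. Submit: the service answers 202 Accepted with an Operation-Location header.
  const submit = await fetch(`${endpoint}/vision/v3.2/read/analyze`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream" },
    body: documentBytes,
  });
  const operationUrl = submit.headers.get("operation-location");
  if (!operationUrl) throw new Error(`Submit failed with status ${submit.status}`);

  // 2. Poll the operation URL until the status leaves "notStarted"/"running".
  while (true) {
    await new Promise((r) => setTimeout(r, 10_000)); // check every 10 seconds
    const poll = await fetch(operationUrl, { headers: { "Ocp-Apim-Subscription-Key": key } });
    const result = (await poll.json()) as { status: string; analyzeResult?: unknown };
    if (result.status === "succeeded") return result.analyzeResult;
    if (result.status === "failed") throw new Error("Read operation failed");
  }
}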
Hope it helps. If you have any further concerns, please feel free to let me know.

Windows Azure Service Bus Topics Billing

I need details related to Windows Azure Service Bus Topics billing.
Am I going to be charged for what my applications publish or for what my applications receive?
For example, let's say that I have one publisher and 5 topics. On each topic there are 1000 messages per second, where every message is 1 KB in size.
On the other side, I have one subscriber that is subscribed to only one topic and has also applied a filter, so it receives only 10 messages per second instead of 1000.
On the publisher side we have: 5 * 1000 msg/s * 60*60*24*30 * 1 KB = 12,960,000,000 messages * 1 KB for five topics in one month.
On the subscriber side we have: 1 * 10 msg/s * 60*60*24*30 * 1 KB = 25,920,000 messages * 1 KB.
So, am I going to be charged for A or B?
A: 12 960 000 000 messages * 1KB
B: 25 920 000 messages * 1KB
I found this article very helpful in understanding the pricing structure: http://msdn.microsoft.com/en-us/library/windowsazure/hh667438.aspx
In essence, putting a message on to a queue counts as one message. Reading a message from a queue (or trying to read) also counts as one message.
In the case of topics and subscribers, putting the message on the topic is one message and each subscriber reading a message is also one message.
In your example you would be charged for 12,960,000,000 + 25,920,000 = 12,985,920,000 messages, or ~$13k, which isn't too bad considering you are pushing about 12 TB through a transactional queueing system.
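A quick back-of-the-envelope check of those figures; the per-message rate is an assumption based on the pricing of the time ($0.01 per 10,000 messages), so see the linked pricing FAQ for current numbers:

// Verify the monthly message counts and the rough cost quoted above.
const secondsPerMonth = 60 * 60 * 24 * 30;    // 2,592,000
const published = 5 * 1000 * secondsPerMonth; // 12,960,000,000 "in" messages
const received = 1 * 10 * secondsPerMonth;    // 25,920,000 "out" messages
const billable = published + received;        // 12,985,920,000 billable messages
const cost = (billable / 10_000) * 0.01;      // ~ $12,986 at the assumed historical rate
console.log({ published, received, billable, cost });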
Do note that you should use the built-in long-polling support to read the queue, as you will be charged for trying to read an empty queue.
Also bear in mind that there is a nominal charge for obtaining an authentication token, so make sure your code does not obtain a new token for each put or get. See the cost table at the bottom of this article: http://msdn.microsoft.com/en-us/library/hh767287%28VS.103%29.aspx
You will be charged for A+B...
Multiple deliveries of the same message (for example, message fan out to multiple listeners or message retrieval after abandon, deferral, or dead lettering) will be counted as independent messages. For example, in the case of a topic with three subscriptions, a single 64 KB message sent and subsequently received will generate four billable messages (one “in” plus three “out”, assuming all messages are delivered to all subscriptions).
Refer to MSDN for more info: http://msdn.microsoft.com/en-us/library/hh667438.aspx#BKMK_SBv2FAQ2_6
