Messages cannot be larger than 65536 bytes - Azure

I'm using Azure Queue to send emails, but lately I've been getting an exception about the 65536-byte message size limit, even after checking the message size.

While it is true that the maximum size of a message is 64 KB, Azure uses UTF-16 encoding to store the data, so for each byte of data you provide, Azure Storage uses 2 bytes to store it.
What this means is that you can effectively store only up to 32 KB of data in a message in an Azure Queue. Because you're exceeding this 32 KB limit, you're getting this error.
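To see the doubling concretely, here is a quick sketch (assuming Node.js, where the utf16le encoding approximates the UTF-16 storage described above):
const message = "x".repeat(40000); // 40,000 characters of payload
// Stored as UTF-16, each character takes 2 bytes:
console.log(Buffer.byteLength(message, "utf16le")); // 80000 bytes > 65536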

A string message will be Base64-encoded before being sent, which increases its length by about a third.
Therefore the maximum message length you can submit is 49152 bytes, which encodes to 65536 bytes, the maximum allowed.
The formula for calculating the Base64 encoded length can be found here: https://stackoverflow.com/a/13378842/5836877
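As a quick sanity check, the formula works out as follows (a sketch assuming Node.js, with Buffer used for byte counting):
// Base64 encodes every 3 input bytes as 4 output characters:
// encodedLength = 4 * ceil(rawBytes / 3)
function base64Length(rawBytes: number): number {
  return 4 * Math.ceil(rawBytes / 3);
}
// A pre-send check against the 64 KB queue limit:
function fitsInQueueMessage(message: string): boolean {
  return base64Length(Buffer.byteLength(message, "utf8")) <= 65536;
}
console.log(base64Length(49152)); // 65536, exactly the maximum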

Related

Can I increase the size of the column "Statement" in Azure Log Analytics

I have a KQL statement used to extract some data from Azure Log Analytics. My problem is that Azure Log Analytics seems to truncate SQL statements longer than 4000 characters. On the audited server, many queries written by the users are longer than 4000 characters. Can I increase the size of the column "Statement" somehow?
Thank you
Can I increase the size of the column "Statement" somehow?
Azure has a limit on the size of the log messages it collects.
If you want to log a longer message, you can use TrackTrace, which supports a larger maximum message length.
// Logs a trace message longer than 4000 characters to Application Insights
// (longMessage is a placeholder for your trace string)
telemetryClient.TrackTrace(longMessage);
The maximum allowed message length is 32768 characters, while items in the property collection are limited to 8192 characters (max key length 150 characters, max value length 8192 characters).
Refer to the MS docs for the length limits of the Application Insights data model for each telemetry type.
Reference
Follow the steps given by @cijothomas to add long messages (more than 8K characters) to Application Insights.

How to get around node.js body-parser complaining about the payload being too large

I have a POST route and the payload is JSON.
It has a number of fields, and one is a Base64-encoded string corresponding to a large PNG file.
The error I get is:
PayloadTooLargeError: request entity too large
at readStream (/Users/reza.razavipour/Projects/s3uploader/node_modules/raw-body/index.js:155:17)
at getRawBody (/Users/reza.razavipour/Projects/s3uploader/node_modules/raw-body/index.js:108:12)
at read (/Users/reza.razavipour/Projects/s3uploader/node_modules/body-parser/lib/read.js:77:3)
How do I get around this limitation?
In the future I will have to process very large zip files, many GBs in size...
According to the documentation, there is a limit option you can pass to the JSON parser to configure the maximum body size.
limit
Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.
Something like this for 100 megabytes:
bodyParser.json({ limit: '100mb' })
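In context, a minimal Express setup might look like this (a sketch; the /upload route name is illustrative):
import express from "express";
import bodyParser from "body-parser";

const app = express();

// Raise the JSON body limit from the default 100kb to 100mb
app.use(bodyParser.json({ limit: "100mb" }));

app.post("/upload", (req, res) => {
  // req.body holds the parsed fields, including the Base64-encoded PNG
  res.sendStatus(200);
});

app.listen(3000);
Note that for the multi-GB zip files mentioned above, buffering the whole body in memory this way won't scale; a streaming approach (e.g. multipart uploads with a library such as multer, or uploading directly to blob storage) is the usual answer at that size.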

Block size and transaction per block in Hyperledger Fabric

What is the relationship between MaxMessageCount, AbsoluteMaxBytes, and PreferredMaxBytes?
Does a block in Fabric consist of MaxMessageCount transactions, or is it bounded by PreferredMaxBytes?
What should be the value of these to get maximum throughput?
Max Message Count: The maximum number of transactions/messages to permit in a block.
Absolute Max Bytes: The (strict) maximum number of bytes allowed for the serialized transactions/messages in a block.
Preferred Max Bytes: The preferred maximum number of bytes allowed for the serialized transactions/messages in a batch. A transaction/message larger than the preferred max bytes will result in a batch larger than the preferred max bytes.
Whichever criterion is met first determines when the orderer cuts the block.
If you have a constantly high flow of transactions, pack as many transactions as possible into a block to get maximum throughput. Otherwise, tweak BatchTimeout and MaxMessageCount to optimize your transaction throughput.
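For reference, these parameters live in the orderer section of configtx.yaml; a sketch with illustrative values (not tuned recommendations):
Orderer:
  BatchTimeout: 2s            # cut a block after 2s even if it is not full
  BatchSize:
    MaxMessageCount: 500      # cut once 500 transactions have accumulated
    AbsoluteMaxBytes: 10 MB   # hard cap on the serialized block payload
    PreferredMaxBytes: 2 MB   # soft target; one larger tx can exceed it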
If you want to dig deep on this aspect refer to this research paper: https://arxiv.org/pdf/1805.11390.pdf

How to push the maximum-length message in Azure Service Bus

I want to push a message into Azure Service Bus, let's say of size 3 MB.
For this, I have written:
QueueInfo queueInfo = new QueueInfo("sq-jcibe-microservice-plm-qa");
long maxSizeInMegabytes = 5120;
queueInfo.setMaxSizeInMegabytes(maxSizeInMegabytes);
service.updateQueue(queueInfo);
service.sendQueueMessage("sq-jcibe-microservice-plm-qa", brokeredMessage);
I am getting the following exception.
com.microsoft.windowsazure.exception.ServiceException: com.sun.jersey.api.client.UniformInterfaceException: PUT https://sb-jcibe-microservice-qa.servicebus.windows.net/sq-jcibe-microservice-plm-qa?api-version=2013-07 returned a response status of 400 Bad Request
Response Body: <Error><Code>400</Code><Detail>SubCode=40000. For a Partitioned Queue, ordering is supported only if RequiresSession is set to true.
Parameter name: SupportOrdering. TrackingId:59bb3ae1-95f9-45e1-8896-d0f6a9ac2be8_G3, SystemTracker:sb-jcibe-microservice-qa.servicebus.windows.net:sq-jcibe-microservice-plm-qa, Timestamp:11/30/2016 4:52:22 PM</Detail></Error>
I don't understand what this means or how I should resolve the problem. Please help me out with this scenario.
maxSizeInMegabytes refers to the maximum total size of the messages on a queue, not individual message size.
Individual message size cannot exceed 256 KB for the standard tier and 1 MB for the premium tier (including headers in both cases).
If you wish to send messages larger than the maximum message size, you'll have to either implement the claim check pattern (http://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html) or use a framework that does it for you. Implementing it yourself would mean something along the lines of storing the payload as a Storage blob and having the message carry the blob's URI.
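A minimal sketch of that claim check approach, assuming the @azure/storage-blob and @azure/service-bus packages (the container name and environment variables are illustrative):
import { BlobServiceClient } from "@azure/storage-blob";
import { ServiceBusClient } from "@azure/service-bus";

async function sendLargePayload(payload: Buffer): Promise<void> {
  // 1. Store the oversized payload as a blob: the "claim check"
  const blobService = BlobServiceClient.fromConnectionString(
    process.env.STORAGE_CONNECTION_STRING!
  );
  const container = blobService.getContainerClient("large-messages");
  await container.createIfNotExists();
  const blob = container.getBlockBlobClient(`payload-${Date.now()}.bin`);
  await blob.upload(payload, payload.length);

  // 2. Send only the blob URI through Service Bus, well under the size cap
  const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING!);
  const sender = sbClient.createSender("sq-jcibe-microservice-plm-qa");
  try {
    await sender.sendMessages({ body: { payloadUri: blob.url } });
  } finally {
    await sender.close();
    await sbClient.close();
  }
}
The receiver then reads payloadUri from the message body and downloads the blob itself.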

Microsoft.WindowsAzure.Storage.Table.CloudTable.ExecuteBatchAsync() truncates message

When I call this method with a large EntityProperty (around 17 KB of text), it truncates the string.
I know that Azure Table storage has a limit of 64 KB per column and 1 MB per entire row.
Any insights?
Apart from all these size restrictions, there's also the size restriction on an entity group transaction, which is what the ExecuteBatchAsync method performs:
The transaction can include at most 100 entities, and its total
payload may be no more than 4 MB in size.
Ref: http://msdn.microsoft.com/en-us/library/azure/dd894038.aspx
Please ensure that your payload size is less than 4 MB.
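One way to stay under the 100-entity cap is to split the batch yourself. A sketch using the newer @azure/data-tables package (the connection string and table name are illustrative); note that every entity in a chunk must still share the same partition key, and each chunk must stay under the 4 MB payload limit:
import { TableClient, TableEntity, TransactionAction } from "@azure/data-tables";

const client = TableClient.fromConnectionString(
  process.env.STORAGE_CONNECTION_STRING!,
  "mytable"
);

// Submit entities in transactions of at most 100 at a time
async function insertInChunks(entities: TableEntity[]): Promise<void> {
  const maxBatchSize = 100;
  for (let i = 0; i < entities.length; i += maxBatchSize) {
    const chunk = entities.slice(i, i + maxBatchSize);
    const actions: TransactionAction[] = chunk.map((e) => ["create", e]);
    await client.submitTransaction(actions);
  }
}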
