Azure batch insert - how to reduce response length? - azure

I use the Azure client library to perform batch inserts into Azure Table Storage. Everything works fine, but when I sniff the traffic with Fiddler I see that every response from Azure is about 90 KB. I've changed the Prefer header to "return-no-content", but the response is still over 60 KB (for a 50 KB request).
Is there any way to reduce the response length? Ideally to something like 100 B (an HTTP 202 or similar).

Per the OData V3 protocol that the Azure Table Storage service uses, a batch response body must match the corresponding batch request body one-to-one: the same multipart MIME message structure defined for requests is used for responses.
Setting echoContent to false (i.e. "Prefer: return-no-content") ensures that the entities themselves are not echoed back, which, as you observed, reduces the size of the response body but does not remove the multipart structure.
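For illustration, with the classic WindowsAzure.Storage table SDK the no-echo behaviour is requested per operation when you build the batch. A rough sketch (the table name and entity type are placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Sketch: batch insert that asks the service not to echo the inserted
// entities back. "mytable" and DynamicTableEntity are placeholders.
static async Task InsertBatchAsync(string connectionString, IEnumerable<DynamicTableEntity> entities)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var table = account.CreateCloudTableClient().GetTableReference("mytable");

    var batch = new TableBatchOperation();
    foreach (var entity in entities)              // all entities must share one partition key
    {
        batch.Insert(entity, echoContent: false); // per-operation "Prefer: return-no-content"
    }

    // Each sub-response is still a MIME part, but with echoContent: false it
    // carries only a status line and headers rather than the entity body.
    var results = await table.ExecuteBatchAsync(batch);
}
```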

Related

Azure file storage: High count of ClientOtherError

I just noticed that my Azure file share storage has a very high rate of "ClientOtherError" transactions, running at anywhere from 50-100% of the success count.
Anyone have any experience as to why this might be?
The attached graph shows the ClientOtherError transactions in red/orange and the successful transactions in blue.
Here is my error rate for comparison (a Windows file share, also used by an AKS cluster).
I do a good amount of overwrites, maybe that contributes to the number of errors.
ClientOtherError: Authorized request that failed as expected. This error can represent many 300-400 level HTTP status codes and conditions such as NotFound and ResourceAlreadyExists.
We came across a very high percentage of ClientOtherError (Failed transaction by response type) with our Azure Blob Storage. In our case, however, this error can be ignored: it was happening because operations were being performed on files that didn't exist. They were basically successful API calls that returned a non-200 HTTP status code. In our scenario, Failed transaction by API name showed the items below.
DeleteFile
GetBlobProperties
GetFileProperties
(Blob storage examples 1 and 2: screenshots omitted.)
ClientOtherError usually means expected errors, such as "not found" and "resource already exists". If your code uses APIs such as Exists or Create***IfNotExist (for example, CreateTableIfNotExist), those errors will be encountered frequently. Some examples of operations that execute successfully but can result in unsuccessful HTTP status codes include:
ResourceNotFound (Not Found 404), for example from a GET request to a blob that does not exist.
ResourceAlreadyExists (Conflict 409), for example from a CreateIfNotExist operation where the resource already exists.
ConditionNotMet (Not Modified 304), for example from a conditional operation such as when a client sends an ETag value and an HTTP If-None-Match header to request an image only if it has been updated since the last operation.
In order to debug this further, you can use Azure storage logging which would log information about every operation performed against your storage account. It will include the HTTP code of every response.
Here is a list of common status codes. Many (not all) 300-400 level HTTP status codes will result in ClientOtherError.
The OP seems to be facing this problem with an Azure file share, and I suspect something similar is happening there. The Windows Storage Explorer application probably makes similar API calls under the hood, resulting in ClientOtherError.
The file share service here seems to have similar APIs that can result in ClientOtherError when the file is missing.
I would say that in most cases this error is expected and can be ignored.
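To make that concrete, here is a small sketch using the Azure.Storage.Blobs SDK (container and blob names are placeholders) of calls that succeed from the caller's point of view yet still produce the "expected" 404/409 responses that the metrics count as ClientOtherError:

```csharp
using Azure.Storage.Blobs;

// Sketch: every call below succeeds from the caller's point of view, yet can
// generate an "expected" 404/409 on the wire, counted as ClientOtherError.
static void TouchBlobs(string connectionString)
{
    var container = new BlobContainerClient(connectionString, "mycontainer"); // placeholder names

    container.CreateIfNotExists();                    // 409 ContainerAlreadyExists on every call after the first
    var blob = container.GetBlobClient("report.csv");
    bool exists = blob.Exists().Value;                // 404 BlobNotFound when the blob is missing
    blob.DeleteIfExists();                            // 404 when there is nothing to delete
}
```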

How to record the total size of a request in ASP.NET Core

I'm trying to log the total size of each request sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute the data-out (egress) cost to specific functionality within the application. (Specifically, we are looking at some functionality that uses Azure Blob Storage and allows downloads of the blobs, but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs out the sizes to a storage mechanism (less concerned with that part) but not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I presume it would include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that's the compressed size of the response body or not.
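For reference, this is roughly the shape of middleware I have in mind: buffer the body so its length can be read, then copy it to the real stream (a sketch only; it measures the body as the app wrote it, and doesn't count headers or any compression applied further out):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

// Sketch: swaps the response body for a MemoryStream, measures it, then
// copies it to the real stream. Note this buffers the whole response in
// memory and does not count headers or compression applied further out.
public class ResponseSizeLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<ResponseSizeLoggingMiddleware> _logger;

    public ResponseSizeLoggingMiddleware(RequestDelegate next,
                                         ILogger<ResponseSizeLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var originalBody = context.Response.Body;
        await using var buffer = new MemoryStream();
        context.Response.Body = buffer;
        try
        {
            await _next(context);

            _logger.LogInformation("Response body for {Path}: {Bytes} bytes",
                context.Request.Path, buffer.Length);

            buffer.Position = 0;
            await buffer.CopyToAsync(originalBody);
        }
        finally
        {
            context.Response.Body = originalBody;
        }
    }
}

// Registered early in the pipeline: app.UseMiddleware<ResponseSizeLoggingMiddleware>();
```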
Thanks

How to use azure logic app action to download files in browser

I originally created a logic app that would, given a JSON payload, run a stored procedure, transform the results into a CSV table and then email the CSV to a specified email account. Unfortunately, requirements changed slightly: instead of emailing the CSV, they want it to download directly in the browser.
I am unable to get the HTTP Response action to tell the browser to download the file using the Content-Disposition header. It looks like this header is stripped from the response by design. Is anyone aware of another action (perhaps a Function?) that could be used in place of the HTTP Response to get a web browser to download the file rather than returning it as text in the response body?
It does indeed seem to be the case that the Response action doesn't support the Content-Disposition header for some reason. Probably the easiest workaround is to proxy the request through a simple HTTP-triggered Azure Function with CORS enabled (or an API on your server) that just fetches the file from the Logic App and then returns it with the Content-Disposition header attached.
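A rough sketch of what such a Function could look like (using the in-process C# model; the LOGIC_APP_URL setting and the file name are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class DownloadCsvProxy
{
    private static readonly HttpClient Http = new HttpClient();

    [FunctionName("DownloadCsv")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // Placeholder: the Logic App's HTTP trigger URL, stored in app settings.
        var logicAppUrl = Environment.GetEnvironmentVariable("LOGIC_APP_URL");
        var csvBytes = await Http.GetByteArrayAsync(logicAppUrl);

        // FileContentResult emits Content-Disposition: attachment; filename=...
        return new FileContentResult(csvBytes, "text/csv")
        {
            FileDownloadName = "report.csv"   // placeholder file name
        };
    }
}
```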
NB. Don't rely on <a download="filename"> - most browsers that support the download attribute only respect it for same-origin requests.

Google App Engine standard doesn't compress my Next.js/Express app

I'm trying to figure out what to do to make Google App Engine (standard version) apply compression to the output of my Next.js/Node.js/Express application.
As far as I've gathered, the problem is that
1) Google's load balancer strips from the request the headers indicating that the client supports compression, and thus app.use(compression()) in server.js won't do anything. I've tried to force compression using a {filter: shouldCompress} function, but it doesn't seem to matter since Google's front end still returns an uncompressed result. (Locally, compression works fine.)
2) How and when Google's load balancer chooses to apply compression is a mystery to me. (And in particular, why not to my silly but large application/javascript content :))
Here's what they say in the docs:
If the client sends HTTP headers with the original request indicating that the client can accept compressed (gzipped) content, App Engine compresses the handler response data automatically and attaches the appropriate response headers. It uses both the Accept-Encoding and User-Agent request headers to determine if the client can reliably receive compressed responses.
How Requests are Handled: Response Compression
So there's that. I'd love to use App Engine for this project but when index.js is 700KB instead of a compressed 200KB, it's kind of a showstopper.
As per the Request Headers and Responses documentation for Node.js, the Accept-Encoding header is removed from the request for security purposes.
Note: Entity headers (headers relating to the request body) are not sanitized or checked, so applications should not rely on them. In particular, the Content-MD5 request header is sent unmodified to the application, so may not match the MD5 hash of the content. Also, the Content-Encoding request header is not checked by the server, so if the client sends a gzipped request body, it will be sent in compressed form to the application.
Also note the response on Google Group which states:
Today, we are not passing through the Accept-Encoding header, so it is not possible for your middleware to decide that it should compress.
We will roll out a fix for this in the next few weeks.

Is the Azure Put Blob operation atomic?

The documentation for Azure's Put Blob REST API operation tells us that it is possible to upload a block blob up to 64 MB with a single request.
I'm wondering whether such an operation is atomic. In particular I need to know whether the following assumptions are true or false.
If two or more clients concurrently attempt to put a particular non-existing blob using this API, specifying If-None-Match: *, then at most one of them will succeed.
A blob put using this API will never be partially exposed: it will either not exist, or exist with the entire content that was put (< 64 MB), including metadata.
Can anyone confirm or refute these assumptions?
I have received confirmation directly from a Microsoft support technician that both of these assumptions are true:
If two or more clients concurrently attempt to put a particular non-existing blob using this API, specifying If-None-Match: *, then at most one of them will succeed.
A blob put using this API will never be partially exposed: it will either not exist, or exist with the entire content that was put (< 64 MB), including metadata.
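For context, this is roughly how a client expresses that conditional create with the Azure.Storage.Blobs SDK (a sketch only; setting IfNoneMatch = ETag.All is what produces the If-None-Match: * header):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Sketch: create the blob only if it does not already exist. If another
// writer wins the race, the service answers 409 BlobAlreadyExists.
static async Task<bool> TryCreateBlobAsync(BlobClient blob, Stream content)
{
    var options = new BlobUploadOptions
    {
        Conditions = new BlobRequestConditions { IfNoneMatch = ETag.All } // If-None-Match: *
    };

    try
    {
        await blob.UploadAsync(content, options); // a single Put Blob when the content is small enough
        return true;
    }
    catch (RequestFailedException ex) when (ex.Status == 409)
    {
        return false; // someone else created the blob first
    }
}
```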
Is the Azure Put Blob operation atomic?
Answer: Not at all.
Any attempt to read the blob before the completion of step 3 would result in HTTP 404 (not found).
Yes, you can be 100% sure you'll receive a 404.
Any attempt to read the blob after the completion of step 3 would either see the entire blob content and metadata, or result in HTTP 404 (not found) in case step 3 was not successful.
Yes, if the operation isn't complete there is no file in blob storage
Any attempt to put the blob with an If-None-Match: * header before the start of step 2 would have to wait until step 3 is completed, either successfully, in which case the request must fail with HTTP 409 (precondition failed), or continue normally, since the blob would not exist.
In my testing: there's no wait.
So, normally, on a second attempt to upload the same blob name you will receive HTTP/1.1 409 "The specified blob already exists" (but only if you sent the request with the If-None-Match: * header).
The problem is that if the first upload hasn't yet received its first 201 confirmation (or its only confirmation, if you're uploading everything in one request), then a second upload can be allowed to create the resource even if it was started after the first one. This tends to happen when the second file is shorter than the first, because its single (short) request can finish transmitting first.
The weirdest thing is that when this happens, the first stream continues uploading data normally; only when its last request is sent does the answer come back as 409.
I strongly recommend creating a spike solution to test your specific use case, because the situation described above may not apply to your application.
