Azure Computer Vision enforces 4 MB limit on images in paid tier?

I am currently working with Azure Computer Vision (the Read and Analyze APIs).
The docs state that images must be under 4 MB on the free tier, or up to 50 MB on the paid tier.
https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005
Repro
Pricing Tier S1
POST
https://{yourdomain}.cognitiveservices.azure.com/vision/v3.1/analyze?visualFeatures=Categories,Description,Tags
Headers
ocp-apim-subscription-key:{yourkey}
Content-Type:application/json
Body
{"url":"{imageurl}"}
(I have also tried posting a byte array of the image; same result.) All images are between 3 and 8 MB, so far less than the 50 MB limit stated in the docs.
Response
{"code":"InvalidImageSize","requestId":"177edee5-d17d-4fb7-a16f-da30644f77c4","message":"Input image is too large."}
Any idea why this is happening?
Thanks

Based on your question, you are getting this message even for 3 MB images, i.e. below the 4 MB limit.
In addition to the size constraint in MB there is also a dimension constraint: the maximum is 10000 x 10000 pixels.
It may be that you are getting an InvalidImageSize error due to the dimension constraint.

FWIW, I just found this question and it caused me some confusion too. From my investigation, you need to check each API individually for its size limits. It would seem that (as of API v3.1) many of the endpoints (including Analyze and Describe) have a 4 MB limit, with a couple of exceptions such as Read, which calls out a 4 MB limit on the Free tier and 50 MB on paid.
As the original post used the Analyze endpoint in the example request, I think this is the likely cause.
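If you want to fail fast before hitting the service, a quick client-side pre-flight check can catch both limits. This is just a sketch, assuming the sharp npm package for reading dimensions; the 4 MB and 10000 x 10000 limits are the v3.1 Analyze constraints discussed above.

```typescript
import { promises as fs } from "fs";
import sharp from "sharp";

// Limits for the v3.1 Analyze endpoint, per the discussion above.
const MAX_BYTES = 4 * 1024 * 1024; // 4 MB
const MAX_DIMENSION = 10000;       // 10000 x 10000 pixels

// Throws a descriptive error if the image would be rejected by Analyze.
async function assertAnalyzable(path: string): Promise<Buffer> {
  const image = await fs.readFile(path);
  if (image.length > MAX_BYTES) {
    throw new Error(`Image is ${image.length} bytes; Analyze allows at most ${MAX_BYTES}.`);
  }
  const { width = 0, height = 0 } = await sharp(image).metadata();
  if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
    throw new Error(`Image is ${width}x${height} px; the maximum is ${MAX_DIMENSION}x${MAX_DIMENSION}.`);
  }
  return image;
}
```

If the check fails, downscaling or re-encoding the image before the call is usually cheaper than switching endpoints.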

Related

Is my understanding of Cosmos DB pricing correct?

I'm struggling to understand how the pricing mechanism for RU/s works. Specifically, my confusion comes in when the word "first" is used.
I've done my research here: https://devblogs.microsoft.com/cosmosdb/build-apps-for-free-with-azure-cosmos-db-free-tier/?WT.mc_id=aaronpowell-blog-aapowell
In the second paragraph it’s mentioned:
“With Azure Cosmos DB Free Tier enabled, you’ll get the first 400 RU/s throughput and 5 GB storage in your account for free each month, for the lifetime of the account.”
So, hypothetically speaking, if I have an app that does one query, and that query evaluates to 1 RU, can I safely assume that:
400 users can execute the query once per second for free?
500 users can execute the query once per second and I will only be charged for 100 RU?
If the RU consumption is consistently less than 401 per second, there will be no charge?
Please do mention if there is any other costing I should be aware of, i.e. any Cosmos DB dependencies or App Service costs.
You're not really thinking about RU/sec correctly.
If you have, say, 400 RU/sec, then that is your allocated # of RU within a one-second window. It has nothing to do with # of users (as I doubt you're giving end-users direct access to your Cosmos DB instance).
In the case of operations costing only 1 RU, then yes, you should be able to execute 400 operations within a 1-second window without issue (although there is only one type of operation costing 1 RU, and that's a point-read).
In the case you run some type of operation that puts you over the 400 RU quota for that 1-second period, that operation completes, but now you're "in debt", so to speak: you will be throttled until the end of the 1-second period, and likely a bit of time into the next period (or, depending on how deep in RU debt you went, maybe a few seconds).
When you exceed your RU/sec allocation, you do not get charged. In your example, you asked what happens if you try to consume 500 RU in a 1-second window, and asserted you'd be charged for 100 RU. Nope. You'll just be throttled after exhausting the 400 RU allocation.
The only way you'll end up being charged for more RU/sec, is if you increase your RU/sec allocation.
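To see what an operation actually costs, you can read the request charge off every SDK response. A minimal sketch, assuming the @azure/cosmos Node.js package; the database and container names are placeholders:

```typescript
import { CosmosClient } from "@azure/cosmos";

// Endpoint, key, database, and container names are placeholders.
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT!,
  key: process.env.COSMOS_KEY!,
});
const container = client.database("mydb").container("items");

async function pointRead(id: string, partitionKey: string) {
  // A point-read (id + partition key) is the cheapest operation, ~1 RU.
  const response = await container.item(id, partitionKey).read();
  // requestCharge is the RU cost of this one operation; summing these per
  // second tells you how close you are to the provisioned 400 RU/s.
  console.log(`RU charge: ${response.requestCharge}`);
  return response.resource;
}
```

When you do exceed the allocation, the service answers with HTTP 429 plus a retry-after hint, and the SDK retries throttled requests automatically by default; that is the throttling (not billing) behavior described above.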
There is some more reading out there you can do:
azure cosmos db free tier
pricing examples

What do rotatePeriod, bufferLogSize, and syncTimeout mean exactly in Winston Azure blob storage? Explanations with simple examples are appreciated

In our project we are using winston3-azureblob-transport NPM package to store Application logs to blob storage.
However due to increase in users we are getting an error "409 - ClientOtherError - BlockCountExceedsLimit|The committed block count cannot exceed the maximum limit of 50,000 blocks".
Could anyone tell us whether using rotatePeriod, bufferLogSize and syncTimeout helps us stop the error "409 - ClientOtherError - BlockCountExceedsLimit|The committed block count cannot exceed the maximum limit of 50,000 blocks"?
Please also suggest any alternative solution; however, the Winston logger should not be replaced.
The error "The committed block count cannot exceed the maximum limit of 50,000 blocks" occurs when a block blob accumulates the maximum of 50,000 committed blocks; each append to the blob commits at least one more block.
Each block in a block blob can be a different size, and the maximum block and blob sizes differ based on the service version you are using.
If committing another block would push the blob past the 50,000-block limit, the service returns status code 409 (ClientOtherError - BlockCountExceedsLimit), along with additional information about the error in the response.
rotatePeriod: a moment.js format string, e.g. YYYY-MM-DD, which generates a new blob per period, such as blobName.2022.03.29
bufferLogSize: the minimum number of logs to buffer before syncing to the blob; set to 1 to sync on every log entry.
syncTimeout: the maximum time between two syncs; set to zero if you don't want a timed sync.
For more detail, please refer to this link:
GitHub - agmoss/winston-azure-blob: NEW winston transport for azure blob storage by Andrew Moss agmoss
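For concreteness, here is how those three options fit together in a transport configuration. This is a sketch based on the winston-azure-blob README linked above; the account values and names are placeholders:

```typescript
import winston from "winston";
import { winstonAzureBlob } from "winston-azure-blob";

const logger = winston.createLogger({
  transports: [
    winstonAzureBlob({
      account: {
        name: "YOUR_ACCOUNT_NAME", // placeholder
        key: "YOUR_ACCOUNT_KEY",   // placeholder
      },
      containerName: "logs",
      blobName: "app",
      level: "info",
      // Start a fresh blob each day (app.2022-03-29, ...), so no single
      // blob can creep toward the 50,000 committed-block limit.
      rotatePeriod: "YYYY-MM-DD",
      // Buffer 100 log lines per append instead of one block per line,
      // cutting the committed-block count by roughly 100x.
      bufferLogSize: 100,
      // Still flush at least every 5 seconds, even if the buffer isn't full.
      syncTimeout: 5000,
    }),
  ],
});

logger.info("hello blob storage");
```

Rotation and buffering attack the 409 from both ends: buffering means fewer, larger appends (fewer blocks), and rotation caps how many blocks any one blob can ever collect.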

Is there a way to increase the attachment file size on Azure DevOps Services Wiki

I have tried finding a setting but can't find an option to increase the attachment size from 18 MB.
I am afraid there is no way to increase the attachment size beyond 18 MB.
You could refer to this doc: File naming conventions
Restriction type | Restriction
File size | Must not exceed the maximum of 18 MB
Attachment file size | Must not exceed the maximum of 19 MB
This is a limitation of Azure DevOps, so there is no way to increase it for the time being.
But the requirement makes sense; you could refer to the following suggestion ticket on our UserVoice site: File size limits for wiki are restricting
You can also create a new suggestion ticket based on your requirement.

Azure VM stats - Network In/Out - what are the measurements?

I feel perturbed, but I don't understand the measurement Azure uses for Network In/Out and a few other things.
On the Azure portal -> my VM -> Metrics -> [Host] Network In/Out, it says the metric is measured in bytes, but it also draws a graph over time. If it were plain bytes, it would be cumulative and therefore grow indefinitely, but it isn't, so I am inclined to believe it is measured per second or something like that. But the Azure docs claim that it is bytes and not bytes per second (link here).
Am I missing something obvious?
I am inclined to believe the data is in bytes per minute; at least for mine it appears that way. I set my graph to a 10-minute interval. Taking the mouse off the graph, the total bytes show at the bottom. Hovering over the individual sample points (10 in total), they average between 31 and 34 MB each; adding them up, you get close to the 326 MB total for the graph interval. 10 * 32.5 is very close to this total, leading me to believe that each point on the graph is a sum over its own 1-minute interval. That is what I am seeing, anyway. Terrible documentation from Microsoft; why not just specify this in the (i) hover point on the graph?
@eddyP23 - if you add up all the points in your graph, it appears you would come to the same conclusion: each point is a sum over its interval (1 minute). I am not sure how else to read this.
If it were bytes per second, the total for the complete 10-minute interval would be vastly larger.
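One way to verify this yourself is to pull the metric at an explicit 1-minute grain and compare the Total aggregation against the portal chart. A sketch, assuming the @azure/monitor-query and @azure/identity packages; the resource ID is a placeholder, and the host metric is assumed to be named "Network In Total" on current VMs:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { MetricsQueryClient } from "@azure/monitor-query";

// Placeholder VM resource ID.
const resourceId =
  "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>";

async function dumpNetworkIn(): Promise<void> {
  const client = new MetricsQueryClient(new DefaultAzureCredential());
  const result = await client.queryResource(resourceId, ["Network In Total"], {
    granularity: "PT1M",     // one data point per minute
    aggregations: ["Total"], // bytes summed within each minute
  });
  for (const metric of result.metrics) {
    for (const point of metric.timeseries[0]?.data ?? []) {
      // Each point.total is bytes for that 1-minute bucket; summing the
      // points should reproduce the chart total (326 MB in the example).
      console.log(point.timeStamp, point.total);
    }
  }
}
```

If the per-minute totals line up with the chart's hover values, that confirms the points are per-interval sums in bytes, not rates.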
Sorry for the delay.
"therefore I am inclined to believe it is measured per second or something like that. But Azure docs claim that it is bytes and not bytes per second"
You can find the Network In metric here:
The Network In (bytes per second) metric is used to monitor your VM's performance.

In New Relic RPM, I get reports with an Apdex index listed. What does the subscript mean?

This sounds ridiculous, but New Relic RPM reports an Apdex index in a form like this:
0.92(3.5)
Where the 3.5 is subscripted.
What does the 3.5 mean? I can't find the definition anywhere, and yet there it is in my reports, staring me in the face.
The bracketed/subscripted number is the threshold (in seconds) for your Apdex score. So, in your case, if the full application response (page load) takes less than 3.5 s, that satisfies the requirement. If your app responds slower than the threshold, your Apdex score is impacted.
This threshold is customizable, so you can select what is appropriate for your application type.
You can read more about Apdex in our docs.
The subscripted number is your target response time for that tier. On the user agent (browser), the high-water mark is 7 seconds. You should check US-Only and make this number 2 to 4 seconds to be world class.
The app server tier must respond much faster. The default high-water mark NR sets is 0.5 seconds (500 milliseconds); a world-class page buffer flush would be 50-200 ms on average.
Remember, all this information is about aggregated averages, not instance data, which will have many outliers and a broad distribution.
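To make the threshold concrete, here is the standard Apdex formula in code: satisfied requests finish at or under the threshold T (the subscripted number), tolerating ones fall between T and 4T, and anything slower counts as frustrated. A hypothetical sketch:

```typescript
// Standard Apdex: (satisfied + tolerating / 2) / total, where
// satisfied means response <= T and tolerating means T < response <= 4T.
function apdex(responseSeconds: number[], thresholdT: number): number {
  const satisfied = responseSeconds.filter((t) => t <= thresholdT).length;
  const tolerating = responseSeconds.filter(
    (t) => t > thresholdT && t <= 4 * thresholdT
  ).length;
  return (satisfied + tolerating / 2) / responseSeconds.length;
}

// With T = 3.5 s: eight satisfied, two tolerating, zero frustrated
// => (8 + 2 / 2) / 10 = 0.9, which would display as 0.90 (3.5).
console.log(apdex([1, 2, 3, 1, 2, 3, 1, 2, 5, 9], 3.5));
```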
