I use Microsoft Azure Cognitive Services, specifically the Speaker Recognition API.
I have a "Pay as you go" subscription on the Standard pricing tier (5 calls per second). I use Unity to record the voice and send the audio to the server.
Every 5 seconds, I send the audio to the server from the Update function of the C# script.
But after around 30 seconds of speech, I get error 429: code: RateLimitExceeded, message: Rate limit is exceeded. Try again later.
If anyone has used it, do you know why I get this response from the server, even though I have a subscription that should avoid this limit?
I contacted Microsoft support, and they told me that the subscription is active, but I don't see any direct debit.
If you look at the note in the README here: https://github.com/Microsoft/Cognitive-SpeakerRecognition-Windows
Note: Make sure that the number of requests per minute resulting from tuning the step size won't exceed your subscription's rate limit.
For example, applying a step size of 1 on an audio file of size 1 minute will result in 60 requests. Applying a step size of 2 on the same audio file will result in 30 requests. For your convenience, we have provided sample audios to enroll 2 speakers and a sample audio for streaming. These audios are found under SpeakerRecognition\Windows\Streaming\SPIDStreamingAPI-WPF-Samples\SampleAudios.
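The arithmetic in that note can be sketched as a quick check before you pick a step size (the 5 calls/s limit is the Standard-tier figure from the question; everything else is illustrative):

```python
import math

def requests_for_audio(audio_seconds: float, step_seconds: float) -> int:
    """Number of API calls produced by sliding over the audio with the given step."""
    return math.ceil(audio_seconds / step_seconds)

def within_rate_limit(audio_seconds: float, step_seconds: float,
                      limit_per_second: float = 5.0) -> bool:
    """Check whether issuing the calls evenly over the audio duration
    stays under the subscription's rate limit."""
    calls = requests_for_audio(audio_seconds, step_seconds)
    return calls / audio_seconds <= limit_per_second

# Matches the README example: a 60-second file with step size 1 -> 60 requests,
# step size 2 -> 30 requests.
print(requests_for_audio(60, 1))  # 60
print(requests_for_audio(60, 2))  # 30
```

This only models average rate; a client that fires bursts of calls (for example several calls inside one Update tick) can still trip the per-second throttle even when the average looks safe.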
Also, according to the API page at https://azure.microsoft.com/en-us/services/cognitive-services/speaker-recognition/
The audio file format must meet the following requirements:
Container - WAV
Encoding - PCM
Rate - 16K
Sample Format - 16 bit
Channels - Mono
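Those requirements can be checked locally before upload with Python's standard-library `wave` module. This is a sketch: `wave` only opens PCM WAV containers, so a successful open plus the three property checks covers all five requirements.

```python
import wave

def meets_speaker_recognition_format(path: str) -> bool:
    """Return True if the file is a PCM WAV that is 16 kHz, 16-bit, mono."""
    try:
        with wave.open(path, "rb") as wav:  # wave.Error if not a PCM WAV container
            return (wav.getframerate() == 16000    # 16K sample rate
                    and wav.getsampwidth() == 2    # 16-bit samples
                    and wav.getnchannels() == 1)   # mono
    except wave.Error:
        return False
```

Rejecting bad files client-side avoids wasting calls from your rate-limited quota on uploads the service would refuse anyway.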
Is there any per-device limit on messages received by Azure IoT Hub?
If so, can I remove or raise the upper limit without registering additional devices?
I tested two things to check whether I can place enough load (ideally, 18,000 messages/s) on Azure IoT Hub in future load tests.
① Send a certain number of MQTT messages from one VM.
② Send a certain number of MQTT messages from two VMs.
I expected the traffic of ② to be twice that of ①, but it wasn't. The maximum messages per minute on IoT Hub in ② was not much different from ①; both were around 3.6k messages/min. At that time, I had registered only one device on IoT Hub, so I added another device and ran ② again to see whether the second device could increase the traffic. As a result, it did, and IoT Hub showed more messages per minute.
Judging from this result, I suspect IoT Hub has some kind of per-device limit on receiving messages, but I am not sure. If anyone knows about this limit, could you tell me what kind of limit it is and how to raise it without registering additional devices? In production we use only one device.
For your information, I know there is a "unit" setting to increase throughput in IoT Hub. To increase the load, I changed the number of units from 2 to 20 in both ① and ②. However, it did not increase messages/min in IoT Hub. I'd also like to know why the units did not work as expected.
Thank you in advance for reading. Any comment would help.
Every basic (B1, B2, B3) or standard (S1, S2, S3) IoT Hub SKU unit has a specific daily message quota, per https://azure.microsoft.com/en-us/pricing/details/iot-hub/. A single IoT Hub can support 1 million devices, and there is no per-device cost associated, only the msg/day quota above.
For example, the S1 SKU has a 400,000 msg/day quota per unit, and you can add multiple S1 units to increase capacity. S2 has a 6,000,000 msg/day quota per unit and S3 has 300,000,000 msg/day per unit, and more units can be added.
Before this limit is reached, IoT Hub raises an alert, which can be used to automatically add more units or jump to a higher SKU.
Regarding your test, there are specific throttling limits in place to avoid misuse of the service; see
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-quotas-throttling
As an example, for 18,000 msg/s you will need 3 units of the S3 SKU (each with a 6,000 msg/s rate limit). In addition, there are other limits, such as how quickly connections can be attempted; if you use the Azure IoT SDKs, the built-in retry logic helps overcome this, otherwise you need your own retry policy. Basically, you don't want a million devices trying to connect at the same time; IoT Hub only accepts connections at a certain rate. This is not a concurrent-connection limit but the rate at which new connections are accepted.
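The unit count can be estimated from the per-unit rate. The 6,000 msg/s per S3 unit figure below is the one quoted in this answer; check the throttling documentation above for current values.

```python
import math

# Assumed sustained device-to-cloud ingress rate per S3 unit (msg/s),
# as quoted in this answer -- verify against the current throttling docs.
S3_MSGS_PER_SEC_PER_UNIT = 6000

def s3_units_needed(target_msgs_per_sec: float) -> int:
    """Smallest number of S3 units whose combined rate limit covers the target."""
    return math.ceil(target_msgs_per_sec / S3_MSGS_PER_SEC_PER_UNIT)

print(s3_units_needed(18000))  # 3 units for the 18,000 msg/s target
```

Remember this only sizes the throttling rate; the daily message quota per unit is a separate ceiling and may require more units than the rate calculation alone.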
How do I calculate the incoming bytes per second for an event hub namespace?
I do not control the data producer and so cannot predict the incoming bytes upfront.
I am interested in adjusting the maximum throughput units I need, without using the auto-inflate feature.
1 TU provides 1 MB/s ingress & 2 MB/s egress, but the metrics are reported per minute, not per second.
Can I make a decision based on the sum/avg/max incoming bytes reported in the Azure portal?
I believe you'll need to use Stream Analytics to query your stream and, based on the query output, change the TU count on your Event Hub.
You could also try Azure Monitor, but I believe it won't group per second as you need, so the first option is the better bet.
Per-second metrics cannot be reliable, due to the very nature of potentially intermittent spikes in the traffic in and out. One-minute averages are good to monitor, and you can easily take action via a Logic App.
Check the messaging metrics to monitor here: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-metrics-azure-monitor#message-metrics
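Given the per-minute "incoming bytes" metric, a required-TU estimate can be sketched as below. The 1 MB/s-per-TU ingress figure comes from the question; the headroom multiplier is an assumption to absorb sub-minute spikes that a one-minute average hides.

```python
import math

TU_INGRESS_BYTES_PER_SEC = 1 * 1024 * 1024  # 1 MB/s ingress per throughput unit

def tus_for_incoming_bytes(max_incoming_bytes_per_minute: float,
                           headroom: float = 1.2) -> int:
    """Estimate required TUs from the worst observed one-minute metric.

    The per-minute maximum is averaged down to bytes/s, so short spikes
    within the minute can still exceed it -- hence the headroom factor.
    """
    avg_bytes_per_sec = max_incoming_bytes_per_minute / 60
    return max(1, math.ceil(avg_bytes_per_sec * headroom / TU_INGRESS_BYTES_PER_SEC))

print(tus_for_incoming_bytes(300 * 1024 * 1024))  # 300 MB/min -> 5 MB/s avg -> 6 TUs
```

Use the maximum (not the average) of the per-minute metric over a representative period, since provisioning for the mean guarantees throttling at the peaks.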
I am using the Amazon Rekognition API to analyze my video to find and search for faces.
A one-minute video has now been processing for nearly an hour, but I have not received any result.
Is this normal?
The SNS topic needed to be subscribed to an SQS queue:
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html#SendMessageToSQS.sqs.permissions
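Once the topic is subscribed to the queue, the completion notification arrives as JSON nested inside the SNS envelope's "Message" field. A sketch of unpacking it; the inner JobId/Status field names are an assumption based on Rekognition's notification format, so verify against a real message from your queue:

```python
import json

def parse_rekognition_notification(sqs_body: str) -> tuple:
    """Extract (job_id, status) from an SQS message delivered via SNS.

    SNS wraps the service notification in its own envelope, so the
    payload is a JSON string nested inside the 'Message' field.
    """
    envelope = json.loads(sqs_body)
    notification = json.loads(envelope["Message"])
    return notification["JobId"], notification["Status"]

# Hypothetical message body, for illustration only:
body = json.dumps({"Message": json.dumps({"JobId": "abc123", "Status": "SUCCEEDED"})})
print(parse_rekognition_notification(body))  # ('abc123', 'SUCCEEDED')
```

If the notification never arrives, check the queue policy from the linked page first; a missing SendMessage permission fails silently, which looks exactly like a job that never finishes.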
I am working on a project where we record the temperature and humidity of multiple devices and, on the cloud side, use Azure Stream Analytics to find whether any device has breached its threshold limits.
We need to monitor each device for 15 minutes; if a device constantly breaches its limits for that time, we need to raise an alert.
The tricky part: if the device then keeps breaching its threshold for another 30 minutes, we need to raise another alert, and keep raising alerts every 30 minutes until the device is back within normal limits.
I can use a sliding-window query in Stream Analytics to find which devices are out of threshold for the first 15 minutes, but how can I detect the subsequent 30-minute breaches and raise those alerts?
I'd suggest sending one output from your current ASA job to a new event hub, and having a second ASA job monitor the data from that event hub over the 30-minute threshold.
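Whichever way the jobs are split, the alerting logic itself amounts to a small per-device state machine: first alert after 15 minutes of continuous breach, then a repeat every 30 minutes until the device recovers. A Python sketch of that logic, using the window lengths from the question (the class and its samples are illustrative, not ASA syntax):

```python
FIRST_ALERT_AFTER = 15 * 60   # seconds of continuous breach before the first alert
REPEAT_ALERT_EVERY = 30 * 60  # seconds between repeat alerts while still breaching

class BreachMonitor:
    """Tracks one device; feed it (timestamp, is_breaching) samples in order."""

    def __init__(self):
        self.breach_start = None  # when the current breach began
        self.last_alert = None    # when we last alerted during this breach

    def update(self, ts: float, is_breaching: bool) -> bool:
        """Return True if an alert should fire at this sample."""
        if not is_breaching:
            # Back within limits: reset so a new breach starts the cycle over.
            self.breach_start = None
            self.last_alert = None
            return False
        if self.breach_start is None:
            self.breach_start = ts
        if self.last_alert is None:
            if ts - self.breach_start >= FIRST_ALERT_AFTER:
                self.last_alert = ts
                return True
        elif ts - self.last_alert >= REPEAT_ALERT_EVERY:
            self.last_alert = ts
            return True
        return False
```

With one sample per minute and a device breaching continuously, alerts fire at minutes 15, 45, 75, ... exactly as the question describes.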
On this documentation page there is the following limitation of Application Insights documented:
Up to 500 telemetry data points per second per instrumentation key (that is, per application). This includes both the standard telemetry sent by the SDK modules, and custom events, metrics and other telemetry sent by your code.
However, it doesn't explain what the implications of that limit are.
a) Does it buffer and throttle, but still persist all data eventually? Say 1,000 data points get pushed within a second: will it persist the first 500, then wait a bit and push the other 500?
or
b) Does it just drop/not log data? Say 1,000 data points get pushed within a second: will only the first 500 be persisted and the other 500 never stored?
It is the latter (b), with the caveat that ALL data will start to be throttled in this case: once a rate over 500 data points per second (100 for free apps; see https://azure.microsoft.com/en-us/documentation/articles/app-insights-data-retention-privacy/ for details) is detected, the data collection endpoint starts rejecting all data from that instrumentation key until the rate is back under 500.
EDIT: Further information from Bret Grinslade:
The current implementation averages over one minute -- so if you send 30K in 1 minute (500*60) it will throttle your application. The HTTP response will tell the SDK to retry later. If the incoming rate never comes down, the response will tell the SDK to drop the data. We are working on other features to improve this experience -- pre-aggregation on the client, improved burst data rates, etc.
Application Insights now has an ingestion throttling limit of 16K events per second: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-pricing
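The behavior described above (an average taken over one minute, with everything rejected once the minute's total exceeds the cap) can be modeled as a simple per-minute counter. This is an illustrative sketch of the throttling semantics, not the actual service implementation; 500 points/s is the limit quoted in the question.

```python
POINTS_PER_SEC_LIMIT = 500
WINDOW_SECONDS = 60

class IngestionThrottle:
    """Minimal model of a one-minute averaged ingestion cap.

    Once the current minute's total exceeds limit * 60, every further
    point in that minute is rejected (not buffered), matching answer (b).
    """

    def __init__(self, limit_per_sec: int = POINTS_PER_SEC_LIMIT):
        self.cap_per_window = limit_per_sec * WINDOW_SECONDS
        self.window_start = None
        self.count = 0

    def accept(self, ts: float) -> bool:
        """Return True if a point arriving at time ts (seconds) is ingested."""
        if self.window_start is None or ts - self.window_start >= WINDOW_SECONDS:
            self.window_start = ts  # a new minute starts; the counter resets
            self.count = 0
        self.count += 1
        return self.count <= self.cap_per_window

throttle = IngestionThrottle()
results = [throttle.accept(0.0) for _ in range(30001)]
print(results.count(False))  # 1 -- only the 30,001st point in the minute is rejected
```

This also shows why bursts are forgiving up to a point: 30,000 points in one second still fit, because only the minute's total is compared against the cap.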