On Azure Storage, I stream MP3 files using range requests. For that reason I set DefaultServiceVersion to "2011-08-18" for unversioned requests. I receive the range response headers correctly and can seek to the middle of an audio file in an HTML5 audio player.
However, I usually can't play the whole file, because streaming stops suddenly somewhere in the middle. Watching the requests in Fiddler, I can see that Azure Storage does not send the whole requested range; Fiddler shows this warning:
"Content-Length mismatch: Response Header indicated 6.318.692 bytes,
but server sent 2.007.994"
When I watch the request in Chrome Developer Tools, it also fails partway through the file. This happens quite frequently. Why does the request complete without returning the full requested byte range?
Check that you're not seeing a timeout. Quite often load balancing and similar features will terminate long-duration connections.
I would recommend enabling logging on your account (see http://msdn.microsoft.com/en-us/library/windowsazure/hh343270.aspx for more info on Logging / Metrics). This can be done easily via the Azure portal. Once it is enabled you will have more information to diagnose the issue from the service's perspective.
Joe
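One way to confirm the truncation from outside the browser is to issue the range requests yourself and resume from wherever the stream was cut. The sketch below is illustrative only; the helper names and sizes are my own, not part of the Azure SDK.

```python
# Minimal sketch of resuming an interrupted ranged download.
# The URL and byte counts are placeholders, not real values.
import urllib.request

def content_range_header(offset, end=None):
    """Build a Range header value for bytes offset..end, or open-ended."""
    return "bytes=%d-%s" % (offset, "" if end is None else str(end))

def download_with_resume(url, total_size):
    """Re-issue Range requests until total_size bytes have arrived."""
    data = bytearray()
    while len(data) < total_size:
        req = urllib.request.Request(
            url, headers={"Range": content_range_header(len(data))}
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            # The body may be shorter than requested if the server truncated it;
            # the loop simply asks again from the new offset.
            data.extend(resp.read())
    return bytes(data)
```

If the loop has to iterate more than once for a blob the browser also fails on, that points at the connection being cut server-side rather than a player bug.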
Related
I've encountered an interesting problem when trying to make an HTTP request from an Azure VM. When the request is run from this VM, the response never arrives. I tried custom C# code that makes an HTTP request, and Postman. In both cases the logs on the target API side show that the response has been sent, but no data is received on the origin VM. The exact same C# and Postman requests work outside this VM on multiple networks and machines. The only tool that actually works for this request on the VM is curl in a Bash terminal, but that is not an option given the current requirements.
Tried on multiple Azure VM sizes, on Windows 10 and Windows Server 2019.
The target API is on-premise hosted and it requires around 5 minutes for the data to be sent back. Payload is very small but due to the computing performed on the API side it takes a while to generate. Modifying this API is not an option.
So to be clear: the requests stay stuck until the client-side timeout is reached (if one was configured). Does anybody know what could cause this?
If these transfers take longer than 4 minutes without keep-alives, Azure will typically close the connection.
You should be able to see this by monitoring the connection with Wireshark.
TCP timeouts can be configured when using a Load Balancer, but you can also try adding keep-alives in your API server if possible.
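On the client side you can also enable TCP keep-alives on the socket so the idle connection carries traffic while the API computes. A rough Python sketch, assuming Linux-style socket options (the per-probe knobs are platform-specific):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keep-alives so idle intermediaries don't drop the socket."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # These finer-grained options exist on Linux; other platforms vary.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```

With the idle interval set below Azure's 4-minute cutoff, probes should keep the connection alive for the roughly 5-minute response time described above.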
Sometimes we get "Error reading FrontendRequest message content" exceptions in API Management. The backend calls are not actually failing, and their responses are what we expect. This is not very frequent (a handful per day), but I would like to find the reason.
Thanks in advance,
Jose
This means there were problems (details should be in the logs) reading the request content from the client, i.e. the client that made the call to the APIM service. It's normal to have some of these, since you usually don't control where the calling clients are or what their connection quality is. But if this happens persistently, or you do control your clients and are sure there are no problems with their connections, you might want to file a support request.
I'm able to successfully load test my bot server by getting the proper auth token from Microsoft's auth URL (basically through this page).
I was wondering whether this is a valid test of the service, considering that we're not actually hitting the Bot Framework's endpoint (which has rate limiting).
Is there another way to load test a bot service where I can replicate the Bot Framework's throttling/rate limits?
I ended up using load tests with Visual Studio and Visual Studio Team Services.
The reason I chose this approach is that you can set up a full suite of load tests. An Azure Bot Service is either a Web App or a Function App with an endpoint prepared for receiving messages via HTTP POST, so in the end it is just a web service.
You can set up load tests for different endpoints, including the number of hits to a selected endpoint. For bots you can, for instance, set up a test that sends 100 fake messages to the bot to check its performance.
You can read more under these two links below:
Load test your app in the cloud using Visual Studio and VSTS
Quickstart: Create a load test project
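Since the bot endpoint is just a web service accepting POSTed activities, the fake-message approach can be sketched in a few lines. The activity fields and endpoint below are illustrative assumptions, and auth is omitted for brevity:

```python
import json
import urllib.request

def build_activity(text, user_id="loadtest-user"):
    """A minimal Bot Framework-style message activity (fields are illustrative)."""
    return {"type": "message", "text": text, "from": {"id": user_id}}

def send_fake_messages(endpoint, count):
    """POST `count` fake activities at the bot endpoint, one after another."""
    for i in range(count):
        body = json.dumps(build_activity("test message %d" % i)).encode("utf-8")
        req = urllib.request.Request(
            endpoint, data=body, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req, timeout=30)
```

A real load test would add the auth token, concurrency, and timing capture, which is what the Visual Studio tooling handles for you.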
Unfortunately, as stated in the documentation you linked, the rates are not publicly available because of how often they are adjusted.
Regarding user-side throttling: this should not actually have an effect either way as long as you simulate reasonable traffic, and even if you go a bit overboard, an individual user hitting rate limiting is functionally equivalent to just having a bit more traffic. One user sending more messages to the bot is the same as three users sending the same number of messages slightly more slowly, and there's no limit on how many customers your bot may have. That said, a user getting a message, reading it, and typing up a response should not put themselves into a situation where they are rate-limited.
However, regarding bot-side throttling, it is useful to know whether your bot is sending messages too fast for the system. If you are only ever replying directly to messages from users, this will not be an issue, as the system is built with replying to each user message in mind. The only area where you might run into trouble is sending additional (or unsolicited) messages, but even here you should be OK as long as you stay within reasonable limits (i.e. if you aren't sending several messages back to a user as fast as possible for each message they send you, you will probably not have problems). You can set a threshold for bot replies within your channel at some reasonable-sounding limit to test this.
If you would like to see how your bot responds in cases where throttling is occurring (and not necessarily forcing it into tripping the throttling threshold), consider setting your custom channel to send 429 errors to your bot every so often so that it has to retry sending the message.
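If your custom channel injects 429s, the bot's sender needs retry logic. A hedged sketch of honoring `Retry-After` with a capped exponential fallback (the helper names are mine, not a Bot Framework API):

```python
import time
import urllib.error
import urllib.request

def backoff_delay(attempt, retry_after=None):
    """Honor Retry-After when present; otherwise back off exponentially, capped."""
    if retry_after is not None:
        return float(retry_after)
    return float(min(2 ** attempt, 30))

def post_with_retry(url, body, max_attempts=5):
    """Retry on HTTP 429, the status a throttling channel would return."""
    for attempt in range(max_attempts):
        try:
            req = urllib.request.Request(url, data=body)
            return urllib.request.urlopen(req, timeout=30)
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, err.headers.get("Retry-After")))
```

Injecting 429s on, say, every tenth response exercises exactly this path without needing to trip real throttling.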
I have a Node.js REST API with endpoints for users to upload assets (mostly images). I distribute the assets through a CDN. Right now, clients call my /assets/upload endpoint with a multipart form; the API creates the DB resource for the asset and then transfers the image to the CDN origin over SFTP. On success I respond with the URL of the uploaded asset.
I noticed that the most expensive operation for relatively small files is the SFTP connection to the origin.
So my first question is:
1. Is it a bad idea to always keep the connection alive so that I can always reuse it to sync my files?
My second question is:
2. Is it a bad idea to have my API handle the SFTP transfer to the CDN origin? Should I consider a CDN origin that can handle the HTTP request itself?
Short answer: (1) it is not a bad idea to keep the connection alive, but it comes with complications; I recommend trying without reusing connections first. And (2) the upload should go through the API, but there may be ways to optimize how the API-to-CDN transfer happens.
Long Answer:
1. Is it a bad idea to always keep the connection alive so that I can always reuse it to sync my files?
It is generally not a bad idea to keep the connection alive. Reusing connections can improve site performance, generally speaking.
However, it does come with some complications. You need to make sure the connection is up, and recreate it if it goes down. There are cases where the SFTP client thinks the connection is still alive when it actually isn't, and you need to retry. You also need to make sure that while one request is using a connection, no other request can use it, so you would probably want a pool of connections in order to service multiple requests at the same time.
If you're lucky, the SFTP client library already handles this (check whether it supports connection pools). If not, you will have to do it yourself.
My recommendation: try it without reusing the connection first, and see whether the site's performance is acceptable. If it isn't, consider reusing connections, but be careful.
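If you do end up rolling your own, the pooling-plus-liveness logic looks roughly like this. It's a generic sketch: `factory` would create an SFTP client and `is_alive` would probe it (e.g. with a cheap stat call); both are placeholders here:

```python
import queue

class ConnectionPool:
    """Tiny pool sketch: check out a connection, replace it if it went stale."""

    def __init__(self, factory, is_alive, size=4):
        self._factory = factory      # creates a new connection (e.g. SFTP client)
        self._is_alive = is_alive    # probes whether a connection still works
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self):
        conn = self._idle.get()      # blocks, so two requests never share one
        if not self._is_alive(conn): # stale connection: transparently reconnect
            conn = self._factory()
        return conn

    def release(self, conn):
        self._idle.put(conn)
```

The blocking `Queue` gives you the "no two requests on one connection" guarantee for free; the liveness check on acquire covers the half-dead connections mentioned above.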
2. Is it a bad idea to have my API handle the SFTP transfer to the CDN origin? Should I consider a CDN origin that can handle the HTTP request itself?
It is generally a good idea to have the HTTP request go through the API for a couple of reasons:
For security reasons, you want your CDN upload credentials stored on your API, not on your client (website or mobile app). You should assume that your website's code can be seen (via view source) and that mobile apps can generally be decompiled or reverse engineered, so anyone would be able to see credentials embedded in the code.
This hides implementation details from the client, so you can change this in the future without the client code needing to change.
@tarun-lalwani's suggestion is actually a good one: use S3 to store the image, and use a Lambda trigger to upload it to the CDN. There are a couple of Node.js libraries that let you stream the image through your API's HTTP request directly to the S3 bucket. This means you don't have to worry about disk space on your machine instance.
Regarding your question on @tarun-lalwani's comment: one way is to use the S3 image URL until the Lambda function has finished. S3 can serve images too, if given the proper permissions. Once the Lambda function finishes uploading to the CDN, you just replace the image path in your DB.
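The URL swap itself is trivial; a small sketch (the URL formats below assume standard S3 virtual-hosted-style addressing, and the CDN base is a placeholder):

```python
# Serve from the CDN once the Lambda copy finishes; fall back to S3 until then.

def asset_url(bucket, key, region="us-east-1", cdn_base=None):
    """Return the CDN URL when available, otherwise the S3 URL."""
    if cdn_base:
        return "%s/%s" % (cdn_base.rstrip("/"), key)
    return "https://%s.s3.%s.amazonaws.com/%s" % (bucket, region, key)
```

Your DB would initially store the S3 form and the Lambda's completion handler would rewrite it to the CDN form.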
In our ASP.NET application we have an upload feature. When Fiddler is running on the client (acting as a system proxy), the upload is quick (10 MB in 20 seconds); however, when Fiddler is not running on the client it takes about 5 minutes. Does anyone have any suggestions?
(Converting my comment to an answer.)
Fiddler isn't replacing that setting; rather, as a local proxy, it buffers the complete POST request locally (so the client's small send buffer doesn't matter) and then blasts it to the server as quickly as the server will accept it. The send buffer size was increased in later browser versions.
For IE6, see the steps in http://support.microsoft.com/kb/329781 to adjust the buffer size.
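For application code you control (rather than IE6), the same lever is the socket send buffer. A rough sketch of requesting a larger one, assuming the OS honors or rounds the request:

```python
import socket

def set_send_buffer(sock, size):
    """Request a larger send buffer; the OS may round or cap the value."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    # Read back what the OS actually granted (Linux typically doubles it).
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
```

A small send buffer forces the sender to wait for ACKs after each tiny chunk, which is exactly the slow-without-Fiddler behavior described above.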