Azure Maps - Can I get the speed limit of the roads?

I'm trying to get the speed limit of a specific point on the map (lat, lng) using an API, but I can't find it in the Azure Maps documentation. I found it in Bing Maps, but I wanted to use Azure Maps instead if possible, as it gives you 250k free map requests per month.
Thanks!

Yes, you can access speed limit data in Azure Maps by using the reverse geocoding service and setting the "returnSpeedLimit" parameter to true: https://learn.microsoft.com/en-us/rest/api/maps/search/getsearchaddressreverse
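For example, a minimal sketch of that call, assuming Node 18+ (for the global fetch) and a subscription key in an environment variable; the exact path of the speedLimit field in the response is my assumption, so inspect the full payload if it doesn't match:

```typescript
// Hedged sketch: reverse geocode a point and ask for the posted speed limit.
const key = process.env.AZURE_MAPS_KEY;
const lat = 47.59093;
const lon = -122.33263;

const url =
  "https://atlas.microsoft.com/search/address/reverse/json" +
  `?api-version=1.0&query=${lat},${lon}` +
  `&returnSpeedLimit=true&subscription-key=${key}`;

const response = await fetch(url);
const data = await response.json();

// When the road has a posted limit, it is returned as a string like "40.00KPH".
console.log(data.addresses?.[0]?.address?.speedLimit ?? data);
```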
You can also use the batch reverse geocoding service if you have a lot of data points: https://learn.microsoft.com/en-us/rest/api/maps/search/postsearchaddressreversebatch
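For the batch service, each item in the body is the query string of a single reverse geocoding request; a rough sketch (large batches are handled asynchronously, so check the linked docs for the polling flow):

```typescript
// Hedged sketch: batch-reverse-geocode several points, each with returnSpeedLimit.
const key = process.env.AZURE_MAPS_KEY;
const points = [
  { lat: 47.59093, lon: -122.33263 },
  { lat: 47.62039, lon: -122.34928 },
];

const body = {
  batchItems: points.map(p => ({
    query: `?query=${p.lat},${p.lon}&returnSpeedLimit=true`,
  })),
};

const response = await fetch(
  "https://atlas.microsoft.com/search/address/reverse/batch/json" +
    `?api-version=1.0&subscription-key=${key}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  }
);

// Small batches may return results directly; larger ones answer 202 with a
// Location header to poll for the finished batch.
console.log(response.status, await response.text());
```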
You might also find the Traffic Flow Segment API interesting. It will tell you the current speed of traffic on a section of road: https://learn.microsoft.com/en-us/rest/api/maps/traffic/gettrafficflowsegment The free-flow speed isn't the speed limit, but the average speed at which vehicles travel that section of road when there is no traffic.
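A rough sketch of that call (the zoom and style values are arbitrary examples):

```typescript
// Hedged sketch: current vs. free-flow speed for the road segment closest to a point.
const key = process.env.AZURE_MAPS_KEY;
const lat = 47.59093;
const lon = -122.33263;

const url =
  "https://atlas.microsoft.com/traffic/flow/segment/json" +
  `?api-version=1.0&style=absolute&zoom=10` +
  `&query=${lat},${lon}&subscription-key=${key}`;

const response = await fetch(url);
const data = await response.json();

// flowSegmentData carries currentSpeed and freeFlowSpeed (km/h by default).
console.log(data.flowSegmentData?.currentSpeed, data.flowSegmentData?.freeFlowSpeed);
```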
Similarly, the routing service can return the current speed due to traffic over each segment of a route if you set the "sectionType" parameter to "traffic". https://learn.microsoft.com/en-us/rest/api/maps/route/getroutedirections
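And a sketch of the routing call with traffic sections; the coordinates are placeholders, and the exact fields on each traffic section are worth verifying against the linked reference:

```typescript
// Hedged sketch: request a route and include traffic sections, which describe
// slowdowns (delay, effective speed) over spans of the route.
const key = process.env.AZURE_MAPS_KEY;
const start = "47.59093,-122.33263";
const end = "47.62039,-122.34928";

const url =
  "https://atlas.microsoft.com/route/directions/json" +
  `?api-version=1.0&query=${start}:${end}` +
  `&sectionType=traffic&subscription-key=${key}`;

const response = await fetch(url);
const data = await response.json();

// Each section references a range of route points plus traffic details.
console.log(JSON.stringify(data.routes?.[0]?.sections, null, 2));
```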

Related

How to control the usage of APIs by consumers during a given period (throttle) in an Azure Function App HTTP trigger without using Azure API Gateway

How can I control the usage of APIs by consumers during a given period in an Azure Function App HTTP trigger? Simply put, how do I throttle requests once the request limit is exceeded? Please suggest a solution that doesn't use Azure API Gateway.
The only control you have over host creation in Azure Functions is an obscure application setting: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT. This implies that you can control the number of hosts that are generated, though Microsoft claims that "it's not completely foolproof" and "is not fully supported".
From my own experience it only throttles host creation effectively if you set the value to something pretty low, i.e. less than 50. At larger values, its impact is pretty limited. It's been implied that this feature will be worked on in the future, but the corresponding issue has been open on GitHub with no update since July 2017.
For more details, you could refer to this article.
You can use the initialVisibilityDelay property of the CloudQueue.AddMessage function as outlined in this blog post.
This delays when each message becomes visible to consumers and, if implemented correctly using the leaky bucket algorithm or an equivalent, prevents the 429 errors.
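The linked post uses the older storage SDK; in the current JavaScript SDK (@azure/storage-queue), the equivalent of initialVisibilityDelay is, as far as I can tell, the visibilityTimeout option on sendMessage. A minimal sketch, with the connection string, queue name, and delay as placeholders:

```typescript
import { QueueClient } from "@azure/storage-queue";

// Hedged sketch: enqueue work with a per-message visibility delay so downstream
// processing is spread out instead of hitting the API all at once.
const queueClient = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING!,
  "throttled-work"
);

await queueClient.createIfNotExists();

// The message stays invisible for 30 seconds before any consumer can pick it up;
// staggering these delays gives a simple leaky-bucket style throttle.
await queueClient.sendMessage(JSON.stringify({ jobId: 42 }), {
  visibilityTimeout: 30,
});
```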

How is the Monitor - Cosmos DB (preview) Requests metric calculated?

Azure provides a monitor for incoming requests to Cosmos DB. While I was the only one working on my Cosmos DB, I ran a simple select-vertex statement (e.g., g.V('id')) and then looked at the incoming request metric: it showed around 10, even though I know for sure I was the only person who accessed it. I also tried traversing the graph in a single select query, and the request count was huge (around 100).
Has anybody else noticed this with the metrics? We are assuming the request count per hour in production is huge and is causing the performance slowness. Is the metric trustworthy, or how else can I find the actual incoming requests to Cosmos DB?

Programmatically get the amount of instances running for a Function App

I'm running an Azure Function App on the Consumption Plan and I want to monitor the number of instances currently running. Using a REST API endpoint of the format
https://management.azure.com/subscriptions/{subscr}/resourceGroups/{rg}
/providers/Microsoft.Web/sites/{appname}/instances?api-version=2015-08-01
I'm able to retrieve the instances. However, the result doesn't match the information that I see in Application Insights / Live Metrics Stream.
For example, right now App Insights shows 4 servers online, while the API call returns just one (the GUID of this one instance is also among the App Insights GUIDs).
Who can I trust? Is there a better way to get instance count (e.g. from App Insights)?
UPDATE: It looks like the data from the REST API is wrong.
I was sending 10000 messages to the queue, logging each function call with respective instance ID which processed the request.
While messages keep coming in and the backlog grows, instance count from REST API seems to be correct (scaled from 1 to 12). After sending stops, the reported instance count rapidly goes down (eventually back to 1, while processors are still busy).
But based on the speed and the execution logs I can tell that the actual instance count kept growing and ended up at 15 instances at the moment the last message was processed.
UPDATE2: It looks like the SDK refuses to report more than 20 servers. The metric flattens out at 20, while App Insights kept growing steadily and is already showing 41.
Based on my understanding, we need to use the REST API endpoint to retrieve the instances. App Insights can be configured for multiple Web Apps, so the number of servers online in App Insights may cover multiple Web Apps.
Update:
Based on my test, the number shown in Application Insights may not be real time.
During my test, when the Function App scaled out, I could get multiple instances with the REST API, and I could also check the number of servers online in App Insights.
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourcegroup}/providers/Microsoft.Web/sites/{functionname}/instances?api-version=2016-08-01
But after I finished the test, the number of instances returned by the REST API was 1, which I believe is the correct result.
At the same time, when I checked in Application Insights, the number of servers online was still the maximum number from my test.
After a while, the number of servers online in Application Insights also dropped to 1.
So if you want to get the number of instances for an Azure Function, my suggestion is to use the REST API, for example as sketched below.
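A minimal sketch of calling that endpoint from Node, assuming the @azure/identity package for the ARM token and Reader access on the Function App; the subscription, resource group, and app names are placeholders:

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Placeholders: fill in with your own identifiers.
const subscriptionId = "<subscription-id>";
const resourceGroup = "<resource-group>";
const functionAppName = "<function-app-name>";

// Acquire an ARM token (works with az login, managed identity, etc.).
const credential = new DefaultAzureCredential();
const token = await credential.getToken("https://management.azure.com/.default");

const url =
  `https://management.azure.com/subscriptions/${subscriptionId}` +
  `/resourceGroups/${resourceGroup}/providers/Microsoft.Web` +
  `/sites/${functionAppName}/instances?api-version=2016-08-01`;

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${token.token}` },
});
const body = await res.json();

// Each entry in "value" is one instance; the array length is the count
// (subject to the reliability caveats discussed above and below).
console.log(`Instances reported: ${body.value?.length ?? 0}`);
```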
Update2:
As David Ebbo mentioned, the REST API is not always reliable:
Unfortunately, the REST API is not always reliable. Specifically, when a Function App scales across multiple scale units, only the instances from the 'home' scale unit are reflected. You probably will not see this in a smallish test, but likely will if you start scaling out widely (say over 20 instances).

How is the cost for Azure Function proxy calculated?

We have lots of images in Azure Blob Storage (LRS Hot). We estimate around 15 million downloads per month for a total of 5000 GB of egress (files are on average 350 kB). I can calculate the price for Blob Storage, but the Function proxy cost is unknown. The Azure Functions pricing document doesn't say anything about proxy functions, and in particular nothing about bandwidth.
Question 1: Are these calculations correct?
Execution count price is €0,169 per million executions, which equals 15 × €0,169 = €2,54/month.
GB-s price is €0,000014/GB-s and memory usage is rounded up to the nearest 128 MB. If the file download time is 0,2 s and memory is 128 MB, we have 0,2 × (128/1024) × 15000000 × 0,000014 = €5,25/month.
Question 2: What about bandwidth? Is there any cost for that?
Q1: Mostly yes.
Azure Functions Proxies (Preview) work just like regular functions, meaning that any routing done by your proxy counts as one execution. Also, just like standard functions, a proxy uses your GB-s while it's running. Your calculation approach is correct, with the caveat that reading from blob storage is actually a streaming activity, which will consume a fixed amount of memory multiplied by the time it takes each file to download.
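A quick sanity check of Question 1's arithmetic, using only the figures quoted in the question (0,2 s per download, 128 MB, 15 million executions, €0,169 per million executions, €0,000014 per GB-s):

```typescript
// Recompute the question's estimate: execution-count cost plus GB-s cost.
const executionsPerMonth = 15_000_000;
const pricePerMillionExecutions = 0.169; // EUR
const pricePerGbSecond = 0.000014;       // EUR
const memoryGb = 128 / 1024;             // rounded up to the nearest 128 MB
const secondsPerExecution = 0.2;

const executionCost =
  (executionsPerMonth / 1_000_000) * pricePerMillionExecutions;
const gbSecondCost =
  executionsPerMonth * secondsPerExecution * memoryGb * pricePerGbSecond;

// Roughly €2,54/month for executions and €5,25/month for GB-s, as in the question.
console.log(executionCost.toFixed(2), gbSecondCost.toFixed(2));
```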
Q2: This works the same way as Azure App Service. From the pricing page:
165 MB outbound network traffic included, additional outbound network bandwidth charged separately.

How to scale a nodejs app

In order to predict our operating costs, my new associates and I would like to estimate our hosting needs.
Our application would be a public one, involving an increasing number of users.
We found that, for Node.js applications, we basically have 2 options:
As a service, like Heroku
Take a raw server, dedicated or virtual, like OVH here in France
Specifications:
The server would be essentially a backend one, serving resources "REST-like" over socket.io (with sails.js' implementation, sails.io.js)
The usage would basically be, for each user:
Making a search: the server takes a "request" (socket event), does a reasonable amount of computation (involving a little math), and returns a reasonable number (< 1000) of "responses" (socket events), taken from a database, as JSON
The user would make, say, 3 requests in a row per usage
Each user would use the application twice a day
In the background, each user would send their location to the server, still "REST-like" over the socket, say, every minute
Question
I'd just like to know: what, basically, would be the process to estimate the kind of server we have to purchase? We would like to "scale as we grow" the server, but we still have to make plans, and I can't really figure out how to predict the need for 10,000 users, for example.
Would this be about calculating a "per user" server performance unit (RAM, CPU, "dyno") and network unit (bandwidth)?
Thank you very much =)
It's been 7 months, but as an answer: try Google Compute Engine and use its autoscaling to scale as you go. Node.js works on it, and you can install any other packages you need.
Load balancing is handled for you by Google as well. You pay extra, but you save a lot of time that would otherwise go into researching and developing around scaling issues.
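To make the "per user" estimation idea from the question concrete, here is a rough back-of-the-envelope sketch using only the numbers given above (3 searches per usage, two usages per day, one location update per minute, assumed to run continuously); the per-request CPU costs are hypothetical placeholders you would replace with figures from a load test:

```typescript
// Back-of-the-envelope capacity estimate built from the question's own numbers.
const users = 10_000;

const searchesPerUserPerDay = 3 * 2;          // 3 requests per usage, twice a day
const locationUpdatesPerUserPerDay = 24 * 60; // one update per minute, all day

const searchesPerSecond = (users * searchesPerUserPerDay) / 86_400;
const locationUpdatesPerSecond = (users * locationUpdatesPerUserPerDay) / 86_400;

// Hypothetical per-request CPU cost on one core; measure these for real handlers.
const perSearchMs = 50;
const perLocationUpdateMs = 2;

const busyMsPerSecond =
  searchesPerSecond * perSearchMs + locationUpdatesPerSecond * perLocationUpdateMs;

// Fraction of a single core kept busy; values above 1 mean more cores/instances.
console.log(
  `~${searchesPerSecond.toFixed(1)} searches/s, ` +
  `~${locationUpdatesPerSecond.toFixed(0)} location updates/s, ` +
  `~${(busyMsPerSecond / 1000).toFixed(2)} cores needed`
);
```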
