Azure Web Role Internal Endpoint - Not Load Balanced

The Azure documentation says that internal endpoints on a web role will not be load balanced. What is the practical ramification of this?
Example:
I have a web role with 20 instances. If I define an internal endpoint for that web role, what is the internal implementation? For example, will all 20 instances still service this endpoint? Can I obtain a specific endpoint for each instance?
We have a unique callback requirement that could be nicely served by utilizing the normal load-balancing behavior on the public endpoint while also having each instance expose an internal endpoint. Based on the published numbers for endpoint limits, this does not seem possible. So, when defining an internal endpoint, is it "1 per instance", or what? Do all of the role instances service the endpoint? What does Microsoft mean when they say that the internal endpoint is not load balanced? Does all the traffic just flow to one instance? That would not make sense.

First, let's clarify the numbers and limitations. The endpoint limits apply to roles, not to instances. If you are unsure about, or still confusing, the terms role and instance, you can check out my blog post on that. So, the limit is per role.
Now for the differences between the endpoint types - I have a blog post describing them here. In short, an internal endpoint only opens communication internally, within the deployment. That's why it is internal. No external traffic (from the Internet) can reach an internal endpoint. In that sense it is not load balanced, because no traffic goes through a load balancer! Traffic to internal endpoints flows only between role instances (possibly via some internal routing hardware) but never leaves the deployment boundary. Having said that, it should already be clear that no Internet traffic can be sent to an internal endpoint.
A side note - an InputEndpoint, however, is reachable from the Internet as well as from inside the deployment. But it is load balanced, since the traffic to an InputEndpoint comes through the load balancer from the Internet.
Back to the numbers. Let's say you have 1 web role with 1 input endpoint and 1 internal endpoint. That makes a total of 2 endpoints for your deployment. Even if you spin up 50 instances, you still have just 2 endpoints that count toward the total endpoint limit.
Can you obtain a specific endpoint for a specific instance? Certainly yes - via the RoleEnvironment class. It has a Roles collection; each Role has Instances, and each Instance has InstanceEndpoints.
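To make that concrete, here is a minimal sketch (the role name "MyWorker" and the endpoint name "InternalTcp" are hypothetical - use the names from your own ServiceDefinition.csdef):
using System.Net.Sockets;
using Microsoft.WindowsAzure.ServiceRuntime;

// Enumerate every instance of the role and connect to its internal
// endpoint directly - there is no load balancer in between.
foreach (var instance in RoleEnvironment.Roles["MyWorker"].Instances)
{
    var endpoint = instance.InstanceEndpoints["InternalTcp"].IPEndpoint;
    using (var client = new TcpClient())
    {
        client.Connect(endpoint); // direct instance-to-instance connection
        // ... exchange data over client.GetStream() ...
    }
}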
Hope this helps!

The endpoints are defined at the role level and instantiated for each instance.
An input endpoint has a public IP address making it accessible from the internet. Traffic to that input endpoint is load-balanced (with a round-robin algorithm) among all the instances of the role hosting the endpoint.
An internal endpoint has no public IP address and is only accessible from inside the cloud service OR a virtual network including that cloud service. Windows Azure does not load balance traffic to internal endpoints - each role instance endpoint must be individually addressed. Ryan Dunn has a nice post showing a simple example of implementing load balanced interaction with an internal endpoint hosting a WCF service.
The Spring Wave release introduced a preview of the instance input endpoint, which is a public IP endpoint that is port-forwarded to a specific role instance. This, obviously, is not load balanced but provides a way to connect directly to a specific instance.

Just trying to make things more concise and concrete:
// get a list of all instances of role "MyRole"
var instances = RoleEnvironment.Roles["MyRole"].Instances;
// pick an instance at random
var instance = instances[new Random().Next(instances.Count)];
// for that instance, get the IP address and port for the endpoint "MyEndpoint"
var endpoint = instance.InstanceEndpoints["MyEndpoint"].IPEndpoint;
Think of internal endpoints as a discovery mechanism for finding your other VMs.

Related

How can I route outbound traffic from an App Service integrated with a VNet containing a Service Endpoint to an external Azure hosted API?

I'm trying to secure my containerized web app running on a Premium V2 App Service Plan. I've enabled Service Endpoints on an integration subnet so the different App Services restrict incoming traffic from each other, except for the frontend (all of them are integrated with the VNet, and all have incoming traffic restricted to that VNet except for the frontend).
I also have other Azure services, like Azure Functions and a Storage Account, whose inbound traffic can be restricted using those Service Endpoints. However, one of the App Services calls an external 3rd-party API that is also hosted on Azure. That API may or may not be behind a static IP, but it does have a custom domain associated with it.
The problem arises when I try to connect to that API from one of the VNet-integrated App Services. Because the destination IP is inside one of the IP ranges added to the routing by the Service Endpoint, traffic is sent via that Service Endpoint instead of normal Azure routing. I've tried overriding the route with a Route Table associated with that subnet, but that seems not to be possible, with or without a NAT Gateway attached to the subnet; I guess Azure routing takes priority here. I'm sure the route is not taking effect, because the same route worked on a different subnet where I deployed a VM.
Is there any way I can use that Service Endpoint for my internal traffic only, so it's not used when traffic goes to an Azure-hosted API, or do I need to switch to a different approach like Private Endpoints or an ASE?
I am unsure what you're looking for, but if you want to explicitly define routes, you should try the App Service setting "WEBSITE_VNET_ROUTE_ALL" = 1, which overrides the default routing precedence and makes sure that every outbound call follows the routes defined in the subnet's route table.
Use the following steps to add the WEBSITE_VNET_ROUTE_ALL setting in your app:
Go to the Configuration UI in your app portal. Select New application setting.
Enter WEBSITE_VNET_ROUTE_ALL in the Name box, and enter 1 in the Value box.
When WEBSITE_VNET_ROUTE_ALL is set to 1, outbound traffic is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
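If you prefer scripting, the same setting can be applied with the Azure CLI (a sketch; the resource group and app names are placeholders):
az webapp config appsettings set --resource-group MyResourceGroup --name my-app --settings WEBSITE_VNET_ROUTE_ALL=1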
We've been able to ask the 3rd party to disable blocking rules. It turns out they had a rule that blocked this specific traffic.
I already tried changing that setting, but didn't try putting a route table on it. However, it'd make no difference as I can't define a list of allowed outbound IPs belonging to Azure since we have no static IP to call.

How do I restrict the clients that can access my Azure App Service?

Given that I create an Azure 'App Service'
How do I ensure that this service is only callable from ...
A.> 2 existing external servers (whose IP addresses will be known)
B.> 3 other App Services which I will be creating, but whose IP addresses may not be known since I may need to scale those out (over multiple additional instances)
To clarify... Is there some Azure service that will allow me to treat this collective of machines (both real and virtual) as a single group, such that I can apply some test on incoming requests to see if they originate from this group?
On Azure Web Apps, you may wish to know that IP Restrictions (https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions) allow you to define a list of IP addresses that are allowed to access your app. The allow list can include individual IP addresses or a range of IP addresses defined by a subnet mask. When a request to the app arrives, its IP address is evaluated against the allow list. If the IP address is not in the list, the app replies with an HTTP 403 status code.
You can use IP and Domain Restrictions to control the set of IP addresses and address ranges that are either allowed or denied access to your websites. With Azure Web Apps you can enable/disable the feature, as well as customize its behavior, using a web.config file located in your website.
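For example, a minimal web.config sketch (the addresses are placeholders) that denies all clients except an allow list:
<configuration>
  <system.webServer>
    <security>
      <ipSecurity allowUnlisted="false">
        <!-- allow one address and one /24 range; everything else is rejected -->
        <add ipAddress="203.0.113.10" allowed="true" />
        <add ipAddress="198.51.100.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>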
Additionally, VNET Integration gives your web app access to resources in your virtual network but does not grant private access to your web app from the virtual network. Private site access is only available with an ASE configured with an Internal Load Balancer (ILB).
If you haven't checked this already, check out Integrate your app with an Azure Virtual Network for more details on VNET Integration (https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet)
I strongly suggest dropping the whole what's-my-IP approach and throwing in OAuth. Azure AD gives you access tokens with moderate effort:
Service to service calls using client credentials (shared secret or certificate)
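As an illustration, a minimal sketch of the client credentials flow using MSAL.NET (Microsoft.Identity.Client); the IDs, secret, and scope are placeholders for your own Azure AD app registrations:
using Microsoft.Identity.Client;

var app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")                 // calling service's app registration
    .WithClientSecret("<client-secret>")   // or .WithCertificate(...)
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

// The service authenticates as itself (no user) and gets a token for the target API.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "api://<target-app-id>/.default" })
    .ExecuteAsync();

// Send result.AccessToken as a Bearer header; the target app validates
// the token instead of checking caller IP addresses.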
Else, TLS client authentication would be next on my list, although that tends to really suck if you have to deal with several programming stacks, TLS offloaders and whatnot.

URL for Specific Azure Web Role Instance

Let's say I have an Azure Web Role with 3 instances. Is there a way for me to directly access each instance via a URL change?
I'm trying to test the endpoints of the instances individually, hence my inquiry.
Edit
I am not looking for how to take down one of the instances; I'm looking for how to ping an endpoint on each of the instances individually.
Input endpoints are load-balanced, so you can't really direct traffic to one single instance.
Having said that, there are a few workarounds:
There's a status-check event you can set up a handler for. In all but one of your instances, you could set the instance's busy flag, taking it out of the load balancer. To pull this off, you'd need some type of pub/sub mechanism (a Service Bus queue?) to broadcast messages to the instances, letting them know whether to include or exclude themselves from the load balancer. You'd do something like:
RoleEnvironment.StatusCheck += RoleEnvironment_StatusCheck;
Then...
void RoleEnvironment_StatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    // Reporting Busy takes this instance out of the load balancer rotation
    if (someMagicConditionToRemoveFromLB)
        e.SetBusy();
}
Another option would be to have something like ARR running in a separate web role instance, providing custom load balancing.
Maybe you could come up with other workarounds, but in general, web/worker load balancing isn't set up for direct-instance access.
To add to what David indicated: you can set up InstanceInput endpoints on the roles as well. This creates an endpoint on another port that sends traffic directly to one instance. You can point the local endpoint port to 80 and thus get the ability to address individual instances externally; however, this likely isn't something you want to keep around. You can do it as a test and then remove the endpoints with an in-place upgrade that just removes the InstanceInput endpoints. Note that during this type of upgrade you may lose connectivity as endpoints are updated.
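For reference, this kind of endpoint is declared in ServiceDefinition.csdef roughly as below (a sketch; the name and port range are placeholders, and each instance gets its own public port from the range):
<InstanceInputEndpoint name="DirectPort" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange min="8081" max="8089" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>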

Azure - secure communication between internal roles in Azure

In this link (Azure: security between web roles) the OP asks: "In Azure, if you choose to use internal endpoint (instead of input endpoint), https is not an option. http & tcp are the only options. Does it mean internal endpoint is 100% secure and you don't need encryption"
the answer he gets is: No, a web/worker role cannot connect to an internal endpoint in another deployment
My question: is it possible at all to deploy such a solution?
Thanks
Joe
There are two separate things you brought up in your question.
Internal endpoints are secure in that the only other VM instances that can access them are within the same deployment. If, say, a web app needs to talk to a WCF service on a worker role instance, it can direct-connect with a tcp or http connection, with no need for encryption. It's secure.
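As an illustration, a minimal sketch of such a direct WCF connection (the role name "WcfWorker", the endpoint name "InternalWcf", and the IMyService contract are all hypothetical):
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping();
}

// Pick an instance and build the address of its internal endpoint.
var instance = RoleEnvironment.Roles["WcfWorker"].Instances[0];
var ep = instance.InstanceEndpoints["InternalWcf"].IPEndpoint;

// SecurityMode.None is fine here: the endpoint is only reachable
// from other instances inside the same deployment.
var binding = new NetTcpBinding(SecurityMode.None);
var address = new EndpointAddress(string.Format("net.tcp://{0}/MyService", ep));
var factory = new ChannelFactory<IMyService>(binding, address);
IMyService proxy = factory.CreateChannel();
string reply = proxy.Ping();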
Communication between deployments requires a Virtual Network, as internal endpoints are not accessible outside the boundary of the deployment. You can connect two deployments via a Virtual Network, and at that point the virtual machine instances in each deployment may see each other. The notion of endpoints is moot at that point, as you can simply connect to a specific port on one of the server instances.

Directly accessing Azure workers; bypassing the load balancer

Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8 byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out the IP address and port of an Azure VM, and then have the application communicate directly with that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
There's a feature called InstanceInputEndpoint which you can use to define ports on the public IP that are forwarded to a local port on a particular VM instance. That gives you a specific port+IP combination which directly reaches a particular VM.
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange max="8089" min="8081" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx
