URL for Specific Azure Web Role Instance - azure

Let's say I have an Azure Web Role with 3 instances. Is there a way for me to directly access each role instance via a URL change?
I'm trying to test the endpoints of the instances individually, hence my question.
Edit
I am not looking for how to take down one of the instances; I'm looking for how to ping an endpoint on each of the instances individually.

Input endpoints are load-balanced, so you can't really direct traffic to one single instance.
Having said that, there are a few workarounds:
There's a status-check event you can set up a handler for. In all but one of your instances, you could set the instance's busy flag, taking it out of the load balancer. To pull this off, you'd need some kind of pub/sub mechanism (a Service Bus topic, perhaps) to broadcast messages to the instances, letting them know whether to include or exclude themselves from the load balancer. You'd do something like:
RoleEnvironment.StatusCheck += RoleEnvironment_StatusCheck;
Then...
void RoleEnvironment_StatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    // Setting Busy takes this instance out of the load-balancer rotation;
    // it returns on a later status check when Busy is no longer set
    if (someMagicConditionToRemoveFromLB)
        e.SetBusy();
}
Another option would be to have something like ARR running in a separate web role instance, providing custom load balancing.
Maybe you could come up with other workarounds, but in general, web/worker load balancing isn't set up for direct-instance access.

To add to what David indicated, you can set up InstanceInput endpoints on the roles as well. This creates an endpoint on another port that sends traffic directly to one instance. You can point the endpoint's local port at 80 and thus address individual instances externally; however, this likely isn't something you want to keep around. You can do it as a test and then remove the endpoints with an in-place upgrade that just removes the InstanceInput endpoints. Note that during this type of upgrade you may lose connectivity as endpoints are updated.
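As a sketch of what that temporary change might look like in the service definition (the endpoint name and port range here are illustrative), each instance gets its own public port from the range, forwarded to port 80 on that instance:

```xml
<!-- In ServiceDefinition.csdef, inside the web role's <Endpoints> element -->
<InstanceInputEndpoint name="DirectHttpIn" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <!-- With 3 instances, ports 8081-8083 map one-to-one onto the instances -->
    <FixedPortRange min="8081" max="8083" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
```

You could then hit the service's public address on port 8081, 8082, or 8083 to reach each instance individually, and later remove the endpoint with an in-place upgrade.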

Related

Azure Traffic Manager - Multiple subscription keys

I have two instances of API Management (APIM) in two different regions. The endpoints are protected behind subscription keys. As far as I know, you cannot set these, so they are different for each APIM instance. I am using Azure Traffic Manager in front of the APIM instances to handle load balancing and act as a failover component. But when using two instances with different keys, there's a major issue: since Traffic Manager only redirects your requests, you will get unauthorized requests against one of the endpoints. Has anyone figured out how to deal with this?
You can set a subscription key to any value, provided it's unique within the instance: https://learn.microsoft.com/en-us/rest/api/apimanagement/2019-01-01/subscription/update
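For example, a sketch of that update call against the management API (all resource names and the key value are placeholders); setting the same primaryKey on the matching subscription in both APIM instances removes the mismatch:

```
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/subscriptions/{sid}?api-version=2019-01-01
Content-Type: application/json

{
  "properties": {
    "primaryKey": "<shared-key-used-by-both-instances>"
  }
}
```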
You can try one of these methods:
Use client certificates to authenticate instead
Create subscriptions manually using the API, which lets you set the access keys
Use OAuth2 authentication
Traffic Manager is just a DNS-based load balancer; it redirects clients rather than proxying traffic. You can use Traffic Manager to load balance only when both instances are using the same key.
There are different routing profiles in Traffic Manager, but there is no way to detect or choose a backend instance based on the key that is used.
Alternatively, you can use Application Gateway instead of Traffic Manager and route traffic to the instances based on the path.

Azure Application Gateway with multiple apps in a single App Service Environment backend pool

I have inherited a project where an Azure Application Gateway (AGW) uses as its backend pool an Internal Load Balancer (ILB) App Service Environment (ASE) containing multiple apps.
The AGW is set up with several multi-site listeners, where the host of each multi-site listener matches a custom domain of an App Service instance running in the ILB ASE. Like this:
I need to add a new app to the ASE and corresponding configuration to the AGW.
The problem is that the AGW can have a maximum of 20 listeners, a limit that has already been reached in the project I inherited. So I can't add more apps to the AGW with this setup.
To work around the listener limitation, with minimal changes, I would like to make use of multi-site path-based routing with the ILB ASE as backend pool.
I would like something that looks like the following:
I have spent some time going over the docs as well as other StackOverflow questions. I also have gone over the multi-site app service docs https://learn.microsoft.com/en-us/azure/application-gateway/create-web-app, including playing around with the -PickHostNameFromBackend switches.
I have made a few experiments without success so far.
I believe that what I want to do is currently not supported by the AGW, and I think I understand why: the hostname passed from the AGW to the ILB ASE (api.example.com) is not present as a custom domain on any of the App Service instances in the ASE, so the request will not be fulfilled. Correct me if I am wrong, please.
Is my desired setup (Figure 2) possible?
If not possible, what would be alternative solutions, with only one AGW as I have today?
Firstly, you can open a support ticket to increase the listener/backend pool count from 20 to 40. That should offer you some expansion room immediately.
The second scenario should be possible as well. Use api-aaa.example.com and api-bbb.example.com as backend pool members, use the PickHostNameFromBackendAddress switch on the HTTP settings, and create a custom probe with the PickHostNameFromBackendHttpSettings flag set, associating the probe with those HTTP settings. Then use those settings in each path-based rule when associating paths with backend pools. Please ensure that your internal DNS within the VNet can resolve api-aaa.example.com and api-bbb.example.com to the ILB IP 222.222.222.222.
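A rough Azure PowerShell sketch of those two pieces (the names here are illustrative; this is not a complete AGW configuration):

```powershell
# Custom probe that takes its Host header from the backend HTTP settings
$probe = New-AzApplicationGatewayProbeConfig -Name "aseProbe" `
    -Protocol Https -Path "/" -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings

# HTTP settings that forward each backend pool member's own host name
# (api-aaa.example.com / api-bbb.example.com) and use the probe above
$settings = New-AzApplicationGatewayBackendHttpSetting -Name "aseSettings" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -PickHostNameFromBackendAddress -Probe $probe
```

Each path-based rule then references these HTTP settings together with the backend pool for that path.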

How to put an HTTP Gateway on top of Azure Container Instances?

I would like to use Azure Container Instances behind an HTTP gateway to avoid idle infrastructure when there is no traffic.
Something which looks like this.
Is there something like that available in Azure (like API Gateway in AWS)?
There is an Azure template that integrates Application Gateway with Container Instances here. In that example, ACIs are deployed in a VNet and the Application Gateway serves as the entry point to the APIs.
You can probably adapt that template to fit your requirements.
Step by step:
Upload your images to Container Registry
Create two separate container instances.
Image Type: Private
Fill in all required fields; be sure to include the container registry host name in the image name (the container name can be anything)
Expose your ports in the networking tab
Add a DNS name label. Why? Because the IP can change; see the docs.
Add your env variables if any
It should create them without any problem. Try to access the DNS names provided and check that the sites are running properly.
Create new Application Gateway
Fill in all required fields (name, tier, whatever...). Use tier Standard V2.
Public frontend; create a new IP address if needed.
Add two backend pools; use IP or hostname and provide the DNS name created in step 2 for each ACI
Add a routing rule with listener type Basic. In backend targets, set target type = backend pool, select one of your backend pools, and create a new HTTP setting. Literally fill it with whatever values you want; I could never get this to work on the first page, so I always edit it later.
Add your tags
Hit create
When it finishes, go to your newly created app gateway and search for HTTP settings.
These are the connection parameters that the app gateway uses to connect to your backend. If your backends listen on the same port, path, and so on, you can reuse the HTTP setting; if not, create 2.
Go to rules and create a path-based rule.
Check your only listener, the default backend and the setting associated with it.
Add a path to your second backend: name = endpoint2; paths = /endpoint2/*; backend pool = backend2; HTTP setting = backend2HTTPSetting
And that's it!
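The container-instance part of the steps above could be scripted roughly like this with the Azure CLI (resource names, image, and credentials are placeholders):

```
# Create one ACI from a private registry image, exposing port 80
# and a stable DNS name for the Application Gateway backend pool
az container create \
  --resource-group my-rg \
  --name aci-backend-1 \
  --image myregistry.azurecr.io/my-api:latest \
  --registry-login-server myregistry.azurecr.io \
  --registry-username "$ACR_USER" \
  --registry-password "$ACR_PASS" \
  --ports 80 \
  --dns-name-label aci-backend-1
```

Repeat for the second instance, then use the two resulting FQDNs as the backend pool targets in the gateway.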

Azure Web Role Internal Endpoint - Not Load Balanced

The Azure documentation says that internal endpoints on a web role will not be load balanced. What is the practical ramification of this?
Example:
I have a web role with 20 instances. If I define an internal endpoint for that web role, what is the internal implementation? For example, will all 20 instances still service this end point? Can I obtain a specific endpoint for each instance?
We have a unique callback requirement that could be nicely served by utilizing the normal load balancing behavior on the public endpoint, but having each instance expose an internal endpoint. Based on the published numbers for endpoint limits, this is not possible. So, when defining an internal endpoint, is it "1 per instance", or what? Do all of the role instances service the endpoint? What does Microsoft mean when they say that the internal endpoint is not load balanced? Does all the traffic just flow to one instance? That does not make sense.
First, let's clarify the numbers and limitations. The limits for endpoints apply to roles, not to instances. If you are not sure, or are still confusing the terms role and instance, you can check out my blog post on that. So, the limit is per role.
Now, the differences between the endpoints - I have a blog post describing them here. In short, an internal endpoint only opens communication internally within the deployment. That's why it is internal. No external traffic (from the Internet) will be able to reach an internal endpoint. In that sense, it is not load balanced, because no traffic goes through a load balancer! The traffic to internal endpoints only goes between role instances (possibly via some internal routing hardware) but never leaves the deployment boundary. Having said that, it must already be clear that no Internet traffic can be sent to an internal endpoint.
A side note - an input endpoint, however, is discoverable from the Internet as well as from inside the deployment. But it is load balanced, since the traffic to an input endpoint comes through the load balancer from the Internet.
Back to the numbers. Let's say you have 1 web role with 1 input endpoint and 1 internal endpoint. That makes a total of 2 endpoints for your deployment. Even if you spin up 50 instances, you still have just 2 endpoints that count toward the total endpoint limit.
Can you obtain a specific endpoint for a specific instance? Certainly yes, via the RoleEnvironment class. It has a Roles enumeration; each Role has Instances, and each Instance has InstanceEndpoints.
Hope this helps!
The endpoints are defined at the role level and instantiated for each instance.
An input endpoint has a public IP address making it accessible from the internet. Traffic to that input endpoint is load-balanced (with a round-robin algorithm) among all the instances of the role hosting the endpoint.
An internal endpoint has no public IP address and is only accessible from inside the cloud service OR a virtual network including that cloud service. Windows Azure does not load balance traffic to internal endpoints - each role instance endpoint must be individually addressed. Ryan Dunn has a nice post showing a simple example of implementing load balanced interaction with an internal endpoint hosting a WCF service.
The Spring Wave release introduced a preview of an instance input endpoint, which is a public IP endpoint that is port-forwarded to a specific role instance. This, obviously, is not load balanced, but it provides a way to connect directly to a specific instance.
Just trying to make things more concise and concrete:
// get a list of all instances of role "MyRole"
var instances = RoleEnvironment.Roles["MyRole"].Instances;
// pick an instance at random
var instance = instances[new Random().Next(instances.Count)];
// for that instance, get the IP address and port for the endpoint "MyEndpoint"
var endpoint = instance.InstanceEndpoints["MyEndpoint"].IPEndpoint;
Think of internal endpoints as a discovery mechanism for finding your other VMs.

Directly accessing Azure workers; bypassing the load balancer

Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8 byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out an IP address and port of an Azure VM address, and then have the application communicate directly to that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
There's a thing called InstanceInputEndpoint which you can use to define ports on the public IP that are forwarded to a local port on a particular VM instance. So you get a particular IP+port combination that reaches a particular VM directly.
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange max="8089" min="8081" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx
