What do Azure Cloud Service Endpoints actually configure?

In a Cloud Service Definition file, we specify the Endpoints for our Cloud Service. I understand this is used to configure Azure's own network load-balancer and also the instance's own Windows Firewall. This only needs the port number (for the load-balancer) and/or the process executable name (for Windows Firewall, if using process-based rules instead of port-number-based rules).
But what is the significance of each protocol="http|https|tcp|udp" attribute value?
My confusion is because tcp and udp are both transport-layer protocols - and that alone is enough to configure the load-balancer and firewall (as they can selectively block non-TCP or non-UDP traffic as appropriate). But what does protocol="http" or protocol="https" do, exactly?
HTTP and HTTPS are both Application-layer protocols that run on top of TCP - so if I specify protocol="tcp" for a WebRole then that should work for both HTTP and HTTPS traffic, right?
But what happens if I have a Worker Role (not a Web Role) Cloud Service which has a protocol="https" endpoint defined? What happens if my Worker Role code creates a Socket that listens on that particular port? And what happens to the incoming connection?
Does the load-balancer require the incoming connection to conform to the HTTP or HTTPS protocol specification? What if it's a polyglot protocol or some other protocol that looks like HTTP but is actually application-defined (e.g. SIP)?
If I specify https I also have to specify the certificate name - so does this mean the load-balancer is responsible for decrypting TLS communication and then forwarding cleartext to the Web Role or Worker Role socket?
What if I have a Worker Role that privately makes use of HTTP.sys (via the IIS Hostable Web Core) or that self-hosts ASP.NET Core, for example - how would that be configured, given that the <Sites> element is unavailable?
Does the use of protocol="https" or protocol="http" cause any configuration of HTTP.sys?
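For reference, the endpoint declarations in question live in ServiceDefinition.csdef and look roughly like this (the role, endpoint, and certificate names here are illustrative, not from any real project):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorker" vmsize="Small">
    <Endpoints>
      <!-- Plain transport-layer endpoints: only a port for the load balancer -->
      <InputEndpoint name="TcpIn"   protocol="tcp"   port="10000" />
      <InputEndpoint name="HttpIn"  protocol="http"  port="80" />
      <!-- An https endpoint additionally requires a certificate reference -->
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="MyCert" />
    </Endpoints>
    <Certificates>
      <Certificate name="MyCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WorkerRole>
</ServiceDefinition>
```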

Related

Azure Application Gateway - How to control traffic for different application

I am creating an application gateway that will be a single point of entry for my multi-tenant application. That means I will have multiple application requests arriving at this application gateway, which I then need to route to the right backend pool. If I have one application A deployed in App Service A, it will listen on port 80 of the app gateway. Similarly, if I have another application, I can expose it the same way on a different port. How can I achieve this? I tried creating multiple rules but it is not working.
If I read your question correctly, you want multiple app services, each on a potentially different port, to be served by a single application gateway. And it sounds possible you might want to make requests to that application gateway on different ports. Sound right?
If so, then what you need to do is something along these lines:
1. Set up a backend pool for each app service.
2. Set up an HTTP setting for each backend pool, specifying port, session affinity, protocol, etc. This will be the port that your App Service takes requests on.
3. Create a front-end IP configuration to expose a public and/or private IP address.
4. Create a listener for each app service and port that you want to support. This will be the port you want the client requests to use. You can do two listeners per service to allow 80 & 443 (HTTP & HTTPS) traffic, for example.
5. Create a rule to connect each listener to its backend pool and HTTP setting combination.
6. Optional: set up health probes that target monitoring endpoints based on a URL and a specific HTTP setting entry.
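The steps above can be sketched with the Azure CLI for one app service; this is a sketch only, assuming a resource group MyRg and a gateway MyAppGw already exist, and all names (appA-pool, appA-settings, etc.) are placeholders:

```shell
# 1. Backend pool pointing at the App Service hostname
az network application-gateway address-pool create \
    --resource-group MyRg --gateway-name MyAppGw \
    --name appA-pool --servers appA.azurewebsites.net

# 2. HTTP setting: the port/protocol the backend takes requests on
az network application-gateway http-settings create \
    --resource-group MyRg --gateway-name MyAppGw \
    --name appA-settings --port 443 --protocol Https

# 4. Front-end port and listener: the port clients use
az network application-gateway frontend-port create \
    --resource-group MyRg --gateway-name MyAppGw \
    --name port80 --port 80
az network application-gateway http-listener create \
    --resource-group MyRg --gateway-name MyAppGw \
    --name appA-listener --frontend-port port80

# 5. Rule tying listener -> backend pool + HTTP setting
az network application-gateway rule create \
    --resource-group MyRg --gateway-name MyAppGw \
    --name appA-rule --http-listener appA-listener \
    --address-pool appA-pool --http-settings appA-settings
```

Repeat the listener/rule pair per app service, varying the front-end port.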

host multiple public sites on service fabric

I have a service fabric cluster deployed with a domain of foo.northcentralus.cloudapp.azure.com
It has a single node type with a single public ip address / load balancer.
Lets say I have the following two apps deployed:
http://foo.northcentralus.cloudapp.azure.com:8101/wordcount/
http://foo.northcentralus.cloudapp.azure.com:8102/visualobjects/
How can I set this up so I can have multiple domains each hosted on port 80? (assuming I own both of these domains obviously)
http://www.wordcount.com
http://www.visualobjects.com
Do I need more than one public ip address in my cluster to support this?
You should be able to do this with a single public IP address through some http.sys magic.
Assuming you're using Katana for your web host (the word count and visual object samples you reference use Katana), then it should be as simple as starting the server using the domain name in the URL:
WebApp.Start("http://visualobjects.com:80", appBuilder => this.startup.Invoke(appBuilder));
The underlying Windows HTTP Server API will register that server with that URL, and any HTTP request that comes in with a Host: visualobjects.com header will automatically be routed to that server. Repeat for any number of servers with their own hostname. This is the host routing that http.sys does for multi-website hosting on a single machine, same as you had in IIS.
The problem you'll run into is with reserving the hostname, which you have to do under an elevated user account before you open the server. Service Fabric has limited support for this in the form of Endpoint configuration in ServiceManifest.xml:
<!-- THIS WON'T WORK FOR REGISTERING HOSTNAMES -->
<Resources>
<Endpoints>
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
</Endpoints>
</Resources>
The limitation here is that there is nowhere to specify a hostname, so Service Fabric will always register "http://+:[port]". That unfortunately doesn't work if you want to open a server on a specific hostname - you need to register just the hostname you want to use. You have to do that part manually using netsh (and remove any Endpoints for the same port from ServiceManifest.xml, otherwise it will override your hostname registration).
To register the hostname with http.sys manually, you have to run netsh for the hostname, port, and user account under which your service runs, which by default is Network Service:
netsh http add urlacl url=http://visualobjects.com:80/ user="NT AUTHORITY\NETWORK SERVICE"
But you have to do this from an elevated account on each machine the service will run on. Luckily, we have service setup entry points that can run under elevated account privileges.
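A setup entry point is declared in ServiceManifest.xml next to the regular entry point; a minimal sketch, assuming a batch file named RegisterUrlAcl.bat that wraps the netsh command above (the file and package names are illustrative):

```xml
<CodePackage Name="Code" Version="1.0.0">
  <!-- Runs before the service entry point; pair it with a RunAsPolicy
       (EntryPointType="Setup") in ApplicationManifest.xml so it runs elevated -->
  <SetupEntryPoint>
    <ExeHost>
      <Program>RegisterUrlAcl.bat</Program>
    </ExeHost>
  </SetupEntryPoint>
  <EntryPoint>
    <ExeHost>
      <Program>MyService.exe</Program>
    </ExeHost>
  </EntryPoint>
</CodePackage>
```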
edit
One thing you will need to do in order for this to work is open up the firewall on the same port you are listening on. You can do that with the following:
<Resources>
<Endpoints>
<Endpoint Protocol="tcp" Name="ServiceEndpoint" Type="Input" Port="80" />
</Endpoints>
</Resources>
Note that the Protocol is tcp instead of http. This will open up the firewall port but not override the http.sys registration that you set up with the .bat script.
I believe so. Separate public IPs, which can then be set up to allow routing from those IPs to the same backend pools, but on different ports.

Endpoint configuration for Service Fabric

I deployed an app on Service Fabric and there's an HTTP listener spawned inside. How can I configure the listening URL in relation to app/cluster?
More precisely, is there any way to build this URL inside the app by retrieving some environment/role parameter ?
Suppose my cluster is called "test", then it will be available at: test.northeurope.cloudapp.azure.com. If I have an app called "Sample" for which I configured an endpoint called "SampleTypeEndpoint" inside ServiceManifest.xml, what would be the complete URL my app would listen to?
The endpoints you configure in ServiceManifest.xml right now fulfill two purposes:
Allow Service Fabric to provide a unique port from an application port range, if you don't need a well-known port.
When opening a web server that uses http.sys, allow Service Fabric to set up URL ACLs for a random port or a well-known port (80, 443, etc) and certificate ACLs for HTTPS.
That's basically it. The actual address on which you open a listener is up to you to determine. Typically, you open a listener on the node IP and use a NAT for ingress traffic on a domain name. In Azure, the NAT is the Azure Load Balancer which is automatically configured to accept traffic on your cluster's VIP as well as the .region.cloudapp.azure.com domain.
Here's a more thorough overview of how this works on Service Fabric cluster in Azure: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
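To make this concrete, a common C# sketch for building the two addresses (the endpoint name comes from the question; serviceContext stands in for whatever ServiceContext/initialization parameters your service type receives, so treat this as illustrative rather than exact):

```csharp
using System.Fabric;

// Port that Service Fabric assigned (or the well-known port you configured)
// for the "SampleTypeEndpoint" resource in ServiceManifest.xml.
var endpoint = serviceContext.CodePackageActivationContext
    .GetEndpoint("SampleTypeEndpoint");

// Address the service actually listens on, bound to the node.
string listenUrl = $"http://+:{endpoint.Port}/";

// Address to publish for clients inside the cluster.
string publishUrl =
    $"http://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}/";
```

Externally, clients would then reach the service through the Azure Load Balancer at test.northeurope.cloudapp.azure.com on that same port, provided a load balancer rule exists for it.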

Azure - secure communication between internal roles in Azure

In this link (Azure: security between web roles) the OP asks: "In Azure, if you choose to use internal endpoint (instead of input endpoint), https is not an option. http & tcp are the only options. Does it mean internal endpoint is 100% secure and you don't need encryption"
the answer he gets is: No, a web/worker role cannot connect to an internal endpoint in another deployment
My question: is it possible at all to deploy such a solution?
Thanks
Joe
There are two separate things you brought up in your question.
Internal endpoints are secure in that the only other VM instances that can access these are within the same deployment. If, say, a web app needs to talk to a WCF service on a worker role instance, it can direct-connect with a tcp or http connection, with no need for encryption. It's secure.
Communication between deployments requires a Virtual Network, as internal endpoints are not accessible outside the boundary of the deployment. You can connect two deployments via Virtual Network, and at that point each of the virtual machine instances in each deployment may see each other. The notion of endpoints is moot at this point, as you can simply connect to a specific port on one of the server instances.
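For reference, an internal endpoint is declared in ServiceDefinition.csdef alongside any input endpoints; note the schema only allows http and tcp here, which is what the original question was about (names are illustrative):

```xml
<WorkerRole name="MyWorker" vmsize="Small">
  <Endpoints>
    <!-- Reachable only by other role instances in the same deployment;
         https is not a valid protocol for an InternalEndpoint -->
    <InternalEndpoint name="WcfIn" protocol="tcp" port="808" />
  </Endpoints>
</WorkerRole>
```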

Directly accessing Azure workers; bypassing the load balancer

Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8-byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out an IP address and port of an Azure VM address, and then have the application communicate directly to that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
There's a thing called InstanceInputEndpoint which you can use for defining ports on the public IP which will be directed to a local port on a particular VM instance. So you will have a particular port+IP combination which can directly access a particular VM.
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
<AllocatePublicPortFrom>
<FixedPortRange max="8089" min="8081" />
</AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx
