Custom LoadBalancer probing for HTTPS endpoints in Azure CloudServices - node.js

About my case: I have a node.js REST API deployed in an Azure CloudService. The node.js process is hosted in IIS using iisnode. Because of this, the default probing doesn't work well: the entire IIS process might be down, or something might have gone wrong in the node.exe process, and the default probe would not detect the issue. As a solution I am trying to implement custom probing.
The Problem: I am trying to make the Azure LoadBalancer use a custom probe endpoint for one of my CloudServices, as discussed in this article. I am struggling with the fact that custom LoadBalancer probes seem to be available only for public input endpoints using http, tcp or udp.
In my case I have the limitation that I may expose endpoints only over the HTTPS protocol. Here is my CloudService definition:
<ServiceDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="dec-api-server" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="dec-api" vmsize="Small">
    <Certificates>
      <Certificate name="HttpsCertificate" storeLocation="LocalMachine" storeName="CA" />
    </Certificates>
    <Endpoints>
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="HttpsCertificate"/>
      <InputEndpoint name="internalProbingEndpoint" port="8091" protocol="http" loadBalancerProbe="customProbe"/>
    </Endpoints>
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpsIn" endpointName="HttpsIn" />
          <Binding name="internalProbingBinding" endpointName="internalProbingEndpoint" />
        </Bindings>
      </Site>
    </Sites>
  </WebRole>
  <LoadBalancerProbes>
    <LoadBalancerProbe name="customProbe" intervalInSeconds="30" path="/probe" timeoutInSeconds="60" port="8091" protocol="http"/>
  </LoadBalancerProbes>
</ServiceDefinition>
I have tried the following things:
I defined the loadBalancerProbe="customProbe" attribute on the HttpsIn endpoint and changed the protocol and port in the LoadBalancerProbe element, but it seems this is not possible: the deployment fails with a complaint that the XML is not valid; protocol="https" is not supported there.
Then I thought I could add a second input endpoint using HTTP that is used only for probing, and block network traffic from other networks using an Endpoint ACL so that only the LoadBalancer can access it. It works, or at least I can see in the IIS logs that the LoadBalancer calls the /probe endpoint, but when the probe returns status 500 only this endpoint is taken out of rotation, not the entire WebRole or instance of the CloudService. Calls through the HttpsIn endpoint still hit the machine whose probe endpoint returns 500.
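A minimal sketch of such an Endpoint ACL in the service configuration (.cscfg), assuming the role and endpoint names from the definition above (the rule order and subnet values are only illustrative), looks like this:
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="probeOnly">
      <!-- Deny all public internet traffic to the probe endpoint;
           in this setup the LoadBalancer was still observed calling /probe. -->
      <Rule order="100" action="deny" remoteSubnet="0.0.0.0/0" description="block internet traffic" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="dec-api" endPoint="internalProbingEndpoint" accessControl="probeOnly" />
  </EndpointAcls>
</NetworkConfiguration>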
The Question: Is there a way to configure the Azure LoadBalancer for a CloudService to use a custom endpoint for probing when HTTPS is used?
Is there a workaround if that is not supported?
Any help or hint would be greatly appreciated.
Thanks

Related

Azure ReservedIPAddress & Cloud Service without an endpoint

I have a cloud service that opens a socket externally and requires a whitelisted IP address. Nothing will externally initiate a connection with my service.
When I attempt to publish it with an associated ReservedIP address I get the following error: Validation Errors: Error validating the .cscfg file against the .csdef file. Severity:Error, message:ReservedIP 'xxxx' was not mapped to an endpoint. The service definition must contain atleast one endpoint that maps to the ReservedIP..
.cscfg
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="Gateway" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="5" osVersion="*" schemaVersion="2015-04.2.6">
<Role name="WorkerRole1">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="yyyyy" />
<Setting name="APPINSIGHTS_INSTRUMENTATIONKEY" value="xxx" />
<Setting name="ASPNETCORE_ENVIRONMENT" value="dev" />
</ConfigurationSettings>
</Role>
<NetworkConfiguration>
<AddressAssignments>
<ReservedIPs>
<ReservedIP name="xxxxx"/>
</ReservedIPs>
</AddressAssignments>
</NetworkConfiguration>
</ServiceConfiguration>
Is there a way to deploy this without specifying an endpoint? (I'm using VS2017RC to deploy)
If not, what would the xml look like for a dummy 'endpoint' and what risks do I run doing that?
Is there a better way I should be approaching this?
I ran into the same issue and the working solution for me was to take the "Input endpoint" from here and place it in the .csdef file within the WorkerRole tag.
<Endpoints>
  <InputEndpoint name="StandardWeb" protocol="http" port="80" localPort="80" />
</Endpoints>
Looks like ReservedIP is only supported with services containing an external endpoint. What you can do is add an external endpoint but firewall it off with the NSG (Network Security Group).
For help defining an endpoint, see
https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-enable-communication-role-instances
Also, if you use a port that is not actually bound on the machine, it should not be a vulnerability; but adding a deny rule in the NSG would cover any future change as well.
[Aside] If your service does not have any incoming connections, you should consider using a worker role instead of a web role. Long running threads can get terminated in web role instances.

Strange TCP connections (every 25 seconds) on my azure cloud service

A few days ago, something strange started to appear on my azure cloud service.
Every 25 seconds, a TCP connection from 13.95.160.11 is made.
It's a Microsoft Azure IP.
It has never done this before.
At first, I was thinking about a load balancer configuration but there is nothing about it in the documentation.
Here are my csdef Endpoints:
<Endpoints>
  <InputEndpoint name="HttpEndpoint" protocol="http" port="8080" />
  <InputEndpoint name="TcpEndpoint" protocol="tcp" port="12345"/>
  <InternalEndpoint name="TcpInternal" protocol="tcp" />
</Endpoints>
I have also tried downgrading my Azure SDK from 2.9 to 2.8, but nothing changed.
I don't know what I am missing. Do you have any idea what is happening?
I have the same problem and I asked on the MSDN forums. This was the answer provided by a moderator:
The IP address is related to Microsoft Azure; it is used to monitor health (keep the connection alive).
Source.

Understanding Azure cloud services firewall

I'm trying to understand what the default firewall rules are for Azure cloud services (Web/Worker roles), and I'm confused.
Based on multiple sources, including this link http://download.microsoft.com/download/C/A/3/CA3FC5C0-ECE0-4F87-BF4B-D74064A00846/AzureNetworkSecurity_v3_Feb2015.pdf, inbound connections are blocked by default for cloud services, be it a worker role or a web role. To open an inbound connection I would need to specify parameters for the Endpoints elements in the .cscfg.
However, I never did this, yet my web roles and worker roles accept inbound connections, even UDP connections to the worker role.
What am I missing?
Update: I apologize, I was looking at the wrong file. For reasons I cannot explain, I mixed up .csdef and .cscfg. Now it looks like a stupid question :)
You're correct - web and worker roles require endpoints to be defined, to allow external traffic to pass through to your role instances.
Regarding the fact that you can currently access your existing web/worker instances: by default, an endpoint for port 80 is created for your web role, and if you enabled RDP, that is enabled as well.
Just be aware that there are port mappings that occur: That is, you specify the external port (maybe... port 8000), which then maps to your actual port where your code is listening (maybe... port 80).
And also be aware that, if you use one of those ports for one role, you must come up with a different port for a different role. All instances of a given role may consume the same port, in a load-balanced fashion. But... if you set up a web server using, say, port 8000 externally on your web role, and you define another web role (or maybe a worker role), you cannot use port 8000 for that role.
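To illustrate that mapping (the endpoint name and ports here are just examples), an endpoint declared in the .csdef might look like this:
<Endpoints>
  <!-- Traffic hitting the public load balancer on port 8000
       is forwarded to port 80 inside each role instance -->
  <InputEndpoint name="WebHttp" protocol="http" port="8000" localPort="80" />
</Endpoints>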
Role endpoints are exposed in the cloud service project, within Visual Studio, in case you don't want to edit the configuration file directly.
David has most of the answer covered; for the details of WHY it works, see:
https://azure.microsoft.com/nl-nl/documentation/articles/cloud-services-role-enable-remote-desktop/
Take a look at the csdef file; there is an Imports section in there:
<Imports>
  <Import moduleName="<import-module>"/>
</Imports>
The module for RDP is "RemoteAccess" and there is also a "RemoteForwarder"; all plugins/modules live in the Azure SDK in this directory (replace v2.9 with your Azure SDK version):
C:\Program Files\Microsoft SDKs\Azure\.NET SDK\v2.9\bin\plugins
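A typical RDP-enabled csdef therefore imports both modules; the sketch below uses the standard module names shipped with the SDK (RemoteAccess on every role, RemoteForwarder on exactly one role of the deployment):
<Imports>
  <!-- RemoteAccess enables RDP on this role's instances -->
  <Import moduleName="RemoteAccess" />
  <!-- RemoteForwarder exposes the single public RDP endpoint and forwards connections to instances -->
  <Import moduleName="RemoteForwarder" />
</Imports>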
Importing this module results in the following config being added to the csdef file at runtime:
<?xml version="1.0" ?>
<RoleModule
  xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
  namespace="Microsoft.WindowsAzure.Plugins.RemoteAccess">
  <Startup priority="-1">
    <Task commandLine="RemoteAccessAgent.exe" executionContext="elevated" taskType="background" />
    <Task commandLine="RemoteAccessAgent.exe /blockStartup" executionContext="elevated" taskType="simple" />
  </Startup>
  <ConfigurationSettings>
    <Setting name="Enabled" />
    <Setting name="AccountUsername" />
    <Setting name="AccountEncryptedPassword" />
    <Setting name="AccountExpiration" />
  </ConfigurationSettings>
  <Endpoints>
    <InternalEndpoint name="Rdp" protocol="tcp" port="3389" />
  </Endpoints>
  <Certificates>
    <Certificate name="PasswordEncryption" storeLocation="LocalMachine" storeName="My" permissionLevel="elevated" />
  </Certificates>
</RoleModule>
This will open port 3389 for the RDP connection, so the Endpoint is in the .csdef file, but through an import.
Also take a look at the "RemoteForwarder": it acts as the gateway, so only one port (3389) has to be opened to the outside, and only one instance listens on it. The RemoteForwarder then forwards the RDP connection to the right machine. More info:
https://blogs.msdn.microsoft.com/avkashchauhan/2011/12/06/how-does-remote-desktop-works-in-windows-azure/

Custom Load balancer probe for Azure Webrole is not working as expected for Https endpoint

I am trying to set up a custom load balancer probe for an Azure web role (2 instances). I made the following changes in the ServiceDefinition.csdef file:
<ServiceDefinition>
  <LoadBalancerProbes>
    <LoadBalancerProbe name="MyLoadBalancerProbe" protocol="http" path="/api/infrastructure/healthprobe"/>
  </LoadBalancerProbes>
  <WebRole>
    ...
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="https" port="443" certificate="mycertificate" />
      <InputEndpoint name="Endpoint2" protocol="http" port="80" loadBalancerProbe="MyLoadBalancerProbe"/>
    </Endpoints>
  </WebRole>
</ServiceDefinition>
I could see the load balancer probe sending HTTP GET requests to the endpoint "/api/infrastructure/healthprobe" in the IIS logs.
When I started sending HTTP status code 403 instead of 200 from one of the instances, I could see all HTTP requests from the browser going only to the other instance, but all HTTPS requests still going to both instances.
Then I updated the InputEndpoint for HTTPS:
<InputEndpoint name="Endpoint1" protocol="https" port="443" certificate="mycertificate" loadBalancerProbe="MyLoadBalancerProbe"/>
Then the custom load balancer probe stopped sending requests to either instance, and both HTTP and HTTPS requests resulted in connection timeouts.
My requirement is that when an instance returns HTTP status 403 to the local probe, neither HTTP nor HTTPS requests should go to that instance. I am not able to figure out what I am doing wrong here.
Thanks in advance
The LoadBalancerProbe schema does not support 'https' as a protocol - https://msdn.microsoft.com/en-us/library/azure/jj151530.aspx
That would be why the probes stop working.
Aside from testing your certificate, why would you want both probes?
I have not looked into whether you can just change the port of the probe to test your HTTPS connectivity - maybe that will work?
-Mikkel
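Along the lines of that suggestion, an untested sketch would be to give the probe an explicit port and attach it to both endpoints; per the LoadBalancerProbe schema, a probe without a port defaults to the port of the endpoint it is attached to, which would explain the timeouts once it was attached to the 443 endpoint:
<LoadBalancerProbes>
  <!-- Explicit port so the probe always hits the plain-HTTP site,
       regardless of which endpoint it is attached to -->
  <LoadBalancerProbe name="MyLoadBalancerProbe" protocol="http" port="80" path="/api/infrastructure/healthprobe"/>
</LoadBalancerProbes>
...
<Endpoints>
  <InputEndpoint name="Endpoint1" protocol="https" port="443" certificate="mycertificate" loadBalancerProbe="MyLoadBalancerProbe"/>
  <InputEndpoint name="Endpoint2" protocol="http" port="80" loadBalancerProbe="MyLoadBalancerProbe"/>
</Endpoints>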

Azure internal load balancer issues and questions

I didn't find any information about these issues regarding the Azure internal load balancer:
Adding another InputEndpoint leads to the ILB being created but not being accessible or functional.
Using “only” the ILB definition leads to the default public InputEndpoint vanishing.
It is not transparent how long it takes until the ILB becomes available. However, it can be seen by checking the available port of the cloud service's web role: if the public port is available, the ILB is not, and vice versa.
So these are my questions:
Is it expected behavior that an internal load balancer replaces the public one?
Is a public load balancer supported beside an internal one/ can I have public access to web roles that are controlled by an internal load balancer?
Are multiple ports supported (e.g. https beside http or private/ public access)?
Some details:
The internal load balancer is connected via a fixed IP to a VPN for a cloud service. The configuration looks like this:
<?xml version="1.0"?>
<ServiceDefinition name="MyCloudTest" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
  <WebRole name="MyWebRole" vmsize="Standard_D1">
    <Runtime executionContext="elevated" />
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="ILB-Endpoint-Http" endpointName="ilb-endpoint-http" />
          <!--<Binding name="ILB-Endpoint-Https" endpointName="ilb-endpoint-https" />-->
          <!--<Binding name="public-http-binding" endpointName="public-http-endpoint" />-->
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <!--<InputEndpoint name="public-http-endpoint" protocol="http" port="81" />-->
      <InputEndpoint name="ilb-endpoint-http" protocol="http" localPort="8080" port="8080" loadBalancer="my-ilb" />
      <!--<InputEndpoint name="ilb-endpoint-https" protocol="https" localPort="*" port="8443" loadBalancer="my-ilb" />-->
    </Endpoints>
This is the part of the ServiceConfiguration defining the ILB, pointing to the VPN with a fixed IP.
<NetworkConfiguration>
  <VirtualNetworkSite name="myvpn" />
  <AddressAssignments>
    <InstanceAddress roleName="MyWebRole">
      <Subnets>
        <Subnet name="intra" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
  <LoadBalancers>
    <LoadBalancer name="my-ilb">
      <FrontendIPConfiguration type="private" subnet="intra" staticVirtualNetworkIPAddress="172.28.0.27" />
    </LoadBalancer>
  </LoadBalancers>
Every hint is highly appreciated.
1. Is it expected behavior that an internal load balancer replaces the public one?
It is the same implementation, but the ILB is restricted to your own private space (your VNET).
See https://azure.microsoft.com/en-us/documentation/articles/load-balancer-overview/
2. Is a public load balancer supported beside an internal one / can I have public access to web roles that are controlled by an internal load balancer?
Yes, you can have both in the same deployment.
3. Are multiple ports supported (e.g. https beside http or private/public access)?
You can add multiple endpoints. An endpoint has a public port and a private port.
Multiple public ports cannot share the same private port.
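As a sketch of point 2, reusing the endpoint names that are commented out in the question's csdef, a role can declare one endpoint on the default public load balancer and one on the ILB (each endpoint also needs its own Binding under Sites):
<Endpoints>
  <!-- Served by the default public load balancer -->
  <InputEndpoint name="public-http-endpoint" protocol="http" port="81" />
  <!-- Served by the internal load balancer "my-ilb" -->
  <InputEndpoint name="ilb-endpoint-http" protocol="http" localPort="8080" port="8080" loadBalancer="my-ilb" />
</Endpoints>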
