I'm trying to understand what the default firewall rules are for Azure cloud services (web/worker roles), and I'm confused.
Based on multiple sources, including this link http://download.microsoft.com/download/C/A/3/CA3FC5C0-ECE0-4F87-BF4B-D74064A00846/AzureNetworkSecurity_v3_Feb2015.pdf, inbound connections are blocked by default for cloud services, be it a worker role or a web role. To open an inbound connection I would need to specify parameters for the EndPoints elements in .cscfg.
However, I never did this, yet my web roles and worker roles accept inbound connections, even UDP connections to the worker role.
What am I missing?
Update: I apologize, I was looking at the wrong file. For reasons I cannot explain I mixed up .csdef and .cscfg. Now it looks like a stupid question :)
You're correct - web and worker roles require endpoints to be defined, to allow external traffic to pass through to your role instances.
Regarding the fact that you can currently access your existing web/worker instances: by default, an endpoint for port 80 is created for your web role, and if you enabled RDP, that is enabled as well.
Just be aware that port mappings occur: you specify the external port (maybe... port 8000), which then maps to the actual port where your code is listening (maybe... port 80).
Also be aware that an external port can only be used by one role. All instances of a given role may consume the same port, in a load-balanced fashion. But... if you set up a web server using, say, port 8000 externally on your web role, and you define another web role (or maybe a worker role), you cannot use port 8000 for that second role.
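For illustration, a minimal .csdef endpoint definition that maps an external port to a different internal port might look like the following (the endpoint name and port numbers are just examples):
<Endpoints>
  <InputEndpoint name="HttpIn" protocol="tcp" port="8000" localPort="80" />
</Endpoints>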
Role endpoints are exposed in the cloud service project, within Visual Studio, in case you don't want to edit the configuration file directly.
David has most of the answer covered; for the detailed WHY it works, see:
https://azure.microsoft.com/nl-nl/documentation/articles/cloud-services-role-enable-remote-desktop/
Take a look at the .csdef file; there is an Imports section in there:
<Imports>
  <Import moduleName="<import-module>" />
</Imports>
The module for RDP is "RemoteAccess", and there is also a "RemoteForwarder" module. All plugins/modules ship with the Azure SDK in this directory (replace v2.9 with your Azure SDK version):
C:\Program Files\Microsoft SDKs\Azure\.NET SDK\v2.9\bin\plugins
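For reference, enabling RDP typically imports both modules in the .csdef - RemoteAccess on every role and RemoteForwarder on exactly one role - roughly like this:
<Imports>
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
</Imports>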
Importing the RemoteAccess module results in the following config being added to the .csdef file at runtime:
<?xml version="1.0" ?>
<RoleModule
xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
namespace="Microsoft.WindowsAzure.Plugins.RemoteAccess">
<Startup priority="-1">
<Task commandLine="RemoteAccessAgent.exe" executionContext="elevated" taskType="background" />
<Task commandLine="RemoteAccessAgent.exe /blockStartup" executionContext="elevated" taskType="simple" />
</Startup>
<ConfigurationSettings>
<Setting name="Enabled" />
<Setting name="AccountUsername" />
<Setting name="AccountEncryptedPassword" />
<Setting name="AccountExpiration" />
</ConfigurationSettings>
<Endpoints>
<InternalEndpoint name="Rdp" protocol="tcp" port="3389" />
</Endpoints>
<Certificates>
<Certificate name="PasswordEncryption" storeLocation="LocalMachine" storeName="My" permissionLevel="elevated" />
</Certificates>
</RoleModule>
This will open port 3389 for the RDP connection, so the Endpoint is in the .csdef file, but through an import.
Also take a look at the "RemoteForwarder": it acts as the gateway, so only one port (3389) has to be opened to the outside, and only one instance will listen on it. The RemoteForwarder then forwards the RDP connection to the right machine. More info:
https://blogs.msdn.microsoft.com/avkashchauhan/2011/12/06/how-does-remote-desktop-works-in-windows-azure/
Related
Is there any way to expose a microservice endpoint without a port number in Azure Service Fabric? The port number can be defined in ServiceManifest.xml, or it can be dynamically assigned by the Service Fabric cluster, but how do you call a service without specifying a port number?
Of course you do not have to specify a port number if you do not need it; Service Fabric will automatically assign a port to your service. I also do not define port numbers, because we have 100+ services and it is "a little bit hard" to do that.
Just omit the Port declaration in ServiceManifest.xml:
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="UserHttpEndpoint" Type="Input" />
    <Endpoint Protocol="tcp" Name="UserRpcEndpoint" Type="Input" />
  </Endpoints>
</Resources>
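At runtime the service can then look up whichever port was assigned. A sketch from a Reliable Services context (the endpoint name matches the manifest above; serviceContext stands in for your service's ServiceContext):
var endpoint = serviceContext.CodePackageActivationContext.GetEndpoint("UserHttpEndpoint");
var port = endpoint.Port; // the port Service Fabric assigned from the cluster's application port range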
An endpoint would be useless without a port. So even if you could have one, you shouldn't want it. You are probably looking for a way to call the service without knowing its port number. This can be achieved by using a reverse proxy. With a reverse proxy you can call a service by providing the port of the reverse proxy.
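For example, with the reverse proxy enabled on a cluster (its default port is typically 19081), a call is addressed by application and service name rather than by the service's own port; the names below are made up:
http://mycluster.northcentralus.cloudapp.azure.com:19081/MyApplication/UserService/api/users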
I have a cloud service that opens a socket externally and requires a whitelisted IP address. Nothing will externally initiate a connection with my service.
When I attempt to publish it with an associated ReservedIP address I get the following error:
Validation Errors: Error validating the .cscfg file against the .csdef file. Severity:Error, message:ReservedIP 'xxxx' was not mapped to an endpoint. The service definition must contain atleast one endpoint that maps to the ReservedIP..
.cscfg
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="Gateway" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="5" osVersion="*" schemaVersion="2015-04.2.6">
<Role name="WorkerRole1">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="yyyyy" />
<Setting name="APPINSIGHTS_INSTRUMENTATIONKEY" value="xxx" />
<Setting name="ASPNETCORE_ENVIRONMENT" value="dev" />
</ConfigurationSettings>
</Role>
<NetworkConfiguration>
<AddressAssignments>
<ReservedIPs>
<ReservedIP name="xxxxx"/>
</ReservedIPs>
</AddressAssignments>
</NetworkConfiguration>
</ServiceConfiguration>
Is there a way to deploy this without specifying an endpoint? (I'm using VS2017RC to deploy)
If not, what would the xml look like for a dummy 'endpoint' and what risks do I run doing that?
Is there a better way I should be approaching this?
I ran into the same issue, and the working solution for me was to take the "Input endpoint" from here and place it in the .csdef file within the WorkerRole tag:
<Endpoints>
  <InputEndpoint name="StandardWeb" protocol="http" port="80" localPort="80" />
</Endpoints>
Looks like ReservedIP is only supported with services containing an external endpoint. What you can do is add an external endpoint but firewall it off with the NSG (Network Security Group).
For help defining an endpoint, see:
https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-enable-communication-role-instances
Also, if you use a port that nothing on the machine actually binds to, it should not be a vulnerability; but adding a deny rule in the NSG would also cover any future changes.
[Aside] If your service does not have any incoming connections, you should consider using a worker role instead of a web role. Long-running threads can get terminated in web role instances.
I have a service fabric cluster deployed with a domain of foo.northcentralus.cloudapp.azure.com
It has a single node type with a single public ip address / load balancer.
Let's say I have the following two apps deployed:
http://foo.northcentralus.cloudapp.azure.com:8101/wordcount/
http://foo.northcentralus.cloudapp.azure.com:8102/visualobjects/
How can I set this up so I can have multiple domains each hosted on port 80? (assuming I own both of these domains obviously)
http://www.wordcount.com
http://www.visualobjects.com
Do I need more than one public ip address in my cluster to support this?
You should be able to do this with a single public IP address through some http.sys magic.
Assuming you're using Katana for your web host (the word count and visual object samples you reference use Katana), then it should be as simple as starting the server using the domain name in the URL:
WebApp.Start("http://visualobjects.com:80", appBuilder => this.startup.Invoke(appBuilder));
The underlying Windows HTTP Server API will register that server with that URL, and any HTTP request that comes in with a Host: visualobjects.com header will automatically be routed to that server. Repeat for any number of servers with their own hostname. This is the host routing that http.sys does for multi-website hosting on a single machine, same as you had in IIS.
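For example, a sketch of both sample apps self-hosted on port 80 under their own hostnames (the startup delegates are placeholders):
WebApp.Start("http://wordcount.com:80", appBuilder => wordCountStartup.Invoke(appBuilder));
WebApp.Start("http://visualobjects.com:80", appBuilder => visualObjectsStartup.Invoke(appBuilder));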
The problem you'll run into is with reserving the hostname, which you have to do under an elevated user account before you open the server. Service Fabric has limited support for this in the form of Endpoint configuration in ServiceManifest.xml:
<!-- THIS WON'T WORK FOR REGISTERING HOSTNAMES -->
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
The limitation here is that there is nowhere to specify a hostname, so Service Fabric will always register "http://+:[port]". That unfortunately doesn't work if you want to open a server on a specific hostname - you need to register just the hostname you want to use. You have to do that part manually using netsh (and remove any Endpoints for the same port from ServiceManifest.xml, otherwise it will override your hostname registration).
To register the hostname with http.sys manually, you have to run netsh for the hostname, port, and user account under which your service runs, which by default is Network Service:
netsh http add urlacl url=http://visualobjects.com:80/ user="NT AUTHORITY\NETWORK SERVICE"
But you have to do this from an elevated account on each machine the service will run on. Luckily, we have service setup entry points that can run under elevated account privileges.
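A sketch of how that could be wired up: ServiceManifest.xml points the setup entry point at a small batch file (the file and program names below are made up) that runs the netsh command above, and ApplicationManifest.xml runs that setup entry point under an account in the local Administrators group via a RunAsPolicy with EntryPointType="Setup":
<CodePackage Name="Code" Version="1.0.0">
  <SetupEntryPoint>
    <ExeHost>
      <Program>RegisterUrlAcl.bat</Program>
    </ExeHost>
  </SetupEntryPoint>
  <EntryPoint>
    <ExeHost>
      <Program>VisualObjects.Web.exe</Program>
    </ExeHost>
  </EntryPoint>
</CodePackage>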
edit
One thing you will need to do in order for this to work is open up the firewall on the same port you are listening on. You can do that with the following:
<Resources>
  <Endpoints>
    <Endpoint Protocol="tcp" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
Note that the Protocol is tcp instead of http. This will open up the firewall port but not override the http.sys registration that you set up with the .bat script.
I believe so. You would use separate public IPs, which can then be set up to route from those IPs to the same backend pools, but on different ports.
About my case: I have a Node.js REST API deployed in an Azure cloud service. The Node.js process is hosted in IIS using iisnode. Because of this, the default probing doesn't work well: the entire IIS process might be down, or something may have gone wrong in the node.exe process, and the default probe will not catch the issue. As a solution I am trying to implement custom probing.
The Problem: I am trying to make the Azure LoadBalancer use a custom probe endpoint for one of my CloudServices as discussed in this article. I am struggling with the fact that it seems custom LoadBalancing probes are available only for public input endpoints using http, tcp or udp.
In my case I have the limitation that I should expose only endpoints under the https protocol. Here is my CloudService definition:
<ServiceDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="dec-api-server" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="dec-api" vmsize="Small">
    <Certificates>
      <Certificate name="HttpsCertificate" storeLocation="LocalMachine" storeName="CA" />
    </Certificates>
    <Endpoints>
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="HttpsCertificate" />
      <InputEndpoint name="internalProbingEndpoint" port="8091" protocol="http" loadBalancerProbe="customProbe" />
    </Endpoints>
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpsIn" endpointName="HttpsIn" />
          <Binding name="internalProbingBinding" endpointName="internalProbingEndpoint" />
        </Bindings>
      </Site>
    </Sites>
  </WebRole>
  <LoadBalancerProbes>
    <LoadBalancerProbe name="customProbe" intervalInSeconds="30" path="/probe" timeoutInSeconds="60" port="8091" protocol="http" />
  </LoadBalancerProbes>
</ServiceDefinition>
I have tried the following things:
I defined the loadBalancerProbe="customProbe" attribute on the HttpsIn endpoint and modified the protocol and the port in the LoadBalancerProbe element, but it seems this is not possible: the deployment fails with a complaint that the XML is not valid; protocol="https" is not supported there.
Then I thought I could add a second input endpoint using http that is used only for probing, and block traffic from other networks using an endpoint ACL so that only the load balancer can access it (a sketch of such an ACL is below). It works, or at least I can see in the IIS log that the load balancer calls the /probe endpoint, but when it returns status 500 it takes only this endpoint out of rotation, not the entire web role or instance of the cloud service. Calls through the HttpsIn endpoint still hit the machine where the probe endpoint returns 500.
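For reference, an endpoint ACL of that kind goes in the .cscfg NetworkConfiguration; a rough sketch of the schema (the ACL name and deny-all rule are just examples, and in the scenario above the load balancer probe still reached /probe with the ACL in place):
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="probeOnly">
      <Rule action="deny" description="Deny public traffic" order="100" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="dec-api" endPoint="internalProbingEndpoint" accessControl="probeOnly" />
  </EndpointAcls>
</NetworkConfiguration>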
The Question: Is there a way to configure the Azure LoadBalancer for a CloudService to use a custom endpoint for probing when HTTPS is used?
Is there a workaround if that is not supported?
Any help or hint would be greatly appreciated.
Thanks
As far as I know it is - unfortunately - not possible to restrict an Azure website to be available to Azure-internal services only, since Websites do not support virtual networks - currently.
Is this still correct?
If yes... I'm thinking of creating an Azure worker role instead to host my services. Is it possible to make the service only available to the websites from my subscription?
Thank you in advance
best
laurin
Laurin - you are correct - while Websites can utilise Hybrid Connections to connect back to services on-premises, they aren't actually able to connect to (and be restricted to) internal Azure services.
If you use a Web Role you will need to set up a Virtual Network with an appropriate private IP address range and then ensure you add your Web Role to this Virtual Network. This is done by editing the service configuration of your Cloud Service deployment in Visual Studio and making it similar to the below:
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration ...>
<Role name="WebRole1">
...
</Role>
<NetworkConfiguration>
<Dns>
<DnsServers>
<DnsServer name="YourDns" IPAddress="10.4.3.1" />
</DnsServers>
</Dns>
<VirtualNetworkSite name="YourVirtualNetwork" />
<AddressAssignments>
<InstanceAddress roleName="WebRole1">
<Subnets>
<Subnet name="FrontEndSubnet" />
</Subnets>
</InstanceAddress>
</AddressAssignments>
</NetworkConfiguration>
</ServiceConfiguration>