I recently started playing with Service Fabric. I created a new (unsecured) Service Fabric cluster on Azure and a demo solution with two stateless Web API services.
The endpoint configuration for the AnotherAPI service is the following:
<Endpoints>
  <!-- This endpoint is used by the communication listener to obtain the port on which to
       listen. Please note that if your service is partitioned, this port is shared with
       replicas of different partitions that are placed in your code. -->
  <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8698" />
</Endpoints>
I am able to access the default controller (ValuesController) using the local endpoint:
http://localhost:8698/api/values
But when I try to use the Azure endpoint, I get an ERR_CONNECTION_TIMED_OUT error in Chrome:
http://{azure-ip-address}:8698/api/values
Is there anything that I am missing?
You have to open that port in your Azure cluster via a load balancer rule and probe. You can do this at cluster creation time via an ARM template, or after the fact: for an existing cluster, go to the resource group, then the load balancer, then Probes. The default open port in Service Fabric is 19080, though; if you just switch to that port it will work, as long as you are not using SSL.
Related
In a Cloud Service Definition file, we specify the Endpoints for our Cloud Service. I understand this is used to configure Azure's own network load-balancer and also the instance's own Windows Firewall. This only needs the port number (for the load-balancer) and/or the process executable name (for Windows Firewall, if using process-based rules instead of port-number-based rules).
But what is the significance of each protocol="http|https|tcp|udp" attribute value?
My confusion is because tcp and udp are both transport-layer protocols - and that alone is enough to configure the load-balancer and firewall (as they can selectively block non-TCP or non-UDP traffic as appropriate). But what does protocol="http" or protocol="https" do, exactly?
HTTP and HTTPS are both Application-layer protocols that run on top of TCP - so if I specify protocol="tcp" for a WebRole then that should work for both HTTP and HTTPS traffic, right?
But what happens if I have a Worker Role (not a Web Role) Cloud Service which has a protocol="https" endpoint defined? What happens if my Worker Role code creates a Socket that listens on that particular port? And what happens to the incoming connection?
Does the load-balancer require the incoming connection to conform to the HTTP or HTTPS protocol specification? What if it's a polyglot protocol or some other protocol that looks like HTTP but is actually application-defined (e.g. SIP)?
If I specify https I also have to specify the certificate name - so does this mean the load-balancer is responsible for decrypting TLS communication and then forwarding cleartext to the Web Role or Worker Role socket?
What if I have a Worker Role that privately makes use of HTTP.sys (or IIS Hostable Web Core) or self-hosts ASP.NET Core, for example? How would that be configured, given that the <Sites> element is unavailable?
Does the use of protocol="https" or protocol="http" cause any configuration of HTTP.sys?
I have a Cloud Service Worker Role in Azure which has been set up with a Reserved IP address. The goal of the Reserved IP is that when the worker role makes external requests, they always come from the same IP. No external traffic is received by the service and no internal communication is required.
EDIT: The Reserved IP was associated with the Cloud Service using the following Azure PowerShell command:
Set-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
This added the following NetworkConfiguration section into the .cscfg file:
<NetworkConfiguration>
  <AddressAssignments>
    <ReservedIPs>
      <ReservedIP name="uld-sender-ip" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
Now, when I try to re-deploy the service or update the configuration settings in Azure, I get the following error:
The operation '5e6772fae607ae0ca387457883bf2974' failed: 'Validation
Errors: Error validating the .cscfg file against the .csdef file.
Severity:Error, message:ReservedIP 'uld-sender-ip' was not mapped to
an endpoint. The service definition must contain atleast one endpoint
that maps to the ReservedIP..'.
So, I have tried adding an Endpoint to the .csdef file like so:
<Endpoints>
  <InternalEndpoint name="uld-sender-ip" protocol="tcp" port="8080" />
</Endpoints>
In addition, I have added NetworkTrafficRules to the .csdef like so:
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="uld-sender-ip" roleName="Sender"/>
    </Destinations>
    <AllowAllTraffic/>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
But I still get the same error.
My understanding is that endpoints are only required for internal communication between worker/web roles, or to open a port to receive external communication.
EDIT: My question is how do you map a Reserved IP to an Endpoint for this scenario?
To avoid getting the error while trying to update the configuration settings or re-deploy the service, I ran the Azure PowerShell command to remove the reserved IP association with the service:
Remove-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
Then I was able to edit and save the configuration settings in Azure, and/or re-deploy the service. Once the service was updated, I ran the Azure PowerShell command to re-create the reserved IP association with the service:
Set-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
This is obviously not the ideal solution but at least I can make changes to the service if needed. Hope this helps someone.
I have a Service Fabric cluster deployed with a domain of foo.northcentralus.cloudapp.azure.com.
It has a single node type with a single public IP address / load balancer.
Let's say I have the following two apps deployed:
http://foo.northcentralus.cloudapp.azure.com:8101/wordcount/
http://foo.northcentralus.cloudapp.azure.com:8102/visualobjects/
How can I set this up so I can have multiple domains, each hosted on port 80 (assuming I own both of these domains, obviously)?
http://www.wordcount.com
http://www.visualobjects.com
Do I need more than one public IP address in my cluster to support this?
You should be able to do this with a single public IP address through some http.sys magic.
Assuming you're using Katana for your web host (the word count and visual object samples you reference use Katana), it should be as simple as starting the server using the domain name in the URL:
WebApp.Start("http://visualobjects.com:80", appBuilder => this.startup.Invoke(appBuilder));
The underlying Windows HTTP Server API will register that server with that URL, and any HTTP request that comes in with a Host: visualobjects.com header will automatically be routed to that server. Repeat for any number of servers with their own hostname. This is the host routing that http.sys does for multi-website hosting on a single machine, same as you had in IIS.
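For illustration, here is a rough sketch of two self-hosted servers sharing port 80 under different hostnames. This assumes the Microsoft.Owin.Hosting (Katana) self-host package; the WordCount and VisualObjects pipelines below are hypothetical stand-ins for the real startup code:

using System;
using Microsoft.Owin.Hosting;
using Owin;

internal static class Program
{
    private static void Main()
    {
        // Both servers bind to port 80, but each registers a different hostname with
        // http.sys, so the Host header of an incoming request decides which one gets it.
        using (WebApp.Start("http://www.wordcount.com:80", WordCount))
        using (WebApp.Start("http://www.visualobjects.com:80", VisualObjects))
        {
            Console.WriteLine("Listening on www.wordcount.com and www.visualobjects.com, port 80.");
            Console.ReadLine();
        }
    }

    // Hypothetical OWIN pipelines standing in for the actual application startup code.
    private static void WordCount(IAppBuilder app)
    {
        app.Run(ctx => ctx.Response.WriteAsync("wordcount"));
    }

    private static void VisualObjects(IAppBuilder app)
    {
        app.Run(ctx => ctx.Response.WriteAsync("visualobjects"));
    }
}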
The problem you'll run into is with reserving the hostname, which you have to do under an elevated user account before you open the server. Service Fabric has limited support for this in the form of Endpoint configuration in ServiceManifest.xml:
<!-- THIS WON'T WORK FOR REGISTERING HOSTNAMES -->
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
The limitation here is that there is nowhere to specify a hostname, so Service Fabric will always register "http://+:[port]". That unfortunately doesn't work if you want to open a server on a specific hostname - you need to register just the hostname you want to use. You have to do that part manually using netsh (and remove any Endpoints for the same port from ServiceManifest.xml, otherwise it will override your hostname registration).
To register the hostname with http.sys manually, you have to run netsh for the hostname, port, and user account under which your service runs, which by default is Network Service:
netsh http add urlacl url=http://visualobjects.com:80/ user="NT AUTHORITY\NETWORK SERVICE"
But you have to do this from an elevated account on each machine the service will run on. Luckily, we have service setup entry points that can run under elevated account privileges.
Edit:
One thing you will need to do in order for this to work is open up the firewall on the same port you are listening on. You can do that with the following:
<Resources>
  <Endpoints>
    <Endpoint Protocol="tcp" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
Note that the Protocol is tcp instead of http. This will open up the firewall port but not override the http.sys registration that you set up with the .bat script.
I believe so: separate public IPs, which can then be set up to route traffic from those IPs to the same backend pools, but on different ports.
I deployed an app on Service Fabric, and it spawns an HTTP listener inside. How can I configure the listening URL in relation to the app/cluster?
More precisely, is there any way to build this URL inside the app by retrieving some environment/role parameter?
Suppose my cluster is called "test"; it will then be available at test.northeurope.cloudapp.azure.com. If I have an app called "Sample" for which I configured an endpoint called "SampleTypeEndpoint" inside ServiceManifest.xml, what would be the complete URL my app would listen on?
The endpoints you configure in ServiceManifest.xml right now fulfill two purposes:
Allow Service Fabric to provide a unique port from an application port range, if you don't need a well-known port.
When opening a web server that uses http.sys, allow Service Fabric to set up URL ACLs for a random port or a well-known port (80, 443, etc) and certificate ACLs for HTTPS.
That's basically it. The actual address on which you open a listener is up to you to determine. Typically, you open a listener on the node IP and use a NAT for ingress traffic on a domain name. In Azure, the NAT is the Azure Load Balancer which is automatically configured to accept traffic on your cluster's VIP as well as the .region.cloudapp.azure.com domain.
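As a rough sketch (assuming a Reliable Services stateless service and the SampleTypeEndpoint name from the question), you could derive the listening address from the endpoint resource and the node context like this:

using System.Fabric;
using System.Fabric.Description;

internal static class ListenerAddress
{
    // Resolve the port Service Fabric assigned (or the well-known port you declared)
    // for the "SampleTypeEndpoint" endpoint resource and build a local listening URL.
    public static string Build(ServiceContext context)
    {
        EndpointResourceDescription endpoint =
            context.CodePackageActivationContext.GetEndpoint("SampleTypeEndpoint");

        string scheme = endpoint.Protocol.ToString().ToLowerInvariant(); // "http" or "https"

        // Listen on the node's IP/FQDN; external clients reach it through the Azure
        // Load Balancer on the cluster's cloudapp.azure.com domain, on the same port.
        return $"{scheme}://{context.NodeContext.IPAddressOrFQDN}:{endpoint.Port}/";
    }
}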
Here's a more thorough overview of how this works on Service Fabric cluster in Azure: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
Can someone please post sample code for using InstanceInput endpoints?
I used the below configuration in a worker role where a sample WCF service listens at port 8080.
<Endpoints>
  <InstanceInputEndpoint name="InstanceAccess" protocol="tcp" localPort="8080">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10105" min="10101" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
But I was not able to access this WCF service from an external consumer using any of the ports 10101 to 10105. Should we use the public DNS name of the Azure service along with the public ports in the given range?
Also, I was not able to access these endpoint details from within the worker role's OnStart() method. I used RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InstanceAccess"], but it does not return a RoleInstanceEndpoint. Am I missing something here?
Here is a sample Visual Studio solution which uses Azure InstanceInput endpoint and hosts a WCF service on a worker role. The WCF service running on each of the individual instances can be accessed using the Azure DNS name and the public port mapped to that instance. I used the following endpoint configuration.
<Endpoints>
  <InstanceInputEndpoint name="Endpoint1" protocol="tcp" localPort="10100">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10110" min="10106" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
This endpoint was somehow not accessible from within the WorkerRole (both OnStart() and Run() methods). So I used 'localhost'.
// Fall back to localhost when the endpoint details are not visible from inside the role instance.
string endpointIP = "localhost:10100";
if (RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.Keys.Contains("Endpoint1"))
{
    IPEndPoint externalEndPoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
    endpointIP = externalEndPoint.ToString();
}
The solution also contains a console client which uses the hosted DNS name to invoke these individual WCF services.
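For reference, a minimal sketch of such a console client follows. The IEchoService contract, the myservice.cloudapp.net host name, and the NetTcpBinding without security are all assumptions; substitute your own contract, binding, and deployment name:

using System;
using System.ServiceModel;

// Hypothetical contract standing in for the actual WCF service contract.
[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string message);
}

internal static class Client
{
    private static void Main()
    {
        // Each public port in the InstanceInput range (10106-10110) maps to exactly one
        // role instance, so looping over the ports addresses each instance in turn.
        for (int port = 10106; port <= 10110; port++)
        {
            var address = new EndpointAddress($"net.tcp://myservice.cloudapp.net:{port}/EchoService");
            var factory = new ChannelFactory<IEchoService>(new NetTcpBinding(SecurityMode.None), address);
            IEchoService channel = factory.CreateChannel();

            Console.WriteLine($"{port}: {channel.Echo("hello")}");

            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}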
The InstanceInput endpoint does not work locally, but once deployed it works fine, and a different public port is assigned to each instance based on the port range. Note that you cannot create more instances than the port range specified in the configuration allows; for example, with a port range of 101 - 105 you can create only 5 instances.