I have a service fabric cluster deployed with a domain of foo.northcentralus.cloudapp.azure.com
It has a single node type with a single public IP address / load balancer.
Let's say I have the following two apps deployed:
http://foo.northcentralus.cloudapp.azure.com:8101/wordcount/
http://foo.northcentralus.cloudapp.azure.com:8102/visualobjects/
How can I set this up so I can have multiple domains each hosted on port 80? (assuming I own both of these domains obviously)
http://www.wordcount.com
http://www.visualobjects.com
Do I need more than one public IP address in my cluster to support this?
You should be able to do this with a single public IP address through some http.sys magic.
Assuming you're using Katana for your web host (the word count and visual object samples you reference use Katana), then it should be as simple as starting the server using the domain name in the URL:
WebApp.Start("http://visualobjects.com:80", appBuilder => this.startup.Invoke(appBuilder));
The underlying Windows HTTP Server API will register that server with that URL, and any HTTP request that comes in with a Host: visualobjects.com header will automatically be routed to that server. Repeat for any number of servers with their own hostname. This is the host routing that http.sys does for multi-website hosting on a single machine, same as you had in IIS.
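For example, a minimal self-host sketch that puts two hostnames on port 80 behind the same http.sys instance could look like the following (the startup classes here are hypothetical stand-ins, not the actual sample code):

using System;
using Microsoft.Owin;
using Microsoft.Owin.Hosting;
using Owin;

// Hypothetical startup classes standing in for the word count and
// visual objects web hosts from the samples.
class WordCountStartup
{
    public void Configuration(IAppBuilder app)
    {
        app.Run(ctx => ctx.Response.WriteAsync("word count"));
    }
}

class VisualObjectsStartup
{
    public void Configuration(IAppBuilder app)
    {
        app.Run(ctx => ctx.Response.WriteAsync("visual objects"));
    }
}

class Program
{
    static void Main()
    {
        // Each WebApp.Start call makes its own http.sys registration on port 80.
        // http.sys then routes incoming requests by Host header, so both servers
        // share the same port and the same public IP.
        using (WebApp.Start<WordCountStartup>("http://www.wordcount.com:80"))
        using (WebApp.Start<VisualObjectsStartup>("http://www.visualobjects.com:80"))
        {
            Console.WriteLine("Listening on both hostnames; press Enter to exit.");
            Console.ReadLine();
        }
    }
}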
The problem you'll run into is with reserving the hostname, which you have to do under an elevated user account before you open the server. Service Fabric has limited support for this in the form of Endpoint configuration in ServiceManifest.xml:
<!-- THIS WON'T WORK FOR REGISTERING HOSTNAMES -->
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
The limitation here is that there is nowhere to specify a hostname, so Service Fabric will always register "http://+:[port]". That unfortunately doesn't work if you want to open a server on a specific hostname - you need to register just the hostname you want to use. You have to do that part manually using netsh (and remove any Endpoints for the same port from ServiceManifest.xml, otherwise it will override your hostname registration).
To register the hostname with http.sys manually, you have to run netsh for the hostname, port, and user account under which your service runs, which by default is Network Service:
netsh http add urlacl url=http://visualobjects.com:80/ user="NT AUTHORITY\NETWORK SERVICE"
But you have to do this from an elevated account on each machine the service will run on. Luckily, we have service setup entry points that can run under elevated account privileges.
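As a rough sketch (the file and exe names here are made up, not from the samples), the registration can be wired up as a setup entry point in ServiceManifest.xml, with the batch file containing the netsh line shown above; you'd also need a RunAs policy in ApplicationManifest.xml so the setup entry point runs under an elevated account:

<CodePackage Name="Code" Version="1.0.0">
  <!-- Runs before the main entry point; use a RunAs policy in
       ApplicationManifest.xml to run it elevated. -->
  <SetupEntryPoint>
    <ExeHost>
      <Program>RegisterHostname.bat</Program>
    </ExeHost>
  </SetupEntryPoint>
  <EntryPoint>
    <ExeHost>
      <Program>VisualObjects.Web.exe</Program>
    </ExeHost>
  </EntryPoint>
</CodePackage>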
Edit:
One thing you will need to do in order for this to work is open up the firewall on the same port you are listening on. You can do that with the following:
<Resources>
  <Endpoints>
    <Endpoint Protocol="tcp" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
Note that the Protocol is tcp instead of http. This will open up the firewall port but not override the http.sys registration that you set up with the .bat script.
I believe so. You would need separate public IPs, which can then be set up to route from those IPs to the same backend pools, but on different ports.
In a Cloud Service Definition file, we specify the Endpoints for our Cloud Service. I understand this is used to configure Azure's own network load-balancer and also the instance's own Windows Firewall. This only needs the port number (for the load-balancer) and/or the process executable name (for Windows Firewall, if using process-based rules instead of port-number-based rules).
But what is the significance of each protocol="http|https|tcp|udp" attribute value?
My confusion is because tcp and udp are both transport-layer protocols - and that alone is enough to configure the load-balancer and firewall (as they can selectively block non-TCP or non-UDP traffic as appropriate). But what does protocol="http" or protocol="https" do, exactly?
HTTP and HTTPS are both Application-layer protocols that run on top of TCP - so if I specify protocol="tcp" for a WebRole then that should work for both HTTP and HTTPS traffic, right?
But what happens if I have a Worker Role (not a Web Role) Cloud Service which has a protocol="https" endpoint defined? What happens if my Worker Role code creates a Socket that listens on that particular port? And what happens to the incoming connection?
Does the load-balancer require the incoming connection to conform to the HTTP or HTTPS protocol specification? What if it's a polyglot protocol or some other protocol that looks like HTTP but is actually application-defined (e.g. SIP)?
If I specify https I also have to specify the certificate name - so this must mean the load-balancer is responsible for decrypting TLS communication and then forwarding cleartext to the Web Role or Worker Role socket?
What if I have a Worker Role that privately makes use of HTTP.sys (or IIS Hostable Web Core) or self-hosts ASP.NET Core, for example, how would that be configured as the <Sites> element is unavailable?
Does the use of protocol="https" or protocol="http" cause any configuration of HTTP.sys?
Is there any way to expose a microservice endpoint without a port number in Azure Service Fabric? A port number can be defined in ServiceManifest.xml or dynamically assigned by the Service Fabric cluster, but how do you call a service without specifying a port number?
Of course you do not have to specify a port number if you do not need one; Service Fabric will automatically assign a port to your service. I also do not define port numbers, because we have 100+ services and it is "a little bit hard" to manage that many manually.
Just omit the Port attribute in ServiceManifest.xml:
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="UserHttpEndpoint" Type="Input" />
    <Endpoint Protocol="tcp" Name="UserRpcEndpoint" Type="Input" />
  </Endpoints>
</Resources>
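At runtime the service can then ask Service Fabric which port it was actually given. A minimal sketch, assuming a Reliable Services project (the endpoint name must match the Name attribute in the manifest):

using System.Fabric;

internal static class EndpointHelper
{
    // serviceContext is the StatelessServiceContext / StatefulServiceContext
    // that Service Fabric passes to your service.
    public static string BuildListeningAddress(ServiceContext serviceContext)
    {
        // "UserHttpEndpoint" must match the endpoint Name in ServiceManifest.xml.
        var endpoint = serviceContext.CodePackageActivationContext.GetEndpoint("UserHttpEndpoint");
        int port = endpoint.Port; // the port Service Fabric assigned dynamically
        string host = serviceContext.NodeContext.IPAddressOrFQDN;
        return string.Format("http://{0}:{1}/", host, port);
    }
}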
An endpoint would be useless without a port. So even if you could have one, you shouldn't want it. You are probably looking for a way to call the service without knowing its port number. This can be achieved by using a reverse proxy. With a reverse proxy you can call a service by providing the port of the reverse proxy.
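For example, with the reverse proxy enabled on its default port 19081, a caller only needs the application and service names; a rough sketch (the application and service names here are made up):

using System;
using System.Net.Http;

class ReverseProxyClient
{
    static void Main()
    {
        // The reverse proxy listens on every node (port 19081 by default) and
        // resolves http://<proxy>/<AppName>/<ServiceName>/<suffix> to whatever
        // port the target service instance is actually listening on.
        using (var client = new HttpClient())
        {
            string url = "http://localhost:19081/MyApp/UserService/api/values";
            string body = client.GetStringAsync(url).Result;
            Console.WriteLine(body);
        }
    }
}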
I started to play with Service Fabric very recently. I added a new Service Fabric cluster on Azure (unsecured) and created a demo solution with two stateless Web API services.
Endpoint configuration for AnotherAPI is the following:
<Endpoints>
  <!-- This endpoint is used by the communication listener to obtain the port on which to
       listen. Please note that if your service is partitioned, this port is shared with
       replicas of different partitions that are placed in your code. -->
  <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8698" />
</Endpoints>
I am able to access the default controller (ValuesController) using the local endpoint:
http://localhost:8698/api/values
But when I try to use the Azure endpoint I get an ERR_CONNECTION_TIMED_OUT error in Chrome.
http://{azure-ip-address}:8698/api/values
Is there anything that I am missing?
You have to open that port on your Azure cluster's load balancer by adding a load-balancing rule and probe. You can do this at cluster creation time via the ARM template, or after the fact. For an existing cluster, go to the resource group, then the Load Balancer, then Probes (and Load balancing rules). The default open port in Service Fabric is 19080, though; if you just switch to that port it will work, as long as you are not using SSL.
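If you prefer scripting it, here is a rough sketch for port 8698 using the AzureRM PowerShell cmdlets (the resource-group, load-balancer, and rule names are illustrative):

# Assumes the AzureRM module is installed and you are already logged in.
$lb = Get-AzureRmLoadBalancer -ResourceGroupName "MyResourceGroup" -Name "LB-MyCluster"

# Probe so the load balancer knows which nodes are healthy on 8698.
Add-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "AppProbe8698" `
    -Protocol Tcp -Port 8698 -IntervalInSeconds 15 -ProbeCount 2

# Rule that forwards public port 8698 to the same port on the node type.
Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb -Name "AppRule8698" `
    -Protocol Tcp -FrontendPort 8698 -BackendPort 8698 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe (Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "AppProbe8698")

Set-AzureRmLoadBalancer -LoadBalancer $lb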
I was wondering if it is possible to connect to ServiceInsight hosted on a Virtual Machine from my local machine. What I mean is:
- I have ServiceInsight installed on a VM in the cloud
- I can remote into it via Remote Desktop
- I can launch ServiceInsight on the box to view message traffic
However, I also have ServiceInsight installed locally, and when I attempt to connect to the ServiceControl instance hosted on my VM I'm not sure how to do this. Looking at the Particular website, I can't find much documentation either. ServiceControl expects a URL, which I believe should be http://serviceins.cloudapp.net:33333/api/, however this resolves to nothing.
My VM is named serviceins.
I have made changes to ServiceControl.config:
<appSettings>
  <add key="ServiceControl/Hostname" value="serviceins.cloudapp.net"/>
  <add key="ServiceControl/HoursToKeepMessagesBeforeExpiring" value="24"/>
</appSettings>
and to ServicePulse.config:
service_control_url: 'http://serviceins.cloudapp.net:33333/api/'
I guess my question is: how can I access ServiceInsight without having to remote onto the VM? Can I access this simply by providing a URL to ServiceInsight?
Thanks, DS.
Security Warning
ServiceControl has no built-in security layer, so if you expose the API URL to the Internet then all of the messages stored in ServiceControl will be accessible to anyone who can connect to port 33333. This is why it's restricted to localhost by default.
I can't stress enough that this should not be done on a production system.
For Azure, a more secure method would be to use something like a point-to-site VPN connection (see: https://msdn.microsoft.com/en-us/library/azure/jj156206.aspx), but this may require a bit of reconfiguration.
If you are still keen to expose the URL in an insecure way here is how you would go about it:
1. Set the hostname in the App.config to a wildcard:
<add key="ServiceControl/HostName" value="*" />
2. Update the URLACL to respond to the wildcard.
You can view the URLACL settings by issuing this command at cmd prompt:
netsh http show urlacl
If you have an existing setting for http://localhost:33333/api/ or http://serviceins.cloudapp.net:33333/api/, remove them using:
netsh http delete urlacl URL=http://localhost:33333/api/
netsh http delete urlacl URL=http://serviceins.cloudapp.net:33333/api/
Add the wildcard URLACL
netsh http add urlacl URL=http://*:33333/api/ User=Users
Check it via the show command; it should have an entry like this:
Reserved URL : http://*:33333/api/
    User: BUILTIN\Users
        Listen: Yes
        Delegate: No
        SDDL: D:(A;;GX;;;BU)
3. Windows Firewall
Add an inbound rule to the Windows Firewall; by default, port 33333 will be blocked for incoming connections.
You can do this from an Admin PowerShell using the following command (I'm assuming your VM is Win2012):
New-NetFirewallRule -Name ServiceControl -Direction Inbound -Protocol TCP -LocalPort 33333 -Action Allow -Enabled True
4. Add an Azure Endpoint
You'll also need to open up an Azure Endpoint connection to allow connection to port 33333. This is essentially another firewall. Rather than document this I'll refer you to Microsoft's own doco here: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/
As part of the endpoint configuration you can add some security by limiting the IP range that is allowed to connect to the port. This is really only useful if you've got a static IP.
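If the VM is a classic cloud-service VM (as the .cloudapp.net name suggests), a rough sketch of adding that endpoint with the classic Azure PowerShell cmdlets would be (the service and VM names here are assumptions based on the question):

Get-AzureVM -ServiceName "serviceins" -Name "serviceins" |
    Add-AzureEndpoint -Name "ServiceControl" -Protocol tcp -PublicPort 33333 -LocalPort 33333 |
    Update-AzureVM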
Can someone please post a sample code for using InstanceInput endpoints?
I used the below configuration in a worker role where a sample WCF service listens at port 8080.
<Endpoints>
  <InstanceInputEndpoint name="InstanceAccess" protocol="tcp" localPort="8080">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10105" min="10101" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
But I was not able to access this WCF service from an external consumer using any of the ports 10101 to 10105. Should we use the public DNS name of the Azure service along with the public ports in the given range?
Also, I was not able to access the endpoint details from within the worker role's OnStart() method. I used RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InstanceAccess"], but it does not return a RoleInstanceEndpoint. Am I missing something here?
Here is a sample Visual Studio solution which uses Azure InstanceInput endpoint and hosts a WCF service on a worker role. The WCF service running on each of the individual instances can be accessed using the Azure DNS name and the public port mapped to that instance. I used the following endpoint configuration.
<Endpoints>
  <InstanceInputEndpoint name="Endpoint1" protocol="tcp" localPort="10100">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10110" min="10106" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
This endpoint was somehow not accessible from within the worker role (in both the OnStart() and Run() methods), so I fell back to 'localhost':
// Fall back to localhost in case the InstanceInput endpoint is not
// visible from inside the role instance.
string endpointIP = "localhost:10100";
if (RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.Keys.Contains("Endpoint1"))
{
    // Resolve the local IP and port assigned to this instance's endpoint.
    IPEndPoint externalEndPoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
    endpointIP = externalEndPoint.ToString();
}
The solution also contains a console client which uses the hosted DNS name to invoke these individual WCF services.
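A rough sketch of what such a client could look like (the service contract, DNS name, and binding here are assumptions, not the sample's actual code):

using System;
using System.ServiceModel;

// The contract must match the one hosted by the worker role instances.
[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

class Client
{
    static void Main()
    {
        // Each public port in the 10106-10110 range maps to exactly one role
        // instance, so looping over the range talks to every instance in turn.
        for (int port = 10106; port <= 10110; port++)
        {
            var address = new EndpointAddress(
                string.Format("net.tcp://myservice.cloudapp.net:{0}/echo", port));
            var factory = new ChannelFactory<IEchoService>(
                new NetTcpBinding(SecurityMode.None), address);
            IEchoService proxy = factory.CreateChannel();
            Console.WriteLine("{0} -> {1}", port, proxy.Echo("hello"));
            factory.Close();
        }
    }
}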
The InstanceInput endpoint does not work locally, but once deployed it works fine and each instance is assigned a different public port from the configured range. This also caps how many instances you can create: with the port range 10101-10105 above, for example, you can create only 5 instances.