Endpoint configuration for Service Fabric - Azure

I deployed an app on Service Fabric and there's an HTTP listener spawned inside. How can I configure the listening URL in relation to app/cluster?
More precisely, is there any way to build this URL inside the app by retrieving some environment/role parameter?
Suppose my cluster is called "test", then it will be available at: test.northeurope.cloudapp.azure.com. If I have an app called "Sample" for which I configured an endpoint called "SampleTypeEndpoint" inside ServiceManifest.xml, what would be the complete URL my app would listen to?

The endpoints you configure in ServiceManifest.xml right now fulfill two purposes:
Allow Service Fabric to provide a unique port from an application port range, if you don't need a well-known port.
When opening a web server that uses http.sys, allow Service Fabric to set up URL ACLs for a random port or a well-known port (80, 443, etc.) and certificate ACLs for HTTPS.
That's basically it. The actual address on which you open a listener is up to you to determine. Typically, you open a listener on the node IP and use a NAT for ingress traffic on a domain name. In Azure, the NAT is the Azure Load Balancer which is automatically configured to accept traffic on your cluster's VIP as well as the .region.cloudapp.azure.com domain.
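For illustration, here is roughly what those two cases look like in the Resources section of ServiceManifest.xml (the endpoint names and the certificate reference are placeholders):

```xml
<Resources>
  <Endpoints>
    <!-- No Port attribute: Service Fabric assigns a port from the application port range. -->
    <Endpoint Name="DynamicPortEndpoint" Protocol="http" Type="Input" />

    <!-- Well-known port: Service Fabric registers the URL ACL (and the certificate ACL for HTTPS). -->
    <Endpoint Name="SecureEndpoint" Protocol="https" Type="Input" Port="443" CertificateRef="MyCert" />
  </Endpoints>
</Resources>
```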
Here's a more thorough overview of how this works on a Service Fabric cluster in Azure: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
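As a minimal sketch of the second part of the question (building the address inside the app), assuming the "SampleTypeEndpoint" endpoint from the question and the standard System.Fabric APIs, something along these lines is typical inside a communication listener:

```csharp
using System.Fabric;
using System.Fabric.Description;

static (string listenUrl, string publishUrl) BuildAddresses(ServiceContext context)
{
    // Port and protocol come from the endpoint declared in ServiceManifest.xml.
    EndpointResourceDescription endpoint =
        context.CodePackageActivationContext.GetEndpoint("SampleTypeEndpoint");

    string scheme = endpoint.Protocol.ToString().ToLowerInvariant(); // "http" or "https"
    string host = context.NodeContext.IPAddressOrFQDN;               // the node, not the cluster domain

    // Bind on the node (a '+' wildcard works for http.sys-based hosts);
    // publish the node address for clients/naming service.
    string listenUrl  = $"{scheme}://+:{endpoint.Port}/";
    string publishUrl = $"{scheme}://{host}:{endpoint.Port}/";
    return (listenUrl, publishUrl);
}
```

Note that the cluster domain never appears in the listener itself: if the endpoint uses, say, port 8080 and the load balancer has a matching rule, clients would reach the service at http://test.northeurope.cloudapp.azure.com:8080/ while the service listens on the node address.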

Related

Azure Application Gateway - How to control traffic for different applications

I am creating an application gateway that will be a single point of entry for my multi-tenant application. That means multiple application requests will hit this application gateway and then need to be redirected to the right backend pool. If I have application A deployed in App Service A, it should listen on port 80 of the app gateway. Similarly, if I have another application, I want to expose it the same way on a different port. How can I achieve this? I tried creating multiple rules, but it is not working.
If I read your question correctly, you want multiple app services, each on a potentially different port, to be served by a single application gateway. And it sounds like you might also want to make requests to that application gateway on different ports. Sound right?
If so, then what you need to do is something along these lines:
Set up a backend pool for each app service.
Set up an HTTP setting for each backend pool, specifying port, session affinity, protocol, etc. - This will be the port that your App Service takes requests on.
Create a front-end IP configuration to expose a public and/or private IP address.
Create a listener for each app service and port that you want to support. This will be the port you want the client requests to use. You can do two listeners per service to allow both port 80 and 443 (HTTP and HTTPS) traffic, for example.
Create a rule to connect each listener to its backend pool and HTTP setting combination.
Optional - set up health probes that target monitoring endpoints based on a URL and a specific HTTP setting entry.

Address of a REST API hosted on Service Fabric

Suppose I have a stateless service running in Service Fabric and I have 5 nodes in my cluster. Since each node in the cluster runs an instance of the stateless service, there will be 5 instances of my stateless service across the 5 nodes.
But since each node has a different IP address and port on which it can host the service, there can be multiple different endpoint addresses at which my service is hosted.
My service is actually a REST API providing some CRUD functionality.
I have set the port number to 8080 in the ServiceManifest.xml file.
My questions are: does setting the port number explicitly in ServiceManifest.xml disable dynamic port selection? Will this make every node in the cluster use the same port, i.e. 8080, in the service's endpoint address?
Another question: if the service is moved to another machine and deployed there, can having 8080 as the port cause a conflict if some other service on that node is already using the same port, i.e. 8080?
How do we let the client know at which endpoint address my API is hosted?
Does setting the port number explicitly in ServiceManifest.xml disable dynamic port selection?
Yes, it does.
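For reference, "setting the port explicitly" means giving the endpoint a Port attribute in ServiceManifest.xml, roughly like this (the endpoint name is illustrative):

```xml
<Resources>
  <Endpoints>
    <!-- A fixed port disables dynamic port assignment for this endpoint. -->
    <Endpoint Name="ServiceEndpoint" Protocol="http" Type="Input" Port="8080" />
  </Endpoints>
</Resources>
```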
Will this make every node in the cluster use the same port, i.e. 8080, in the service's endpoint address?
Yes, if you set the instance count to -1, all nodes will run the service on that port. You can call the instances through the external load balancer (from outside the cluster) or directly on the node IP / localhost (service-to-service).
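The instance count is set in ApplicationManifest.xml (or at service creation time); a sketch of the default-services form, with illustrative names:

```xml
<DefaultServices>
  <Service Name="SampleService">
    <!-- InstanceCount="-1" means one instance on every node of the node type. -->
    <StatelessService ServiceTypeName="SampleServiceType" InstanceCount="-1">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
```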
If the service is moved to another machine and deployed there, can having 8080 as the port cause a conflict if some other service on that node is already using the same port, i.e. 8080?
There will never be more than 1 instance of 1 stateless service on 1 node within the same application; SF takes care of this. However, if another service is already using the port, it cannot be shared unless you use a web server that supports port sharing, like http.sys.
To deal with port conflicts, have a look at the built-in reverse proxy or Traefik. Using a reverse proxy takes away the pain of managing ports, and allows you to call your service by its application and service name.
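Assuming the built-in reverse proxy is enabled on its default port 19081, the addressing scheme looks roughly like this (the application and service names are placeholders):

```
http://<cluster-fqdn-or-localhost>:19081/SampleApp/SampleService/api/values
```

The reverse proxy resolves the service's current address and port for you, so callers never need to know which node or port the service happens to be listening on.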

Service Fabric Cluster Custom endpoints vs Application start and end ports

This screenshot was taken while creating a cluster in the Azure portal. As shown in the picture below,
what is the difference between the Custom endpoints field and the Application start/end port range?
Why is one called an endpoint and the other called a port?
The Custom endpoints help text says: "Custom endpoints allow for connections to applications running on this node type. Enter endpoints separated by a comma."
In the documentation here and here it is explained very clearly:
Custom endpoints: This field allows you to enter a comma-separated list of ports that you want to expose through the Azure Load Balancer to the public Internet for your applications. For example, if you plan to deploy a web application to your cluster, enter "80" here to allow traffic on port 80 into your cluster.
Application Ports (Start|End): are the ports that are used by the Service Fabric applications. The application port range should be large enough to cover the endpoint requirement of your applications. This range should be exclusive from the dynamic port range on the machine, that is, the EphemeralPorts range as set in the configuration. Service Fabric uses these ports whenever new ports are required and takes care of opening the firewall for these ports.
In Summary:
Custom endpoints are ports opened in the Load Balancer to enable external access.
Application ports are a range of ports reserved on the nodes and assigned to services that use dynamically allocated ports; they are not externally accessible.
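In the cluster's ARM template the two settings end up in different places: custom endpoints become load-balancing rules on the Microsoft.Network/loadBalancers resource, while the application port range is part of the node type on the Microsoft.ServiceFabric/clusters resource. A rough sketch of a node type entry (values are only examples):

```json
{
  "name": "nt1",
  "applicationPorts": { "startPort": 20000, "endPort": 30000 },
  "ephemeralPorts":   { "startPort": 49152, "endPort": 65534 },
  "clientConnectionEndpointPort": 19000,
  "httpGatewayEndpointPort": 19080,
  "isPrimary": true,
  "vmInstanceCount": 5
}
```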

Error 502 while using Application Gateway with App Service Environment

I have set up an App Service Environment and am trying to access the web apps inside it through an Application Gateway. Below are the steps I followed to create the required setup; however, I get "502 - Web server received an invalid response while acting as a gateway or proxy server" when I hit the URL that is mapped to the application gateway's public URL.
Created a VNet and an App Service Environment inside a separate subnet, using the subdomain dev.xyz.com. I used an ILB wildcard certificate here, issued to *.xyz.com.
Created an app inside the App Service Environment named "dev-web.dev.xyz.com" and added the externally accessible DNS name "dev-web.xyz.com" as a custom domain.
Created an Application Gateway and added the internal IP address of the ILB (App Service Environment) as the backend pool.
Created the App Gateway HTTP settings using port 80 and mapped them to a custom probe.
Created the App Gateway custom probe; the host name used here is the externally accessible DNS name, "dev-web.xyz.com".
Created the App Gateway listener using the externally accessible DNS name, "dev-web.xyz.com", as the host name.
Added a basic rule mapping the above resources to each other.
I am still not able to access my web app when browsing to dev-web.xyz.com.
I am not sure how the port number used to create the listener affects the setup, or whether I am missing anything.
I also want to implement SSL once I am done with the above testing; I would appreciate inputs on how to implement that for this setup.
Created the App Gateway listener using the externally accessible DNS name "dev-web.xyz.com" as the host name.
After you create your App Gateway, a default listener is created for you, bound to the front-end IP and port 80. A listener means the App Gateway monitors requests sent to that IP address and port and forwards them to the backend resources. Since you added the host name 'dev-web.xyz.com' as a listener, the App Gateway will also monitor requests sent to that host. This causes an infinite forwarding loop, because the listener host is also configured as the backend host.
To fix the error, you need to remove the App Gateway listener record that you added.
I was able to resolve the issue by mapping the correct port for the listener. The listener does no harm if you have the correct rule set up in the configuration.

Service Fabric URL routing

I am using the Azure Load Balancer with Azure Service Fabric to host multiple self-hosted web applications. I'd like to create a rule that allows me to route based on the user's requested URL.
So, for example, if a user navigates to:
http://domain.com/Site1 then the rule would route to:
http://domain.com:8181/Site1 within the cluster.
If the user navigates to:
http://domain.com/Site2 then the rule would route to:
http://domain.com:8282/Site2 within the cluster.
Is this possible with Azure Service Fabric / the Azure Load Balancer?
The Azure Load Balancer only forwards traffic it receives on a port to a node in your cluster on another port (can be the same port or a different internal port). It operates on Layer 4 (TCP, UDP) so it doesn't know anything about HTTP or URLs (although it does allow HTTP probes).
Here are a couple options for multiple web sites:
If you want your web sites hosted internally on different ports (8181 and 8282), then you'll need something else to do URL routing. Azure Traffic Manager or Azure Application Gateway are possible options that would run outside your cluster. Your Azure Load Balancer would need to open a port for each web site, but the benefit is that this way you can run your web sites on dedicated nodes, and the ALB automatically routes traffic to the appropriate nodes based on which ports are open.
Alternatively, you can set up your own stateless routing service that runs inside your cluster.
Or you can skip routing altogether and just host all of your websites on port 80/443. As long as you're using an http.sys-based web host, which includes Katana, ASP.NET Core 1 WebListener, or anything you build on HttpListener, you can use the same port for all your websites and let the underlying http server route according to either a URL path or hostname, both of which are supported.
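As a small sketch of that last option, assuming plain HttpListener (the same idea applies to Katana or WebListener hosts) and illustrative host/path names, several sites can share port 80 because http.sys dispatches on the registered prefixes:

```csharp
using System;
using System.Net;

class SharedPortHost
{
    static void Main()
    {
        var listener = new HttpListener();

        // http.sys routes by URL prefix, so both sites can share port 80 on the same node.
        listener.Prefixes.Add("http://domain.com:80/Site1/");   // path-based
        listener.Prefixes.Add("http://site2.domain.com:80/");   // hostname-based
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // In a real deployment each site would usually be its own service/process
            // registering its own prefix; here we just branch on the matched path for brevity.
            string body = context.Request.Url.AbsolutePath.StartsWith("/Site1")
                ? "Hello from Site1"
                : "Hello from Site2";

            byte[] bytes = System.Text.Encoding.UTF8.GetBytes(body);
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
            context.Response.Close();
        }
    }
}
```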
