This screenshot was taken while creating a cluster on the Azure portal. As shown in the picture below,
what is the difference between Custom endpoint and application
start-end port range?
Why is one called an endpoint and the other called a port?
The Custom endpoints help text says: "Custom endpoints allow for connections to applications running on this node type. Enter endpoints separated by a comma."
In the documentation (here and here) it is explained very clearly:
Custom endpoints: This field allows you to enter a comma-separated list of ports that you want to expose through the Azure Load Balancer to the public Internet for your applications. For example, if you plan to deploy a web application to your cluster, enter "80" here to allow traffic on port 80 into your cluster.
Application Ports (Start|End): the ports that are used by Service Fabric applications. The application port range should be large enough to cover the endpoint requirements of your applications. This range should be exclusive of the dynamic port range on the machine, that is, the EphemeralPorts range as set in the configuration. Service Fabric uses these ports whenever new ports are required and takes care of opening the firewall for these ports.
In summary:
Custom endpoints are ports opened in the Load Balancer to enable external access.
Application Ports is a range of ports reserved on the nodes and assigned to services that use dynamically allocated ports; they are not externally accessible.
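For a rough picture of how these two settings surface in an ARM template, here is a minimal sketch of a Service Fabric cluster node type (names and ranges are illustrative, and unrelated properties are omitted):

    "nodeTypes": [
      {
        "name": "nt1vm",
        "isPrimary": true,
        "clientConnectionEndpointPort": 19000,
        "httpGatewayEndpointPort": 19080,
        "applicationPorts": { "startPort": 20000, "endPort": 30000 },
        "ephemeralPorts": { "startPort": 49152, "endPort": 65534 }
      }
    ]

A custom endpoint such as 80, by contrast, does not appear under the node type at all; it becomes a load-balancing rule (plus a probe) on the Azure Load Balancer resource so that public traffic on that port can reach the nodes.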
I am having a hard time finding a solution for this.
I have an Azure internal Load Balancer (layer 4), and ONLY one virtual machine acting as the backend pool for the said Load Balancer.
And the fun part starts here: I have multiple Docker containers running on that virtual machine, running Nginx web servers on ports 8080 and 8081.
And now I want to balance the load between these two ports. Literally, what I want is something like the photo below:
So according to the photo, the request comes from abc.xyz.com, hits the Load Balancer, and the Load Balancer should then route the traffic to the only VM, which runs multiple Docker containers on multiple ports.
How can I achieve this behavior?
I have already set up a frontend configuration with a private IP, a rule, and a backend pool.
As per this article (https://learn.microsoft.com/en-us/azure/container-instances/container-instances-virtual-network-concepts#unsupported-networking-scenarios), placing an Azure Load Balancer in front of container instances in a networked container group is not supported, and likewise it is not possible to route traffic to containers on their specific ports when they run on a single virtual machine. The solution above works at the VM level, not at the container level.
The only workaround for this scenario would be to use an Azure Application Gateway, since a microservice architecture is supported on Application Gateway. To probe different ports, you need to configure multiple HTTP settings (sketched after the reference link). Reference:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#can-one-backend-pool-serve-many-applications-on-different-ports
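For illustration, adding the HTTP settings with the Azure CLI might look like this (resource group and gateway names are hypothetical; one settings object per backend port):

    az network application-gateway http-settings create \
        --resource-group myRg --gateway-name myAppGw \
        --name settings8080 --port 8080 --protocol Http

    az network application-gateway http-settings create \
        --resource-group myRg --gateway-name myAppGw \
        --name settings8081 --port 8081 --protocol Http

Each routing rule can then target the same backend pool (the single VM) with a different HTTP setting, so requests land on port 8080 or 8081 of the VM.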
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications, and it can be created as an internal gateway. To do that, create an Application Gateway with both a public and a private frontend IP address and do not create any listeners for the public frontend IP address. The Application Gateway will not listen to any traffic on the public IP address if no listeners are created for it.
Reference: https://learn.microsoft.com/en-us/azure/application-gateway/configuration-front-end-ip ,
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address
Suppose I have a stateless service running in Service Fabric, and my cluster has 5 nodes. Since each node in the cluster hosts an instance of the stateless service, there will be 5 instances of my stateless service across the 5 nodes.
But since each node has a different IP address and port on which it can host the service, there can be multiple different endpoint addresses at which my service is hosted.
My service is actually a REST API providing some CRUD functionality, and I have set the port number to 8080 in the ServiceManifest.xml file.
Now my question is: does setting the port number explicitly in ServiceManifest.xml disable dynamic selection of the port? Will this make every node in the cluster use the same port, i.e. 8080, in the endpoint address of the service?
Another question: if the service is moved to another machine and deployed there, can having 8080 as the port cause a conflict if some other service in the cluster is already using port 8080?
And how do we let the client know at which endpoint address my API is hosted?
Does setting the port number explicitly in ServiceManifest.xml disable dynamic selection of the port?
Yes, it does.
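For reference, this is the part of ServiceManifest.xml that controls it, as a minimal sketch (endpoint names are illustrative):

    <Resources>
      <Endpoints>
        <!-- Fixed port: every instance listens on 8080; no dynamic assignment -->
        <Endpoint Name="ServiceEndpoint" Protocol="http" Port="8080" />
        <!-- No Port attribute: Service Fabric picks one from the application port range -->
        <Endpoint Name="DynamicEndpoint" Protocol="http" />
      </Endpoints>
    </Resources>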
Will this make every node in the cluster use the same port, i.e. 8080, in the endpoint address of the service?
Yes. If you set the instance count to -1, all nodes will run the service on that port. You can call them through the external load balancer (external-to-service) or directly on the node IP / localhost (service-to-service).
If the service is moved to another machine and deployed there, can having 8080 as the port cause a conflict if some other service in the cluster is already using port 8080?
There will never be more than one instance of a given stateless service on one node within the same application; SF takes care of this. However, if another service is already using the port, it cannot be reused unless you use a web server that supports port sharing, like http.sys.
To deal with port conflicts, have a look at the built-in reverse proxy or Traefik. Using a reverse proxy takes away the pain of managing ports, and allows you to call your service by its application and service name.
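For example, with the built-in reverse proxy (port 19081 by default, configurable per cluster), a request is addressed by application and service name instead of by port; names here are illustrative:

    http://mycluster.northeurope.cloudapp.azure.com:19081/MyApplication/MyService/api/values

The reverse proxy resolves the service's actual endpoint and forwards the request, so the caller never needs to know which port was assigned.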
We are using Kubernetes with Azure as the cloud provider. The relevant part of the setup is that we have one load balancer and one network security group attached to all worker VMs. So basically, every time I create a service, it creates a record in the load balancer's frontend IP configuration and adds a rule to the network security group with the specified destination port and source IP addresses (restricting from which source IPs the VM can be accessed on which port).
The problem with this setup is that if I have a service that uses port 5000 and is open to public IPs, and another service that also uses port 5000 but is open only to specific IPs, both services are effectively open to public IPs, because NSG rules are additive. Note that port 5000 here does not represent the actual VM node port (although that's what Azure thinks), because kube-proxy on each machine takes care of sending the traffic to the correct VM with the corresponding node port. This is why it makes sense to have two services using the same port with different ingress rules.
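For illustration, the two services described above might look like this (names are hypothetical; loadBalancerSourceRanges is what produces the source-IP restriction in the NSG):

    # Open to the public Internet
    apiVersion: v1
    kind: Service
    metadata:
      name: public-svc
    spec:
      type: LoadBalancer
      selector:
        app: public-app
      ports:
        - port: 5000
          targetPort: 8080
    ---
    # Intended to be reachable only from 203.0.113.0/24, but the additive
    # NSG rule created for public-svc on port 5000 opens this one up too
    apiVersion: v1
    kind: Service
    metadata:
      name: restricted-svc
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 203.0.113.0/24
      selector:
        app: restricted-app
      ports:
        - port: 5000
          targetPort: 8080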
Is there any way I can mitigate this problem? I can't think of an architecture that lets me have different ingress rules for multiple services with the same destination port.
Thank you
I deployed an app on Service Fabric, and there's an HTTP listener spawned inside. How can I configure the listening URL in relation to the app/cluster?
More precisely, is there any way to build this URL inside the app by retrieving some environment/role parameters?
Suppose my cluster is called "test", then it will be available at: test.northeurope.cloudapp.azure.com. If I have an app called "Sample" for which I configured an endpoint called "SampleTypeEndpoint" inside ServiceManifest.xml, what would be the complete URL my app would listen to?
The endpoints you configure in ServiceManifest.xml right now fulfill two purposes:
Allow Service Fabric to provide a unique port from an application port range, if you don't need a well-known port.
When opening a web server that uses http.sys, allow Service Fabric to set up URL ACLs for a random port or a well-known port (80, 443, etc) and certificate ACLs for HTTPS.
That's basically it. The actual address on which you open a listener is up to you to determine. Typically, you open a listener on the node IP and use a NAT for ingress traffic on a domain name. In Azure, the NAT is the Azure Load Balancer, which is automatically configured to accept traffic on your cluster's VIP as well as the <cluster>.<region>.cloudapp.azure.com domain.
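To make that concrete, here is a minimal sketch (assuming a Reliable Services stateless service and the "SampleTypeEndpoint" endpoint from the question; MyCommunicationListener stands in for whatever ICommunicationListener hosts your web server):

    using System.Collections.Generic;
    using System.Fabric;
    using Microsoft.ServiceFabric.Services.Communication.Runtime;
    using Microsoft.ServiceFabric.Services.Runtime;

    internal sealed class SampleService : StatelessService
    {
        public SampleService(StatelessServiceContext context) : base(context) { }

        protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        {
            return new[]
            {
                new ServiceInstanceListener(context =>
                {
                    // The port comes from the endpoint declared in ServiceManifest.xml
                    var endpoint = context.CodePackageActivationContext
                                          .GetEndpoint("SampleTypeEndpoint");

                    // Listen on the node address; external clients reach the service
                    // through the load balancer, e.g. test.northeurope.cloudapp.azure.com:{port}
                    var url = $"http://{context.NodeContext.IPAddressOrFQDN}:{endpoint.Port}";
                    return new MyCommunicationListener(url);
                })
            };
        }
    }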
Here's a more thorough overview of how this works on Service Fabric cluster in Azure: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
We are trying to migrate our platform from classic IIS hosting to a Service Fabric microservice architecture. So far we have learned that a Service Fabric cluster lives in a virtual machine scale set and uses a load balancer to communicate with the outside world.
The problem we are now facing is that we have different access points to our application, like one for the browser and one for the mobile app. Both use the standard HTTPS port but are different applications.
In IIS we could use host headers to direct traffic to one application or the other, but with Service Fabric we can't. The easiest way for us would be multiple public IPs; with those we could handle it with DNS.
We considered a couple of solutions, with no success.
Load balancer with multiple public IPs. Problem: it looks like that only works with Cloud Services, and we need to work in the new Resource Manager world, where it seems impossible to have multiple public IPs.
Multiple public load balancers. Problem: scale sets accept only one load balancer instance per load balancer type.
Application Gateway. It seems not to support multiple public IPs or host header mapping.
Path mapping. Problem: we have the same path in different applications.
My questions are:
Is there any solution that uses multiple IPs and maps the traffic internally to different ports?
Is there any option to use host header mapping with Service Fabric?
Any suggestions on how I can solve my problem?
Piling on some Service Fabric-specific info to Eli's answer: yes, you can do all of this and use an http.sys-based self-hosted web server, such as Katana or WebListener in ASP.NET Core 1, to host multiple sites using different host names on a single VIP.
The piece to this that is currently missing in Service Fabric is a way to configure the hostname in your endpoint definition in ServiceManifest.xml. Service Fabric services run under Network Service by default on Windows, which means the service will not have access to create a URL ACL for the URL it wants to open an endpoint on. To help with that, when you specify an HTTP endpoint in an endpoint definition in ServiceManifest.xml, Service Fabric automatically creates the URL ACL for you. But currently, there is no place to specify a hostname, so Service Fabric uses "+", which is the strong wildcard that matches everything.
For now, this is merely an inconvenience because you'll have to create a setup entry point with your service that runs under elevated privileges and runs netsh to set up the URL ACL manually.
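For example, the setup entry point could run something like this (the host name is hypothetical; the account is the default Network Service identity):

    netsh http add urlacl url=http://api.contoso.com:80/ user="NT AUTHORITY\NETWORK SERVICE"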
We do plan on adding a hostname field in ServiceManifest.xml to make this easier.
It's definitely possible to use ARM templates to deploy a Service Fabric cluster with multiple IPs. You'll just have to tweak the template a bit (see the sketch after this list):
Create multiple IP address resources (e.g. using copy); make sure you review all the resources using the IP and modify them appropriately
In the load balancer:
Add multiple frontendIPConfigurations, each tied to its own IP
Add loadBalancingRules for each port you want to redirect to the VMs from a specific frontend IP configuration
Add probes
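A rough sketch of the load balancer fragment (resource names are illustrative, and the rules are trimmed to the essentials; a real rule also references a backend pool and a probe):

    "frontendIPConfigurations": [
      {
        "name": "frontend1",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'ip1')]"
          }
        }
      },
      {
        "name": "frontend2",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'ip2')]"
          }
        }
      }
    ],
    "loadBalancingRules": [
      {
        "name": "httpOnFrontend2",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[concat(resourceId('Microsoft.Network/loadBalancers', 'lb1'), '/frontendIPConfigurations/frontend2')]"
          },
          "frontendPort": 80,
          "backendPort": 8081,
          "protocol": "Tcp"
        }
      }
    ]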
As for host header mapping, this is handled by the Windows HTTP Server API (see this article). All you have to do is use a specific host name (or even a URL path) when configuring an HTTP listener URL (in OWIN/ASP.NET Core).
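For example, with Katana/OWIN self-host, two apps can share port 80 on one VIP and be separated purely by host name (the host names and startup classes below are hypothetical):

    using System;
    using Microsoft.Owin.Hosting;
    using Owin;

    internal sealed class BrowserStartup
    {
        public void Configuration(IAppBuilder app)
        {
            app.Run(ctx => ctx.Response.WriteAsync("browser app"));
        }
    }

    internal sealed class MobileStartup
    {
        public void Configuration(IAppBuilder app)
        {
            app.Run(ctx => ctx.Response.WriteAsync("mobile app"));
        }
    }

    internal static class Program
    {
        private static void Main()
        {
            // http.sys routes on the host header, so both listeners can share port 80
            using (WebApp.Start<BrowserStartup>("http://browser.contoso.com:80/"))
            using (WebApp.Start<MobileStartup>("http://mobile.contoso.com:80/"))
            {
                Console.WriteLine("Listening; press Enter to exit.");
                Console.ReadLine();
            }
        }
    }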