Provisioning Service Fabric behind Application Gateway

We are trying to achieve this.
From my understanding, we should place the outside interface of the SF load balancer on a private network and then connect it to the App Gateway's internal interface using Azure Virtual Network peering.
Is this doable? Are there any issues with this?

Yes, it's doable. There are multiple approaches you could go with:
Deploy App Gateway pointing at the SF nodes directly, as shown here: Fine Granular Microservices Load Balancing with Azure Service Fabric and Application Gateway
Deploy App Gateway pointing at the SF LB
Catches:
There are limits on how many backend address pools you can have (up to 20), and on how many machines and HTTP settings each pool can run with. So, for instance, if you have an SF cluster with thousands of services hosted on different ports, consider using the SF LB together with the SF Reverse Proxy.
Azure Application Gateway requires its own subnet. When creating a virtual network, make sure you leave enough address space for multiple subnets. Once you deploy an application gateway to a subnet, only additional application gateways can be added to that subnet.
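For example, a VNet with a dedicated subnet reserved for the Application Gateway could be provisioned like this (a sketch only; resource names and address ranges are placeholders):

```shell
# Create a VNet with address space large enough for multiple subnets,
# including one subnet used only by the SF backend machines.
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name backend-subnet \
  --subnet-prefix 10.0.1.0/24

# Add a separate subnet dedicated to the Application Gateway.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name appgw-subnet \
  --address-prefix 10.0.2.0/24
```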
While digging into your question, I found out that App Gateway might not play nicely with WebSockets under certain circumstances. See Communication through Azure Application Gateway blocked for WebSocket traffic for the details.
P.S.
If your SF LB is public, you don't need VNet peering. The same applies when a private SF LB and the Application Gateway are deployed into the same VNet.

I think there is better support for abstracting Service Fabric by using Azure API Management instead of Application Gateway.
Assuming your SF cluster is on Azure, API Management has built-in support for Service Fabric, so you don't have to do endpoint resolution or obtain partition keys yourself.

Related

App Service VNET integration for outbound traffic: can it reach Internet endpoints?

I deploy my web application to an App Service instance on the Premium tier. My web application makes outbound requests to external resources on the Internet.
In order to secure the connection with one of these external resources so I can reach it with a private IP address, my plan is to create a Site-to-Site VPN from Azure to Oracle Cloud Infrastructure (where the external resource resides). Then, I plan to use the VNET Integration for outbound traffic and connect my App Service to my VPN.
My question is - will the web application still be able to reach the other external resources on the Internet with their public IPs? I believe the answer is related to routing tables but I can't wrap my mind around it.
Integrating with a regional VNet (which I'm assuming here) doesn't mean the App Service can no longer make other outbound connections.
When you integrate your App Service with the VNet that has the Site-to-Site VPN, traffic addressed to RFC 1918 (private) ranges traverses the Azure network and the VPN rather than going out to the Internet, while traffic to public IPs keeps using the normal Internet path. If you want to secure the traffic even further, your App Service would need to be hosted inside an App Service Environment.
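By default only the RFC 1918 traffic is routed into the VNet; if you ever want to force all outbound traffic through the VNet (and thus the VPN) instead, there is an app setting for that. A minimal sketch with the Azure CLI (resource group and app names are placeholders):

```shell
# Default behaviour: only private (RFC 1918) destinations are routed into
# the integrated VNet; public Internet endpoints stay directly reachable.
# Setting WEBSITE_VNET_ROUTE_ALL forces ALL outbound traffic through the
# VNet, which then makes Internet reachability depend on the VNet's routes.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --settings WEBSITE_VNET_ROUTE_ALL=1
```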

azure application gateway behind azure front door multiple websites domains ssl

(Infrastructure diagram: all servers have the same configuration, websites, and ports.)
The goal is that all virtual servers in the VMSS run the different websites (www.xxx.com, www.yyy.com, www.zzz.com).
The SSL termination should be done at Front Door; that much is clear to me.
Questions: Where should I place the public IP? What should I configure so that all websites are running and available to users in the backend pool? I can't find a tutorial that describes my infrastructure. Could someone help me with this? Thanks.
Front Door is a global load-balancing service that distributes traffic from your end users across your regional backends.
Load Balancer and Application Gateway are regional load-balancing services that distribute traffic to virtual machines (VMs) within a virtual network (VNet) or to service endpoints within a region.
https://learn.microsoft.com/en-us/azure/frontdoor/front-door-lb-with-azure-app-delivery-suite#choosing-a-global-load-balancer
Here is an example of Microsoft Azure DR architecture with Application Gateway, Front Door, Load Balancer and Traffic Manager.
https://learn.microsoft.com/en-us/azure/frontdoor/front-door-lb-with-azure-app-delivery-suite
Considering your solution, you should configure SSL on Front Door and configure the Application Gateway as its backend.
The Application Gateway should in turn have the VMSS configured as its backend.
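The Front Door in front of the Application Gateway could be sketched with the CLI like this (placeholder names; the front-door commands live in a CLI extension, and the backend address is assumed to be the App Gateway's public DNS name):

```shell
# Install the Front Door CLI extension (the commands are not in core az).
az extension add --name front-door

# Create a Front Door whose backend is the Application Gateway's public
# endpoint; SSL for your custom domains is then terminated at Front Door.
az network front-door create \
  --resource-group my-rg \
  --name my-frontdoor \
  --backend-address appgw.example.com
```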

Can we have a single application gateway for all VMSS created in different regions?

Can we have a single Application Gateway for all VMSS created in different regions?
If yes please share the possible options.
As the comment mentioned, you cannot have a single Application Gateway for all VMSS created in different regions, since an Application Gateway is always deployed in a virtual network subnet and only directly supports VMSS backends in the same region and virtual network as the Application Gateway itself.
As a workaround, you can use a public IP address as the backend for communicating with instances outside of the virtual network, as long as there is IP connectivity. Read more details about backend pools. So you could use a public-facing load balancer associated with each VMSS.
Furthermore, you could also use Traffic Manager to distribute traffic across multiple Application Gateways in different datacenters, or use Azure Front Door, which provides a scalable and secure entry point for fast delivery of your global web applications.
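The Traffic Manager option could be sketched like this (a hedged example; profile, endpoint, and resource names are placeholders, and you would add one endpoint per regional Application Gateway):

```shell
# Create a Traffic Manager profile that routes users to the closest region.
az network traffic-manager profile create \
  --resource-group my-rg \
  --name my-tm-profile \
  --routing-method Performance \
  --unique-dns-name my-tm-profile-dns

# Add one endpoint per regional Application Gateway, targeting the
# resource ID of each gateway's public IP address.
az network traffic-manager endpoint create \
  --resource-group my-rg \
  --profile-name my-tm-profile \
  --name appgw-westus \
  --type azureEndpoints \
  --target-resource-id "<resource ID of the West US App Gateway public IP>"
```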

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Services (AKS - not ACS) to stand up some API's - some of which are for public consumption, some of which are not.
For the public access route everything is as you might expect, a load-balancer service bound to a public IP is created, DNS zone contains our A record forwarding to the public IP, traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.
Currently the preview version assigns a new VNET to place the AKS resource group within, moving forwards I will place the AKS instance inside an already existing VNET which houses other components (App Services, on an App Service Environment).
My question is how to grant access to the private APIs to other components inside the same VNET, as well as components in other VNETS?
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNETS? But what about where the components reside already inside the same VNET?
Thank you in advance!
If you need to access these services from other services outside the AKS cluster, you still need an ILB to load balance across your service on the different nodes in your cluster. You can create the ILB by adding the internal load balancer annotation to your service. The alternative is using NodePort and then building your own way to spread traffic across all the nodes that host the endpoints.
I would use an ILB instead of trying to build your own using NodePort service types. The only other option would perhaps be some type of API gateway VM inside your VNet where you can define the backend pool; that may be a solution if you are hosting APIs through a third-party API gateway hosted on an Azure VM in the same VNet.
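The ILB annotation mentioned above looks like this; a minimal sketch (service name, app label, and ports are placeholders):

```shell
# Expose an existing deployment through an Azure internal load balancer,
# so other resources in the same (or a peered) VNet can reach it on a
# private IP. The annotation is what makes the LB internal.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-api-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-api
EOF
```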
Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator

Is it possible to use Azure API Management and Azure ACS (kubernetes) as frontend and backend?

I would like to create a simple architecture on Azure. My high level design is very similar to the picture below (source: https://www.import.io/post/using-amazon-lambda-and-api-gateway/)
I do want to access the internal services via Azure API Management. What I can see on Microsoft's documentation page is that this simple and secure architecture is not mentioned as a reference:
https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
I have the following issues:
API Management cannot be assigned to a Virtual Network if at least one NIC is using the same network (why?)
Even with peered Virtual Networks I cannot access the 10.244.X.0/24 network (the pods' network), because only 10.240.0.0/16 is owned by the k8s Virtual Network. How can I access cluster IPs (10.0.0.0/16) and pod IPs (10.244.0.0/16)?
Well, you don't need an extra VNet, just an extra subnet. That subnet can lie within your existing VNet, and it can be as small as /29, the smallest size Azure supports.
The extra subnet requirement for API Management comes from the fact that it is built on PaaS v1 (Classic) technology. While it can be deployed into a Resource Manager VNet (the v2 layer), there are consequences: the Classic deployment model is not tightly coupled with the Resource Manager model, so if you create a resource on the v2 side, the v1 side doesn't know about it, and problems can happen, such as API Management trying to use an IP that is already allocated to a NIC (built on v2).
To learn more about the differences between the Classic and Resource Manager models in Azure, see the blog post on the difference between the Classic and Resource Manager models.
The answer is basically YES although the setup is not trivial.
You need:
One extra VNet for the API Management (EDIT: an extra subnet is enough)
One service (kubernetes terminology)
Steps:
Peer the Kubernetes VNet and the extra VNet you have created (test it)
API Management -> Virtual network: change to External
Choose as Virtual Network the one extra VNet (lets call it 'apimgmntvnet') and a Subnet
Save it! Drink a beer because it took me 1h!
Meanwhile expose your deployment internally:
kubectl expose deployment app --port=<serviceport> --name=app --target-port=<containerport> --type=NodePort (NodePort is important! The LoadBalancer type triggers Kubernetes to dynamically configure the Azure external LB for the Kubernetes install)
Check the node IP:PORT in the Kubernetes browser UI (kubectl proxy)
API Management -> Publisher portal: modify your API to the IP address (AgentIP:30361)
Theoretically it should work. It is advised to start with a VM in the apimgmntvnet and try peering first from the VM, and then delete it (API Management cannot be part of a VNet where at least one NIC is present (?!)).
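The peering step above could be sketched with the CLI like this (placeholder names; note that a peering must be created in both directions, and this assumes both VNets live in the same resource group):

```shell
# Peer the Kubernetes VNet with the API Management VNet.
# Peerings are one-directional, so create one on each side.
az network vnet peering create \
  --resource-group my-rg \
  --vnet-name k8s-vnet \
  --name k8s-to-apim \
  --remote-vnet apimgmntvnet \
  --allow-vnet-access

az network vnet peering create \
  --resource-group my-rg \
  --vnet-name apimgmntvnet \
  --name apim-to-k8s \
  --remote-vnet k8s-vnet \
  --allow-vnet-access
```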