How to close Google App Engine URLs to direct access after creating a load balancer - security

I have created a load balancer for my Google Cloud App Engine app and added SSL certificates to it, but the App Engine links are still active and have no security on them.
So I wanted to know: how can I close or disable those App Engine links?
And secondly,
Can we make it so that only the load balancer can access the App Engine, with the load balancer open to the public and the App Engine links closed to public access?
Something like: if the load balancer had a static IP, we could add it to the App Engine firewall, allow that IP, and deny the rest.
Please help me with this scenario.

You can configure ingress for App Engine so that requests sent to the default URL are discarded and only the load balancer can communicate with the backend service.
To do so, modify the ingress controls and set them to Internal and Cloud Load Balancing, so your app only receives requests that are routed through Cloud Load Balancing or that are sent from VPC networks in the same project. All other requests are denied with a 403 error.
This page from the documentation on how requests are routed with Cloud Load Balancing is also worth a read for your use case.
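If you prefer the CLI, the same ingress change can be sketched with gcloud (the service name `default` is a placeholder; repeat for each service you expose):

```shell
# Only allow traffic routed through Cloud Load Balancing or from VPC
# networks in the same project; direct hits on the appspot URL get a 403.
gcloud app services update default \
    --ingress=internal-and-cloud-load-balancing

# Confirm the ingress setting on the service
gcloud app services describe default
```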

In the GCP Cloud Console, go to App Engine > Firewall rules.
Click Create rule and allow ingress from the load balancer's public IP ranges, 130.211.0.0/22 and 35.191.0.0/16.
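The same rules can be sketched from the CLI; the priorities here are arbitrary placeholders, and the two ranges are the Google load-balancer/health-check ranges mentioned above:

```shell
# Allow the load balancer / health-check ranges
gcloud app firewall-rules create 100 --action=allow --source-range=130.211.0.0/22
gcloud app firewall-rules create 110 --action=allow --source-range=35.191.0.0/16

# Flip the default (lowest-priority) rule to deny everything else
gcloud app firewall-rules update default --action=deny

# Review the resulting rule set
gcloud app firewall-rules list
```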

Related

Inside load balancer in Azure

In Azure, I have 3 Web Apps (for simplicity):
Frontend website
Endpoint 1
Endpoint 2
The frontend website requests data from an endpoint.
Both endpoints are synchronized all the time (outside the scope of this question), but sometimes I need to do some maintenance on them, which gives me some downtime.
Can I somehow set up a load balancer that only my frontend website can see, which forwards to any of the online endpoints?
The last line of this article says Internal Load Balancers might fit:
Can I use ILB on PaaS services (Web/Worker roles)?
ILB is designed to work with web/worker roles as well, and it is available from SDK 2.4 onwards.
Does anyone know of a guide, or have tried making this with Web Apps?
I don't think this is something you can achieve "natively" with load balancers. App Services are not actually bound to the VNet. Previously you could only use a point-to-site VPN to connect them to a VNet; right now there is a new VNet integration feature in preview, which might allow you to use internal load balancers, but I doubt that, because load balancers only allow virtual machines, scale sets, and availability sets as backend pools.
Application Gateways can be bound to App Services, and they can be internal as well. You'd also need to restrict the App Service(s) to reject traffic from anything that is not your Application Gateway.
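Restricting the App Service this way can be sketched with the az CLI (all resource names here are placeholder assumptions):

```shell
# Allow only traffic from the Application Gateway's subnet; once a single
# Allow rule exists, all other traffic is implicitly denied.
az webapp config access-restriction add \
    --resource-group my-rg \
    --name my-app-service \
    --rule-name AllowAppGw \
    --action Allow \
    --priority 100 \
    --vnet-name my-vnet \
    --subnet appgw-subnet
```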
You can use Traffic Manager or Front Door for this sort of load balancing, but the endpoints won't be private.

Can Azure Application Gateway distribute requests to a specific URL?

I have a use case where my cluster has 3 VMs working as head nodes in HPC Pack and a bunch of other VMs working as compute nodes.
So basically, after creating this cluster, I must install a special HPC client; from this client, I type the DNS name of each VM to access the HPC management interface.
For example: https://head-node-1.azure.com
Of course, if I access this DNS from Chrome, I only see the IIS page.
I want to create a load balancer with its own DNS name, say https://load-balancer.azure.com
So from my client, every time I access the load balancer's DNS name, I see the management interface, not the IIS page.
How can I do that?
I'm not sure I understand you correctly. Basically, Azure Application Gateway supports URL path-based routing rules.
Application Gateway load-balances web-based traffic, while [Azure load balancer][2] handles stream-based traffic. If you want to listen for HTTP or HTTPS, you can use Application Gateway. But per your description you cannot access the HPC management interface from a web browser, so you should use layer-4 load balancing based on TCP/UDP.
So you could create a public-facing load balancer and add the head node VMs as the backend pool. Create a health probe and load-balancing rules for the ports the HPC management interface listens on, on each of the VMs.
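A sketch of that layer-4 setup with the az CLI; all names are placeholders, and port 443 stands in for whatever port the management interface actually listens on:

```shell
# Public load balancer in front of the head nodes
az network lb create --resource-group my-rg --name hpc-lb \
    --sku Standard --public-ip-address hpc-lb-ip \
    --frontend-ip-name hpc-frontend --backend-pool-name head-nodes

# TCP health probe on the management port (placeholder: 443)
az network lb probe create --resource-group my-rg --lb-name hpc-lb \
    --name hpc-probe --protocol Tcp --port 443

# Forward TCP 443 from the frontend to the head-node pool
az network lb rule create --resource-group my-rg --lb-name hpc-lb \
    --name hpc-rule --protocol Tcp \
    --frontend-port 443 --backend-port 443 \
    --frontend-ip-name hpc-frontend --backend-pool-name head-nodes \
    --probe-name hpc-probe
```

The head-node VMs' NICs still need to be added to the `head-nodes` backend pool, and the client would then point at the load balancer's DNS name instead of an individual head node.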
Hope this helps, let me know if you have any concerns.

Trying to understand load balancing in an Azure cloud service

I am maintaining an Azure cloud service which has 1 web role and a few worker roles. The web role has multiple instances. When I open the cloud service from the resources, I can see the service endpoint and public IP address. I want to understand how traffic is load balanced in this Azure cloud service. I searched for load balancers but could not find one in the subscription. I was also unable to find a document that explains load balancing in cloud services specifically.
Any info in this regard?
Long story short,
The default distribution mode for Azure Load Balancer is a 5-tuple hash. The tuple is composed of the source IP, source port, destination IP, destination port, and protocol type. The hash is used to map traffic to the available servers and the algorithm provides stickiness only within a transport session.
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
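To illustrate the idea (a toy sketch, not Azure's actual implementation), here is how hashing the 5-tuple pins a flow to one of N backends, which is why stickiness holds only within a transport session:

```shell
#!/usr/bin/env bash
# Toy 5-tuple hash: the same (src IP, src port, dst IP, dst port, protocol)
# always maps to the same backend, so a transport session stays on one server.
pick_backend() {
    local tuple="$1" n_backends="$2" h
    h=$(printf '%s' "$tuple" | md5sum | cut -c1-8)   # first 32 bits of the hash
    echo $(( 0x$h % n_backends ))
}

flow="10.0.0.5,49152,203.0.113.10,443,TCP"
b1=$(pick_backend "$flow" 3)
b2=$(pick_backend "$flow" 3)
echo "flow -> backend $b1 (repeat lookup: $b2)"   # identical both times
```

A new connection from the same client uses a different source port, so it hashes to a (possibly) different backend; that is the limit of the stickiness described above.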
Internal load balancer is supported for cloud services. An internal load balancer endpoint created in a cloud service that is outside a regional virtual network will be accessible only within the cloud service.
I found these docs which might be helpful to you. These explain setting internal load balancer for cloud services.
Classic : https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-ilb-classic-cloud
ARM : https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-ilb-arm-ps
Just to make it clear, the information below is about classic services. For information about classic and resource manager model, see this page.
In cloud services you get a load balancer automatically configured when you create the service. If you want to configure it, you can do so using the service model.
The load balancer can be of two different types,
internal load balancer
external load balancer
The internal one can only be accessed inside the cloud service, while the external one has a public IP. See this page for how to create an internal load balancer.
Load balancers keep track of the health state of the endpoints by probing them regularly. Check out this page for how to configure the probing. As long as the internal services return an HTTP 200, they are kept in the load balancer's pool.
Have a look at this page for more general information on load balancers for cloud services.
Also see this page; it contains good information about the service.

Difference in load balancing between Azure Load Balancer and Application Gateway?

I have done Load balancing on Azure using Azure Load Balancing and Application Gateway for HTTPS traffic.
In Azure Load Balancer, we can run a health check on port 443, while in Application Gateway there are options to upload SSL certificates, and for the health check we can specify a file like index.html to probe.
I know that Application Gateway is the right way, but what is the drawback of using Azure Load Balancer?
Can someone explain this to me?
Thanks
Maybe the following table helps in understanding the difference between Azure Load Balancer and Application Gateway:
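The probe difference the question asks about can be made concrete with the az CLI (resource names and the host are placeholders): a Load Balancer probe only checks that a port answers, while an Application Gateway probe requests an actual path.

```shell
# Azure Load Balancer: layer-4 probe, only verifies that port 443 responds
az network lb probe create --resource-group my-rg --lb-name my-lb \
    --name tcp-443 --protocol Tcp --port 443

# Application Gateway: layer-7 probe, fetches a specific page over HTTPS
az network application-gateway probe create --resource-group my-rg \
    --gateway-name my-appgw --name index-probe \
    --protocol Https --host myapp.example.com --path /index.html
```

This is the drawback in a nutshell: the Load Balancer can tell you the port is open even when the application behind it is broken, whereas the Application Gateway probe fails as soon as the page stops returning a healthy response.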

Azure Load balancing to Multiple Sites with Disaster Recovery

I am trying to configure applications on 2 different Azure sites, each with its own local load-balancing capability. I can use Traffic Manager to distribute the traffic and use weighted routing to force everything to my primary site.
But I want this to occur automatically: I want to map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running or not to decide where to forward the traffic. This way I don't have to manually reconfigure Traffic Manager in case of disaster.
Note: the services are hosted on IIS on IaaS VMs. ILB1 and ILB2 are the respective load balancers for Site1 and Site2.
Any help is appreciated!
Thanks
As far as I know, we can't add internal load balancer as traffic manager endpoints.
But I want this to occur automatically where I can map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running or not to decide where to forward the traffic.
By default, we can set up multiple sites around the world with Traffic Manager; it probes the health of all sites and forwards network traffic to the right one.
We can use a Traffic Manager profile to manage network traffic; profiles use traffic-routing methods to control the distribution of traffic to your cloud services or website endpoints.
For example, we create website 1 on site 1 (the primary site) and website 2 on site 2. If we use the weighted method, network traffic goes to site 1. When site 1 goes down, Traffic Manager detects this and routes network traffic to site 2.
Traffic Manager works as a DNS-level load balancer; it routes traffic to an available site by default.
We can modify the Traffic Manager probe settings via the Azure portal.
By the way, if you want to use Traffic Manager, we can add a public IP address as a Traffic Manager endpoint.
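The weighted setup described above can be sketched with the az CLI (profile name, DNS label, and targets are placeholders):

```shell
# Weighted Traffic Manager profile with an HTTPS health probe
az network traffic-manager profile create --resource-group my-rg \
    --name my-tm --routing-method Weighted \
    --unique-dns-name my-tm-demo --ttl 30 \
    --protocol HTTPS --port 443 --path "/"

# Primary site gets the higher weight; endpoints must be publicly reachable
az network traffic-manager endpoint create --resource-group my-rg \
    --profile-name my-tm --type externalEndpoints \
    --name site1 --target site1.example.com --weight 100
az network traffic-manager endpoint create --resource-group my-rg \
    --profile-name my-tm --type externalEndpoints \
    --name site2 --target site2.example.com --weight 1
```

While site1's probe is healthy, almost all DNS answers point to it; when the probe fails, Traffic Manager automatically answers with site2, which is the automatic failover the question asks for.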
Update:
As a workaround, we can deploy a site-to-site VPN between the two locations, use HAProxy as the load balancer, and add the two VMs to a public load balancer.
We can use HAProxy to set the primary website; for more information about HAProxy, please refer to this link.
