Server-to-server communication behind a load balancer - IIS

I have two IIS servers sitting behind a load balancer, and my application's DNS name is app.domain.com. I need a way to communicate from one server to the other without going through the load balancer. The application on both servers listens on port 80, and I also have more than one application deployed on the same port on these servers. How can I build a URL that refers to an individual server without using DNS?

Use the individual server's IP and port directly, and send a Host header on the request so IIS can distinguish between the multiple applications bound to that port on each node.
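For example (a minimal sketch; the node IP 10.0.0.5 and the /health path are placeholders, and app.domain.com is the host name from the question), you can call the other node by IP and supply the Host header explicitly:

    # Hit the second node directly, telling IIS which site binding to use
    curl -H "Host: app.domain.com" http://10.0.0.5/health

    # Equivalent: let curl map the host name to the node's IP for this request
    curl --resolve app.domain.com:80:10.0.0.5 http://app.domain.com/health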

Related

Azure Load Balancer to balance the load between multiple ports on the same VM (Backend Pool)

I am having a hard time finding a solution for this.
I have an Azure internal Load Balancer (layer 4), and I have ONLY one Virtual Machine acting as the backend pool for the said Load Balancer.
And the fun part starts here: I have multiple Docker containers running on that Virtual Machine, running Nginx web servers on ports 8080 and 8081.
Now I want to balance the load between these two ports. Literally what I want is something like the photo below:
So according to the photo, the request comes from abc.xyz.com, it should hit the Load Balancer, and then the Load Balancer should route the traffic to the only VM, which runs multiple Docker containers on multiple ports.
How can I achieve this behavior?
I have already set up a frontend configuration with a private IP, a rule, and a backend pool.
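For context, the container setup described above boils down to something like the following (a sketch; the container names are placeholders and the ports are the ones from the question):

    # Two Nginx containers on the same VM, published on different host ports
    docker run -d --name web1 -p 8080:80 nginx
    docker run -d --name web2 -p 8081:80 nginx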
As per this article (https://learn.microsoft.com/en-us/azure/container-instances/container-instances-virtual-network-concepts#unsupported-networking-scenarios), placing an Azure Load Balancer in front of container instances in a networked container group is not supported, and similarly it is not possible to route traffic to containers on their specific ports when they run on a single Virtual Machine. The Load Balancer works at the VM level, not at the container level.
The only workaround for this scenario would be to use Azure Application Gateway, since a microservice architecture is supported on Application Gateway. To probe on different ports, you need to configure multiple HTTP settings. Reference:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#can-one-backend-pool-serve-many-applications-on-different-ports
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications, and you can create an internal application gateway. To do that, create an Application Gateway with both a public and a private frontend IP address and do not create any listeners for the public frontend IP address; Application Gateway will not listen to any traffic on the public IP address if no listeners are created for it.
References: https://learn.microsoft.com/en-us/azure/application-gateway/configuration-front-end-ip,
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address
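As a rough illustration of the multiple-HTTP-settings approach (a sketch only; the resource group, gateway name, and setting names are placeholders, and you still need listeners, health probes, and routing rules to tie everything together):

    # One HTTP setting per backend port, against the same backend pool (the VM)
    az network application-gateway http-settings create \
      --resource-group myRG --gateway-name myAppGw \
      --name settings-8080 --port 8080 --protocol Http
    az network application-gateway http-settings create \
      --resource-group myRG --gateway-name myAppGw \
      --name settings-8081 --port 8081 --protocol Http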

Cannot SSH into scale set instances using application gateway

I've created a scale set with two instances and connected the scale set to an application gateway. Now I want to SSH into the instances, but I cannot do it through the application gateway. The instances do not have public IPs assigned.
I was able to SSH into the instances using a load balancer (when the scale set was connected to a load balancer).
I tried to create an NSG and associate it with the application gateway subnet, but I still cannot SSH into the scale set instances.
How can I SSH into scale set instances that are behind an application gateway?
I don't think you can do that. Application Gateway operates at layer 7 (HTTP), so you are pretty much limited to HTTP traffic when using Application Gateway.
You can attach a Load Balancer to your scale set and use it only for NATing your SSH connections, while keeping the application traffic on the Application Gateway (see the sketch below).
Other options: a VPN or a jumpbox.
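A sketch of the NAT approach with the Azure CLI (the names are placeholders; the load balancer must be attached to the scale set, and the scale set's network profile has to reference the pool):

    # Map a range of frontend ports on the load balancer to port 22 on the instances
    az network lb inbound-nat-pool create \
      --resource-group myRG --lb-name myScaleSetLB --name ssh-nat \
      --protocol Tcp --backend-port 22 \
      --frontend-port-range-start 50000 --frontend-port-range-end 50099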
The application gateway will be connecting to the backend VMs on a different port. I faced this issue, but I was using a load balancer: the load balancer was using port 50000 to connect to my backend instance, so I was not able to SSH through port 22 but was able to SSH using port 50000. You can check which port your application gateway or load balancer is using to connect to the backend instance and use that port for SSH. I believe it should work.
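For example, assuming a NAT rule that maps frontend port 50000 on the load balancer to port 22 on the first instance (the user name and frontend IP are placeholders):

    ssh -p 50000 azureuser@<load-balancer-frontend-ip>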

Hosting web application on Amazon AWS EC2

I am developing a web application locally. However, I would like to host the final product on an Amazon EC2 instance. I have moved my web application to the EC2 instance and am able to run the application; it's now listening on port 8081.
What I don't understand is how to allow users on the internet to access the web application running on port 8081 of the EC2 instance. I have tried pointing the domain name at the IP address of the EC2 instance in the NameCheap DNS (where we bought the domain), to no avail. I suspect one of the things I need to do is set the permissions of the EC2 security group, but what should I set them to?
Help is greatly appreciated!
Thanks!
You can set up an nginx server to proxy all requests to port 8081.
Read more information here: https://doesnotscale.com/deploying-node-js-with-pm2-and-nginx/
Generally speaking, for a public web application you will want to run on a standard port (e.g. 80 or 443). You could do that by just running your Node app as a privileged user (required by most operating systems to expose 80 or 443), but generally it's better to have a web server in front pass the traffic through, treating your Node app as an upstream server (even if it's on localhost). nginx is a good choice for this.
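A minimal nginx server block for that setup might look like the following (a sketch; the upstream port 8081 comes from the question, while the server_name and everything else are placeholders):

    server {
        listen 80;
        server_name example.com;  # the domain registered at NameCheap

        location / {
            # Forward all traffic to the app listening on port 8081
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }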
Regardless of what port you want to run it on, you'll need to update the EC2 security group for that instance to allow traffic on that port (80, 443, 8081, whatever). You'll also need to make sure it's exposing a public IP address. It's not a bad idea to assign it an Elastic IP, since you'll want it to have the same address across instance reboots.
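Opening the port can be done in the console or with the AWS CLI, roughly like this (the security group ID is a placeholder):

    # Allow inbound HTTP from anywhere on the instance's security group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 --cidr 0.0.0.0/0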
Finally, depending on what AMI you're running from, there may be a host firewall configured that you'll need to check on and configure to allow the traffic.
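For example, on an Ubuntu AMI with ufw enabled, or a RHEL-style AMI running firewalld, allowing port 80 would look roughly like this:

    # Ubuntu / ufw
    sudo ufw allow 80/tcp

    # RHEL-style distributions with firewalld
    sudo firewall-cmd --permanent --add-port=80/tcp
    sudo firewall-cmd --reload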

Service Fabric URL routing

I am using the Azure Load Balancer with Azure Service Fabric to host multiple self-hosted web applications, and I'd like to create a rule that allows me to route based on the URL the user requests.
So for example, if a user navigates to:
http://domain.com/Site1, the rule would route to
http://domain.com:8181/Site1 within the cluster.
If the user navigates to:
http://domain.com/Site2, the rule would route to
http://domain.com:8282/Site2 within the cluster.
Is this possible with Azure Service Fabric / Load Balancer?
The Azure Load Balancer only forwards traffic it receives on a port to a node in your cluster on another port (can be the same port or a different internal port). It operates on Layer 4 (TCP, UDP) so it doesn't know anything about HTTP or URLs (although it does allow HTTP probes).
Here are a couple options for multiple web sites:
If you want your web sites hosted internally on different ports (8181 and 8282), then you'll need something else to do URL routing. Azure Traffic Manager or Azure Application Gateway are possible options that would run outside your cluster. Your Azure Load Balancer would need to open a port for each web site, but the benefit is this way you can run your web sites on dedicated nodes and the ALB would automatically route traffic to the appropriate nodes based on which ports are open.
Alternatively, you can set up your own stateless routing service that runs inside your cluster.
Or you can skip routing altogether and just host all of your websites on port 80/443. As long as you're using an http.sys-based web host, which includes Katana, ASP.NET Core 1 WebListener, or anything you build on HttpListener, you can use the same port for all your websites and let the underlying http server route according to either a URL path or hostname, both of which are supported.
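To give a feel for the Application Gateway option above, path-based routing can be sketched with the Azure CLI roughly as follows (all names are placeholders, and the backend pool plus the HTTP settings for ports 8181 and 8282 are assumed to exist already):

    # Route /Site1/* and /Site2/* to different backend HTTP settings (ports)
    az network application-gateway url-path-map create \
      --resource-group myRG --gateway-name myAppGw --name site-paths \
      --rule-name site1 --paths "/Site1/*" \
      --address-pool clusterPool --http-settings settings-8181
    az network application-gateway url-path-map rule create \
      --resource-group myRG --gateway-name myAppGw --path-map-name site-paths \
      --name site2 --paths "/Site2/*" \
      --address-pool clusterPool --http-settings settings-8282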

I cannot access my web site using WebLogic and an F5 load balancer

I have 3 WebLogic application servers in a clustered state, and I can access my web site from each server's URL. Another person configured an F5 load balancer for the servers and says that all the IPs, ports, and other settings are OK. We tested the load balancer with an Apache server on laptops: one laptop acts as the server and one as the client, each on one side of the load balancer. That works OK, and I can access the test application from the load balancer's IP and port, but I cannot access my application that uses WebLogic as the application server. Why can't I access it? Can anyone help me?
I set the IP address of the F5 load balancer as the default gateway of the server, and now I can access my website through the load balancer. Is this solution correct? And why is my connection slower than connecting directly to the server?
