My team deployed our sites last night, found a bug this morning, deployed again, and now all our sites are down. Our sites use a load balancer and are all running on the same IIS app pool. We've tried restarting IIS several times to no avail, which leads us to think it is a problem with the load balancer. Is it possible to bypass the load balancer using a hosts file or is there another way?
The network card on each server will have its own IP address, over and above the address published by the load balancer; you could repoint your DNS record at one of these addresses.
Bear in mind this'll take you down to a single server, but given that you have no servers at the moment...
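To answer the hosts-file part of the question directly: on a test machine you can point the site's hostname straight at one server's own IP, which skips both DNS and the load balancer. A minimal sketch, where 203.0.113.10 and www.example.com are placeholders for one server's direct IP and your site name:

    # C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux)
    # placeholder IP and hostname -- substitute your own
    203.0.113.10    www.example.com

This only affects the machine whose hosts file you edit, so it's handy for verifying a single server before changing DNS for everyone.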
Disclaimer: I come from an AWS background but am relatively new to GCP. I know there are a number of existing similar questions (e.g., here and here), but I still cannot get this to work since exact/detailed instructions are missing. So please bear with me for asking this again.
My simple design:
Public HTTP/S Traffic (Ingress) >> GCP Load Balancer >> GCP Servers
The GCP load balancer holds the SSL cert and uses port 80 for downstream connections to the servers, so traffic from the LB to the servers is plain HTTP.
My question:
How do I prevent incoming public HTTP/S traffic from reaching the GCP servers directly, and instead allow only the load balancer (as well as its health-check traffic)?
What I tried so far:
I went into Firewall Rules and removed the rule that allowed ingress on ports 80/443 from 0.0.0.0/0, then added a rule allowing the external IP address of the load balancer.
At this point I expected public traffic to be rejected but the load balancer's traffic to get through. In reality, both were rejected: nothing reached the servers anymore, and the load balancer's external IP did not seem to be recognised.
Later I noticed the health checks were no longer recognised either, so they couldn't reach the servers and failed, and the instances were dropped by the load balancer.
Please also note that I cannot take the approach of simply removing the external IPs from the servers (although many people say this would work), because we still want to maintain direct SSH access to the servers (without using a bastion instance). So I still need an external IP on each and every web server.
Any clear (and kind) instructions will be very much appreciated. Thank you all.
You're able to set up HTTPS connectivity between your load balancer and your back-end servers while using an HTTP(S) load balancer. To achieve this, install HTTPS certificates on your back-end servers and configure the web servers to use them. If you decide to switch completely to HTTPS and disable HTTP on your back-end servers, you should also switch your health check from HTTP to HTTPS.
To make health checks work again after removing the default firewall rule that allows connections from 0.0.0.0/0 to ports 80 and 443, you need to whitelist the subnets 35.191.0.0/16 and 130.211.0.0/22, which are the source IP ranges for health checks. (This also explains why whitelisting the load balancer's external IP had no effect: the HTTP(S) load balancer proxies connections, so traffic arrives at your backends from these Google ranges, never from the LB's public address.) You can find step-by-step instructions in the documentation. After that, access to your web servers will still be restricted, but your load balancer will be able to run health checks and serve your customers.
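For reference, a rule along these lines should do it; the rule name and target tag below are placeholders, and the tag must also be attached to your backend instances:

    # allow Google front-end/health-check ranges to reach the backends
    gcloud compute firewall-rules create allow-lb-and-health-checks \
        --network default \
        --allow tcp:80,tcp:443 \
        --source-ranges 130.211.0.0/22,35.191.0.0/16 \
        --target-tags web-backend

With this in place you can drop the 0.0.0.0/0 rule for 80/443 entirely; assuming your SSH rule (tcp:22) is separate, direct SSH to the external IPs keeps working.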
Whenever I add a VM (Windows/Linux) to the backend pool of a Standard (not Basic) internal load balancer, the VM loses outbound internet access to all sites (e.g., www.google.co.in) except Microsoft sites (e.g., bing.com).
Things I have tried:
1. Created a health probe and load-balancing rules to verify that load balancing is happening - yes, load balancing works, but there is no internet access
2. Set DisableOutboundSNAT on the rule - load balancing works, but no internet access
3. Created an NSG to allow all outbound traffic (which is enabled by default) - no luck
Finally, this issue is resolved.
This is by design, as mentioned here:
So, in conclusion: if we want to access the internet from a VM behind a Standard ILB, we need to associate a public IP with the VM. (I tested it and it worked.)
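If it helps anyone, this is roughly how the association looks in the Azure CLI; the resource group, NIC, and ipconfig names are placeholders. Note that a VM behind a Standard load balancer needs a Standard-SKU public IP:

    # create a Standard-SKU public IP (SKU must match the Standard LB)
    az network public-ip create --resource-group MyRG --name vm1-pip --sku Standard

    # attach it to the VM's NIC ipconfig
    az network nic ip-config update --resource-group MyRG \
        --nic-name vm1-nic --name ipconfig1 --public-ip-address vm1-pip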
Also, this seems like a very good design, as the VM is completely private (no implicit outbound) when it is behind a Standard load balancer.
Thanks to Micah for resolving this on this post.
On Azure (through portal)
Created Virtual Machine with a Static IP, data disk, and opened ports
Then, via Remote Desktop: installed IIS and FTP, opened ports in the firewall
(can successfully connect via ftp client)
Created a Public Load Balancer with a Static IP with Probes and Rules
(can connect with an FTP client through the load balancer IP address fine)
(If I enter the IP address of the load balancer in a browser, I can view the default IIS website fine. At the moment there is only one VM in the virtual machine set.)
Added a couple of websites in IIS: one a .NET app, the other with just some hello-world .html files to test connectivity via domain name. I set the bindings to the host names for the websites, with and without www, and the IP address set to all (*), then restarted the websites.
Created a couple of Azure DNS zones with A records pointing to the load balancer IP address, and changed the name servers at the domain registrar to point to the Azure DNS servers.
However, this is where it stops. A browser cannot get to either website and I get a '500' error. DNS propagation check tools verify that the name servers are reaching Azure for the domain names.
There must be something really basic I am missing (?). It is as if DNS resolution is stopping at the virtual machines. Any suggestions?
If you are configuring multiple websites in IIS on a VM and want to map them to different domain names, then you need to configure a host header for each website in IIS (see the links below) and also point an A record for each domain at the same address with your domain provider.
This will only work if you have separate domain names registered.
Without a domain name, you can deploy the websites on different ports in IIS and then expose them through Azure Load Balancer NAT rules.
Links for Host Header config in IIS
https://technet.microsoft.com/en-us/library/cc753195(v=ws.10).aspx
http://support.simpledns.com/kb/a82/virtual-hosting-with-iis-internet-information-services.aspx
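For example, a host-header binding can be added from the command line as well as through IIS Manager; the site name and hostname below are placeholders:

    %windir%\system32\inetsrv\appcmd.exe set site /site.name:"HelloWorldSite" /+bindings.[protocol='http',bindingInformation='*:80:www.example.com']

With a binding like this, IIS routes requests by the Host header, so several sites can share port 80 on the same IP.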
This was my fault: some missing hyphens in the zone record. The other .NET website was sometimes throwing 500 errors instead of ERR_NAME_NOT_RESOLVED, due to incomplete name server propagation and incomplete .NET configuration for the website on the VM.
The host headers were set correctly including www.xxx.com and .xxx.com variants for both port 80 and port 443, and I did have the 'A' records with both # and www variants in the zone set to the IP of the load balancer correctly.
For anyone else with these issues: when running a localhost connectivity test on your virtual machine (assuming you are hosting multiple sites), remember to add a virtual directory in IIS Manager pointing to the file location, along with an alias.
While a learning curve, the whole infrastructure of Azure is quite amazing! Impressed.
I have just taken over as a developer for a company. They host their development site on Rackspace. When I arrived, this server was spun down. Upon bringing it back up, I discovered that the IP address of that server points to the live website. There must be some kind of forwarding in place (I assume through Rackspace) that does this. How can I fix this? I searched for settings on Rackspace to no avail. I would like to be able to access this dev site, at least through the direct IP address, until the network admin repoints the development domain name to the proper IP.
I'm guessing you mean that the live website domain routes traffic through to this server? Off the top of my head, you either have DNS load balancing in place - an A record on your domain matching the IP address of the powered-down machine - or you have a load balancer within Rackspace that is routing traffic to it.
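A quick way to tell the two apart (the domain name here is a placeholder): check what public DNS actually returns and compare it with the dev server's IP.

    # what does the live domain resolve to?
    dig +short www.example.com A

If the answer is the dev server's own IP, it's plain DNS and you can fix the A record; if it's some other address, a Rackspace load balancer (or similar forwarding) sits in front and its pool/rotation needs updating.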
My farm consists of two front end (FE) web servers that are managed by a load balancer. One FE went down so we configured the load balancer to only send traffic to the other FE. We rebuilt the failed FE and rejoined the farm which appears to have worked successfully (looking at IIS). I want to test the new FE before configuring the Load Balancer to use the new server.
The approach I took was to add an entry to my hosts file pointing the site's URL at the new server's IP, but nothing comes up.
Any advice would be great. Thanks
The way you would normally do this is to add an AAM (Alternate Access Mapping) entry for the server's hostname.
For example: intranet.domain.com resolves to your NLB, which then distributes requests to SharePoint servers called WFE1, WFE2, etc.
If you check SharePoint's AAM (Central Administration > Operations > Alternate Access Mappings) you should have intranet.domain.com as the URL for the default zone (and you should only have one default-zone entry per web application).
If you add WFE1/WFE2 etc. to the AAM under the custom zone, so that the internal URL (WFEx) is mapped to the public URL (intranet.domain.com), you should be able to go directly to your WFE by using the address http://WFEx/ in your browser.
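On SharePoint 2010 and later, the same mapping can be scripted from the SharePoint Management Shell (on 2007, the stsadm -o addalternatedomain operation does the equivalent); the hostnames follow the example above:

    # map http://wfe1 and http://wfe2 as internal URLs in the Custom zone
    New-SPAlternateURL -WebApplication http://intranet.domain.com -Url http://wfe1 -Zone Custom -Internal
    New-SPAlternateURL -WebApplication http://intranet.domain.com -Url http://wfe2 -Zone Custom -Internal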
As long as your DNS server is set up correctly, this will work from any computer on your network, regardless of whether it's part of the NLB farm or not - essential for troubleshooting.
If you can't do this, check that a ping to WFEx returns the server's IP address and not some other address such as the NLB/firewall.
MSDN - What every SharePoint administrator needs to know about Alternate Access Mappings
You need to edit the internal URL based on the user profile. You can also edit the user permissions if an anonymous user tries to access WFE1 instead of WFE2.
If you are currently using hardware load balancing (and both servers sit behind it) you will probably need to add a new virtual IP address to your load balancer that connects only to the new FE before re-introducing it into the farm.
Add this virtual IP address to your hosts file for your domain name and you should be able to test it individually.
What if you add the IP/URL of the working FE server to your hosts file? Does nothing come up then? Also, be careful about spaces vs. tabs vs. multiple spaces in your hosts file:
http://geekswithblogs.net/JanS/archive/2009/06/17/beware-of-spacing-in-windows7-hosts-file.aspx
So you've made a hosts file entry that points the cluster DNS name to one of the WFEs' private IP addresses?
Make sure you can see that IP address. Sometimes only the cluster IP address is visible to the outside, and not the servers' private IPs.
I usually add a hosts file entry for the cluster DNS name on each WFE. That way I can remote-desktop to a machine and test it locally there. I do have remote desktop access.
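The entry on the WFE itself can simply point the cluster name back at that box (the hostname follows the example in this thread, and this assumes the site's bindings aren't restricted to a specific IP):

    # C:\Windows\System32\drivers\etc\hosts on WFE1
    127.0.0.1    intranet.domain.com

A browser on WFE1 then stays on that server while still sending the right host header, so the site answers exactly as it would through the NLB.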