Cross-colo failover design: DNS-level failover?

I'm interested in cross-colo fail-over strategies for web applications, such that if the main site fails users seamlessly land at the fail-over site in another colo.
The application side of things looks to be mostly figured out with a master-slave database setup between the colos and services designed to recover and be able to pick up mid-stream. I'm trying to figure out the strategy for moving traffic from the main site to the fail-over site. DNS failover, even with low TTLs, seems to carry a fair bit of latency.
What strategies would you recommend for quickly moving traffic between colos, assuming the servers at the main colo are unreachable?
If you have other interesting experience / words of wisdom about cross-colo failover I'd love to hear those as well.

DNS based mechanisms are troublesome, even if you put low TTLs in your zone files.
The reason for this is that many applications (e.g. MSIE) maintain their own caches which ignore the TTL. Other software will do a single gethostbyname() or equivalent call and store the result until the program is restarted.
Worse still, many ISPs' recursive DNS servers are known to ignore TTLs below their own preferred minimum and impose their own higher TTLs.
Ultimately if the site is to run from both data centers without changing its IP address then you need to look at arrangements for "Multihoming" via global BGP4 route announcements.
With multihoming you need to get at least a /24 netblock of "provider independent" (aka "PI") IP address space, and then have it announced to the global routing table from the backup site only when the main site goes offline.
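If you do stay with DNS-based failover, it is worth measuring how much your low TTLs actually survive in the wild. A quick sketch using the dnspython package (the resolver addresses and hostname below are placeholders, not a recommendation):

```python
# Sketch: ask a few public recursive resolvers what TTL they hand back for
# your record. If the value is much larger than the TTL in your zone file,
# that resolver is clamping or caching beyond your control.
# Requires the dnspython package (2.x): pip install dnspython
import dns.resolver

def observed_ttl(hostname, resolver_ip):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    answer = resolver.resolve(hostname, "A")
    return answer.rrset.ttl

if __name__ == "__main__":
    for ip in ("8.8.8.8", "1.1.1.1"):  # placeholder resolvers
        print(ip, observed_ttl("www.example.com", ip))
```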

As for DNS, I like to reference, "Why DNS Based Global Server Load Balancing Doesn't Work". For everything else -- use BGP.
Designing networks to load balance using BGP is still not an easy task, and I myself am certainly not an expert on this. It's also more complex than Wikipedia can tell you, but there are a couple of interesting articles on the web that detail how it can be done:
Load Balancing In BGP Networks
Load Sharing in Single and Multi homed environments
There is always more if you search for BGP and load balancing. There are also a couple of whitepapers on the net that describe how Akamai does its global load balancing (I believe it's BGP-based too), which are always interesting to read and learn from.
Beyond the obvious concepts you can implement with software and hardware, you might also want to check with your ISP/provider/colo to see whether they can set this up for you.
Also, no offense in regard to your choice of colo (who's the provider?), but most places should be set up to deal with downtime and so on; they should not require you to take action. Of course floods or aliens can always strike, but in that case I guess there are more important issues. :-)

If you can, use Multicast (http://en.wikipedia.org/wiki/Multicast) or Anycast (http://en.wikipedia.org/wiki/Anycast).

Related

Understanding load balancing and DNS records

I am curious about how to set up multiple load balancers (with different IP addresses) for a specific domain.
I understand that it is possible to set up multiple A records in DNS pointing to all of my load balancers, but I also understand that this is not ideal.
DNS doesn't do any kind of is-alive checks, so if a load balancer dies, the DNS will still send users to that address, right?
So how do you connect a domain/DNS with multiple load balancers while preventing a dead load balancer from getting requests?
I read something about anycast, but is this the only solution?
I am just curious about how this issue is normally handled.
Thanks.
You have multiple solutions.
On a pure DNS level you can publish your records with a low TTL (say 5 minutes) and have your monitoring systems change the content of the zone, removing the dead record when it is detected. This does not provide immediate failover but can often be good enough.
It does not require overly complicated systems.
Also, some DNS servers support a dynamic backend that can compute records based on external parameters, such as doing live checks and replying only with the live records.
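As a rough illustration of the "monitoring changes the zone" approach, here is a sketch assuming a DNS server that accepts RFC 2136 dynamic updates (for example BIND with allow-update) and the dnspython package; the zone, record name, server and backend IPs are placeholders:

```python
# Sketch: probe each backend and republish only the live A records via an
# RFC 2136 dynamic update. All names and addresses are placeholders.
import socket

import dns.query
import dns.update

ZONE = "example.com."
RECORD = "www"
DNS_SERVER = "192.0.2.53"
BACKENDS = ["198.51.100.10", "198.51.100.20"]

def is_alive(ip, port=80, timeout=2):
    # Very crude health check: can we open a TCP connection?
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def publish_live_records():
    update = dns.update.Update(ZONE)
    update.delete(RECORD, "A")               # drop all existing A records
    for ip in BACKENDS:
        if is_alive(ip):
            update.add(RECORD, 300, "A", ip)  # re-add only live ones, low TTL
    dns.query.tcp(update, DNS_SERVER, timeout=5)

if __name__ == "__main__":
    publish_live_records()
```

You would run something like this from your monitoring system on a short interval; the failover delay is then roughly the check interval plus the record TTL (plus whatever caching the resolvers impose on top).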
Anycast is another solution indeed, and it no longer has any relationship with DNS (although DNS itself can be "anycasted", but that addresses the DNS infrastructure's own failover needs, not those of your application).
Basically, your multiple systems, in various places around the world, are advertised with the same IP address, so the DNS has only one record.
With the "magic" of BGP, each instance announcing a given IP address will collect all the nearby traffic, so you get load balancing for free, in fact. You also need some specific tooling so that, as soon as a local instance is dead (or in maintenance mode, for example), you stop announcing its IP address there, so that all other networks in the world, again because of BGP, learn that to reach "something" behind that IP they need to go somewhere else, to another instance of yours announcing this IP.
This is far more complicated to set up, as you need a proven BGP setup (and making errors in BGP can have even greater consequences than in DNS), multiple instances located in different datacentres, and possibly multiple AS numbers, depending on how you want your anycast done. This clearly needs skilled professionals in BGP routing, whereas the first solution with only DNS (in the simple case of just changing a static zone file) is within reach of any enthusiastic amateur.
So the answer also depends slightly on the network locations of your load balancers.
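To make the "specific tooling" above a bit more concrete: one common pattern (an illustration, not the only way) is a small health-check process driven by the local BGP speaker, for example ExaBGP's process API, which reads announce/withdraw commands from the script's standard output. The prefix, next-hop and health-check URL below are placeholders:

```python
# Sketch of a health-check process for one anycast instance, assumed to be
# run by ExaBGP via its "process" API (ExaBGP reads commands from stdout).
# Prefix, next-hop and check URL are placeholders.
import sys
import time
import urllib.request

PREFIX = "192.0.2.0/24"
NEXT_HOP = "10.0.0.1"
CHECK_URL = "http://127.0.0.1/healthz"

def service_is_healthy():
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

announced = False
while True:
    healthy = service_is_healthy()
    if healthy and not announced:
        sys.stdout.write(f"announce route {PREFIX} next-hop {NEXT_HOP}\n")
        sys.stdout.flush()
        announced = True
    elif not healthy and announced:
        sys.stdout.write(f"withdraw route {PREFIX} next-hop {NEXT_HOP}\n")
        sys.stdout.flush()
        announced = False
    time.sleep(5)
```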

Azure Traffic Manager erratic load balancing causing issues

I have an Azure Traffic Manager configured to route traffic across two datacenters based on performance (latency). The two DCs are replicas of each other and are engineered this way so that our global customers get good performance no matter where they are connecting from.
The application tiers do not hold state, and the data tiers are set up using SQL merge replication on a 1-minute timer to keep the DBs in sync, so as to provide service continuity in the event of a datacenter failover.
The issue I have found is that the Traffic Manager's routing is slightly erratic. I have observed registering a user under one datacenter only to find the login has been routed to the other one, where the SQL replication hasn't synced yet and the second DC isn't aware that the user exists, even though the user both registered and logged in from the same location! The DCs are in West US and Southeast Asia.
I'm looking at a few options to fix this. Solution A is to silo the user's data to a specific datacenter, so whatever DC the user registers in is used thereafter. I wouldn't have syncing issues, but I'd lose the continuity advantage that the SQL replication provides.
Solution B is to use a different, more predictable global load balancer. But first I want some opinions, and to see whether I am doing something wrong or whether my architecture is flawed.
Thanks for advice.
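To clarify Solution A, the "silo" would look roughly like this at the application tier: record a home datacenter for each user at registration, and redirect any login that lands on the other DC. The endpoint hostnames, DC names and user-store lookup below are placeholders, not part of the current setup:

```python
# Sketch of Solution A ("silo" users to a home datacenter): remember which DC
# a user registered in, and if a later login lands on the other DC, redirect
# it to the user's home endpoint. All names here are placeholders.
DC_ENDPOINTS = {
    "westus": "https://app-westus.example.com",
    "southeastasia": "https://app-sea.example.com",
}
THIS_DC = "westus"

def route_login(username, user_store):
    """Return None to handle the login locally, or a URL to redirect to."""
    # user_store is assumed to expose the DC recorded at registration time.
    home = user_store.get_home_dc(username)
    if home is None or home == THIS_DC:
        return None
    return DC_ENDPOINTS[home] + "/login"
```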
My solution also had challenges using Traffic Manager, although slightly different from yours. Traffic Manager is a great-value solution if it can work for you. As far as I am aware, no configuration in Traffic Manager makes it aware of sessions, so it is blinkered to its configured setting, performance in your case. This means it appears erratic relative to your expectation that it keep a session pinned to an endpoint for as long as that endpoint is available.
Your solution is very much enterprise-grade. Moving backwards with Solution A probably doesn't fit the requirement, given what you went to the effort of building. Solution B brings many features that Traffic Manager lacks, and one of them will resolve your issue. For other reasons I am looking at:
http://kemptechnologies.com/uk/server-load-balancing-appliances/virtual-loadbalancer/loadmaster-azure
It is designed for Azure and is available as a pre-installed VM. There are others available but this has been my choice and what I would use if I were in your position and wanted to keep the level of resilience you currently have.
Hope this helps.

How to protect a website from DoS attacks

What are the best methods for protecting a site from DoS attacks? Any idea how popular sites/services handle this issue?
What are the tools/services at the application, operating system, networking, and hosting levels?
It would be nice if someone could share real experience of dealing with this.
Thanks
Are you sure you mean DoS, not injection? There's not much you can do on the web programming end to prevent it, as it's more about tying up connections and is better blocked lower in the stack (at the network level) than at the application layer (web programming).
As for how most companies prevent them: a lot of companies use load balancing and server farms to absorb the incoming bandwidth. Also, a lot of smart routers monitor activity from IPs and IP ranges to make sure there aren't too many requests coming in (and, if so, block them before they hit the server).
The biggest intentional DoS I can think of is woot.com during a woot-off, though. I suggest trying Wikipedia ( http://en.wikipedia.org/wiki/Denial-of-service_attack#Prevention_and_response ) to see what they have to say about prevention methods.
I've never had to deal with this yet, but a common method involves writing a small piece of code to track IP addresses that are making a large number of requests in a short amount of time and denying them before any real processing happens.
Many hosting services provide this along with hosting, check with them to see if they do.
I implemented this once at the application layer. We recorded all requests served by our server farms through a service that each machine in the farm could send request information to. We then processed these requests, aggregated by IP address, and automatically flagged any IP address exceeding a threshold of a certain number of requests per time interval. Any request coming from a flagged IP got a standard CAPTCHA response; if they failed too many times, they were banned forever (dangerous if the DoS comes from behind a proxy). If they proved they were a human, the statistics related to their IP were "zeroed".
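In outline, that kind of per-IP counting can be sketched like this; the threshold, window size and the CAPTCHA hand-off below are illustrative placeholders, not the actual production values:

```python
# Rough sketch of per-IP request counting with a sliding window, in the
# spirit of the approach described above. Threshold and window are made up.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 300

_requests = defaultdict(deque)  # ip -> timestamps of recent requests
_flagged = set()

def record_request(ip):
    """Record a request; return True if the IP should get the CAPTCHA path."""
    now = time.time()
    window = _requests[ip]
    window.append(now)
    # Drop timestamps that have fallen out of the window.
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_REQUESTS_PER_WINDOW:
        _flagged.add(ip)
    return ip in _flagged

def clear_flag(ip):
    """Called when the client passes the CAPTCHA ("zero" its statistics)."""
    _flagged.discard(ip)
    _requests.pop(ip, None)
```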
Well, this is an old one, but people looking to do this might want to look at fail2ban.
http://go2linux.garron.me/linux/2011/05/fail2ban-protect-web-server-http-dos-attack-1084.html
That's more of a Server Fault sort of answer, as opposed to building this into your application, but I think it's the sort of problem that is most likely better tackled that way. If the logic for what you want to block is complex, consider having your application just log enough information to base the banning policy on, rather than trying to enforce the policy in the application itself.
Consider also that, depending on the web server you use, you might be vulnerable to things like a Slowloris attack, and there's nothing you can do about that at the web application level.

Why ww2 sub domains?

I have seen some domain names on the web with a prefix of ww2 or ww3 or so (ww2.somedomain.example, ww3.yourdomain.example), and these mostly appear when moving from page to page. What would be the reason for having such subdomains? Is there anything special about them, or are they just another subdomain? I mean, are they useful in any particular context?
People running large(-ish) sites used to do this when they needed to break up the load between more than one server. One machine would be called www then the next one would be called www2, etc.
Today, much better load balancing solutions are available that don't require you to expose your internal machine naming conventions to the browser clients.
Technically, the label before the primary domain name (e.g. the "mail" in mail.yahoo.com) can best be thought of as a machine name, identifying the web server, mail server, or whatever. It can also identify a group of machines (a web farm).
So the person building that machine can call it anything they want. The initials www are a (somewhat arbitrary) convention.
Oftentimes, ww{x} is used to indicate a particular server in a set of mirrored servers. If properly configured, I could have www.mydomain.example point to my web site through a load balancer, while using ww1, ww2, ww3, etc. to reach the site on a specific load-balanced server.
I can see three possibilities:
Making the browser load resources faster: the browser opens only a fixed number of connections to the same domain, so as not to overload the server.
They are using more than one server, so they can share the load between servers.
Separating some content onto a separate virtual host or server; some kind of organization...
As various answers have pointed out, modern day load-balancers can balance load without having to resort to using different sub-domains for each machine. However, there is still one benefit of dividing your site into various sub-domains: maximize browser connections.
All browsers limit the number of concurrent connections to a particular host (6 for most modern browsers). If a page contains lots of assets, page load would be slow as the browser queues those requests because of the connection limit. By loading different assets from different subdomains, you get around the connection limit, speeding up page load.
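One simple way to do that kind of sharding is to pick the subdomain deterministically from the asset path, so a given asset always comes from the same host and stays cacheable. A rough sketch (the hostnames are hypothetical):

```python
# Sketch of deterministic asset "domain sharding": hash the asset path to
# pick one of several hypothetical wwN hostnames, so the same asset always
# maps to the same host (keeping browser and CDN caches effective).
import zlib

SHARDS = ["ww1.example.com", "ww2.example.com", "ww3.example.com"]

def asset_url(path):
    shard = SHARDS[zlib.crc32(path.encode("utf-8")) % len(SHARDS)]
    return f"https://{shard}{path}"

# asset_url("/img/logo.png") always returns the same one of the three hosts.
```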
Typically it's a partitioning strategy. When sites get sufficiently large that they can't run (or run well) on a single server you then have to look at solutions for scaling the application out horizontally (ie more servers) rather than vertically (ie bigger servers).
Some example partitioning strategies are:
Certain users always use certain servers; this can be arbitrary or based on some criteria such as user type or geographic location (a small sketch of this follows the list);
When a user gets a session that session is assigned to a particular server (sometimes called "sticky sessions" although this can also be used where such different machines are transparent); and
Certain activities are always on certain machines.
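The first strategy above can be as simple as a stable mapping from a user attribute to a server. A sketch, with hypothetical server names and regions:

```python
# Sketch of "certain users always use certain servers": route by a user
# attribute where one exists, otherwise fall back to a stable hash of the
# user ID. Server names and regions are hypothetical.
import hashlib

SERVERS_BY_REGION = {
    "eu": "www1.example.com",
    "us": "www2.example.com",
}
DEFAULT_SERVERS = ["www3.example.com", "www4.example.com"]

def server_for(user_id, region=None):
    if region in SERVERS_BY_REGION:
        return SERVERS_BY_REGION[region]
    digest = hashlib.sha1(user_id.encode("utf-8")).hexdigest()
    return DEFAULT_SERVERS[int(digest, 16) % len(DEFAULT_SERVERS)]
```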
Another common case is organizational reasons. In an extremely large company, www might be for their main marketing website. And, ww2 might be, say, for product documentation pages.
In an ideal world, all departments would share perfectly. In practice, a big company might have their (www) marketing pages managed by an external agency and their internal (ww2) pages done by an internal team. Often, the marketing agency just doesn't update pages quickly, refuses to run certain stacks, or is too limiting in terms of bureaucratic needs.
The marketing agency may insist on controlling the www and not sharing due to past situations where a company website went down due to internal reasons and yet the agency got blamed, or vice versa.
So, theoretically, there's no need to do this with modern load balancing and such. But, in practice, it can be a lot cheaper and more straightforward, and it can allow better business productivity.

Where should restricting IP address be handled?

We run a reverse proxy in front of our application tier and I'm wondering where the "best practice" place for handling the IP restriction is.
Currently, we use the application's security to restrict access to specific resources by IP address, but this has caused some issues since we moved to running behind a reverse proxy. It's quite easy to configure the allow/deny rules at the proxy instead of the application, but since we run multiple applications behind the proxy, making modifications to the config there has the potential to affect other applications (not a huge danger, but still present).
Is it better to do the filter further up the chain or closer to the application?
Are there any gotchas, like what we've encountered by doing application-level restriction and adding a reverse proxy, where all the requests "come from" the proxy, forcing us to use a header to find the "real" IP address?
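For illustration, the header handling I mean looks something like this: trust X-Forwarded-For only when the connecting peer is our own proxy, then apply the allowlist to the extracted address. The proxy address and allowed networks below are placeholders:

```python
# Sketch of the "real IP behind a reverse proxy" gotcha: only believe
# X-Forwarded-For when the peer is our trusted proxy, then check the client
# IP against the allowlist. Addresses and networks are placeholders.
import ipaddress

TRUSTED_PROXIES = {ipaddress.ip_address("10.0.0.5")}
ALLOWED_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]

def client_ip(peer_addr, xff_header):
    peer = ipaddress.ip_address(peer_addr)
    if peer in TRUSTED_PROXIES and xff_header:
        # The rightmost entry was appended by our proxy; entries further
        # left are client-supplied and can be forged.
        return ipaddress.ip_address(xff_header.split(",")[-1].strip())
    return peer

def is_allowed(peer_addr, xff_header):
    ip = client_ip(peer_addr, xff_header)
    return any(ip in net for net in ALLOWED_NETWORKS)
```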
We filter as early as possible and keep it away from the application; these sorts of things are better managed by network operations. The reason is that app developers or maintainers are not always in the loop when IP addresses change, and the network ops people are usually the first to know. Also, network-level tools are usually better at providing/restricting access than software-level tools.
I would never restrict by IP address. Restrictions like that are the job of a security layer, not of the network layer, which is where IP addresses live. I rarely find value in having an application restrict the implementation of the network.
This depends on the type of resources that need to be restricted by IP. If parts of the application need to be restricted via IP then the application should be handling it. If the entire application needs to be blocked then you should be further up the chain.
The general rule is to restrict as early as possible without compromising any audit systems you have in place (it is almost always a good idea to know when people try to break your security system).
I restrict by IP address as early as possible; this eliminates unnecessary traffic in the following layers or subnetworks. So my advice is similar to u07ch's: do it as early as possible.
