Currently we use DNS polling for four web servers.
The problem we've run into is that when a user refreshes, the request might land on a different web server. This feels very bad once a user has logged in: we use a session to remember login status, and when the refresh hits another web server, that session is lost.
So the ideal solution would keep the user on the same web server across refreshes. Is there a way to do this?
Ok, I believe you mean "Round Robin DNS". What you describe is a very common problem and there is no "right" solution for it, since the possible answers depend on many variables: are you trying to provide automatic failover or just load balancing? Are you willing to spend time and/or money on a load balancer? What technologies are you using? Java EE? PHP? Apache? IIS?
Having said that, if you're just after load balancing and failover is not much of an issue, you may want to use different names for each server (www1, www2, www3 and so on) and redirect to them from your "main" web server (www) upon first access. It's simple (and simplistic) but practical in a few settings.
Can the web servers use a common database server to store the session information?
I know that certain hardware-based load balancers will create a "sticky" relationship between a user and a server to avoid this type of problem.
You have quite a few options.
You can store sessions in a key-value store, e.g. memcached (my personal favorite)
You can store sessions in a database
You can put reverse-proxy load balancers in DNS and keep your servers behind them. Then configure them so that all requests from the same IP go to the same server, regardless of which load balancer they pass through. In HAProxy this option is called balance source. Beware: if the number of nodes changes, sessions can be lost. You can use the cookie or url_param features to avoid this (see the sketch below).
See the HAProxy documentation. It's worth reading, really.
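As a rough sketch, assuming placeholder server names and addresses, the cookie-based variant in an HAProxy backend could look like this:

    backend web_servers
        balance roundrobin
        # insert a SERVERID cookie so each client keeps hitting the server it got first
        cookie SERVERID insert indirect nocache
        server web1 10.0.0.1:80 check cookie web1
        server web2 10.0.0.2:80 check cookie web2

Swapping the roundrobin/cookie pair for "balance source" pins clients by source IP instead; it needs no cookie, but sessions can move when the set of servers changes.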
Are the four web servers all on the same site and network, or are they distributed?
If the former, you can include a server ID somewhere in the HTTP response, such that a reverse proxy in front of the real servers can identify which server is responsible for the session.
A DNS server that can respond based on the location of the client could solve this problem. PowerDNS with the geoip module or GeoIPdns are some examples. You would need to make sure that the IP address sets were non-overlapping so a client always got the same response.
This would not provide any sort of fail over on its own.
Related
I have successfully set up haproxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of a cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2, because in one specific case only that server is throwing an error on one web page.
Does anyone know how to do that without building a new cluster with a URI filter that has only one computer in it? I am hoping to use the cluster as-is, but add something to the URI that tells haproxy which server in the cluster to use.
Thanks!
Have you thought about using a different port for this? You could define a new listen section on a different port, since, as I understand it, you are free to modify the URL you use?
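Just to illustrate, a rough sketch of such extra listen sections (the ports and backend addresses here are made up) might be:

    # the normal balanced frontend stays untouched; these exist only for testing
    listen web1_direct
        bind *:8081
        mode http
        server web1 10.0.1.1:80 check

    listen web2_direct
        bind *:8082
        mode http
        server web2 10.0.1.2:80 check

Then requesting mysite.com:8082 always lands on web2.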
Basically, haproxy cannot do what I was hoping. There is no way to add a param to the URL to suggest which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level.
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with a session cookie that is too large because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, we have the page simply drop the session and reload the page.
This is working well for our testing purposes.
I don't want to do anything illegal with this (e.g. vote repeatedly; in fact, somebody is already doing that), I'm just curious about how it works. I have studied TCP/IP, and I found there is a lot of software, such as "IP changers", with which you can submit to a website from different IP addresses. It really seems like magic! So I analysed some possible mechanisms behind it, but I ended up ruling out every one of them.
I thought that they might connect to and disconnect from the Internet continuously. Each time they connect, the ISP assigns a new IP address, so they could use the new IP to submit to the website, disconnect after submitting successfully, and then reconnect for the next attempt... But this seems impractical to some extent: done this way, every submission would take a long time, and it doesn't work in some areas.
Modify TCP/IP data packets. For some time I thought this might work, but then I ruled it out. Suppose I submit to a website and change the source IP address of the packets I send to it. Everything seems OK, except that the web server will send its replies to the fake IP, so I won't get any information back from the website. But in circumstances where we don't need the reply, it should work, right? netfilter and iptables on Linux may be able to do this, but I'm not sure because I don't know those tools very well.
Using proxy servers. I also think this is impractical to some extent. Is there any way to get lots of free proxy servers? Most free proxy servers are very unstable; there's a good chance you won't be able to use a given proxy within a day. Of course, paid proxy servers may be permanent, but with that money you could do something better.
In my opinion all three methods have disadvantages, and the real implementation may be none of them. Can anybody tell me the actual mechanism behind this technique?
Use lots of proxy servers. That will do the trick, and since they can be harvested quite easily, that's not very hard. Proxies can be installed on hacked websites, for example.
The added question:
Using proxy servers. I also think this is impractical to some extent. Is there any way to get lots of free proxy servers?
By simply hacking lots of web servers, fully automated, this is possible. For example, searching for vulnerable Joomla installs could allow you to install software on each web server. Normal computers can of course also be used, as in a botnet.
Most free proxy servers are very unstable; there's a good chance you won't be able to use a given proxy within a day. Of course, paid proxy servers may be permanent, but with that money you could do something better.
Stability is of course important, but in this case not really, actually. You just send out lots and lots and lots of requests and don't care which ones succeed and which don't. It doesn't matter for your target.
1. ISP reconnect
This will not work for some (most?) ISPs which will reassign the same IP on a reconnect (as my provider does). Even if it works, you are likely to get the same IP address after some reconnects.
2. IP spoofing
That's the term describing your second method. You change the src-address of the outgoing IP packet. There are two problems with that:
Most ISPs' routers don't allow it. They detect that the src address can't come from inside their network, so they simply drop the packet.
If you have a machine that is allowed to do this (maybe a dedicated server), you can only fake individual IP frames. This allows you to, e.g., spoof a DNS request, but as you said, you will never get the response. In particular, you cannot establish a connection over a stateful protocol like TCP, because that requires a bidirectional handshake. So you can't, for example, fake an HTTP request this way (even if you don't need the answer).
3. Proxying
This is the only method that works. You have several options here:
Use open proxy servers (can be found using a search engine, although some will identify themselves as proxies and provide the original IP in the X-Forwarded-For HTTP header, which makes them basically useless for this use case)
Use hacked servers/desktop machines as proxies (maybe from a botnet)
Use free networks like JAP or TOR (the latter of which is probably your best bet, because you can change the exit nodes using some trickery)
If you are going to do something illegal, you might as well go all the way. There ARE people who run "botnets", which are basically just armies of a few hundred to a few thousand infected computers (that's what most viruses do). The people who run these armies can charge a certain amount of money for their "slaves" to visit a website for you (and rate/vote on whatever), so you get a few hundred or a few thousand more ratings...
I can't say exactly where to get these services or how much they cost, since I haven't done it myself, but I know for sure that people over at "H#ckf0rums.net" will do it for you.
I will be running a dynamic web site, and if the server ever stops responding, I'd like to fail over to a static website that displays a "We are down for maintenance" page. From my reading, switching the DNS dynamically may be an option, but how quickly will that change take effect? And will everyone see the change immediately? Are there any better ways to fail over to another server?
DNS has a TTL (time to live) and gets cached until the TTL expires. So a DNS cutover does not happen immediately. Everyone with a cached DNS lookup of your site still uses the old value. You could set an insanely short TTL but this is crappy for performance. DNS is almost certainly not the right way to accomplish what you are doing.
A load balancer can do this kind of immediate switchover. All traffic always hits the load balancer first, which under normal circumstances proxies requests along to your main web server(s). In the event of a web server crash, you can just have the load balancer direct all web traffic to your failover web server.
pound, perlbal, or another software load balancer could do that, I believe, yes
Perhaps even Apache rewrite rules could allow this? I'm not sure if there's a way to branch when the dynamic server is not available, though. Customize the Apache 404 response to your liking?
First of all, it's important to understand what kind of failure you want to fail over from. If it's an app/DB error and the server remains up, you can create a script that does some checks and fails your website over to a temporary page (by changing the Apache config or .htaccess).
If it's a hardware failure, the DNS solution is OK, but it's not immediate, so you will lose some user traffic.
The ideal solution is to use a proxy (like HAProxy) that forwards the HTTP requests to at least two web servers, automatically detects when one of them fails, and switches over to the working one.
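As a rough sketch (the server names, addresses and health-check URL below are only placeholders), the HAProxy side could look like this:

    backend www_pool
        option httpchk GET /health
        server web1 10.0.0.1:80 check
        server web2 10.0.0.2:80 check
        # only used once all checked servers are down; serves the maintenance page
        server maintenance 10.0.0.9:80 backup

The server marked "backup" receives traffic only when the regular servers fail their health checks, which gives you the automatic switch-over.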
If you're using Amazon AWS you can use ELB - Elastic Load Balancer
I have two IIS servers running with NLB. Unfortunately I cannot use a shared session server, so every server uses its own sessions. How can I ensure that all requests from the same user are forwarded to the same IIS server?
Found this and decided to share with others:
Use the client affinity feature. When client affinity is enabled, Network Load Balancing directs all TCP connections to the same cluster host. This allows session state to be maintained in host memory. You can enable client affinity in the Add/Edit Port Rules dialog box in Network Load Balancing Manager. Choose either Single or Class C affinity to ensure that only one cluster host will handle all connections that are part of the same client session. This is important if the server application running on the cluster host maintains session state (such as server cookies) between connections. For more information about Network Load Balancing affinity, see Help in the Network Load Balancing snap-in.
I think what you're looking for is Sticky Sessions. Sticky sessions are implemented by your load balancer though. You probably need to setup an outside load balancer (BIG-IP, HAProxy, etc.) that can do sticky sessions.
You can do that easily as long as none of your customers use a distributed proxy system:
In the properties of the NLB cluster, on the "Port Rules" tab, you can choose the filtering mode and the affinity:
You cannot choose "None" because you don't have central sessions.
But "Single" would direct every user to the same server as long as their IP stays the same.
If you anticipate, e.g., AOL proxy servers, then "Class C" might be a safer choice (albeit maybe reducing the load balancing a little bit), because the same class C net always goes to the same server.
I guess MS implements this simply: both hosts know which IPs are even or odd (or which class C octets are even or odd) and always distribute the load the same way depending on the IP address.
Why would you want to do this? If it's because of session state then you should have a database or out-of-process server set up in a common place and have all nodes reference that.
I would consider a reverse proxy that sits in front of either server and remembers which external users are using which servers.
I know (from using it this way) Cherokee supports IPHash proxying but I'm sure there are lots more.
Just to add to Lloyd's answer, you should avoid using session in a load balanced environment anyway. The whole purpose behind using session is to avoid database calls; if you end up storing the session data back into the database you usually gain nothing.
The reason being that 1. you now have to make 2 database calls for each page load (retrieve and store) and 2. that data now has to go through serialization / deserialization boundaries. Most of the time this ends up being a more expensive operation than just retrieving the data you wanted to begin with.
Now, to your actual question. You do have the option to store the session data in the view state. Optionally, you could forgo session and instead use cookies. If you go this route, be sure to encrypt them on the way out and decrypt when receiving them.
How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the whole server is down and the request can't even reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server is down. I can make permanent changes to the DNS, but not manual changes as the server goes down.
I have used the uptime services at DNSMadeEasy to great success. In effect, they set the DNS TTL to a very low number (5 minutes). They take care of pinging your server.
In the event of outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the NLB solution proposed does not address. If the network connection is down, both servers in a standard NLB configuration will be unreachable.
Some server needs to dish out the "currently offline" page, so if your server is completely down, there will have to be some other server serving the file(s). You could set up a cluster of servers (even if just 2) where, while the first one is down, the 2nd is configured only to return the "currently offline" page. Once the 1st server is back up, you can take down the 2nd safely (as server 1 will take all the load).
You probably need a second server with 100% uptime, with some kind of failover load balancer in front: if the main server is online it redirects to that, and if it isn't it redirects to itself, showing a page saying the server is down.
I believe that if the server is down, there is nothing you can do.
The request will send up a 404 network error because when the web address is resolved to an IP, the IP that is being requested does not exist (because the server is down). If you can't change the DNS entry, then the client browser will continue to hit xxx.xxx.xxx.xxx and will never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make a permanent change to the DNS. But you would still need a two-server setup in order to achieve what you are talking about. You can point the DNS at a load balancer, which would be able to direct requests to a server that is currently active. However, this still requires 100% uptime for the server that the DNS points to.
No matter what, if the server that the DNS is pointing to (which you must control, in order to redirect the traffic) is down, then all requests will receive a 404 network error.
EDIT Thanks to brian for pointing out my 404 error error.
Seriously, DNS is not the right answer to server load-balancing or fail-over. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers are Unix, this is easily done by running Quagga on each server, and then using OSPF as the local routing protocol. I've personally used this for warm standby servers where the redundant system was actually in another data center, albeit one that was connected via a direct link to the main data center.
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
I'm thinking that if the site is load balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would therefore send the user to a backup server with a message describing the technical problems.
Other than that.....
The only thing I can think of is to control the calling page. Obviously that won't work in all circumstances... but if you know that most of your hits to this server will come from a particular source, then you could add a JavaScript test to that source and redirect to a "server down" page that is generated on a different server.
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks are saying - when a server is down, the browser gets a 404 error when it attempts a connection.
... perhaps there would be a way, at a point in between, to detect 404 errors being returned by servers and replace them with a "server is down" web page. You'd need something like an HTML firewall or some other intermediate network gear between the server and the web client.