I'm running a Node.js web server on Azure, using the Express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is:
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn cloudflare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare
(the requests come over HTTPS) and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For reference, these requests should be, and are, quite quick when they succeed, so I have a hard time believing the issue is a long-running process stalling the response.
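To double-check that, I figure something like this Express timing middleware (just a sketch; the 5-second threshold is arbitrary, given that Cloudflare's 524 fires at around 100 seconds) would confirm whether any handler is actually slow:

const express = require('express');
const app = express();

// Log any request whose response takes longer than a threshold,
// to rule out a stalled handler as the cause of the 524s.
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const ms = Date.now() - start;
    if (ms > 5000) {
      console.warn(req.method + ' ' + req.originalUrl + ' took ' + ms + 'ms');
    }
  });
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(8080);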
I do not believe we have any sort of IP-based throttling or firewall (at least not intentionally?)
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use CloudFlare you need to switch DNS resolution over to CloudFlare's DNS servers. For more information on configuring a custom domain name, see https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the CloudFlare FAQ article "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
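To make sure the records actually resolve the way you expect, a quick check with Node's dns module might help (a sketch; www.yourdomain.example is a placeholder for your custom domain):

const dns = require('dns');

// The CNAME should point at your *.azurewebsites.net host
// (or be flattened away once CloudFlare proxying is enabled).
dns.resolveCname('www.yourdomain.example', (err, cnames) => {
  console.log('CNAME:', err ? err.code : cnames);
});

// With CloudFlare proxying on, the A records should be CloudFlare IPs.
dns.resolve4('www.yourdomain.example', (err, addrs) => {
  console.log('A records:', err ? err.code : addrs);
});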
Try clearing your cookies.
I had a similar issue when I changed Cloudflare settings to point at a new host, but the Cloudflare cookies for the domain were doing something funky to the request (I am guessing they might have been trying to contact the old host?)
My app has been running fine for a while, and it just started getting DDoSed. I took a look at the IPs, and they all originate from CloudFlare. This either means that CloudFlare is DDoSing me (isn't this illegal?) or that someone is using CloudFlare as a proxy to DDoS my app, but then why wouldn't CloudFlare catch this?
Verify you don't have any page rules, or any combination of settings in your traffic tab and A records, that could be creating a forwarding loop. I once accidentally took down 800 websites by setting up an infinite DNS/page-forwarding loop. It was horrible; I thought I was getting attacked, but it turned out I had just messed up some page forwarding.
So, I've made a website which uses an SSL certificate, and I added Cloudflare's free protection to it. Unfortunately, apart from port 443, every other port Cloudflare lists for HTTPS gives a 523 Origin is unreachable error. Even when I stop Apache, it's still the same. Is there some misconfiguration involved? I will provide any demo or source code necessary, as requested in the replies.
Note: this problem is independent of wired/wireless and of platform: iPad (with Google DNS), Linux, Windows.
I haven't been able to access several sites, including Stack Overflow (cdn.sstatic.net), aws.amazon.com (d36cz9buwru1tt.cloudfront.net), Heroku, GitHub, etc., for 3 days from Turkey on the ISP Superonline.
When I try to visit aws.amazon.com, the browser downloads the HTML and some images properly, but can't download others: those hosted on d36cz9buwru1tt.cloudfront.net or similar subdomains.
Chrome shows several images from that subdomain stuck as pending, so the page load never finishes.
I can't access http://d36cz9buwru1tt.cloudfront.net directly either; it keeps loading for a long time (30 seconds to several minutes). But when I use a proxy through Amsterdam, it loads immediately.
Without the proxy, I can still get its IP with ping:
64 bytes from server-54-240-162-83.fra6.r.cloudfront.net (54.240.162.83): icmp_req=1 ttl=53 time=58.2 ms
While writing this, the previous URL became available again after several hours, and now github.com can't be loaded because of CSS files on its CDN: https://github.global.ssl.fastly.net/assets/github2-f227c0e7c55002ba0645fc8d3761d00bce36e248.css
$ wget https://github.global.ssl.fastly.net/assets/github2-f227c0e7c55002ba0645fc8d3761d00bce36e248.css
--2013-11-19 21:39:32-- https://github.global.ssl.fastly.net/assets/github2-f227c0e7c55002ba0645fc8d3761d00bce36e248.css
Resolving github.global.ssl.fastly.net (github.global.ssl.fastly.net)... 185.31.17.184, 185.31.17.185
Connecting to github.global.ssl.fastly.net (github.global.ssl.fastly.net)|185.31.17.184|:443... connected.
...
...
It waits, but no response ever comes.
What could be the cause of this problem? My ISP was no help.
UPDATE: Changing my IP solved the problem. It seems that someone who used that IP before me got banned by CloudFront.
I had the exact same problem, and changing my DNS server solved the issue. For me, Coursera wouldn't open, and neither would 9GAG.
I changed the default DNS server provided by my ISP to the ones provided by Google, i.e.
8.8.8.8 and 8.8.4.4
I hope this solves your issue as well.
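If you want to see whether the resolver really is the difference, a small sketch with Node's dns.Resolver can compare the system (ISP) resolver against Google's, using one of the failing hosts from this thread:

const dns = require('dns');

const googleDns = new dns.Resolver();
googleDns.setServers(['8.8.8.8', '8.8.4.4']);

// Run the same query through both resolvers and compare the answers.
dns.resolve4('d36cz9buwru1tt.cloudfront.net', (err, addrs) => {
  console.log('system resolver:', err ? err.code : addrs);
});
googleDns.resolve4('d36cz9buwru1tt.cloudfront.net', (err, addrs) => {
  console.log('google resolver:', err ? err.code : addrs);
});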
It seems there are a lot of problems with some ISPs and DNS resolution on CloudFront; see this thread: https://forums.aws.amazon.com/thread.jspa?messageID=263168
Have you tried changing your DNS?
I also have exactly the same problem, in the same situation as you.
I think we are really experiencing exactly the same thing (though for me it only started today).
I first noticed the problem on CloudFront, then on Fastly; later I could connect to CloudFront but not to Fastly.
To answer your question, I have a possible speculation about the root of the problem.
However, if this speculation is true, the issue can't be solved on our end.
I think it's because of LSN (also called NAT444 or CGN) being installed in the ISP's network.
(ISPs don't want customers to notice this change.)
To check whether this speculation is plausible, look at your modem/router: if the IP address it receives from the ISP is in the block 100.64.0.0/10, that would explain the phenomenon.
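Since 100.64.0.0/10 is the block reserved for carrier-grade NAT, a tiny check like this (a sketch; feed it the WAN address your router reports) tells you whether you are behind LSN:

// Is this address in the CGN/LSN block 100.64.0.0/10?
// The block spans 100.64.0.0 through 100.127.255.255.
function isCgn(ip) {
  const [a, b] = ip.split('.').map(Number);
  return a === 100 && b >= 64 && b <= 127;
}

console.log(isCgn('100.72.13.5'));  // true  -> behind carrier-grade NAT
console.log(isCgn('203.0.113.9')); // false -> an ordinary public address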
My ISP deployed LSN shortly before this problem arose.
I think the IP address pool in the LSN is too small (poorly deployed by the ISP), so too many users share the same IP address.
This causes the CDN networks to think they are seeing a DoS attack from that particular IP address, and they then temporarily block (or null-route) the LSN IP address.
One note: I'm sure this is not about DNS, because Fastly uses a trick called "round-robin DNS" together with client retry; I tried connecting to more than one of Fastly's IP addresses and also checked that the values (all A records received) were correct.
To work around the issue, you can set up a SOCKS proxy on a VPS and write a PAC script to redirect some traffic through the proxy.
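A minimal PAC script for that could look like the following (a sketch: the domain matches and the proxy address my-vps.example:1080 are placeholders for your own setup):

function FindProxyForURL(url, host) {
  // Send only the affected CDN hosts through the SOCKS proxy on the VPS;
  // everything else connects directly.
  if (dnsDomainIs(host, ".cloudfront.net") ||
      dnsDomainIs(host, ".fastly.net")) {
    return "SOCKS5 my-vps.example:1080";
  }
  return "DIRECT";
}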
I will be running a dynamic web site, and if the server ever stops responding, I'd like to fail over to a static website that displays a "We are down for maintenance" page. From my reading, switching the DNS dynamically may be an option, but how quickly does that change take effect? And will everyone see the change immediately? Are there any better ways to fail over to another server?
DNS records have a TTL (time to live) and get cached until the TTL expires, so a DNS cutover does not happen immediately: everyone with a cached DNS lookup of your site keeps using the old value. You could set an insanely short TTL, but that is bad for performance. DNS is almost certainly not the right way to accomplish what you are doing.
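If you're curious what TTL your records are actually served with, Node's dns module can show it (a sketch; www.example.com is a placeholder):

const dns = require('dns');

// Ask for the A records along with the TTL each one carries.
dns.resolve4('www.example.com', { ttl: true }, (err, records) => {
  if (err) throw err;
  records.forEach((r) => console.log(r.address + ' ttl=' + r.ttl + 's'));
});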
A load balancer can do this kind of immediate switchover. All traffic hits the load balancer first, which under normal circumstances proxies requests along to your main web server(s). If the web server crashes, the load balancer can direct all web traffic to your failover web server instead.
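To illustrate the mechanism (this is only a sketch, not a production load balancer: the hosts and ports are placeholders, and it doesn't buffer request bodies for replay, so the retry is really only safe for GETs):

const http = require('http');

const PRIMARY  = { host: '10.0.0.10', port: 8080 }; // placeholder addresses
const FAILOVER = { host: '10.0.0.11', port: 8080 };

// Pipe an incoming request to one upstream server; report failures.
function forward(req, res, target, onError) {
  const upstream = http.request({
    host: target.host,
    port: target.port,
    method: req.method,
    path: req.url,
    headers: req.headers,
  }, (upRes) => {
    res.writeHead(upRes.statusCode, upRes.headers);
    upRes.pipe(res);
  });
  upstream.on('error', onError);
  req.pipe(upstream);
}

http.createServer((req, res) => {
  forward(req, res, PRIMARY, () => {
    // Primary unreachable: serve the same request from the failover box.
    forward(req, res, FAILOVER, () => {
      if (!res.headersSent) res.writeHead(502);
      res.end('Both servers are down');
    });
  });
}).listen(80);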
Pound, Perlbal, or another software load balancer could do that, I believe, yes.
Perhaps even Apache rewrite rules could allow this? I'm not sure there's a way to branch when the dynamic server is unavailable, though. Maybe customize Apache's 404 response to your liking?
First of all, it's important to understand which kind of failure you want to fail over from. If it's an app/db error and the server stays up, you can create a script that runs some checks and fails your website over to a temporary page (by changing the Apache config or .htaccess); a sketch of such a check follows.
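As an example of that kind of check, here is a rough sketch (the paths, URL, and thresholds are all placeholders; it polls the app and, on failure, swaps in an .htaccess that rewrites everything to a static maintenance page):

const http = require('http');
const fs = require('fs');

const HTACCESS = '/var/www/site/.htaccess'; // placeholder path
const MAINTENANCE_RULES =
  'RewriteEngine On\n' +
  'RewriteCond %{REQUEST_URI} !^/maintenance\\.html$\n' +
  'RewriteRule ^ /maintenance.html [R=302,L]\n';

function failover(err) {
  fs.writeFileSync(HTACCESS, MAINTENANCE_RULES);
  console.error('App unhealthy, maintenance page enabled:', err && err.code);
}

// Poll a health endpoint; anything but a 200 (or a timeout) triggers failover.
function check() {
  const req = http.get('http://127.0.0.1:8080/health', (res) => {
    if (res.statusCode !== 200) failover();
    res.resume();
  });
  req.on('error', failover);
  req.setTimeout(5000, () => req.destroy());
}

setInterval(check, 10000); // check every 10 seconds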
If it's a hardware failure, the DNS solution works, but it's not immediate, so you will lose some user traffic.
The ideal solution is to use a proxy (like HAProxy) that forwards HTTP requests to at least two web servers, automatically detects when one of them fails, and switches over to the working one.
If you're using Amazon AWS, you can use ELB, the Elastic Load Balancer.
How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the full server is down and the request can't even reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server is down. I can make permanent changes to the DNS, just not manual ones as the server goes down.
I have used the uptime services at DNSMadeEasy with great success. In effect, they set the DNS TTL to a very low number (5 minutes) and take care of pinging your server.
In the event of outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the proposed NLB solution does not address: if the network connection is down, both servers in a standard NLB configuration will be unreachable.
Some server needs to dish out the "currently offline" page, so if your server is completely down, some other server will have to serve the file(s). You could set up a cluster of servers (even just two) where, while the first one is down, the second is configured only to return the "currently offline" page. Once the first server is back up, you can safely take down the second (as server 1 will take all the load).
You probably need a second server with 100% uptime, with some kind of failover load balancer in front of it: if the main server is online, the balancer directs traffic to it, and if it isn't, the balancer serves a page saying the server is down.
I believe that if the server is down, there is nothing you can do.
The request will fail with a network error, because when the web address is resolved to an IP, nothing answers at that IP (the server is down). If you can't change the DNS entry, then the client browser will continue to hit xxx.xxx.xxx.xxx and will never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make permanent changes to the DNS. But you would still need a two-server setup to achieve what you are talking about. You can point the DNS at a load balancer, which can direct requests to whichever server is currently active. However, this still requires 100% uptime for the machine the DNS points to.
No matter what, if the server that the DNS is pointing to (which you must control, in order to redirect the traffic) is down, then all requests will fail with a network error.
EDIT Thanks to brian for pointing out my 404 error error.
Seriously, DNS is not the right answer for server load-balancing or fail-over. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers run Unix, this is easily done by running Quagga on each server and then using OSPF as the local routing protocol. I've personally used this for warm-standby servers where the redundant system was actually in another data center, albeit one connected via a direct link to the main data center.
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
I'm thinking that if the site is load balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would send the user to a backup server with a message describing the technical problems.
Other than that, the only thing I can think of is to control the calling page. Obviously that won't work in all circumstances, but if you know that most of your hits to this server will come from a particular source, you could add a JavaScript test to that source and redirect to a "server down" page generated on a different server; a sketch follows.
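Something along these lines in the calling page, for example (all URLs are placeholders, and /ping.gif is a hypothetical tiny image served by the app server):

// Probe the app server by loading a tiny image from it; if the image
// fails to load (or nothing answers within 5s), go to the status page
// hosted on a different server instead.
function checkServerAndGo(appUrl, statusUrl) {
  var img = new Image();
  var done = false;
  img.onload = function () { done = true; window.location = appUrl; };
  img.onerror = function () { done = true; window.location = statusUrl; };
  img.src = appUrl + '/ping.gif?' + new Date().getTime(); // cache-buster
  setTimeout(function () {
    if (!done) window.location = statusUrl;
  }, 5000);
}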
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks are saying, when a server is down, the browser just gets a connection error when it attempts to connect.
... perhaps there would be a way, at some point in between, to detect connections failing and replace the response with a "server is down" web page. You'd need something like an HTML firewall or some other intermediate network gear between the server and the web client.