Slow HTTPS response in Jetty - security

I'm using Servlet 3.0 with Jetty 8.1.1 and the SslContextFactory on an Amazon EC2 machine (m1.small).
The first HTTPS request from localhost (on the Amazon machine) takes about 150ms, and further
requests get faster (down to ~40ms), but they never get close to
the HTTP response time of only 20ms - why? Is encryption really that
slow?
Also, when comparing HTTPS and HTTP from outside the Amazon cloud,
the difference gets even worse: HTTPS requests are at least 400ms
slower! How can that be? Is the encrypted content also bigger? And
how can I debug this or make it all faster?
Some more information: all 'measurements' are done unscientifically via time curl http://mydomain.com/ping, but they are reproducible. Also, there is an EC2 load balancer in between. I'm sure I've either configured something wrong or I'm misunderstanding something big. Let me know!
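For reference, curl can also break that total time down into phases, which shows whether DNS, the TCP connect, or the TLS handshake is eating the extra HTTPS time; a minimal sketch using the same placeholder domain:

# -w prints timing variables after the transfer; TLS time is time_appconnect minus time_connect
curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} firstbyte=%{time_starttransfer} total=%{time_total}\n' https://mydomain.com/ping

If time_appconnect dominates, the handshake itself is the cost; if time_namelookup dominates, it is a DNS issue.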

update to 8.1.7
check the time from localhost on the AWS machine for reference
check using the IP vs. the DNS name; quite often those sorts of long pauses involve DNS issues
set your /etc/hosts to bypass a DNS lookup for the host as a test as well (a sketch of both checks follows this list)
set -Dorg.eclipse.jetty.LEVEL=DEBUG on the server side to enable debug logging; it should help you correlate the roundtrip inside of Jetty and compare it to actual network results
SSL decryption does incur some performance hit, though it is hard to say that would account for all of the difference here
odds are this is not specific to Jetty but something in the environment; hopefully one of the bullets above will steer you in the right direction
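A minimal sketch of the IP-vs-DNS and /etc/hosts checks from the bullets above (203.0.113.10 is a placeholder for the machine's real IP):

# compare DNS latency and by-name vs. by-IP request times
dig mydomain.com                                  # the "Query time" line shows the lookup cost
time curl -s http://mydomain.com/ping > /dev/null
time curl -s http://203.0.113.10/ping > /dev/null
# or bypass DNS for the name entirely while testing:
# echo '203.0.113.10 mydomain.com' >> /etc/hosts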

I need to find out how to enable SSL sessions. For this I've created a new question, as it is unclear how to turn them on in Jetty and how to handle them on the client side.
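In the meantime, the client side can at least verify whether SSL sessions are being resumed at all; a sketch with openssl (mydomain.com again as the placeholder):

# -reconnect makes five further connections that try to reuse the first session;
# lines starting with "Reused" mean resumption works, "New" means a full handshake each time
openssl s_client -connect mydomain.com:443 -reconnect < /dev/null | grep -E '^(New|Reused)'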

Related

Cloudflare 524 w/ nodejs + express

I'm running a Node.js web server on Azure, using the Express library for HTTP handling. We've been attempting to enable CloudFlare protection on the domains pointing to this box, but when we turn CloudFlare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn CloudFlare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare
(the requests come over HTTPS) and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be and are quite quick when they succeed, so I'm having a hard time believing the issue is due to a long-running process stalling the response.
I do not believe we have any sort of IP-based throttling or firewall (at least not intentionally?).
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use CloudFlare you need to switch DNS resolution over to the CloudFlare DNS servers; see more information on configuring a domain name at https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the CloudFlare FAQ "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
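A quick way to confirm which DNS path is actually in play (yourdomain.com as a placeholder):

# the NS records should be the CloudFlare name servers, and with proxying on,
# the A records should be CloudFlare IPs rather than your Azure box
dig +short NS yourdomain.com
dig +short A yourdomain.com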
Try clearing your cookies.
Had a similar issue when I changed CloudFlare settings to a new host, but CloudFlare cookies for the domain were doing something funky to the request (I am guessing it might be trying to contact the old host?).

What is the impact on API users when migrating whole site from HTTP to HTTPS?

I'm going to migrate a whole site from HTTP to HTTPS. What could the impact be for API users? If necessary, should I send out communication to notify all existing API users? Is there a way I can redirect them?
Here is my infrastructure:
AWS ELB
Apache
Weblogic
Thanks in advance.
Neal.
As for impacts, there is overhead for the HTTPS handshake, which means any new connection can take up to an additional second to connect. If connections are long-lived, reused, or opened close together, the impact will be less; if you have infrequent, short connections, it will be bigger. Overall I still think it is worth the extra security. HTTP is only good for public items.
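You can put a rough number on that handshake overhead yourself; a sketch with openssl s_time (example.com as a placeholder for your host):

# full handshake on every connection for 10 seconds...
openssl s_time -connect example.com:443 -new -time 10
# ...versus reusing the session; the gap in connections/second is the handshake cost
openssl s_time -connect example.com:443 -reuse -time 10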
You can have Apache automatically redirect traffic to HTTPS using rewrites. Here is a post on the subject
The redirect will slow things down as well, so it makes sense to update the API to use HTTPS directly as soon as possible.
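Once the rewrite is in place, a quick client-side check that callers are actually redirected (example.com as a placeholder):

# expect a 301/302 status line and a Location: https://... header
curl -sI http://example.com/api/ping | grep -E '^(HTTP|Location)'

Note that many API clients will not follow redirects by default (curl needs -L), which is another reason to move callers onto HTTPS directly.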
One final note: the ELB can terminate HTTPS and forward the traffic on to your system over HTTP. I would recommend this. Supporting HTTPS on your web server is extra load and headache that you do not need; the ELB does a great job of handling it.
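For reference, a classic ELB listener that terminates HTTPS and forwards plain HTTP to the instances can be created along these lines (the load balancer name and certificate ARN are placeholders):

# HTTPS from clients on 443, plain HTTP to the backends on 80
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"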

With a node.js powered server on EC2, how can I decrease the TCP connection time?

While profiling my application, I've noticed in the Firebug Net panel that the "Connecting" time (the time spent waiting for a TCP connection) is consistently around 70–100ms.
Of course, in the grand scheme of things, 100ms is not long, but I have seen other services that respond with a 0ms connect time. So if other servers can, I should be able to as well.
Any thoughts on how I might even begin to troubleshoot this?
I would start by looking at whether iptables is doing anything that may get in the way (a quick check is sketched below). Also, if you are working with an ELB load balancer (or any other load balancing), I'd remove it from the mix and see if you still get the longer-than-expected connect time.
You could also separately install lighttpd or Apache and see what happens. If you get a lower connect time, that would point to your Node.js build, although not definitively.
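A minimal first look at the iptables angle:

# list all rules with packet counters; an empty or ACCEPT-only output rules iptables out quickly
sudo iptables -L -n -v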
I would suggest a simple test to check if this problem is related to your server:
Launch another instance in the same availability zone as your server.
Benchmark your server with Apache Benchmark from the second instance:
ab -c 1 -n 20000 http://<private_server_instance_ip>:<port>/<URL>
It is important to put the private IP here, not the private or public DNS name, to sweep aside domain name resolution effects.
Check the average time taken per request: if it is around 1ms, the problem described does not lie with your server.
Benchmarking with Firefox, by the way, may not be the best idea, because the results can depend on a number of circumstances.

Webserver failover

I will be running a dynamic web site, and if the server ever stops responding, I'd like to fail over to a static website that displays a "We are down for maintenance" page. I have been reading and found that switching the DNS dynamically may be an option, but how quickly will that change take place? And will everyone see the change immediately? Are there any better ways to fail over to another server?
DNS has a TTL (time to live) and gets cached until the TTL expires. So a DNS cutover does not happen immediately. Everyone with a cached DNS lookup of your site still uses the old value. You could set an insanely short TTL but this is crappy for performance. DNS is almost certainly not the right way to accomplish what you are doing.
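You can see the TTL you are working with directly; a sketch with dig (example.com as a placeholder):

# the second column of the answer is the remaining TTL in seconds;
# run it twice against a caching resolver and you will see it count down
dig +noall +answer example.com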
A load balancer can do this kind of immediate switchover. All traffic always hits the load balancer first, which under normal circumstances proxies requests along to your main web server(s). In the event of a web server crash, you can just have the load balancer direct all web traffic to your failover web server.
Pound, Perlbal, or another software load balancer could do that, I believe, yes.
Perhaps even Apache rewrite rules could allow this? I'm not sure if there's a way to branch when the dynamic server is not available, though. Customize the Apache 404 response to your liking?
First of all, it is important to understand what kind of failure you want to fail over from. If it's an app/db error and the server remains up, you can create a script that does some checks and fails your website over to a temporary page by changing the Apache config or .htaccess (a minimal sketch follows this answer).
If it is a hardware failure, the DNS solution is OK, but it is not immediate, so you will lose some user traffic.
The ideal solution is to use a proxy (like HAProxy) that forwards HTTP requests to at least 2 web servers, automatically detects when one of them fails, and switches over to the working one.
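A minimal sketch of the check-and-swap script mentioned above, assuming a /ping health endpoint and a prepared .htaccess.maintenance that rewrites everything to the temporary page:

# run from cron every minute; swaps in the maintenance .htaccess when the app stops answering
if ! curl -fsS --max-time 5 http://localhost/ping > /dev/null; then
    cp /var/www/html/.htaccess.maintenance /var/www/html/.htaccess
fi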
If you're using Amazon AWS, you can use ELB (Elastic Load Balancer).

DNS-based strategies for showing a nice "Currently Offline" page when the server is down

How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the full server is down and the request can't reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server is down. I can make permanent changes to the DNS in advance, but I can't change it manually as the server goes down.
I have used the uptime services at DNSMadeEasy with great success. In effect, they set the DNS TTL to a very low number (5 minutes). They take care of pinging your server.
In the event of outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the NLB solution proposed does not address. If the network connection is down, both servers in a standard NLB configuration will be unreachable.
Some server needs to dish out the "currently offline" page, so if your server is completely down, there has to be some other server serving the file(s). You can set up a cluster of servers (even if just 2) where, while the first one is down, the 2nd is configured only to return the "currently offline" page. Once the 1st server is back up, you can take down the 2nd safely (as server 1 will take all the load).
You probably need a second server with 100% uptime, with some kind of failover load balancer added to it: if the main server is online, it redirects there, and if it isn't, it redirects to itself, showing a page saying the server is down.
I believe that if the server is down, there is nothing you can do.
The request will fail with a connection error rather than an HTTP response: the web address still resolves to an IP, but nothing at that IP answers (because the server is down). If you can't change the DNS entry, then the client browser will continue to hit xxx.xxx.xxx.xxx and will never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make a permanent change to the DNS. But you would still need a two-server setup in order to achieve what you are talking about. You can point the DNS at a load balancer, which would be able to direct each request to a server that is currently active. However, this still requires 100% uptime for whatever the DNS points to.
No matter what, if the server that the DNS is pointing to (which you must control, in order to redirect the traffic) is down, then all requests will fail with a connection error.
EDIT Thanks to brian for pointing out my 404 error error.
Seriously, DNS is not the right answer to server load-balancing or fail-over. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers are Unix, this is easily done by running Quagga on each server, and then using OSPF as the local routing protocol. I've personally used this for warm standby servers where the redundant system was actually in another data center, albeit one that was connected via a direct link to the main data center.
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
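For reference, creating such a health check from the AWS CLI looks roughly like this (the IP and path are placeholders):

# Route 53 probes /ping on the primary every 30s; after 3 failures, a failover
# record set tied to this check can switch traffic to the static secondary
aws route53 create-health-check --caller-reference my-check-001 \
  --health-check-config IPAddress=203.0.113.10,Port=80,Type=HTTP,ResourcePath=/ping,RequestInterval=30,FailureThreshold=3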
I'm thinking that if the site is load balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would therefore send the user to a backup server with a message indicating technical problems.
Other than that.....
The only thing I can think of is to control the calling page. Obviously that won't work in all circumstances... but if you know that most of your hits to this server will come from a particular source, then you could add a JavaScript test to the source and redirect to a "server down" page that is generated on a different server.
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks are saying, when a server is down, the browser gets a connection error when it attempts to connect.
... perhaps there would be a way, at a point in between, to detect connections that fail and substitute a "server is down" web page. You'd need something like an HTTP-aware firewall or some other piece of intermediate network gear between the server and the web client.
