Testing whether your web browser supports SPDY

Is there a site that tests whether your browser supports SPDY and is configured for optimum performance?
If not, maybe we should create one?

There was a site called http://spdytest.com/ but it's currently down for maintenance.
Supported browsers are currently listed on Wikipedia.
Alternatively, you can visit https://www.modspdy.com/ and watch in your browser's developer tools how the files are being loaded.

SPDY Indicator is a Chrome extension that shows a green lightning icon for sites where SPDY is enabled. Alternatively, you can go to chrome://net-internals/#spdy to see live SPDY sessions.
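If you also want to check, outside the browser, whether a particular server is even willing to speak SPDY, you can look at the TLS application-protocol negotiation directly. Below is a minimal Python sketch, assuming Python 3.5+ with ALPN support and using a placeholder hostname; note that older SPDY deployments negotiated via NPN rather than ALPN, so a missing "spdy/3.1" here is not absolute proof.

```python
# A minimal sketch (not an official tester): offer SPDY to a server over TLS
# and print which application protocol it picks. Requires Python 3.5+ with an
# OpenSSL that supports ALPN; the hostname is just a placeholder.
import socket
import ssl

HOST = "www.example.org"   # substitute the site you want to probe
PORT = 443

context = ssl.create_default_context()
# Offer SPDY alongside plain HTTP/1.1 and let the server choose.
context.set_alpn_protocols(["spdy/3.1", "http/1.1"])

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # Prints "spdy/3.1" if the server negotiates SPDY, "http/1.1" or None otherwise.
        print("Negotiated protocol:", tls_sock.selected_alpn_protocol())
```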
It is not possible for a site or plugin to tell whether a site is optimized for SPDY, because many factors determine a site's performance (page load time). Here are a few important factors that affect how well a site adapts to SPDY.
Distribution of traffic between SPDY- and non-SPDY-enabled browsers: techniques like domain sharding were useful for improving performance with HTTP/1.x but can hurt a site's performance with SPDY. Since the server decides how many shards to use for a domain without knowing whether the client supports SPDY, it becomes tricky to pick the right number of shards.
SPDY supports server push, which means servers now have to decide intelligently which resources to push. A site inspector or browser plugin has hardly any knowledge of the interdependencies or relevance of resources, so it is very difficult to suggest an optimal combination.
In short, a site's performance depends on many factors, and there are countless combinations of those factors to try out. To suggest an optimized combination for a site, a tool or plugin would have to be able to evaluate all of those combinations.

You could try this site (https://spdy.centminmod.com/flags.html), an SPDY demo that displays hundreds of country flags on the same page.
It provides comparison videos showing how the page loads in SPDY- and non-SPDY-enabled browsers.

Related

Is there a benefit to embedding microformat information in the HTML for a web app?

Is there a benefit to implementing microformats (or itemscope) in the HTML for a web app? So far it only looks like it is useful for SEO, and my web app is behind a login screen, so I won't have to worry about that. Are there any plugins or browsers that automatically process the information?
Or, as unor proposed: is there a benefit in adding structured data to non-public sites?
If you generalize the question as unor proposed:
Is there a benefit in adding structured data to non-public sites?
It has a definite advantage! For someone with a visual disability it is easier to navigate through sites if microformats are implemented. If there is any possibility that someone with a screen reader will use the application, it is worth the effort. Not to mention that it is a one-time task with a long-term positive effect. I think it is good to proactively strive to serve all kinds of users.
Answer to the original question: browsers do not have to process that information, but some advanced technology could use it in the browser, such as preloading. There was research at my university about which pages to preload, and it used this feature. (E.g., during idle processor time, the browser plugin preloaded the home page to enhance the browsing experience.)

By hosting some assets on a server and others on a CDN, will a browser be able to negotiate more connections?

Short question
I'm trying to speed up a static website (HTML, CSS, JS). The site also has many images that weigh a lot, and during testing it seems the browser is struggling to manage all these connections.
Apart from the normal compression, I'm wondering: if I host my site files (HTML, CSS, JS) on the VPS and move all the images to a CDN, would this allow the browser to negotiate with two servers and be quicker overall? (Obviously there is going to be no Mbps speed gain, as that is limited by the user's connection, but I'm wondering whether this would allow me to have more open connections, and thus a quicker TTFB.)
Long question with background
I've got a site that I'm working on speeding up; the site itself is all client-side HTML, JS and CSS. The speed issues tend to be (a) the size of the files and (b) the quantity of the files, i.e. there are lots of images, and they each weigh a lot.
I'm going to do all the usual stuff: combine the images used for the UI into a sprite sheet, compress all images using JPEGmini, etc.
I've moved the site to a VPS, which has made a big difference. The next move I'm pondering is setting up a CDN to host the images. I'm not trying to host them there for any geographical-distribution or load-balancing reason (although that is an added benefit); I was wondering whether the bottleneck on downloading assets would be smaller if the user's browser got the site files (HTML, JS, CSS) from the VPS while getting the images from the CDN at the same time. Is that where the bottleneck is, i.e. the user's browser can only have so many connections to one server at a time, but with two servers it could negotiate the same number of connections on each server concurrently?
I'm guessing there could also be an issue with load on the server, but for testing I'm using a 2 GB multi-core VPS that no one else is on, so that shouldn't be a problem during my tests.
Context
Traditionally, web browsers place a limit on the number of simultaneous connections they can make to one domain. These limits were established in 1999 in the HTTP/1.1 specification by the Internet Engineering Task Force (IETF). The intent of the limit was to avoid overloading web servers and to reduce internet congestion. The commonly used limit is no more than two simultaneous connections with any server or proxy.
Solution to that limitation: Domain Sharding
Domain sharding is a technique to accelerate page load times by tricking browsers into opening more simultaneous connections than are normally allowed.
See this article for an example of using multiple subdomains to multiply parallel connections to the CDN.
So instead of:
http://cdn.domain.com/img/brown_sheep.jpg
http://cdn.domain.com/img/green_sheep.jpg
The browser can open more parallel connections when an additional subdomain is used:
http://cdn.domain.com/img/brown_sheep.jpg
http://cdn1.domain.com/img/green_sheep.jpg
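In practice the sharding is usually done when the page markup is generated: each asset path is mapped deterministically onto one of a small, fixed set of shard hostnames, so that the same image always resolves to the same shard and stays cacheable. A rough Python sketch of that idea (the shard hostnames are placeholders, and the hash choice is arbitrary):

```python
# A rough sketch of how pages typically implement domain sharding: hash each
# asset path onto one of a fixed set of shard hostnames. The hostnames below
# are placeholders.
import zlib

SHARDS = ["cdn.domain.com", "cdn1.domain.com", "cdn2.domain.com"]

def shard_url(path):
    """Map an asset path such as '/img/brown_sheep.jpg' onto a stable shard."""
    index = zlib.crc32(path.encode("utf-8")) % len(SHARDS)
    return "http://%s%s" % (SHARDS[index], path)

for asset in ("/img/brown_sheep.jpg", "/img/green_sheep.jpg"):
    print(shard_url(asset))
```

Keeping the mapping stable across page views matters: if an image moved to a different shard on every request, the browser would treat it as a new resource and download it again.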
Present: beware of sharding
You might want to consider the downsides of domain sharding, because it is unnecessary and can even hurt performance under SPDY. If the browsers you're targeting mostly support SPDY (it is supported by Chrome, Firefox, Opera, and IE 11), you might want to skip domain sharding.

Web Page Compression - Looking for a definitive answer

In order to try and get to a resolution about web page compression, I'd like to pose the question to you 'gurus' here in the hope that I can arrive at some kind of clear answer.
The website in question: http://yoginiyogabahrain.com
I recently developed this site and am hosting it with Hostmonster in Utah.
My reason for constructing it as a one-page scrollable site was the amount of content that does not get updated: literally everything outside of the 'schedule', which is updated once a month. I realise that the 'departments' could have been displayed on separate pages, but I felt the content didn't warrant whole pages devoted to their own containers, which would also require further server requests.
I have minified the HTML, CSS and JS components of the site in accordance with the guidelines and recommendations from Google PageSpeed and Yahoo YSlow. I have also applied server and browser caching directives in the .htaccess file to meet further recommendations.
Currently Pingdom Tools rates the site at 98/100, which pleases me. Google and Yahoo hammer the site for the lack of GZIP compression and, in Yahoo's case, the lack of CDN usage. I'm not so worried about the CDN, as this site simply doesn't warrant one. But the compression bothers me, because it was initially being applied.
For about a week the site was being gzipped, and then it stopped. I contacted Hostmonster about this and they said that if it is determined that there are not enough resources to serve a compressed version of the site, it will not be served. But that doesn't answer the question of whether it would compress again if the resources allowed it. To date, the site has not been compressed again.
Having done a lot of online research to find an answer about whether this is such a major issue, I have come across a plethora of differing opinions. Some say we should be compressing, and some say it's not worth the strain on resources to do so.
If Hostmonster have determined that the site doesn't warrant being compressed, why do Google and Yahoo nail it for the lack of compression? Why does Pingdom Tools not even take that aspect into account?
Forgive the lengthy post, but I wanted to be as clear as possible about what I'm trying to establish.
So, in summary: is the lack of compression on this site a major issue, or should I look at a hosting provider that will apply compression without question on a shared hosting plan?
Many thanks!
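One quick way to see for yourself whether compression is currently being applied, independently of what Pingdom, Google or Yahoo report, is to request the page with an Accept-Encoding: gzip header and inspect the response. A minimal Python sketch using the site URL from the question:

```python
# Minimal check: does the server return a gzip-compressed response when the
# client says it accepts gzip? Uses only the standard library.
import urllib.request

URL = "http://yoginiyogabahrain.com"  # the site from the question

request = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(request) as response:
    print("Content-Encoding:", response.headers.get("Content-Encoding", "(none)"))
    print("Content-Length:  ", response.headers.get("Content-Length", "(unknown)"))
# "Content-Encoding: gzip" means the host is compressing this response;
# no Content-Encoding header means it went out uncompressed.
```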

I need to speed up my site and reduce the number of file calls

My web host is asking me to speed up my site and reduce the number of file calls.
OK, let me explain a little. My website is used about 95% of the time as a bridge between my database (on the same hosting) and my Android applications (I have around 30 that need information from my DB). The information only goes one way (for now): the apps call a JSON string like this one on the site:
http://www.guiasitio.com/mantenimiento/applinks/prlinks.php
and this web page is shown in a WebView as a welcome message:
http://www.guiasitio.com/movilapp/test.php
This page has some images and jQuery, so I think those are what cause the heavy memory usage. They have told me to use some code to make those files get cached in the visitor's browser to save memory (which is all Greek to me, since I don't understand it). Can someone give me an idea and point me to a tutorial on how to get this done? Can the WebView in an Android app keep a cache of these files?
All your help is highly appreciated. Thanks!
Using a CDN (content delivery network) would be an easy solution if it works well for you. Essentially, you are off-loading the work of storing and serving static files (mainly images and CSS files) to another server. In addition to reducing the load on your current server, it will speed up your site because files will be served from the location closest to each visitor.
There are many good CDN choices. Amazon CloudFront is one popular option, though in my opinion the prize for the easiest service to set up goes to CloudFlare: they offer a free plan; simply fill in the details, change the DNS settings on your domain to point to CloudFlare, and you will be up and running.
With some fine-tuning, you can expect to reduce the requests to your server by up to 80%.
I use both Amazon and CloudFlare, with good results. I have found that the main thing to be careful of is to check all the scripts on your site and make sure they are working as expected. CloudFlare also has a simple setting where you can specify cache behaviour, so that's another detail on your list covered.
Good luck!
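As for the "code to create a cache of those files in the person's browser" that the host mentioned: the mechanism is HTTP caching headers sent by the server, not client-side code. The question's pages are PHP, so the Python sketch below is purely illustrative of the idea (the handler and the one-week lifetime are arbitrary choices); in practice the same effect is usually achieved with Expires/Cache-Control rules in .htaccess or with header() calls in PHP, and an Android WebView with caching enabled will generally honour these headers as well.

```python
# Purely illustrative (the question's pages are PHP): a tiny static file server
# that attaches a Cache-Control header to images, CSS and JS so the browser
# (or an Android WebView with caching enabled) can reuse its local copy.
import http.server

class CachingStaticHandler(http.server.SimpleHTTPRequestHandler):
    CACHEABLE = (".jpg", ".jpeg", ".png", ".gif", ".css", ".js")

    def end_headers(self):
        # One-week client-side cache lifetime for static assets (arbitrary choice).
        if self.path.lower().endswith(self.CACHEABLE):
            self.send_header("Cache-Control", "public, max-age=604800")
        super().end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), CachingStaticHandler).serve_forever()
```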

Tux, Varnish or Squid? [closed]

We need a web content accelerator for static images to sit in front of our Apache web front-end servers.
Our previous hosting partner used Tux with great success and I like the fact it's part of Red Hat Linux which we're using, but its last update was in 2006 and there seems little chance of future development. Our ISP recommends we use Squid in reverse caching proxy role.
Any thoughts between Tux and Squid? Compatibility, reliability and future support are as important to us as performance.
Also, I read in other threads here about Varnish; anyone have any real-world experience of Varnish compared with Squid, and/or Tux, gained in high-traffic environments?
Cheers
Ian
UPDATE: We're testing Squid now. Using ab to pull the same image 10,000 times with a concurrency of 100, both Apache on its own and Squid/Apache burned through the requests very quickly. But Squid made only a single request to Apache for the image and then served every response from RAM, whereas Apache alone had to fork a large number of workers to serve the images. It looks like Squid will work well in freeing up the Apache workers to handle dynamic pages.
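For reference, the single-image test described in the update can be approximated outside of ab with a few lines of Python; the URL is a placeholder, and ab remains the more appropriate tool for serious benchmarking:

```python
# A rough stand-in for the ab test described above, using only the standard
# library: fetch the same image many times with fixed concurrency and report
# the request rate.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost/images/test.jpg"   # placeholder: an image behind the proxy
REQUESTS = 10000
CONCURRENCY = 100

def fetch(_):
    with urllib.request.urlopen(URL) as response:
        return len(response.read())

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    total = sum(pool.map(fetch, range(REQUESTS)))
elapsed = time.time() - start

print("%d requests, %d bytes in %.1fs (%.0f req/s)"
      % (REQUESTS, total, elapsed, REQUESTS / elapsed))
```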
In my experience Varnish is much faster than Squid, but equally importantly it's much less of a black box than Squid is. Varnish gives you access to very detailed logs that are useful when debugging problems. Its configuration language is also much simpler and much more powerful than Squid's.
@Daniel, @MKUltra: to elaborate on Varnish's supposed problems with cookies, there aren't really any. It is completely normal not to cache a request if it returns a cookie. Cookies are mostly meant to distinguish user preferences, so I don't think one would want to cache them (especially if they include secret information like a session ID or a password!).
If your server sends cookies with your .js files and images, that's a problem on your backend side, not on Varnish's side. As referenced by @Daniel (link provided), you can force the caching of these files anyway, thanks to the really cool language/DSL integrated in Varnish...
If you're looking to serve static images, and a lot of them, you may want to look at some basics first.
Your application should ensure that all the correct headers are being passed, Cache-Control and Expires for example. That should result in clients' browsers caching those images locally and cutting down on your request count.
Use a CDN (if it's in your budget); this brings the images closer to your clients (generally) and will result in a better user experience for them. For the CDN to be a productive investment you'll again need to make sure all your necessary caching headers are properly set, as per the previous point.
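A quick way to sanity-check the headers mentioned above is to issue a HEAD request against one of your static assets and print the caching-related response headers. A small Python sketch (the URL is a placeholder):

```python
# Quick sanity check before (or after) putting a cache in front: does a given
# static asset already carry the caching headers mentioned above?
import urllib.request

URL = "http://www.example.com/static/logo.png"  # placeholder for one of your image URLs

request = urllib.request.Request(URL, method="HEAD")
with urllib.request.urlopen(request) as response:
    for name in ("Cache-Control", "Expires", "ETag", "Last-Modified"):
        print("%s: %s" % (name, response.headers.get(name, "(not set)")))
```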
After all that, if you are still going to use a reverse proxy, I recommend using nginx in proxy mode over Varnish and Squid. Yes, Varnish is fast, and about as fast as nginx, but what you want to do is really quite simple; Varnish comes into its own when you want to do complex caching and ESI. So keep it simple, stupid: nginx will do your job very nicely indeed.
I have no experience with Tux, so I can't comment on it, sorry.
For what it's worth, I recently set up nginx as a reverse proxy in front of Apache on a 6-year-old low-power web server (running Fedora Core 2) which was under a mild DDoS attack (10K req/sec). Page loading was snappy (<100 ms), system load stayed low at around 20% CPU utilization, and memory consumption was very light. The attack lasted a week, and visitors saw no ill effects.
Not bad for over half a million hits per minute sustained. Just be sure to log to /dev/null.
It's interesting that no one has mentioned Apache Traffic Server (formerly Yahoo! Traffic Server): http://trafficserver.apache.org/
Please have a look at it, it works beautifully.
We use Varnish on http://www.mangahigh.com and have been able to scale from around 100 concurrent users pre-Varnish to over 560 concurrent users post-Varnish (server load remained at 0 at that point, so there's plenty of room to grow!). Documentation for Varnish could be better, but it is quite flexible once you get used to it.
Varnish is meant to be a lot faster than Squid (having never used Squid, I can't say for certain) - and http://users.linpro.no/ingvar/varnish/stats-2009-05-19 shows Twitter, Wikia, Hulu, perezhilton.com and quite a number of other big names also using it.
Both Squid and nginx are specifically designed for this. nginx is particularly easy to configure for a server farm, and can also be a front end to FastCGI.
I've only used Squid and can't compare. We use Squid to cache an entire site on a server in the USA (all the data gets pulled from a machine in Germany). It was pretty easy to set up and it works nicely. I've found the documentation to be somewhat lacking unless you already know what to look for.
Since you already have Apache serving the static and dynamic content, I would recommend going with Varnish.
That way you can use Apache to deliver the static content and Varnish to cache it for you. Varnish is very flexible, giving you both caching and load-balancing features for growing your website in the best way.
We are about to roll out a Varnish 2.01 server in front of an IIS 6 installation. The only caveat we've had was with SSL (as Varnish can't handle SSL), so we've also installed nginx to handle those requests.
In all our testing we've seen a 66% increase in the amount of traffic the site can handle.
My only gripe is that Varnish doesn't handle cookies well, and the documentation is still a bit scattered.
Nobody mentions that Squid follows the HTTP specification to the letter (or at least tries to), whereas Varnish does not. In my opinion, this means Varnish is better suited to caching content for individual sites (by extensively tuning Varnish), and Squid is better for caching content for many sites (each of which will have to make its content "cacheable" according to the spec).

Resources