How does redirecting work in PageSpeed Insights?

When I test my mobile website with the full URL https://www.managerup.com, I get a very good result in PageSpeed Insights.
However, when I enter only managerup.com, PageSpeed Insights redirects it to http://managerup.com and the mobile score drops significantly.
Why is that?
By the way, in any browser the redirect from managerup.com to https://www.managerup.com works perfectly.
Why doesn't it work in PageSpeed Insights?

Generally speaking, www.site.com is different from site.com, and any 301 redirect / CNAME step will influence the score negatively, for an obvious reason: it takes one more step than a standard request. Usually a domain has one canonical version.
PSI tests exactly the URL you give it, and the score reflects that - e.g. example.it is generally treated as different from www.example.it. Even if your domain "variants" are configured as well as possible, every single extra step may be counted by PSI.
You also need the rel=canonical attribute to point at the right version.
A URL behind a 301 redirect will get a worse (not better) PSI score than the direct URL.
You should pick a single default version and optimize the website for that one.
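You can see the extra hops yourself by tracing the redirect chain. A minimal sketch using Python's third-party requests library (the URL is the one from the question):
import requests
# Follow redirects the way a crawler would and record each hop.
response = requests.get("http://managerup.com", timeout=10)
for hop in response.history:
    print(hop.status_code, hop.url)  # each intermediate redirect
print(response.status_code, response.url)  # final destination
# Every entry in response.history is an extra round trip before any
# content arrives, which is what drags the score down.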

Related

301 Redirect Best Practices - multiple sites to single site

I have 15+ sites, and we want to drop these sites and merge them into a single site (creating pages for each one in the new site).
The 15+ sites' domains should be redirected to the one site as below:
a.com -> z.com/a
b.com -> z.com/b
c.com -> z.com/c
..
Also, we want to redirect (301) page by page from the old domains to the new domain to keep the pages' rankings:
a.com/about-us -> z.com/a/about-us
b.com/about/about-us -> z.com/b/about-us
c.com/contactus -> z.com/c/contact-us
Each one of the 15+ sites is running on its own server with a different platform while the new server is IIS.
Currently, I'm thinking of two approaches:
Point the old DNS records to the new server of z.com, and handle all the redirects on the server.
Keep the old site running, and configure redirect rules on each server to redirect each page to the matching page on the new site.
Which approach is better, and are there other approaches? So far I think the first approach is better, since we would control all the redirects in one place - but performance-wise, is it going to put more load on the server?
The Scenario
You created a new site and you want to redirect all of the old sites (15+) to the new site (page to page).
Each old site runs on its own server on a different platform, whereas the new one is IIS.
Your options
Point the old DNS records to the new server of z.com, and handle all the redirects on the server.
Advantages
Less cost - you can remove all the old sites and save on server costs.
The redirection is internal, so it takes less time.
Disadvantages
Complex to set up (a minimal sketch of the redirect logic follows this answer).
You must take care of conflicts between similar pages (double-check each redirection path).
Keep the old sites running, and configure redirect rules on each server to redirect each page to the matching page on the new site
Only go for this method if you can afford the cost of running 15 different sites on 15 servers just to serve redirects.
It is largely a waste of money, and the redirection will take more time.
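For the first option, the redirect logic on the new server reduces to a host-to-prefix lookup. A minimal sketch in Python (a stand-in for equivalent IIS URL Rewrite rules; the domains are the ones from the question):
from wsgiref.simple_server import make_server
# Map each old domain to its path prefix on the new site.
PREFIXES = {"a.com": "/a", "b.com": "/b", "c.com": "/c"}
def app(environ, start_response):
    host = environ.get("HTTP_HOST", "").split(":")[0]
    if host.startswith("www."):
        host = host[4:]
    prefix = PREFIXES.get(host)
    if prefix is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"unknown host"]
    # Keep the old path so a.com/about-us lands on z.com/a/about-us.
    location = "https://z.com" + prefix + environ.get("PATH_INFO", "/")
    start_response("301 Moved Permanently", [("Location", location)])
    return [b""]
if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()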
I think you'll eventually have to shut down all the other sites, since in the long term it's unlikely to make sense to keep 15 sites running just to serve redirects.
So, as I understand it, the question is rather about how best to organize the migration to the new system in the short term. Here are my thoughts on this:
how big is your system?
what's your QPS (queries per second)?
how many pages do you have across your site farm?
do you need to remap URLs for a sizable number of pages?
what's the migration procedure? Will you switch your sites one by one, or is that technically infeasible so they all need to be switched over at once?
It matters whether we're talking about a system handling 10 QPS with 1K pages or one handling 50K QPS with 1B pages that need to be remapped dynamically; in the latter case, system load may be a concern and option 2 may look better.
rollbacks
Note that DNS records can be cached by intermediate servers, so if something goes wrong and you need to roll back quickly to the previous version, that can be an issue.
what kind of systems do you have?
Is it actually possible to extract the URLs from 15 diverse systems and bring them to a single point without the risk of losing something valuable?
ease of maintenance
At first glance the first approach looks easier from a maintenance perspective, but I don't know what kind of systems you use or how complex the redirection rules need to be.
If they are complex dynamic ones like a.com/product.php?id=1 => z.com/a/iPhone6S, moving millions of such URLs to a single point could be tricky.
SEO
I don't follow the industry closely, but a few years ago both approaches would work OK. I think it's worth consulting someone who keeps up to date with this industry - it changes very rapidly.
Your first approach is definitely the best.
It is easier to maintain.
You don't need to keep the old infrastructure (whereas in your second case you'd need to keep at least a redirecting frontend like Apache, nginx, or lighttpd).
There are no real performance risks, since for each visitor the request to the old location, the redirect answer, and the request to the new location happen in turn, not simultaneously.
DNS records are not capable of HTTP redirection, which is crucial for SEO. To make sure your redirect is a 301 HTTP redirect, you can use a sniffer.
The answer is: just make sure your redirect is a 301 HTTP redirect so you get your SEO right. Other than that, it's a matter of taste / architecture / money rather than standards.
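To verify the redirect type without a sniffer, you can request the old URL without following redirects and inspect the raw response. A minimal sketch with Python's requests library (the URL is the hypothetical one from the question):
import requests
# Do not follow the redirect, so we see exactly what a crawler sees.
resp = requests.head("http://a.com/about-us", allow_redirects=False, timeout=10)
print(resp.status_code)              # should be 301, not 302 or 307
print(resp.headers.get("Location"))  # should be https://z.com/a/about-us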
UPDATE
Read more:
wikipedia
Both Bing and Google recommend using a 301 redirect to change the URL of a page as it is shown in search engine results.
ehow.com
A 301 redirect is a search-engine friendly way to move a domain. The 301-redirected domain does not cause duplicate content in the search engines so that you do not harm your search engine rank. Using a new DNS setting is required to have a new domain name, but it does not redirect browsers or search engines. Both of these methods are used to move to a new domain.
webmasters.stackexchange.com
Duplicate content occurs when the same content is available on two different URLs. To prevent duplicate content on www vs no-www, use 301 redirects to redirect one to the other. To implement redirects, it is the webserver that needs to be configured properly. As long as DNS is pointing to the webserver (either CNAME or A record), then the webserver can be configured properly.
I was stuck with this problem some months back. I wanted to redirect a whole site into a new site's structure. The old site was PHP, which I know nothing about.
I figured I'd point the old website's DNS at my server, write some MVC code to catch every request, and then apply a set of rules using the VB.NET Like operator to compare the inbound URL with my ruleset.
It worked a treat. I redirect 300+ pages to my new site with about 10 rules. These cover changes of folder structure and a forum (which was mainly junk but had a few good questions), and I implemented a "catch-all" rule which points to the new home page, in case I missed something.
It worked so well that I've packaged it up as a commercial product, and it is publicly available. It is free with a link from the destination site (in your case, just the single destination site).
https://301redirect.website/
There are a couple of demo videos on the homepage which will explain the setup in a few minutes.

YSlow Cookieless Domain

I have a Concrete5 site which already has a bunch of content, and I want to point the images to my cookieless domain without replacing the URLs.
I created an .htaccess rule that redirects all images from my main site to the cookieless domain:
http://www.example.com/images/header.jpg
to
http://static.example.com/images/header.jpg
It's actually working but YSlow doesn't seem to honor this. It's still giving me a low score on that part.
Since you didn't change the image links, browsers will still make a request to the original URL and will send the cookies. That's probably why YSlow is still giving you a poor score for that.
To properly change it you would need to:
Change all links to the new cookie-less domain (static.example.com)
Change cookies to be issued for www.example.com only (per Croises' comment above)
Remove the redirects for images
It's a lot of work to achieve, and depending on your site traffic it may not be worth it. Like all YSlow rules (and those from other tools), it's important to understand the recommendations. Not all of them are worth the effort for all websites.
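For the first step, a one-off pass over the stored content can move the image URLs in bulk. A minimal sketch, assuming the pages are available as HTML strings (the domains are from the question; rewrite_img_hosts is a hypothetical helper):
import re
# Point <img> tags at the cookie-less host; page links keep the main domain.
IMG_SRC = re.compile(r'(<img[^>]+src=")http://www\.example\.com/', re.IGNORECASE)
def rewrite_img_hosts(html: str) -> str:
    return IMG_SRC.sub(r"\1http://static.example.com/", html)
print(rewrite_img_hosts('<img src="http://www.example.com/images/header.jpg">'))
# -> <img src="http://static.example.com/images/header.jpg">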
Reference: Cookie-less domains best practices

Enabling SSL for a subdomain in IIS

I recently bought SSL for my website and want to create a section within the site in the form of https://secure.example.com/member/upgrade.aspx. However, I am having a hard time solving this, since my site's URL rewrite currently prohibits any subdomain, and the user is logged out if they get transferred to the link above.
I have searched online and found some good information, such as dynamically creating the URL without actually creating a subdomain in IIS.
Questions:
What steps are needed to achieve the objective above?
Should I have bought the wildcard certificate instead of one for a specific subdomain?
Thank you.
One option would be to ignore that URL pattern for rewrite purposes, or to ignore the URL when the protocol is HTTPS. That said, I would take a slightly different approach here and just put the entire site behind SSL -- rewriting all requests to the other protocol works, and Google now gives a ranking bump to HTTPS, so there are good business reasons to make the switch. You are already taking the pain of getting SSL involved at all -- a dedicated IP and certificate cost the same whether you use them on a single page or on all pages, so you might as well take advantage of it and ease your management burden in the same motion.

Google juice with subdomains and porting an application using rewrite rules

Background: I've got a web app on sub.domain.com. My primary website is on domain.com. My sub.domain.com pages are full of keywords that I would like to use to raise the PageRank of domain.com. However, the whole app has been written on sub.domain.com, and it will take some effort to host it at domain.com/subdirectory, due to how the URLs are written, etc.
First question: would you expect that migration (from sub.domain.com to domain.com/subdirectory) to substantially improve the PageRank of domain.com over how it is now? I've done a lot of research, and opinions are split on whether Google links the subdomain with the main domain.
Next question: if I do want to do the migration, it'll be difficult to do in the actual codebase (more tedious than difficult). Does anybody have advice on how I could do this with mod_rewrite? I know there has to be a clever way to do it, but I can't even start to sketch out a solution. Maybe that means it's not a good thing to do, but I was hoping for a quick hack rather than rewriting all my URLs. Plus, I would like it to be easily reversible, which wouldn't be the case if I changed my URLs (dev is ongoing, so it's not as simple as rolling out a previous version).
PageRank isn't a property of domains; it's a property of individual documents. So it'd be more accurate to say that migration from sub.domain.com to domain.com/subdirectory will improve the PageRank of domain.com/subdirectory. If you're concerned solely about the ranking of the domain.com home page, the impact on that will mostly depend on your internal linking. For example, if all pages on sub.domain.com currently have a "home" navigation link that leads to the home page of sub.domain.com, and after the move they lead to the home page of domain.com instead, then this will contribute to the domain.com home page's ranking. If this "home" navigation link goes to domain.com/subdirectory, on the other hand, then that's the page they'll be contributing PageRank to.
mod_rewrite doesn't change the outbound links in your HTML, it changes how inbound links are interpreted. So it would let you put this in the virtual host file or .htaccess for sub.domain.com:
RewriteEngine on
# Permanently (301) redirect every request to the same path under the new location.
RewriteRule (.*) http://domain.com/subdirectory/$1 [R=301,L]
to mass-redirect any requests coming in on sub.domain.com to where they need to go. It won't help you produce correct new-form URLs in your codebase. (You could, in theory, leave all your links as they are and rely on the 301 redirect to keep you from having to change them, but this is really sloppy and wasteful, generating two HTTP requests instead of one for no good reason.)

How to best normalize URLs

I'm creating a site that allows users to add Keyword --> URL links. I want multiple users to be able to link to the same URL (exactly the same, the same object instance).
So if user 1 types in "http://www.facebook.com/index.php" and user 2 types in "http://facebook.com" and user 3 types in "www.facebook.com" how do I best "convert" them to what these all resolve to: "http://www.facebook.com/"
The back end is in Python...
How does a search engine keep track of URLs? Does it keep a URL and then take whatever it resolves to, or does it toss URLs that differ from what they resolve to and only care about the resolved version?
Thanks!!!
So if user 1 types in "http://www.facebook.com/index.php" and user 2 types in "http://facebook.com" and user 3 types in "www.facebook.com" how do I best "convert" them to what these all resolve to: "http://www.facebook.com/"
You'd resolve user 3 by fixing up invalid URLs. www.facebook.com isn't a URL, but you can guess that http:// should go on the start. An empty path part is the same as the / path, so you can be sure that needs to go on the end too. A good URL parser should be able to do this bit.
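A minimal sketch of that fix-up using Python's standard urllib.parse (the normalization choices mirror the ones just described):
from urllib.parse import urlsplit, urlunsplit
def fixup(url: str) -> str:
    # Guess the scheme if missing: "www.facebook.com" lacks "http://".
    if "://" not in url:
        url = "http://" + url
    scheme, netloc, path, query, fragment = urlsplit(url)
    # An empty path names the same resource as "/".
    return urlunsplit((scheme.lower(), netloc.lower(), path or "/", query, fragment))
print(fixup("www.facebook.com"))  # http://www.facebook.com/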
You could resolve user 2 by making an HTTP HEAD request to the URL. If it comes back with a status code of 301, you've got a permanent redirect to the real URL in the Location response header. Facebook does this to send facebook.com traffic to www.facebook.com, and it's definitely something that sites should be doing (even though in the real world many aren't). You might also consider allowing other redirect status codes in the 3xx family to do the same; it's not really the right thing to do, but some sites use 302 instead of 301 for the redirect because they're a bit thick.
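A sketch of that HEAD-based step, again with the requests library (only permanent redirects are trusted, per the caveat above):
import requests
def resolve_permanent_redirect(url: str) -> str:
    # Ask for the URL but do not follow redirects automatically.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if resp.status_code == 301 and "Location" in resp.headers:
        return resp.headers["Location"]  # the server's canonical form
    return url
print(resolve_permanent_redirect("http://facebook.com/"))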
If you have the time and network resources (plus more code to prevent the feature being abused to DoS you or others), you could also consider GETting the target web page and parsing it (assuming it turns out to be HTML). If there is a <link rel="canonical" href="..." /> element in the page, you should also treat that URL as being the proper one. (View Source: Stack Overflow does this.)
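Pulling out that hint needs only the standard library's html.parser. A minimal sketch (the sample markup is illustrative):
from html.parser import HTMLParser
class CanonicalFinder(HTMLParser):
    # Records the href of the first <link rel="canonical"> encountered.
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")
finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" href="http://www.facebook.com/"></head>')
print(finder.canonical)  # http://www.facebook.com/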
However, unfortunately, user 1's case cannot be resolved. Facebook is serving a page at / and a page at /index.php, and though we can look at them and say they're the same, there is no technical method to describe that relationship. In an ideal world Facebook would include either a 301 redirect response or a <link rel="canonical" /> to tell people that / was the proper format URL to access a particular resource rather than /index.php (or vice versa). But they don't, and in fact most database-driven web sites don't do this yet either.
To get around this, some search engines(*) compare the content at different [sub]domains, and to a limited extent also different paths on the same host, and guess that they're the same if the content is sufficiently similar. Of course this is a lot of work, requires a lot of storage and processing, and is ultimately not terribly reliable.
I wouldn't really bother with much of this, beyond fixing up URLs like in the user 3 case. From your description it doesn't seem that essential that pages that “are the same” have to share actual identity, unless there's a particular use-case you haven't mentioned.
(*: well, Google anyway; more traditional ones traditionally didn't and would happily serve up multiple links for the same page, but I'd assume the other majors are doing something similar now.)
There's no way to know, other than "magic" knowledge about the particular website, that "/index.php" is the same as fetching "/".
So, your problem, as stated, is impossible.
I'd save the 3 links separately, since you can never reliably tell that they resolve to the same page. It all depends on how the server (out of our control) resolves the URL.
