SEO, content duplication and pagination [closed]

I have a layout similar to this, only it's not debates, so I'll use this as an example for the question.
As you can see, they have three different tabs, for rounds, comments and votes, but these are all on one page, whereas in my case I have different pages for comments and votes, like this:
example.com/post/1 <- main post's url
example.com/post/1/comments
example.com/post/1/votes
Both comments and votes are paginated, so there can be URLs like this:
example.com/post/1/comments/page/3
So I wonder how I should manage this kind of situation from an SEO perspective. Won't the fixed part of the debate above the tabs be considered duplicate content? And what happens if I add a canonical link to, say, the comments page, pointing to the main post's URL? Will the comments still be indexed, or only the main post's page?

Won't the fixed part of the debate above the tabs be considered duplicate content?
No. If it is repeated on every page, it will be treated as boilerplate content and ignored for ranking, because it is not specific to the page itself.
And what happens if I add a canonical link to, say, the comments page, pointing to the main post's URL? Will the comments still be indexed, or only the main post's page?
If Google trusts and agrees with your canonical link, then only the main post will be used for indexing.
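For illustration, a canonical link is just a link element in the <head> of the page that should defer. Assuming the URL scheme from the question, the comments pages would each carry something like:
<link rel="canonical" href="https://example.com/post/1">
If you want the comment pages indexed in their own right, don't point them at the main post; give each paginated page a self-referencing canonical instead, e.g. on page 3 of the comments:
<link rel="canonical" href="https://example.com/post/1/comments/page/3">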

Related

Console appears automatically on every web page [closed]

When I browse any page, the Inspect Element panel appears on screen. The console shows error codes like 403, 404 and 400. It happens every single minute. Even while I was writing this question on Stack Overflow, it appeared four or five times. It affects my device too; it's really annoying and it hampers my workflow.
I really need your help.
The classic 404 error is annoying for webmasters and users alike. If 404 errors accumulate, this can be a sign to users and search engines that the website is badly maintained. 404 errors can be identified using a number of tools, for example the Google Search Console or the Ryte Suite.
It is also advisable to create a special 404 error page so that usability is not affected.
404 error pages often use humor, or offer a search bar, to engage users and encourage them to look for the desired content on the target page.
A 404 error page should contain the following elements:
Polite or humorous apology for the mistake
Alternatives to the desired page, the desired product (for example, online stores), or alternative articles (such as blogs)
Option for the user to report the error so that it can be removed
Direct reference to the main navigation
A separate search bar to search for further content
A design that conforms to the corporate design, so that the error page is not perceived as out of place
Contact options
If you manage to keep the visitor on your website despite the 404 error, the page has achieved its purpose: it lowers the bounce rate and may still lead to a conversion.
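On the server side, pointing the web server at such a page is usually a one-liner. For example, in Apache, assuming a static /404.html in the document root:
ErrorDocument 404 /404.html
Apache then serves that page, still with a 404 status code, whenever a requested URL is not found.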

Search Filters and SEO -- nofollow, canonical, or nothing? [closed]

I have an eCommerce site that I am working on. The primary way customers find products is by filtering down results with a search menu (options include department, brand, color, size, etc.).
The problem is that the menu creates a lot of duplicate content, which I am afraid will cause problems with search engines like Google and Bing. A product can be found on multiple pages, depending on which combination of filters is used.
My question is, what is the best way to handle the duplicate content?
As far as I can tell, I have a few options: (1) do nothing and let search engines cache everything; (2) use a canonical link tag in the header so search engines only cache departments; (3) put rel="nofollow" on the filter links (though, to be honest, I'm not sure how that works internally); (4) put noindex in the header of filtered pages.
Any light that can be shed on this would be great.
This is exactly what canonical URLs are for. Choose a primary URL for those pages and make that the canonical URL. This is usually one that isn't found using filters. This way the search engines know to display that URL in the search results, and if they find the filtered pages from incoming links, they give credit to the canonical URL, which helps its rankings.
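As a sketch, assume a department page at example.com/shoes and filtered variants like example.com/shoes?brand=acme&color=red (made-up URLs). Every filtered variant would then carry the same tag in its <head>:
<link rel="canonical" href="https://example.com/shoes">
That way all filter combinations of one department consolidate onto the single unfiltered URL.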

SEO using .htaccess? And, how to redirect a fake subdirectory to an individual page? [closed]

Since a recent redesign of our website, we've noticed that the search rankings for certain pages have plummeted, as individual publications are no longer on their own pages but rather on publications.php?magazine=xx, where xx is a unique ID number for the publication.
Is there any way to use a .htaccess file to redirect fake subdirectories to those pages, i.e. so that visiting /publications/magazine-name takes you to publications.php?magazine=xx? And if so, would this even have an effect on their SEO?
If not, is there any other way you can make these URL query strings more search engine-friendly?
I'm only halfway there, but using the mod_rewrite tool with something like:
RewriteRule ^advanced-lift-truck/?$ pub-automotive.php?mag=1 [NC,L]
can get me a URL that Google will understand and crawl.
Now, it's just a case of figuring out what I can do about each page effectively having the same "content", just with different CSS showing/hiding parts.
I'm investigating the following:
http://www.webdesignerdepot.com/2013/10/how-to-optimize-single-page-sites-for-search-engines/
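For the original question, a sketch of a fuller ruleset might look like this, assuming it lives in a .htaccess file at the web root and that the clean URLs carry the numeric ID (putting the magazine name in the URL instead would additionally need a slug-to-ID lookup, such as a RewriteMap):
# serve the clean path internally from the real script
RewriteRule ^publications/([0-9]+)/?$ publications.php?magazine=$1 [L]
# 301-redirect the old query-string URLs to the clean form so rankings consolidate
# (%{THE_REQUEST} matches only the original request line, which avoids a rewrite loop)
RewriteCond %{THE_REQUEST} \s/publications\.php\?magazine=([0-9]+)\s
RewriteRule ^publications\.php$ /publications/%1? [R=301,L]
The trailing ? in the redirect target drops the old query string from the new URL.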

How does Google search find important links on a website [closed]

Sometimes when I search for something on Google, it shows results (website links), but it also shows some important links from within that website.
Is that a feature of the website, or does Google use something to find those main links? Is it related to search engine optimization?
You probably mean Google’s sitelinks.
We only show sitelinks for results when we think they'll be useful to the user. If the structure of your site doesn't allow our algorithms to find good sitelinks, or we don't think that the sitelinks for your site are relevant for the user's query, we won't show them.
It has to do with the click-through rates of those links. For example, Googling 'Amazon' brings up amazon.com with a handful of links below it: Books, Kindle e-Books, Music, etc.
These are obviously popular categories on Amazon; Google tracks where users click and uses that data to make SERPs more relevant.

robots.txt - disallow page without query string [closed]

I have a page that serves up dynamic content
/for-sale
the page should always have at least one parameter
/for-sale?id=1
I'd like to disallow
/for-sale
but allow
/for-sale?id=*
without affecting the bot's ability to crawl the site or risking a negative effect on SERPs.
Is this possible?
What you want does not work using robots.txt:
There is no Allow: directive in the original robots exclusion standard, although the Internet Draft written by M. Koster suggests one (and some crawlers do support it).
Query strings and wildcards are not supported either, so disallowing the "naked" version will disallow everything. Surely not what you want.
Anything in robots.txt is entirely optional and merely a hint. No robot is required to request that file at all, or to respect anything you say in it.
You will almost certainly find one or several web crawlers for which any or all of the above is wrong, and you have no way of knowing.
To address the actual problem, you could put a rewrite rule into your Apache configuration file. There is readily available code for turning a URL with a query string into a normal URL (a quick web search will turn up examples).
(Alternatively, you could just leave the id query string in place. The one search engine that makes up 85% of your traffic eats them just fine, and so do the other two that make up 90% of what is not Google.
So your fear really only concerns search engines that nobody uses, and spam harvesters.)
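As a minimal sketch of that suggestion (the clean-URL scheme /for-sale/123 and the numeric id format are assumptions; the rule sits in a .htaccess file at the web root):
# serve /for-sale/123 internally as /for-sale?id=123
RewriteRule ^for-sale/([0-9]+)$ /for-sale?id=$1 [L]
Once the site's own links use the clean form, the robots.txt question largely goes away.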
For crawlers such as Googlebot that do support Allow and the $ end-of-URL anchor, this should work:
User-agent: *
Disallow: /for-sale$
Allow: /for-sale?id=
