Tell GoogleBot To Skip Part of a Page?

I've read many links trying to figure out how to tell Google not to index parts of a page. All the answers seem to be either "no" or something clumsy like using iframes. In our case legal wants a lengthy disclaimer in the footer on every page. This is causing an SEO issue. Are there any newer techniques to deal with this?

In our case legal wants a lengthy disclaimer in the footer on every page. This is causing an SEO issue.
No, it is not an issue for Google. If the disclaimer is present on every page, it will be treated as boilerplate content rather than primary content for the relevancy and indexation of each page, and boilerplate content is ignored for ranking.
Google knows that content sometimes has to appear on every page for legal reasons, and it does not penalize such websites, according to John Mueller in his hangout videos.
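If the worry is less about ranking and more about the disclaimer text being pulled into the search snippet, Google does document a data-nosnippet attribute for that narrower purpose. A minimal sketch, with placeholder disclaimer wording:

```html
<footer>
  <!-- data-nosnippet asks Google not to use this text in search snippets;
       it does not remove the content from the index or affect ranking. -->
  <div data-nosnippet>
    Legal disclaimer: the material on this page is provided for general
    informational purposes only and does not constitute legal advice. ...
  </div>
</footer>
```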

Related

SEO search result indentation (Google)

I want my website's results to be indented in Google search results.
After looking at many websites for reference, I found this one: www.traveloka.com
I can't find any meta keywords anywhere on that site,
yet its results are nicely indented.
My questions are:
- are meta keywords really needed for Google to indent my search results?
- if yes, why is www.traveloka.com indented without meta keywords?
- if no, what does matter then, besides having the pages link to each other?
UPDATE:
While doing SEO, I found this website:
chlooe.com
It reports SEO advice: what should be changed, etc.
I'll follow the instructions there. Any thoughts?
If by indentation you mean ..., those are called sitelinks.
Meta keyword tags are no longer important for most search engines; they now rank pages according to their content, so use strong keywords in your site's content to get a better ranking.
Having a specific page title helps a lot too.
As for the meta keywords, personally I like to leave them in, but they are no longer required.
The Google sitelinks are generated automatically by Google depending on your content.
Here are a few tips:
1) Have a sitemap.xml on your website. This tells the crawlers which pages are available on your site. To generate a sitemap.xml, I use http://www.xml-sitemaps.com/ (a minimal example of the format is sketched after these tips).
2) Submit that sitemap to Google Webmaster Tools.
3) Use clean URLs, for example www.mydomain.com/contact, .../about-us, .../portfolio, etc. These help search engines separate the content and create sitelinks based on the most important content.
4) Most important of all, get traffic to your website: no traffic = poor ranking.
This is not a full tutorial, just some tips. Search for "Google sitelinks" to learn more.
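For reference, a minimal sitemap.xml is just an XML list of URLs in this shape (the domain and pages below are placeholders taken from the clean-URL examples above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.mydomain.com/</loc></url>
  <url><loc>https://www.mydomain.com/about-us</loc></url>
  <url><loc>https://www.mydomain.com/contact</loc></url>
</urlset>
```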
Hope this helps
https://support.google.com/webmasters/answer/47334?hl=en

What is a ghost page?

Can somebody please tell me what exactly a ghost page is, and how one is created?
I have very little information about it, other than that ghost pages are used by firms for promotional purposes.
Simply put, it's deliberate deception of search engines for SEO purposes. It's bad practice.
Longer answer:
Say you're optimizing your cheese-selling site. You want people to find it when googling "cheese", so you create a bunch of content where "cheese" has a high keyword density of 12% or so. Of course, this renders the page pretty useless for the user - the user just wants an image, some data on the cheese and a header - but that isn't SEO-friendly enough for you.
So you create all the content you need for your SEO purposes and serve it up inside an if/else: if the visiting client is Googlebot, you serve up all that keyword-stuffed text; if it's not Googlebot, you serve up the real content (a sketch of that if/else is below).
So a ghost page (more accurately known as a shadow page) is pretty much just a (bad) way to get rankings you don't deserve.
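A rough sketch of that if/else, assuming a Python/Flask handler; the route, the cheese copy, and the user-agent check are all illustrative. It is shown only to make the pattern concrete, not as something to do:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/cheese")
def cheese():
    # Cloaking: branch on the crawler's user agent and serve it different content.
    user_agent = request.headers.get("User-Agent", "")
    if "Googlebot" in user_agent:
        # The keyword-stuffed "ghost"/"shadow" version served only to the crawler.
        return "Cheese cheese cheese ... paragraphs of keyword-stuffed cheese text."
    # The normal page served to human visitors.
    return "<h1>Aged Cheddar</h1><img src='/images/cheddar.jpg' alt='cheddar'> Cave-aged for 12 months."

if __name__ == "__main__":
    app.run()
```

This is exactly the behaviour Google's guidelines call cloaking, and it gets sites penalized when detected.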

Handle default web page with little information for search?

I'd like to garner opinions. We've created a website for a gay members club, and the client wanted the default landing page to be mysterious, with little information on it.
As such, Default.aspx only contains a form asking for some personal details. Users can click a button to skip this content and go to an AboutUs page.
The problem is, because we cannot control what information Google uses for the site description in search results, it is picking up the form fields - which obviously do not make sense as a description.
I think there are two options to counter this:
Use robots.txt to block access to Default.aspx and only allow access to AboutUs.aspx
Write a description and title in an H1 tag but make the text colour the same as the background colour
Could I get opinions on which method people think is best for search results?
Thanks.
I would not block or try to deceive Google.
Make sure the title tag for the page is good and descriptive: around 70 characters explaining what the website is about.
The same goes for your meta description: about two sentences continuing on from the title information.
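A minimal sketch of what that might look like in the page's head (the club name and wording are placeholders, not the real site's copy):

```html
<head>
  <!-- Roughly 70 characters describing what the site is about. -->
  <title>Example Club | Private members club for the gay community, London</title>
  <!-- A couple of sentences continuing on from the title. -->
  <meta name="description"
        content="Example Club is a discreet private members club. Request an
                 invitation on this page, or read more about us and our venues." />
</head>
```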

How would I best make this SEO-able?

I have a search engine that searches albums.
For each music album, I have a page.
So, the workflow goes like this:
People search for music titles
The search engine displays a list of albums.
People click on an album to go to a details page.
I want Google to index my front page and the details pages. I want the details pages to be highly ranked. How can I build a sitemap for this?
By the way, I have about 5 million albums (but I want the top 1000 to be highly ranked on Google).
You would not use a sitemap for that many results. You would want each album to appear as a page with a unique URI referencing it. That way the search engine can crawl your site by following links, since search bots cannot submit form data. Each of those URIs should be simple, meaning limited to this part of the URI syntax:
scheme://authority_segment/path
Program your web application to remove and throw away any extraneous data, such as query strings or parameters. If you do this, be sure you watch for URI poisoning and SQL injection, even through character-encoding tricks.
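A small sketch of that stripping step in Python; the example URL is made up, and urlsplit/urlunsplit simply rebuild the URL with the query string and fragment dropped:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Keep only scheme://authority/path, discarding query string and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(canonicalize("https://example.com/albums/dark-side-of-the-moon?ref=search&page=2#top"))
# -> https://example.com/albums/dark-side-of-the-moon
```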
How can I build a sitemap for this?
By pulling the addresses out of your database and creating an XML file with a high priority for some selected pages. Somehow I think that isn't your real question …
If I wanted to automate building a sitemap for a site like this, I'd employ Python and pretty much write everything from the ground up (except the data-store access). The format is quite simple.
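A rough sketch of that approach, assuming a SQLite table albums(slug, popularity) and a top-1000 cutoff; the table, columns, domain and priority value are all illustrative assumptions:

```python
import sqlite3
from xml.sax.saxutils import escape

# Pull the 1000 most popular album slugs out of the data store.
conn = sqlite3.connect("albums.db")
rows = conn.execute(
    "SELECT slug FROM albums ORDER BY popularity DESC LIMIT 1000"
).fetchall()

# Build one <url> entry per album, with a high priority for these selected pages.
entries = []
for (slug,) in rows:
    entries.append(
        "  <url>\n"
        f"    <loc>https://example.com/albums/{escape(slug)}</loc>\n"
        "    <priority>0.9</priority>\n"
        "  </url>"
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries)
    + "\n</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```

Note that a single sitemap file is capped at 50,000 URLs, which is another reason to limit it to the pages you actually care about.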
I'm not sure I quite understand your question...

How does Google return "searches" from other websites?

Let's say I'm performing a Google search for "search term".
Sometimes, one of the suggestions will be to a URL like this: www.someothersearch.com/search+term/
How does "someothersearch.com" do this?
In general, a page will only be in Google if some other page links to it. Google is not going to go to someothersearch.com and submit "search term" into the form; more likely there is a (hidden or visible) link to that URL somewhere on someothersearch.com.
Why not? someothersearch.com presumably has its own index pages for terms searched previously; the Google spider is just indexing those index pages as well.
Just a guess. Maybe these sites support OpenSearch?
I misunderstood your question at first; what these sites are doing is rewriting their requests. How they know which terms people will search for is a bit of a mystery to me, but it probably relies on things like watching google.com/trends, scraping their own and other sites' log files for Google referrals that include the search term, buying lists of well-ranking terms people might otherwise use AdSense for and trying to generate natural traffic for them instead, etc. When they add new pages for these terms, they probably also add them to the XML sitemap that Google crawls.
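A rough sketch of the request-rewriting side, assuming a Python/Flask app; the route shape and the search() helper are illustrative assumptions, not how someothersearch.com actually does it:

```python
from flask import Flask

app = Flask(__name__)

def search(term: str) -> list[str]:
    # Placeholder for the site's own search backend.
    return [f"Result for {term}"]

@app.route("/<term>/")
def search_page(term: str):
    # A request for /search+term/ arrives here with term == "search+term";
    # treat "+" as a space and render an ordinary, crawlable results page
    # that can then also be listed in the XML sitemap.
    query = term.replace("+", " ")
    items = "".join(f"<li>{r}</li>" for r in search(query))
    return f"<h1>{query}</h1><ul>{items}</ul>"

if __name__ == "__main__":
    app.run()
```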
Redacted:
I have added the open-search tag to your question; please follow it. You'll find this Stack Overflow post the most informative: https://stackoverflow.com/questions/20830/firefox-and-ie7-users-here-is-your-stackoverflow-search-plugin. However, I recommend you use image/png for your icon format.
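For reference, an OpenSearch description document (what the open-search tag refers to) is a short XML file along these lines; the names and URLs are placeholders, and the Image element is where the image/png icon recommendation applies:

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Image height="16" width="16" type="image/png">https://example.com/favicon.png</Image>
  <Url type="text/html" template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```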
