Google not indexing my page

I searched Google for a complete phrase, "visitors had the opportunity to swing them to-and-fro. Never had i experienced so ", taken from a comment on the post http://radhanathswamiweekly.com/radhanath-swami-describes-jhulan-yatra-festival/.
The comment was posted on that post by "kiran shetty" a month ago.
The Google search results are:
No results found for "visitors had the opportunity to swing them to-and-fro. Never had i experienced so ".
Google cache says:
This is Google's cache of http://radhanathswamiweekly.com/radhanath-swami-describes-jhulan-yatra-festival/. It is a snapshot of the page as it appeared on 18 Sep 2014 08:00:49 GMT. The current page could have changed in the meantime. Learn more
Using "Fetch as Google" from the webmaster for the post:http://radhanathswamiweekly.com/radhanath-swami-describes-jhulan-yatra-festival/
the fetch status shows as completed.
Google fetch's Downloaded HTTP response can be found at: "http://pastebin.com/v4L1nuG3"
The Downloaded HTTP response contains the complete phrase "visitors had the opportunity to swing them to-and-fro. Never had i experienced so ".
That means google is able to see the text.
The cache shows the page was cached on 18 Sep, and the comment is one month old (posted 23 Aug) as of today (23 Sep). So why is it not getting indexed? It does not show up in the search results, even though the text exists in the HTTP response that Google receives for the page.
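For anyone reproducing this, here is a minimal sketch (Python, standard library only; the URL and phrase are the ones quoted above) that re-does the manual check described here: fetch the page and confirm the phrase is present in the raw HTML a crawler receives, rather than being injected later by JavaScript.

```python
# Minimal sketch: re-check that the exact phrase is present in the raw HTML
# a crawler receives (i.e. not injected later by JavaScript). The URL and
# phrase are the ones from the question above.
from urllib.request import Request, urlopen

URL = "http://radhanathswamiweekly.com/radhanath-swami-describes-jhulan-yatra-festival/"
PHRASE = "visitors had the opportunity to swing them to-and-fro. Never had i experienced so"

req = Request(URL, headers={"User-Agent": "Mozilla/5.0 (phrase check)"})
with urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")

print("phrase found in raw HTML:", PHRASE.lower() in html.lower())
```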

Your page is known to and indexed by Google; you can verify this by running the following query in the Google search box:
site:radhanathswamiweekly.com/radhanath-swami-describes-jhulan-yatra-festival/
The query you are using is very specific and a bit long for Google to prepare search results for. Not many people will type that query.
You can go through a checklist I maintain if you are looking for more reasons why your page is not ranking.

Related

After 3 years my website with 1338 URLs in the sitemap still shows 1232 pages "Crawled - currently not indexed" in Search Console. Is this normal?

I started my wildlife photography website (www.stevenbrumby.com) in 2018, and since the site relies heavily on JavaScript to display content, I was aware from the outset that a sitemap would be crucially important. Initially the sitemap included more than 1000 URLs, but since then I have periodically updated it and it now includes 1338 URLs. The sitemap status has been "success" all along. I've also checked with other sitemap validators and no errors were found.
In Search Console I have 43 valid pages all with the status "Indexed, not submitted in sitemap". But these pages are actually in the sitemap (I have not checked all 43, but the ones I checked were all there.) This is the first thing I don't understand!
There are 1.26K excluded pages of which the majority (1232) have the status "Crawled - currently not indexed". Maybe I am impatient, but I would have thought that by now some of these pages should have been indexed.
I would welcome any advice on where I am going wrong and how I might improve things.
After much trial and error I have found the answer to my question. Most of the URLs in my sitemap had four query parameters. When I reduced this to one query parameter, Google immediately started indexing my site. Four weeks after the change, 88% of the URLs in my sitemap have been indexed. I expect this percentage to increase further in the weeks ahead.
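For anyone needing to do the same clean-up, here is a minimal sketch (assuming Python with only the standard library, a local sitemap.xml, and a hypothetical "id" as the one parameter worth keeping) that strips the extra query parameters from every <loc> entry before the sitemap is resubmitted:

```python
# Minimal sketch: reduce every sitemap URL to a single query parameter before
# resubmitting. Assumes a local sitemap.xml and that "id" is the one parameter
# worth keeping -- both are placeholders, adjust for your own site.
import xml.etree.ElementTree as ET
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
KEEP = {"id"}  # the single query parameter to keep

ET.register_namespace("", SITEMAP_NS)  # keep the default namespace on output
tree = ET.parse("sitemap.xml")

for loc in tree.getroot().iter(f"{{{SITEMAP_NS}}}loc"):
    parts = urlsplit(loc.text.strip())
    # Drop every query parameter except the whitelisted one(s).
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in KEEP]
    loc.text = urlunsplit(parts._replace(query=urlencode(kept)))

tree.write("sitemap-clean.xml", encoding="utf-8", xml_declaration=True)
```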
Sometimes it takes 2-3 days or maybe a couple of weeks before newly added URLs are indexed, especially if there are a lot of pages involved.
Good day! :)

Chrome Web Store promotional tile image has been rejected

I have tried (about a dozen times now) to add promotional tiles to my extension's web store listing.
I am getting this message every single time:
"This small tile image has been rejected due to the following reasons:
Text is too small
Too much detail
Please review the guidelines, upload a new image and republish."
I thought for a while that it was about the text, but on my last try there was not a single character in the image and it was still rejected. Also, I don't think the text rule is enforced that strictly, since every extension on the front page has its name on the tile.
Here is the last one I tried (instantly rejected this time, so most likely automated?): https://i.imgur.com/B2Qh7qO.png
Another one I tried a few days ago: https://i.imgur.com/WMcmF3O.png
Any advice would be appreciated.
Chrome Web Store developer support got back to me with the response:
"I've checked your item and your promotional image is now fixed"
So it seems like a bug somewhere in their system. If anyone else runs into this, don't do what I did and spend months tweaking your promotional images over and over; just contact them.
EDIT: For some reason the developer support contact form is extremely hard to find. Here it is: https://support.google.com/chrome_webstore/contact/developer_support?hl=en
The follow-up support emails came from these addresses: cws-developer-support@google.com and developer-support@google.com

How can I get a website report showing the links on each page?

I want to get a report that specifies which links are present on each page of a website. I tried using different pieces of software, but the problem is that they just give me all the links without showing exactly which links are on which page. Also, the website I am trying to report on is very unstructured, so it's not possible to classify links based on the forward slashes in the URL. For example, looking at links starting with https://example.com/blog will not give me all the links inside the https://example.com/blog page, because the links inside the https://example.com/blog page do not necessarily start with https://example.com/blog/.
What can I do about this?
Thanks.
In Google Analytics, there is no such concept as the next page.
Rather, it only knows the previous page.
This is due to the disconnected nature of the web.
You can, however, use the previous page to trace back to the data you want.
Instead of looking for all links inside https://example.com/blog, you will be looking for all pages whose previous page is https://example.com/blog.
More detailed explanation
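If the goal is a per-page list of outgoing links rather than analytics data, a small crawler can also produce that report directly from the HTML. Here is a minimal sketch (Python, standard library only; https://example.com/ is a placeholder start URL) that crawls pages on one host and prints the links found on each page:

```python
# Minimal sketch: crawl a site and report, for each page, the links found on it.
# Standard library only; https://example.com/ is a placeholder start URL.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit
from urllib.request import urlopen

START = "https://example.com/"

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def links_on(url):
    parser = LinkCollector()
    with urlopen(url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    # Resolve relative links against the page they appear on.
    return [urljoin(url, href) for href in parser.links]

report = {}                      # page URL -> list of links on that page
queue, seen = deque([START]), {START}
while queue:
    page = queue.popleft()
    try:
        report[page] = links_on(page)
    except OSError:
        continue
    for link in report[page]:
        # Stay on the same host and visit each page only once.
        if urlsplit(link).netloc == urlsplit(START).netloc and link not in seen:
            seen.add(link)
            queue.append(link)

for page, links in report.items():
    print(page)
    for link in links:
        print("   ->", link)
```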

Back to search results

On the website I run we have a single search field where you can enter a name or profession. When you search, you are served a page full of results that come from 3 separate sources.
Once you click on one result, e.g. John Doe, you are taken to his page. On that page we have a "back to search" link, but it goes to a blank screen.
I want it to go back to the actual search results so the person doesn't have to do it all again, but I'm not sure where to start. Any suggestions?
That's a tricky situation.
There can be many solutions to this issue, but I'll name some of them.
Activate caching of the pages (a quick trick, not suitable for websites that rely on user logins); then you can go back and your form will still show the same results without any issue.
Load John Doe's page via AJAX with #hashtag references, so you don't reload the page but only manage the state of the HTML (this can be done with JS frameworks such as React).
Depending on which platform you are working on, try to manage the search variables with the post-redirect-get concept (see the sketch below).
Hope this helps!
Cheers.
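As one illustration of that last suggestion, here is a minimal sketch of the post-redirect-get idea (assuming a Python/Flask backend; the route names and the run_search helper are placeholders, not the asker's actual stack). The POSTed search redirects to a GET URL that carries the query, so the "back to search" link can simply point back at that URL and the results are rebuilt without the visitor retyping anything:

```python
# Minimal sketch of the post-redirect-get idea with Flask (an assumed stack).
# run_search() is a placeholder standing in for the real lookup across the
# three result sources; the route names are hypothetical too.
from flask import Flask, redirect, request, url_for

app = Flask(__name__)

def run_search(query):
    # Placeholder: query the three real sources here.
    return ["John Doe", "Jane Roe"]

@app.route("/")
def index():
    # The single search field.
    return '<form action="/search" method="post"><input name="q"><button>Search</button></form>'

@app.route("/search", methods=["POST"])
def search_post():
    # POST-redirect-GET: turn the form submission into a bookmarkable GET URL.
    return redirect(url_for("results", q=request.form["q"]))

@app.route("/results")
def results():
    q = request.args.get("q", "")
    items = "".join(
        f'<li><a href="{url_for("person", name=name, q=q)}">{name}</a></li>'
        for name in run_search(q)
    )
    return f"<h1>Results for: {q}</h1><ul>{items}</ul>"

@app.route("/person/<name>")
def person(name):
    # "Back to search" simply links to the same /results?q=... URL, so the
    # results page is rebuilt without the visitor retyping the query.
    back = url_for("results", q=request.args.get("q", ""))
    return f'<h1>{name}</h1><p><a href="{back}">Back to search results</a></p>'

if __name__ == "__main__":
    app.run(debug=True)
```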

Why and how does the googlebot use my website's search engine?

Looking through my search logs from time to time, I notice that by far the biggest user of my search engine is Googlebot. What gives? Is it looking for content that might not be directly accessible through navigation? If so, how does it know which words and phrases to look for (they're surprisingly relevant)? Does it check the most popular keywords on the site? I know I seem to be answering my own question here, but this is really only me working it out from first principles. I'd like to hear from someone who knows what they're talking about (i.e. not me).
If your search form's method is GET instead of POST, each search has its own URL, and people might be posting those URLs elsewhere. Or if you have a (possibly inadvertently) publicly accessible web stats page that lists those URLs, that's another common way for search engines to stumble upon your internal search URLs. A third way I've seen is sites that list recent searches on their pages, but this is more intentional. "MySQL Performance Blog" does this to an annoying extent, so any search of their site from Google yields hundreds of pages of similar searches, even if none of them found what they were looking for.
Edit: it looks like Google does crawl through forms on occasion, but only GET forms:
http://googlewebmastercentral.blogspot.com/2008/04/crawling-through-html-forms.html
Google will enter words that occur on your site into your search boxes to try to find pages that it can't reach otherwise.
Google says that for the past few months, it has been filling in forms on a "small number" of "high-quality" web sites to get back information. What words has it been entering into those forms? Words automatically selected that occur on the site, with check boxes and drop-down menus also being selected.
http://searchengineland.com/google-now-fills-out-forms-crawls-results-13760
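To make the GET-versus-POST point above concrete: with a GET form, every search is just a URL with the query in its parameters, so anything that links to or logs such a URL hands a crawler a new page to visit. A tiny sketch (Python; example.com, /search and the q parameter are placeholders):

```python
# Minimal sketch: a GET search form turns each query into its own URL,
# which is exactly what can end up linked, logged, and then crawled.
# example.com, /search and the "q" parameter are placeholders.
from urllib.parse import parse_qs, urlencode, urlsplit

def search_url(query):
    return "https://example.com/search?" + urlencode({"q": query})

# Each distinct query produces a distinct, linkable URL...
print(search_url("mysql performance"))   # https://example.com/search?q=mysql+performance
print(search_url("slow queries"))        # https://example.com/search?q=slow+queries

# ...and anyone (including a crawler) can read the query straight back out.
print(parse_qs(urlsplit(search_url("slow queries")).query)["q"])  # ['slow queries']
```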
