Invalid domain in Google search despite rule in Webmaster Tools

Why does my domain still appear without "www." in Google search despite the rule in Google Webmaster Tools? I don't understand this problem... The site is www.commissariopedemontana.it.

I'm not sure, but here it shows up with the www.
You might have already fixed it by now.
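If it is still happening: besides setting the preferred domain in Webmaster Tools, the usual fix is a server-side 301 redirect from the bare domain to the www version. A minimal sketch for an Apache .htaccess, assuming the site runs on Apache with mod_rewrite enabled:
RewriteEngine On
# Permanently (301) redirect the bare domain to the www host, keeping the requested path
RewriteCond %{HTTP_HOST} ^commissariopedemontana\.it$ [NC]
RewriteRule ^(.*)$ http://www.commissariopedemontana.it/$1 [R=301,L]
Google only reflects a change like this after it recrawls the pages, so it can take a while to show up in the results.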

Related

Multilingual Umbraco Website cannot be scraped?

I have created a multilingual Umbraco website which has three domain names pointing to it, one for each language. The site has gone live and people are starting to share links to it on LinkedIn and other social media. I have metadata in the website which should be picked up when these links are shared. On LinkedIn, when the link is shared, it has 'coming soon' as the strap-line, which is what was on the holding page months ago, suggesting the site isn't being re-scraped.
I used the Facebook link debugging tool and that was returning a run-time error with a 500 response code.
My co-worker insists that there is nothing wrong with the DNS and that there aren't any errors in the website's code, so I am wondering if anyone has any ideas as to why the website cannot be scraped.
It also has another issue where one of the domains sometimes doesn't redirect to its www. version despite having a redirect on the DNS, which may be related.
Is there some specific Umbraco configuration that I may have missed? Or a bug within Umbraco that may cause this?
Aside from this issue the website is working fine; it's just that these scrapers seem unable to hit the site successfully.
Do you have metadata set for the encoding? See https://www.w3.org/International/questions/qa-html-language-declarations. Probably a long shot.

Website breaking after 301 Redirect

Quite a newbie question here, but our web developer recently left our (small) company and has left us in a bind.
We recently (2 days ago) redirected our site to a newer, mobile-friendly model, and it was working well for quite some time. For whatever reason, management decided they needed to roll the site back to its original model, and now the site breaks whenever you type in http://www.example.com. However, https:// works perfectly fine, and it seems to have something to do with the .htaccess file -- but being just the project manager, coding comes second in terms of skill.
If it helps, our site is www.mauriprosailing.com -- we're currently still trying to figure out why "www" and "http" are breaking the site.
If needed, I can post a .txt of our .htaccess file if that helps.
I appreciate all the help and apologize if this was too broad of a question!
Solution: Granted, this may not apply to everyone -- but the problem was not in the .htaccess file but with server-side caching. The server was not pulling the right .css file, causing an "explosion" of our site, and I found that purging all of the cached files did the trick.
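For anyone in a similar bind who does need to check the .htaccess, this is roughly what a canonical http-to-https/www redirect block looks like (a sketch only, assuming Apache with mod_rewrite; example.com is a placeholder for your own domain):
RewriteEngine On
# Send plain http requests to the https www host, keeping the requested path
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
# Send the bare domain to the www host as well
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
If rules along these lines look sane in your file, server or CDN caching (as in the solution above) is the next thing to rule out.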

Best practice to redirect old static mobile website

I updated my non-responsive template to a responsive one, so there is no longer any need for my old-fashioned static mobile website or the redirection to it (www.mysite.com/mobile/index.html). I want to completely remove the directory with the mobile site so that my old mobile site is not available anymore.
I'm concerned about the numerous 404 errors afterwards and their effect on my current Google search appearance. Maybe somebody could advise me on what the best practice would be in this case.
I'm using the Joomla CMS on an Apache server, and I have a configured .htaccess file.
What I would like most is to 301 redirect the whole mobile directory to my home page (www.mysite.com), but I'm aware that would be really bad from Google's point of view. Any help would be greatly appreciated.
Redirecting the whole site to the home page would be seen as a soft 404 by Google. Either redirect each page to the new equivalent, or return a 404/410 response.
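A sketch of what that could look like in the existing .htaccess, assuming Apache with mod_rewrite; the page names and targets below are placeholders, so map each real mobile page to its real responsive equivalent:
RewriteEngine On
# 301 each old mobile page to its responsive equivalent (placeholder paths)
RewriteRule ^mobile/index\.html$ / [R=301,L]
RewriteRule ^mobile/about\.html$ /about/ [R=301,L]
# Anything left under /mobile/ with no equivalent: answer 410 Gone
RewriteRule ^mobile/ - [G]
Google will drop the 410 URLs from the index over time, and the 301s pass the old pages' signals to their replacements.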

Will my website be indexed as usual after temporarily hiding it using .htaccess and then bringing it live?

I would like to password-protect/hide the website using .htaccess to keep robots out. Then, if I bring it live after some weeks, will indexing work as usual, or should Google be notified using Webmaster Tools?
Thanks :)
Robots crawl your pages more than once, so I think it will be re-indexed after some weeks.
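For reference, the kind of .htaccess protection described in the question is usually just HTTP basic auth, something like this sketch (Apache assumed; the password file path is a placeholder):
# Require a password while the site is hidden; crawlers get a 401 instead of content
AuthType Basic
AuthName "Temporarily private"
# Placeholder path - point this at your real .htpasswd file
AuthUserFile /path/to/.htpasswd
Require valid-user
Once these lines are removed and the pages answer 200 again, crawlers pick the site up on their next visits; verifying the site in Webmaster Tools and submitting a sitemap can speed that along.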

How to Get Indexed Again After Removal of robots.txt

While building a website, I created a robots.txt on the server to prevent the still-unfinished application from getting into Google's index...
Now that I am done with the site, I removed the robots.txt, and I expected that my site would show up on Google, since the robots.txt is gone!
But this is not happening! The robots.txt has been removed for about 3 to 4 weeks now, and yet the site is still not showing up :(
Is there something one needs to do after removing robots.txt to get into search engine indexes again? Or isn't this supposed to happen naturally?
Or is my case just one of not being patient enough?
You can add your site for crawling here.
Create a sitemap file and submit it to Google, Bing, and others. For Google you can use their Webmaster Tools for this.
I would just set up a new default robots.txt file:
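# An empty Disallow value blocks nothing, so all crawlers may index the whole site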
User-agent: *
Disallow:
Also sign up for Google Webmaster Tools and set up sitemap files. This might help Google recognize that things have changed.
As a first step, get the site verified in Webmaster Tools so you can see Google crawler visits and the reasons for any denials.
Read more at http://www.google.com/support/forum/p/Webmasters/thread?tid=671635798b0e75ba&hl=en
For an optimal position in the Google search results you should definitely check this document:
Search Engine Optimization Starter Guide
