Why are random sites linking to my site? What is their purpose? - statistics

In the "links from an external site" portion of my website statistics I see many sites that shouldn't have a reason to link to my site. Is there a reason for them doing this? Are they related in some way to spam/bots?
For example, the sites supposedly linking to mine are sites like:
****-store(dot)com
*****-imperia(dot)com
handbags****(dot)com
hunde****(dot)com
iphone****(dot)com

You might simply be the victim of so-called "referral spam" (fake referrers set for various shady online-marketing reasons). For a first overview you could read the Wikipedia entry on referrer spam, or learn how to fight it here on Stack Overflow.
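If you want cleaner statistics in the meantime, you can filter known spam referrers out of your raw access logs before analyzing them. Below is a minimal sketch, assuming an Apache "combined" log format (referrer is the second quoted field) and a hypothetical blocklist you would build yourself from the spam domains you see in your stats:

```python
import re

# Hypothetical blocklist; fill in the spam domains from your own statistics
SPAM_REFERRERS = {"spam-store.example", "spam-imperia.example"}

# In the Apache "combined" format, the referrer is the quoted field
# after the status code and response size
LOG_RE = re.compile(r'"[^"]*" \d+ \S+ "(?P<referrer>[^"]*)" "[^"]*"$')

def is_spam(line: str) -> bool:
    """Return True if the log line's referrer matches a known spam domain."""
    m = LOG_RE.search(line)
    if not m:
        return False
    referrer = m.group("referrer")
    return any(domain in referrer for domain in SPAM_REFERRERS)

def clean_log(lines):
    """Keep only lines whose referrer is not on the spam blocklist."""
    return [line for line in lines if not is_spam(line)]
```

This only cleans up your reports; blocking the requests themselves is usually done at the web-server level instead.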

Related

fixing "Protocol-Relative Resource Links"

I'm a WordPress website designer. I'm not a programmer. I'm trying to fix some security/SEO issues with a website.
I ran a diagnostic test on a website and one of the things I'm trying to fix is a security issue on "protocol-relative resource links". This is how the program tells me to fix it: "Update any resource links to be absolute links including the scheme (HTTPS) to avoid security and performance issues."
I have googled everything I can think of and I think I have found a way to fix it, but it is going to mean going through 177 pages of the website's code, finding the specific links, and fixing them each individually. That is because they are not image links or other large resources, but obscure links such as font links. The only way I've been able to find them is through the source code (I think that's what it's called). As far as I know they are all unique links, so I can't do a search-and-replace, and even if I could, I don't know which links need to be replaced.
Here is the description of the problem that was given to me: "URLs that load resources such as images, JavaScript and CSS using protocol-relative links. A protocol-relative link is simply a link to a URL without specifying the scheme (for example, //screamingfrog.co.uk). It helps save developers time from having to specify the protocol and lets the browser determine it based upon the current connection to the resource. However, this technique is now an anti-pattern with HTTPS everywhere, and can expose some sites to 'man in the middle' compromises and performance issues"
I am self-taught and know basically no code or programming languages, so I need basic beginner help if possible. Links to tutorials are welcome. I'm trying to find a quicker way to solve this problem or at least have it confirmed that this is the way I'll have to do it. Thank you for any help you are able to give!
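For what it's worth, this kind of bulk fix does not have to be done by hand if you have the site's files locally. Below is a minimal sketch of a script that rewrites protocol-relative `src`/`href` attributes to explicit `https://`. It is an illustration, not a drop-in tool: the regex only covers the common attribute pattern, and on a WordPress site many links live in the database rather than in files (the `wp search-replace` command from WP-CLI can handle those), so always work on a backup copy:

```python
import re
from pathlib import Path

# Matches src="//..." or href='//...' (protocol-relative resource links)
PROTOCOL_RELATIVE = re.compile(r'(src|href)=(["\'])//')

def fix_protocol_relative(html: str) -> str:
    """Rewrite protocol-relative resource links to explicit https://."""
    return PROTOCOL_RELATIVE.sub(r'\1=\2https://', html)

def fix_files(root: str) -> int:
    """Fix every .html file under root in place; return count of changed files."""
    changed = 0
    for path in Path(root).rglob("*.html"):
        original = path.read_text(encoding="utf-8")
        fixed = fix_protocol_relative(original)
        if fixed != original:
            path.write_text(fixed, encoding="utf-8")
            changed += 1
    return changed
```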

Requesting removal of stored website data from search engines

Greetings fellow developers,
I would like to ask for help regarding the following problem: Is there a way to request removal of stored website data from search engines? Most of the links that show up when searching my domain are old and non-existent.
What I've found from personal research regarding this question/problem:
From my personal research I have found that removal requests can be made individually to the well-known search engines such as Google, Yahoo and Bing, but this is not what I am looking for, since I am well-aware that it would take a lot of time for the requests to be processed and the removal of the data to be done. Also, I wasn't able to find this "removal-request" webpage for the other search engines.
To be more precise/clear...
... I want to request this website-data-removal to all (most) search engines at once, so that when I upload my new website (to the same domain), working and functional links (URLs) would be displayed. Can this be anyhow achieved and, if so, how? Also, how much time would it take for this removal to be finished?
Hope my question is clear enough, and any answer/help would be very much appreciated.
No, there is not a way to do this for all search engines at once. You will have to request it from each one individually. As for the smaller search engines, you can try to find contact information or customer support, but there is a chance they will ignore your request (some crawlers even ignore the robots.txt file and crawl your site anyway... it's just part of being on the web).
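One thing that helps with every search engine at once: make sure the old, dead URLs return a proper 404 (Not Found) or, better, 410 (Gone) status, since crawlers drop pages that return those codes on their next visit. Before filing individual removal requests, you could check which of your old URLs actually return those statuses. A minimal sketch using only the standard library (the URL-checking part needs network access to run):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_gone(status: int) -> bool:
    """404 and 410 tell search engines the page no longer exists."""
    return status in (404, 410)

def check_url(url: str, timeout: float = 10.0) -> int:
    """Return the HTTP status code for url (0 if unreachable)."""
    req = Request(url, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code
    except URLError:
        return 0
```

Uploading the new site with a fresh XML sitemap also speeds up re-crawling, since most major engines discover it via robots.txt.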

How to know my website is available all over the world?

I have a website at e.g. http://example.com
This website is not available in China! I have searched a lot but cannot find the reason. My website is available elsewhere, but not in China. Why?
How can I test my website availability for different countries?
You could use a proxy hosted in the desired country to browse as if you were there.
https://www.google.es/search?q=proxy+china&oq=proxy+china
I suddenly found the best solution. You can use this website:
host-tracker.com
Just type in your website address and it loads the page from many servers all over the world and reports the results.
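If you prefer to script the proxy approach yourself, here is a minimal sketch using only the standard library. You have to supply working proxies in the countries you care about (the `"1.2.3.4:8080"` style addresses below are hypothetical placeholders), and the fetch itself needs network access:

```python
import urllib.request

def fetch_via_proxy(url: str, proxy: str, timeout: float = 15.0):
    """Fetch url through an HTTP proxy ("host:port"); return the status
    code, or None if the request failed or timed out."""
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    opener = urllib.request.build_opener(handler)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status
    except OSError:  # URLError, timeouts, connection resets, ...
        return None

def availability_report(results: dict) -> list:
    """Given {country: status_or_None}, list countries where the site failed."""
    return [c for c, status in sorted(results.items()) if status != 200]
```

A failure through a single proxy is weak evidence on its own (the proxy itself may be broken), so test from several endpoints per country before concluding the site is blocked.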

Add search feature to simple website without mySQL database

I have a simple HTML site with 100+ pages or so. I want to add a search bar at the top so the user can search the site. I know about Google Custom Search, but it shows ads unless you pay at least $100. Obviously I'd like ad-less search on my site for free if at all possible!
I've also heard about Lucene/Solr, but they do not actually crawl the site. For that I would apparently need Nutch.
Anyway, the site I have runs on a Microsoft IIS6 server, but I have basically no knowledge as to how Solr, Nutch, etc. gets "installed" on the server.
Also: I'd like to point out that I do have a local copy of the site. Perhaps I can do one big initial nutch "crawl" locally that will create an .xml for Solr?? That would help me get "up and running", but probably wouldn't be a good long-term solution.
...so should I just use Google Custom Search? Or is there a not-extremely-painful-to-implement alternative? The brain hurts, folks.
You did not mention how many search requests you need to handle, but if you use the JSON REST API of Google's Custom Search, you get 100 search queries per day for free, and you can display the results on your page without any ads.
A simple example request can be found here.
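To make the shape of such a request concrete, here is a small sketch that builds the Custom Search JSON API request URL. The `MY_KEY`/`MY_CX` values are placeholders for the API key and search-engine ID you get from Google's developer console; fetching the URL returns JSON results you can render however you like:

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(api_key: str, cx: str, query: str) -> str:
    """Build a Custom Search JSON API request URL.

    api_key: your API key from the Google developer console
    cx:      the ID of your custom search engine
    query:   the user's search terms
    """
    params = urlencode({"key": api_key, "cx": cx, "q": query})
    return f"{API_ENDPOINT}?{params}"
```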
Here is an easy way that works pretty well, although you may be looking for something more than this.
http://sitecomber.com/getsitecomber/
You can create code to paste into your site in about 2 minutes. It doesn't get easier than that. Search is powered by Google, but results are isolated to your website.
EDIT: This no longer works.

If I put links to other websites on my site via a comment or forum facility, does that help my site in any way?

Say I put a large number of links to other sites on my website: is that going to help my website in any way?
No, it will help the other sites, as you will basically be providing backlinks to them. Unless they have links going back to your site, you won't see any benefit on your end.
Maybe you are not quite clear about inbound links versus outbound links.
What you are talking about is an outbound link, which, according to Google's PageRank, is a vote for the other site (the linked site), not yours.
What you need, though, is inbound links: in plain English, links to your site placed on other sites.
Be warned: only links placed on other sites that share the same theme count as valid links. On top of that, do not be tempted into Black Hat SEO!