Recently I wrote a script to log visits to my site.
I noticed that several times a day there are some strange visits. They all hit one specific page of my site and come from sites like the ones in the list below:
nayra.ru
zl-news.ru
bizlog.ru
opel.barsavto.ru
tovray.ru
www.vk-mail-hack.com
danelkon.net/news.php?readmore=74
pronekut.com
www.wallpapers.su
pornogig.com
spb.ceramic.ru
zl-news.ru
renkele.net
ublaze.ru
mug-na-chas-moscow.ru
Does anyone know what they are trying to do with these visits? Are these spam bots or some kind of attack attempt?
What I have found is that most of these visits carry suspicious-looking referrers. If you follow one, you get redirected to another page, and that seems to be the bot's intention: to attract visits through the referrer entries in your logs. So simply ignore them.
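If, rather than ignoring them, you want to keep this referrer spam out of your stats and logs, one sketch (assuming Apache 2.4 with mod_setenvif; the domains are just examples taken from the list above) is to refuse requests whose Referer matches a known spam domain:

```apache
# Hypothetical .htaccess fragment: mark requests whose Referer header
# matches a known spam domain, then refuse them with HTTP 403.
SetEnvIfNoCase Referer "(zl-news\.ru|bizlog\.ru|tovray\.ru)" spam_referrer
<RequireAll>
    Require all granted
    Require not env spam_referrer
</RequireAll>
```

You would need to keep the domain list up to date by hand, which is why "just ignore them" is often the more practical answer.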
I have:
a simple static website;
hosted on a shared server;
with SSL;
which I have recently redesigned.
Google tells me there were two URL crawl errors for my website:
apple-app-site-association;
.well-known/apple-app-site-association
For reference, here is the error report for the first (the second is the same):
Not found
URL:
https://mywebsite.com/apple-app-site-association
Error details
Last crawled: 5/5/16
First detected: 5/5/16
Googlebot couldn't crawl this URL because it points to a non-existent page. Generally, 404s don't harm your site's performance in search, but you can use them to help improve the user experience. Learn more
From looking around here, these appear to be related to associating an Apple app with a related website.
I have never tried to implement any sort of "Apple app / site association" - at least not intentionally.
I can't for the life of me figure out where these links are coming from.
I will be removing these URLs but am concerned the error may arise again.
I have looked at several related questions here, but they seem to be from people trying to set up that verification - which I haven't - or from people asking why their server logs show requests to these URLs.
Can anyone shed any light on why this is happening?
So the answer is that Googlebot now requests these URLs when it crawls your site, as part of its effort to map associations between sites and their related apps. See: https://firebase.google.com/docs/app-indexing/ios/app
It seems that Googlebot hasn't (at this time) been told not to report a crawl error if the URL/file is not there.
Here is a link to an answer to a very similar (but slightly different) question that gives more detail if you are so inclined: https://stackoverflow.com/a/35812024/4806086
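If the crawl-error reports bother you, one option (purely optional, and based on Apple's documented file format rather than anything Google requires) is to serve a minimal, valid but empty association file at /.well-known/apple-app-site-association. It is plain JSON, served with no file extension:

```json
{
  "applinks": {
    "apps": [],
    "details": []
  }
}
```

With an empty `details` array the file associates no apps, so it changes nothing for visitors; it just gives crawlers a 200 instead of a 404.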
I am having some issues with spam links visiting my site returning a 404 error.
My site was hacked with a secret spam links folder on public_html that redirected users to pornographic sites, those links were plastered across the internet. I have since remedied the malware issue, but have several hundred visitors hitting a 404 page because these links no longer exist, messing up all my analytics accounts, using bandwidth, etc.
I have searched for a way to block these URL paths outright (so the requests never reach my website), but I can't realistically redirect every single link (there were over 2,000) without a wildcard or something similar. My search led me here: Block Spam Referrer Traffic, but it is not quite the solution I need.
The searches go to pages like this: www.mywebsite.com/spampage/morespam/ (which have been deleted and are now 404 errors)
There are several iterations of the /spampage/ and /morespam/ urls.
The referrer is generally a Google search, so I can't block by referrer using .htaccess. I'd like to somehow block www.mywebsite.com/spampage/*/ and all its variations.
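For what it's worth, cutting these requests off at the server is possible with a single pattern rather than 2,000 individual rules. A minimal sketch, assuming Apache with mod_alias and using the placeholder path from the question:

```apache
# Hypothetical .htaccess rule: answer any request under /spampage/
# (at any depth) with 410 Gone, so visitors never reach the 404 page
# and crawlers learn the URLs are permanently dead.
RedirectMatch 410 ^/spampage(/.*)?$
```

A 410 also tends to get URLs dropped from search indexes faster than a 404.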
Apologies, I am by no means a programmer. I do appreciate any help that can be offered.
Update#1:
It seems that perhaps the best way is to block these links/directories using the robots.txt file; I have done so and will report back if I have success!
Update#2:
Reporting back. I am new to all this, so I was going about the solution the wrong way in my original question. Essentially, I found that I needed all of the links de-indexed, since they were generating the traffic by being indexed by Google. I was able to request de-indexing of the directories in question manually through the Google webmaster tools account. One requirement for de-indexing was that the site's robots.txt block the directories in question from being crawled. Once I did that, I submitted the request to remove the directory from the Google index. Google removed those pages in about 3 hours (thanks, Google!), so it was pretty quick once I found the proper way to go about it. No .htaccess editing needed. Once the pages were no longer indexed, traffic went back down to normal levels, and my keywords, etc., should return to normal.
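For anyone following the same route: the robots.txt rule Google required before accepting the removal request is just one Disallow line per directory (the directory names here are the placeholders from the question):

```
User-agent: *
Disallow: /spampage/
Disallow: /morespam/
```

Note that robots.txt only stops crawling; it is the subsequent removal request in webmaster tools that actually drops the URLs from the index.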
I have in my main website root the file...
lib.php
So hackers keep hitting my website from different IP addresses, different OSes, different everything. The page is redirected to our 404 error page, and that 404 page tracks visitors with standard visitor tracking analytics to allow us to see problems as they arise.
Below is an example of the landing pages shown in analytics for these hits, except that I get about 200 hits per hour. Each link is slightly different, as they use a variable to set which page URL to go to.
mysite.com/lib.php?id=zh%2F78jQrm3qLoE53KZd2vBHtPFaYHTOvBijvL2NNWYE%3D
mysite.com/lib.php?id=WY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqT2M6Wk%VD
mysite.com/lib.php?id=WY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqJHGEWk%T%
mysite.com/lib.php?id=JY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqT2MFGk%BD
I do not think I even need the file http://www.mysite.com/lib.php
Should I need it? When I visit mysite.com/lib.php I am redirected to my custom 404 page.
What is the best way to stop this? I am thinking of using .htaccess, but I'm not sure of the best setup.
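If you confirm nothing on your site actually uses lib.php, one hedged sketch of the .htaccess approach (assuming Apache with mod_alias) is to delete the file and answer every request for it with a permanent 410:

```apache
# Hypothetical .htaccess rule: respond 410 Gone to any request for
# /lib.php, regardless of query string, instead of redirecting to
# the (analytics-tracked) 404 page.
RedirectMatch 410 ^/lib\.php$
```

This keeps the bot traffic out of your 404 analytics and signals that the URL is permanently dead.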
This is most probably part of the Asprox botnet.
http://rebsnippets.blogspot.cz/asprox
The key thing is to change your passwords and stop using the FTP protocol to access your privileged accounts.
I have a website ranking well in Google. My current domain has dashes in it and looks like this:
this-is-mine.com
I've just also bought
thisismine.com
I'd like to point the latter at the first site, but I don't want it to be classed as duplicate content.
I'm unsure if I can just do this through 123-reg, but will this affect my Google rankings, or is there a correct way of doing it without penalising myself?
According to the link below, my thoughts are confirmed.
A 301 is fine, as it forwards everything, including PageRank, to the "new" site - in your case this-is-mine.com.
A 302 could be a problem for SEO.
http://seo-hacker.com/301-302-redirect-affect-seo/
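If you do go the 301 route and both domains point at the same hosting, a minimal .htaccess sketch (assuming Apache with mod_rewrite; adjust the scheme to match your site) would be:

```apache
# Hypothetical .htaccess on the hosting both domains resolve to:
# 301-redirect every request for thisismine.com (with or without www)
# to this-is-mine.com, preserving the requested path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?thisismine\.com$ [NC]
RewriteRule ^(.*)$ https://this-is-mine.com/$1 [R=301,L]
```

Many registrars, including 123-reg, also offer a "web forwarding" option that issues the same 301 for you without touching .htaccess.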
If your current website is ranking well, don't disturb it. There is no benefit in pointing multiple domains at one website. You could also make a single-page website on the new domain, optimize it for Google, and link it to your old one.
If you still want to do this, then do a 301 redirect, but make sure that the domain is fresh and has no spammy backlinks pointing to it.
I've been asked by a family friend to completely overhaul the website for their business. I've designed my own website, so I know some of the basics of web design and development.
To work on their website from my own home, I know I'll need to FTP into their server, and therefore I'll need their FTP credentials, as well as their CMS credentials. I'm meeting with them in a couple of days and I don't want to look like a moron! Is there anything else I need to ask them for during our first meeting (aside from what they want in their new site, etc.) before I start digging into it?
Thanks!
From an SEO point of view, you should be concerned with 301 redirects, since (I suppose) some or all URL addresses will change (take a different name, be removed, etc.).
So, after you've created the new version of the site - and before you put it online - you should list all "old site" URLs and decide, preferably for each one, its new status (unchanged, or redirected, and if so, to what URL).
Mind that even if some content will not reappear on the new site, you still have to redirect its URL (say, to the home page) to keep link juice and SERP rankings.
Also, for larger sites (especially dynamic ones), look for URL patterns that allow bulk redirects. For example, if you see that Google indexes 1,000 index.php?search=[some-key-word] pages, you don't need to redirect each one individually; these are probably just search result pages that can be grouped with a regex and redirected to the main search results page.
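The bulk-redirect idea can be sketched in .htaccess (assuming Apache with mod_rewrite; the parameter name search= and the /search/ target are taken from the example above and are hypothetical):

```apache
# Hypothetical bulk redirect: any old index.php?search=... URL is sent
# to a single search landing page instead of 1,000 separate rules.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)search= [NC]
RewriteRule ^index\.php$ /search/? [R=301,L]
```

The trailing `?` in the target drops the old query string; note that RewriteRule alone cannot match query strings, which is why the RewriteCond on %{QUERY_STRING} is needed.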
To inventory "old site" URLs you can:
a. use site:domainname.com in Google (then set the SERP to 100 results and scrape manually or with XPath);
b. use Xenu or another site crawler (such as Screaming Frog) to get a list of all URLs;
c. combine the lists in Excel and remove all duplicates.
If you need help with 301 redirects you can start with this link:
http://www.webconfs.com/how-to-redirect-a-webpage.php/
If the website is static, knowing HTML, CSS, and JavaScript, along with the FTP credentials, is enough to get started. However, if the site is dynamic, interactive, and database-driven, you may need to ask whether they want to use PHP; in that case you might end up building the site in WordPress.
If you are going to design the website from scratch, also keep this point in mind: your friend's site is presumably hosted somewhere (i.e. with a hosting provider). You should get the hosting control panel details as well, which will help you manage the website (including database, email, FTP, etc.).