Hackers (multiple IPs) attacking one page (lib.php) with a variable attached, what to do? - .htaccess

I have in my main website root the file...
lib.php
Hackers keep hitting my website from different IP addresses, different operating systems, different everything. The page is redirected to our 404 error page, and that 404 page tracks visitors using standard visitor-tracking analytics to let us see problems as they arise.
Below is an example of the landing pages the hackers hit, as shown in analytics, except that I get about 200 hits per hour. Each link is slightly different because they attach a variable that is used as the page URL to go to.
mysite.com/lib.php?id=zh%2F78jQrm3qLoE53KZd2vBHtPFaYHTOvBijvL2NNWYE%3D
mysite.com/lib.php?id=WY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqT2M6Wk%VD
mysite.com/lib.php?id=WY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqJHGEWk%T%
mysite.com/lib.php?id=JY%2FfNHaB2OBcAH0TcsAEPrmFy1uGMHgxmiWVqT2MFGk%BD
I do not think I even need the file http://www.mysite.com/lib.php
Do I even need it? When I visit mysite.com/lib.php it is redirected to my custom 404 page.
What is the best way to stop this? I am thinking of using .htaccess, but I am not sure of the best setup.

This is most probably part of the Asprox botnet.
http://rebsnippets.blogspot.cz/asprox
The key thing is to change your password and stop using the FTP protocol to access your privileged accounts.
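If you really do not need lib.php (check first that nothing on your site includes or links to it), you can also have Apache answer these requests with a cheap error instead of rendering your analytics-tracked 404 page. A minimal .htaccess sketch, assuming Apache 2.4:

    # Deny all access to lib.php, regardless of query string (mod_authz_core)
    <Files "lib.php">
        Require all denied
    </Files>

    # Or, to tell well-behaved clients the page is gone for good (mod_alias):
    # RedirectMatch 410 /lib\.php$

The bots will keep knocking either way, but a flat 403 or 410 is far cheaper to serve than a tracked error page.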

Related

Block or redirect website page URLs using .htaccess

I am having some issues with spam links pointing at my site and returning 404 errors.
My site was hacked: a hidden spam-links folder on public_html redirected users to pornographic sites, and those links were plastered across the internet. I have since remedied the malware issue, but I have several hundred visitors hitting a 404 page because these links no longer exist, which is messing up all my analytics accounts, using bandwidth, etc.
I have searched for a way to block anyone who tries to access these URL paths (so the requests never hit my website), but I cannot possibly redirect every single link one by one (there were over 2,000); I need a wildcard or something similar. My search led me to Block Spam Referrer Traffic, and it is not quite the solution I need.
The searches go to pages like this: www.mywebsite.com/spampage/morespam/ (which have been deleted and are now 404 errors)
There are several iterations of the /spampage/ and /morespam/ URLs.
The referrer is generally a Google search, so I can't block by referrer using .htaccess. I'd like to somehow block www.mywebsite.com/spampage/*/ and all iterations.
Apologies, I am by no means a programmer. I do appreciate any help that can be offered.
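The wildcard-style blocking asked about here does not need one rule per link; a single pattern can cover a whole directory tree. A minimal .htaccess sketch using the example directory names from the question (assuming Apache's mod_alias is available):

    # Answer 410 Gone for anything under the old spam directories
    RedirectMatch 410 /(spampage|morespam)/

As the updates below explain, though, de-indexing via robots.txt turned out to be enough in this case.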
Update#1:
It seems that perhaps the best way is to block these links/directories using the robots.txt file. I have done so and will report back if I have success!
Update#2:
Reporting back. I am new to all this, so I was going about the solution wrong in my original question. Essentially, I found that I needed all of the links de-indexed, since being indexed by Google was what generated all the traffic. I was able to request de-indexing of the directories in question manually through the Google Webmaster Tools account. One requirement for de-indexing was to have the site's robots.txt block the directories in question from being crawled. Once I did that, I submitted the request to remove the directories from the Google index. Those pages were taken off by Google in about 3 hours (thanks, Google!), so it was pretty quick once I found out the proper way to go about it. No .htaccess editing was needed. Once the pages were no longer indexed, traffic went back down to normal levels and my keywords, etc., should be back to normal.
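For anyone following the same route, the robots.txt part of the fix looks roughly like this; a minimal sketch using the example directory names from the question (the real site had several such directories):

    # Keep crawlers out of the old spam directories before requesting removal
    User-agent: *
    Disallow: /spampage/
    Disallow: /morespam/

With these rules in place, the directory-removal request can then be submitted through Google Webmaster Tools.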

Can .htaccess be configured to retain the same address on different pages?

I'm configuring a desktop and a mobile version of my site and was looking to use JS to test for browser dimensions and then load the relevant version. The problem is that if someone shares a link from the mobile version and sends it to a desktop user, they circumvent the check. Is there a way to configure .htaccess (or some other method) to have the address bar show 'mysite.com' even though I would be loading 'mysite.com/mobile.htm'? I know I can always use media queries, but that has the downside of loading unused assets, so this method would be a lot better.
Use a rewrite instead of a redirect. With a redirect, the browser is instructed to go to another address. With a URL rewrite, the server just responds with the contents of a different URL.
For just this one page it is simple, but it could get complicated depending on how your site is structured.
Another way is to include a little JS in every page to make sure you are on the right one for the device and redirect to the other if not. It would help if there was some pattern to easily determine the corresponding page.
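A minimal .htaccess sketch of the rewrite approach, assuming Apache with mod_rewrite enabled and that the mobile page really is /mobile.htm; the user-agent test is deliberately crude and only for illustration:

    RewriteEngine On
    # Internally serve the mobile page for the site root when the user agent looks mobile;
    # this is a rewrite, not a redirect, so the address bar still shows mysite.com
    RewriteCond %{HTTP_USER_AGENT} (android|iphone|ipod|mobile) [NC]
    RewriteRule ^$ /mobile.htm [L]

Because the browser is never told to go anywhere else, a shared link always shows the plain mysite.com address, and the server decides which version to return.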

Is there any way to tell a browser that this is a bad URL to remember?

I'm sending emails to customers, and I'm providing a custom URL for each one which, when visited, will log them in.
This is fine, except if they are using a shared browser that will remember the URL.
Is there any way at all to suggest to the browser that it shouldn't remember a URL?
Edit: This question has nothing to do with caching of the page.
Have the link log them in once. Then make them create credentials that let them access the site in the future. What's to stop a random person from typing in the URL and gaining access to the content?
Yes. You can redirect them with a 301 or 302. Then the browser won't save the URL they went to. At least that works with Mozilla-based browsers, and I would imagine others too.
Another way, though it is uglier, is to reply with an error status and include a body which does a refresh. Will that work in most browsers? Probably not. However, browsers do not cache pages that return an error (404 Not Found would work; you could also use 403 Forbidden).
Other than that, there isn't much you can do. JavaScript does not allow you to tamper with the history anymore...

OWASP TOP10 - #10 Unvalidated Redirects and Forwards

I have read many of the articles on this topic, including the OWASP page and the Google blog article about open redirects...
I also found this question about open redirects here on Stack Overflow, but it's a different one.
I know why I should not redirect blindly ... this makes total sense to me.
But what I really don't understand is: where exactly is the difference between redirecting and putting the same URL in a normal <a href> link?
Maybe some users look at the status bar, but I think most of them are not really looking at the status bar when they click a link.
Is this really the only reason?
For example, in this article they wrote:
Click here to log in
The user may assume that the link is safe since the URL starts with their trusted bank, bank.example.com. However, the user will then be redirected to the attacker's web site (attacker.example.net) which the attacker may have made to appear very similar to bank.example.com. The user may then unwittingly enter credentials into the attacker's web page and compromise their bank account. A Java servlet should never redirect a user to a URL without verifying that the redirect address is a trusted site.
So if you have something like a guestbook, where the user can post a link to their homepage, then the only difference is that the link is not redirected; it still goes to the evil web page.
Am I seeing this problem right?
From my understanding, it is not the redirect itself that is the problem. The main problem here is allowing a redirect (where the target is potentially controllable by the user) that contains an absolute URL.
The fact that the URL is absolute (meaning it begins with http://host/etc.) means that you are unintentionally allowing cross-domain redirects. This is very similar to classic XSS vulnerabilities, whereby JavaScript can be reflected to make cross-domain calls (and leak your domain's information).
So, as I understand it, the way to fix most of these sorts of problems is to make sure that any redirect (on the server) is done relative to the root. Then there is no way for the user-controlled query-string value to go somewhere else.
Does that answer your question or just create more?
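One way to back up the "relative to the root" rule at the web-server level, separate from fixing the application code, is to reject any request whose redirect parameter looks absolute. A hedged .htaccess sketch, assuming Apache with mod_rewrite and a hypothetical redirect parameter named url:

    RewriteEngine On
    # Refuse redirect targets that carry a scheme or are protocol-relative (//host)
    RewriteCond %{QUERY_STRING} (^|&)url=(https?(:|%3A)|//|%2F%2F) [NC]
    RewriteRule ^ - [F]

This is only a safety net; the real fix is to validate or whitelist the target inside the redirect handler itself.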
The main problem is that it's possible for an attacker to make the URL appear trustworthy, as it actually is a URL to a web site the victim trusts, i.e. bank.example.com.
The redirect target does not need to be as obvious as in the example. In practice, the attacker will probably use further techniques to trick both the user and, if necessary, the web application, with special encodings, parameter pollution, and other techniques to spoof a legitimate URL.
So even if a victim is security-conscious enough to check a URL before clicking a link or otherwise requesting its resource, all they can verify is that the URL points to the trustworthy web site bank.example.com. And that alone is too often enough.

Implementing HTTP or HTTPS depending on page

I want to implement HTTPS on only a selection of my web pages. I have purchased my SSL certificates etc. and got them working. Despite this, due to speed demands I cannot afford to serve every single page over HTTPS.
Instead, I want my server to serve HTTP or HTTPS depending on the page being viewed. An example where this has been done is 99designs.
The problem in slightly more detail:
When my visitors first visit my site they only have access to non-sensitive information, and therefore I want them to be served plain HTTP.
Then, once they log in, they are granted access to more sensitive information, e.g. profile information, which should be delivered over HTTPS.
Even when logged in, if the user goes back to a non-sensitive page such as the homepage, I want it delivered over HTTP.
One common solution seems to be using the .htaccess file. The problem is that my site is relatively large, meaning this would require me to write a rule for every page (several hundred) to determine whether it should be served over HTTP or HTTPS.
And then there is the problem of classifying user-generated content pages.
Please help,
Many thanks,
David
You haven't mentioned anything about the architecture you are using. Assuming that SSL termination is on the web server, you should set up separate virtual hosts with completely separate and non-overlapping document trees, and for preference use a path schema which does not overlap (to avoid little accidents).
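A minimal sketch of that separate-vhost layout, assuming a stock Apache setup; all hostnames, paths, and certificate locations below are placeholders:

    # Plain HTTP for the public, non-sensitive pages
    <VirtualHost *:80>
        ServerName www.example.com
        DocumentRoot /var/www/public
    </VirtualHost>

    # HTTPS only for the sensitive area (login, profile, and so on)
    <VirtualHost *:443>
        ServerName secure.example.com
        DocumentRoot /var/www/secure
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.crt
        SSLCertificateKeyFile /etc/ssl/private/example.key
    </VirtualHost>

With non-overlapping document trees and a distinct hostname (or path prefix) for the secure area, the links between the two sections decide the scheme, and no per-page .htaccess rules are needed.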
