Just looking for some general advice, nothing specific.
I'm more of a programming hobbyist, so I know there are some holes in my understanding of a greater overall programming picture at times. I have a few websites. One is just a simple HTML website for a family member's business, which is my primary domain, and I have a subdomain for an educational website, which uses PHP and has a login system that uses MySQL to store the usernames and passwords.
I knew I was getting some internal server errors with my subdomains, and then the other day when I tried to access my primary domain through a search engine, I got a malicious redirect to another website. I called my hosting service (GoDaddy) and they basically had me buy a service called Sucuri to clean up the malicious code.
Before the service began working, I did a little research and looked at the htaccess file, and there was the RewriteCond and RewriteRule, which pointed to a file called "default.php". The default.php file got cleaned up before I could look at it, but I'm assuming that is what redirected my website.
(Sorry, long setup!)
My question is this:
Is it possible that the hackers accessed the .htaccess file through an SQL injection? Looking through my username table I saw a weird one from Russia, something like xxxxxx.ru, and obviously not one of my students. I use stripslashes, whitespace stripping, and real_escape_string to prevent injection (see the prepared-statement sketch below), but there really isn't any sensitive information in my database that I thought I had to worry about.
Is it possible for a hacker to get access to htaccess and other files through an SQL injection or do you think they just got in another way?
I never thought anyone would care about my little websites...
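For what it's worth, here is a minimal sketch of how a login lookup can use a prepared statement instead of hand-escaping input; the submitted value is sent separately from the SQL, so it cannot change the query. The connection details, the users table and its columns are hypothetical names, not anything from the actual site:

<?php
// Hypothetical connection details and schema -- adjust to your own setup.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');

$username = $_POST['username'] ?? '';

// The ? placeholder is filled in by bind_param(), so the value is never
// concatenated into the SQL string.
$stmt = $db->prepare('SELECT id, password_hash FROM users WHERE username = ?');
$stmt->bind_param('s', $username);
$stmt->execute();
$user = $stmt->get_result()->fetch_assoc();   // null if no such user
$stmt->close();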
Related
I'm using Drupal 8. Multiple sites sharing a single codebase. One .htaccess file for all.
I am seeing the same "page not found" errors across all sites; presumably these are hackers attempting to break into the sites.
For example, someone tries to visit https://domain1/wp-admin/admin-ajax.php and https://domain2/wp-admin/admin-ajax.php ... Different domain names, but always the same addresses.
Other addresses include /phpmyadmin/scripts/setup.php and /1/wp-includes/wlwmanifest.xml and so on.
Using .htaccess, what is the most efficient means of redirecting all of these to an internal or external site so that my pages are not even served?
Thank you!
So, the way Drupal and the web server work together is that when a request arrives, if it matches a ServerName and a document root that point to Drupal, the web server hands the request to Drupal to handle.
So you have to ask whether the request is destined for Drupal and, if so, handle the redirect in Drupal (probably using the Redirect module).
If you want to set it up at the web-server level and you have access, or you are using .htaccess, then something like:
RedirectMatch ^/wp-admin/(.*)$ http://example.com/404/$1
Note that there are plenty of other ways to write the above, but this is the simplest and lightest.
I think this is a very common issue with CMS vulnerabilities and hosting security. Security is not something that can be handled by a single static action, because there is always a new vulnerability. So be careful to always run:
composer update
so that you always have the latest bug fixes and security updates, especially when you use modules like Webform. Drupal currently offers more than one module for better securing your app. In your case you need to identify the IP addresses used by hacking bots and block them, for example with Perimeter.
The good news is that the community around Drupal is very concerned about security. For further reading and for securing Drupal you can use the modules below, but keep in mind that the more modules you install, the more performance issues you may run into:
https://www.drupal.org/project/clamav
https://www.drupal.org/project/file_upload_secure_validator
https://www.drupal.org/project/key
https://www.drupal.org/project/csp
https://www.drupal.org/project/noopener_filter
https://www.drupal.org/project/hsts
https://www.drupal.org/project/securelogin
...
I also recommend using Drupal's fast 404/403 error pages, so that serving that kind of page does not touch the database or run more code than necessary.
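For example, Drupal 8 ships fast 404 settings (commented out) in default.settings.php that you can copy into your settings.php; roughly like the sketch below, but double-check the exact keys and patterns against your core version:

// Only treat asset-like paths as fast 404 candidates, and skip image styles etc.
$config['system.performance']['fast_404']['exclude_paths'] = '/\/(?:styles)|(?:system\/files)\//';
$config['system.performance']['fast_404']['paths'] = '/\.(?:txt|png|gif|jpe?g|css|js|ico|swf|flv|cgi|bat|pl|dll|exe|asp)$/i';
$config['system.performance']['fast_404']['html'] = '<!DOCTYPE html><html><head><title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested URL "@path" was not found on this server.</p></body></html>';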
I have an E-commerce site (built on OpenCart 2.0.3.1).
Using an SEO pack plugin that keeps a list of 404 errors, so we can make redirects.
As of a couple of weeks ago, I keep seeing a LOT of 404s that don't even look like links:
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39
...and so on, until it reaches:
999999.9" //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39,0x393133353134353632342e39,0x393133353134353632352e39,0x393133353134353632362e39,0x393133353134353632372e39,0x393133353134353632382e39,0x393133353134353632392e39,0x39313335313435363231302e39,0x3931
This isn't happening once, but 30-50 times per example. Over 1600 lines of this mess in the latest 404s report.
Now, I know how to make redirects for "normal" broken links, but:
a.) I have no clue how to even format this.
b.) I'm concerned that this could be a brute-hacking attempt.
What would StackOverflow do?
TomJones999 -
As is mentioned in the comments (sort of), this is a security issue for you. The reason for so many URL requests is that it is likely a script rifling through many URL requests that have SQL in them. The script/hacker is attempting either to do reconnaissance and find out whether your site/pages are susceptible to an SQL injection attack, or, since they likely already know which e-commerce platform (AND VERSION) you are using, to exploit a known vulnerability with this SQL injection attempt and achieve some nefarious result (DB access, data dump, etc.).
A few things I would do:
Make sure your OpenCart is up to date and has all the latest patches applied
If it is up to date, it might be worth bringing up in the forums or to an OpenCart Moderator in case the attacker is going after a weakness he found but that OpenCart has not pushed a patch for yet.
Immediately, you can try to ban the attacker's IP address, but it is likely that they are going to use several different IP addresses and rotate through them. I might suggest looking into either ModSecurity or fail2ban (https://www.fail2ban.org/). Fail2ban can be a great add-on for security in these situations because there are several ways for it to 'dynamically' thwart this attack attempt.
The excessive 404 errors in a short time span can be observed by fail2ban, which can then ban the client that is causing all of them.
Also, there is a fail2ban filter for detecting attempted SQL injections and consequently banning the users. For example, I quickly searched and found this fail2ban filter with a few adjustments/improvements/fixes to the Regular Expression that detects the SQL injection.
I wouldn't worry at all about "how to format" those error log entries, heh...
With regard to your code (or the code in OpenCart), what you want to be sure of is that all user-submitted data is sanitized (such as data sent to your server as a GET parameter, as in your case).
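As a hedged illustration (not OpenCart's actual code), binding a GET parameter through a PDO prepared statement means any SQL injected into the URL is treated as plain data; the connection details, table and column names below are placeholders:

<?php
// Placeholder connection -- substitute your own DSN and credentials.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'db_user', 'db_pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Cast defensively: a product ID should be numeric anyway.
$productId = (int) ($_GET['product_id'] ?? 0);

// The :id placeholder is bound at execute time, never concatenated into the SQL.
$stmt = $pdo->prepare('SELECT name, price FROM product WHERE product_id = :id');
$stmt->execute([':id' => $productId]);
$product = $stmt->fetch(PDO::FETCH_ASSOC);   // false if nothing matched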
Also, if you feel uneasy about the attempted hack, it might be worth watching the feed provided on the haveibeenpwned website, because data from exploits targeted at databases very commonly ends up on sites like Pastebin, and haveibeenpwned will try to parse some of that data and identify the breaches so that you or your users can at least become aware and take appropriate measures.
Best of luck.
I have a rogue website that is redirecting to my website. It has no real content (just one page), some bad backlinks, and no relationship with us or our niche, so it's safe to say it's up to no good.
We want to distance ourselves from this as best as possible. I've requested that the registrar identify the culprit or remove the redirect; however, I wondered if it was possible to stop the site redirecting to our site full stop.
We're using IIS 7.5 on a Windows 2008 server, and so far I've looked at blocking requests through URL rewriting, but I've had no success. I've also read that Request Filtering may be an option, but again I have little knowledge of its capabilities.
I would appreciate any advice regarding the 2 approaches above as to whether they are suited to what I want to achieve and if possible links to a clear example.
Here is an existing post that covers this exact topic. You will need to use URL Rewrite to implement the solution.
For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat someone can find out the URL? Don't take the employees of the hosting company and eavesdroppers on the line into account, because it is a toy site, not something important and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way of someone finding this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly where it won't matter if you get it wrong? Hopefully, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
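As a minimal sketch of "doing it properly" with PHP's built-in password API (PHP 5.5+); storage is left out, and $storedHash is just a placeholder for whatever your database returns:

<?php
// When the account is created or the password is changed:
$storedHash = password_hash($_POST['password'], PASSWORD_DEFAULT);
// ... save $storedHash for the user ...

// When logging in later, compare against the stored hash:
if (password_verify($_POST['password'], $storedHash)) {
    session_start();
    $_SESSION['is_admin'] = true;   // placeholder flag for an authenticated session
} else {
    http_response_code(403);
    exit('Login failed');
}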
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoff's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in all the major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working solution, security-wise, compared to implementing some form of proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when loading external images or linking to somewhere else, your referrer is going to publicize your URL.
If you change http into https, then at least the url will not be visible to anyone sniffing on the network.
(The caveat here is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege elevation, or on the system you log in from if that machine is no longer secure. Some people prefer an admin login that looks no different from a standard user login, to avoid exactly that.)
You could require that some action be taken # of times and with some number of seconds of delays between the times. After this action,delay,action,delay,action pattern was noticed, the admin interface would become available for login. And the urls used in the interface could be randomized each time with a single use url generated after that pattern. Further, you could only expose this interface through some tunnel and only for a minute on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever" but you could also open up new holes by writing all that code and it goes against "keep it simple stupid".
Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (perhaps because it's a solved problem?), so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if Google can find it, an attacker can.
A brute force dictionary attack -- Iterate common words and word combinations (Login, Error, Index, Default, etc.) As well, the dictionary could be narrowed if the resource extension was known (xml, asp, html, php.) which is fairly discoverable.
Monitor traffic via a Sniffer -- Watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given the fact the attacker has network access.
Edit: Obviously directory listings permissions are turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) to see if they give a 302 (a redirect, typically to a login page) or a 404 (page not found).
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be trying to find well-known applications, for example /webstats/ or /phpMyAdmin/, and looking for typical files that an inexperienced user might leave in a production environment (e.g. phpinfo.php). And most dangerous: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended. So imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it will also parse responses, so it can map the application quite quickly and find resources even in quite deeply structured apps)
Yahoo, Google and other search engines as you stated
Robots.txt
sitemap.xml (quite common nowadays, and it often has lots of stuff in it)
Web stats applications (if any are installed on the server and publicly accessible, such as /webstats/)
Brute forcing for files and directories is generally referred to as "forced browsing"; that term might help your Google searches.
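As a rough sketch of what forced browsing looks like in practice, the loop below requests a handful of common paths and prints the HTTP status codes; the base URL and path list are only examples:

<?php
$base  = 'https://example.com';
$paths = ['/robots.txt', '/sitemap.xml', '/phpmyadmin/', '/webstats/', '/admin/', '/backup.zip'];

foreach ($paths as $path) {
    $ch = curl_init($base . $path);
    curl_setopt($ch, CURLOPT_NOBODY, true);           // HEAD request -- only the status is needed
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);  // a 301/302 is itself a useful signal
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    printf("%-20s %d\n", $path, $status);
}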
The path to resource files like CSS, JavaScript, images, video, audio, etc can also reveal directories if they are used in public pages. CSS and JavaScript could contain telling URLs in their code as well.
If you use a CMS, some CMS's put a meta tag into the head of each page that indicates the page was generated by the CMS. If your CMS is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common / well-known scanners and exploits.
Try social engineering. You'll be surprised how effective it is.
Brute-force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for inputs that feed into further processing and test them for XSS / SQL injection / other vulnerabilities.
Exploit known weak applications within the domain.
Use phishing tricks (XSS/CSRF/HTML meta refresh >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box re-engineering: What programming language is used? Are there bugs in that VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server, you could do something like this:
All requests that do not fit your expected patterns can be redirected to a special page. There the IP (or whatever else you want) gets tracked. Once you have seen a certain number of "attacks" from a client, you can ban that user/IP; the most efficient way is to automatically add a special rewrite condition to your mod_rewrite configuration.
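As a sketch of that "special page" (the file name and threshold are made up), the rewrite target could log the client IP and flag repeat offenders for banning:

<?php
// Hypothetical trap page that attack-pattern requests get rewritten to.
$logFile   = __DIR__ . '/attack-ips.log';
$threshold = 20;   // arbitrary example threshold

$ip   = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
$line = $ip . ' ' . ($_SERVER['REQUEST_URI'] ?? '') . ' ' . date('c') . "\n";
file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);

// Count how often this IP has hit the trap so far.
$hits = substr_count(file_get_contents($logFile), $ip . ' ');
if ($hits >= $threshold) {
    // Here you could append the IP to a deny list that mod_rewrite (via a
    // RewriteMap), your firewall, or fail2ban watches.
}

// Answer with a plain 404 so the probing script learns nothing useful.
http_response_code(404);
echo 'Not found';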
A really good first step is to try a DNS zone transfer (AXFR) against their DNS servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary, as well as, upon finding a live host, checking numerically close IP addresses.
To protect a site against attacks, call upper management into a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware that not every site can be trusted not to look at its users' passwords (especially when the site offers "free" stuff).