WordPress - Security Risk? [closed] - security

I have a WordPress site and the following link is accessible: www.domain.com/wp-admin/ (obviously not the real domain name). Someone told me that this is a security risk. Any truth to this?

In essence, the more information an attacker has about your setup, the worse off you are.
That being said, however, the information gained by knowing your admin login page is pretty trivial, as it's the default login location for all WordPress sites. Once an attacker figures out your site is a WordPress site, they would naturally try that link.
As long as you keep your WordPress files up to date, the only thing you're really vulnerable to (that you would be protected from if that page were inaccessible) is a 0-day on that specific page...
So, really, it doesn't matter much either way. Personally, I would restrict access to it as much as is convenient; on the other hand, you may like having that link always open so you can log in and administer your site from anywhere. I dare say you'll be fine either way, so long as you have sufficiently strong passwords.
Update: Another thing to consider: the login pages of (well-written, tested) open-source software are rarely the point of failure for authentication attacks. Usually, compromising a system involves disclosure of credentials via another vulnerable page, and then using the login page as it was intended to be used. The WordPress devs have combed over the code in your login page because they know it's going to be the first place anybody looks for an exploit. I would be more concerned about any extensions you're running than about leaving the login page viewable by the public.
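If you do decide to restrict access as suggested above, one common approach is an .htaccess rule inside wp-admin that only admits your own address. This is a minimal sketch, assuming Apache 2.4 and that you administer the site from a fixed IP (203.0.113.10 is a placeholder):

```apache
# /wp-admin/.htaccess (illustrative) - only the listed address may reach the admin area
Require ip 203.0.113.10
```

One caveat: some plugins and themes call wp-admin/admin-ajax.php from the public front end, so a blanket block on the directory can break those features; you may need to carve out an exception for that file.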

That's simply WordPress. Nothing inherently wrong with it. But if you are concerned with security overall, see http://codex.wordpress.org/Hardening_WordPress and http://www.reaper-x.com/2007/09/01/hardening-wordpress-with-mod-rewrite-and-htaccess/ and http://www.whoishostingthis.com/blog/2010/05/24/hardening-wordpress/ etc., on protecting the admin area with .htaccess, removing some identifiable WordPress clues, changing the database prefix, SSL access, and on and on. Some things are more worthwhile to do than others, and some are more obscurity than security, but it's all a learning experience.

Well, a lot of sites do have wp-admin open. However, you can add an .htaccess file and password-protect the directory, provided you are on Apache.
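A minimal sketch of that approach, assuming Apache and that you've already created the password file with the htpasswd tool (the /etc/apache2/.htpasswd path and user name are just examples):

```apache
# .htaccess placed in the wp-admin/ directory
AuthType Basic
AuthName "Admin area"
# Created beforehand with: htpasswd -c /etc/apache2/.htpasswd someadmin
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

Note that the WordPress login form itself lives at wp-login.php in the site root, outside wp-admin, so people often add a similar rule for that file as well.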

It's not a big deal; there are plenty of ways to avoid exposing it. You could even put your whole WordPress install in a subdirectory of the server.

I'm not sure about WordPress, but I know at least two e-commerce packages (Zen Cart and PrestaShop) that recommend renaming the admin directory to some other name (and not printing the URL in orders...).
Perhaps there are some known exploits that use this information...

Related

Is it possible to prevent man in the browser attack at the server with hardware device [closed]

Recently I found a hardware device that claims to prevent bot attacks by changing HTML DOM elements on the fly. The details are mentioned here.
The HTML input elements' id and name attributes, and the form element's action, are replaced with random strings before the page is sent to the client. After the client submits, the hardware device replaces those values with the originals. The server-side code remains unchanged, and bots can no longer rely on fixed input names and ids.
That is the overall idea, but they have also claimed that this product can defeat man-in-the-browser attacks.
http://techxplore.com/news/2014-01-world-botwall.html :
Shape Security claims that the added code to a web site won't cause any noticeable delays to the user interface (or how it appears) and that it works against other types of attacks as well, such as account takeover, and man-in-the-browser. They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
Theoretically, is it possible that someone can prevent a man-in-the-browser attack at the server?
Theoretically, is it possible that someone can prevent a man-in-the-browser attack at the server?
Nope. Clearly the compromised client can do anything a real user can.
Making your pages more resistant to automation is potentially an arms race of updates and countermeasures. Obfuscation like this can at best make automating your site annoying enough that it's not worth it to the attacker; that is, you try to make yourself no longer the 'low-hanging fruit'.
They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
This seems pretty meaningless. Bots naturally can update their own code. Indeed banking trojans commonly update themselves to work around changes to account login pages. Unless the service includes live updates pushed out to the filter boxes to work around these updates, you still don't win.
(Such an Automation Arms Race As A Service would be an interesting proposition. However, I would be worried about new obfuscation features breaking your applications. For example, imagine what would happen with the noddy form-field-renaming example on the linked site if your own client-side scripts were relying on those names. Or indeed, if your whole site were a client-side single-page app, this would have no effect.)
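To make the field-renaming idea concrete, here is a rough PHP sketch of doing the same thing in the application itself rather than in a middlebox. All names here are made up for illustration, it assumes PHP sessions, and it is the general idea only, not the vendor's product:

```php
<?php
// Toy illustration of per-session form-field renaming: real field names are
// swapped for random per-session aliases before the form is rendered.
session_start();

function field_alias(string $realName): string {
    if (!isset($_SESSION['field_map'][$realName])) {
        $_SESSION['field_map'][$realName] = 'f_' . bin2hex(random_bytes(8));
    }
    return $_SESSION['field_map'][$realName];
}

function decode_post(): array {
    // Translate the randomized names back to the real ones on submit.
    $map = array_flip($_SESSION['field_map'] ?? []);
    $decoded = [];
    foreach ($_POST as $alias => $value) {
        if (isset($map[$alias])) {
            $decoded[$map[$alias]] = $value;
        }
    }
    return $decoded;
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $fields = decode_post();   // e.g. ['username' => '...', 'password' => '...']
    // ...normal login handling would go here...
} else {
    printf(
        '<form method="post"><input name="%s"><input type="password" name="%s"><button>Log in</button></form>',
        field_alias('username'),
        field_alias('password')
    );
}
```

As the answer notes, a bot that renders the page and reads the form each time is unaffected; this only raises the cost of naive scripts that replay fixed field names.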

Detect websites that try to install programs [closed]

I am new to PHP and trying to learn whether there is a way to catch websites that install programs onto your computer without your authorization. For example, when you visit some websites, your computer might catch a virus just by going to that web page. Just by looking at its HTML code, is there a way I can see if a webpage is trying to install something onto my computer? Any help would be greatly appreciated.
You are fundamentally mistaken about the concept of "infecting a computer" via a website.
Usually an attacker uses an exploit targeting certain browsers; this loads a "payload", and from there the computer is pwned. The exploit could be anything from crafted JavaScript to a malicious Flash file. This is a direct manner of infecting a computer; note that it is generally only effective if the victim has no antivirus or an out-of-date browser/software, or if the attacker is using a 0-day exploit.
The more reliable way for an attacker to infect visitors is to get them to download something and infect them directly. Note that a website can't just install something on your computer unless the user downloads it and manually installs it.
It sounds like an antivirus program is the solution, but how do they detect malicious code? One of the techniques they use is scanning for certain "signs" of a program/code. The AV has a database of those signatures and scans against it.
To answer your question: it may be possible to do this with PHP, but it's like using a fork to dig a cave. You would need to develop a method to detect malicious code, for example by comparing hex signatures, and you'd need a full database of them. The most fun part is that the attacker could just change his code slightly and your scanner would fail; obfuscated code will also make your scanner fail.
That's why one should never even think about building a virus scanner in PHP. Use an antivirus. They are smarter, faster, and the people behind them are hackers. Just one technique off the top of my head: they use heuristic analysis.
To run code without your consent (or install malicious software) in the context of the whole system (not just the web application / browser), attackers use known or unknown bugs in browsers. Example of a JavaScript exploit: Help me understand this JavaScript exploit. My antivirus tries not to let me onto that page ;)
To check with PHP whether a given page contains malicious code, you'd need to use a PHP-based antivirus, or one that has PHP bindings / lets you scan files on demand from the command line, and that works against web-based (HTML/CSS/JS-based) malware.
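If you still want to experiment, here is a deliberately naive PHP sketch of the signature-matching idea described above. The patterns and the example.com URL are illustrative only, and, as noted, trivial obfuscation defeats this kind of scan:

```php
<?php
// Naive "signature" scan of a page's HTML -- for learning only, not a real antivirus.
// The patterns are examples of things that *sometimes* indicate injected malware;
// real scanners use far larger databases plus heuristics.
$signatures = [
    '/eval\s*\(\s*base64_decode\s*\(/i',              // obfuscated PHP/JS payloads
    '/<iframe[^>]+(?:width|height)\s*=\s*["\']?0/i',  // hidden iframes
    '/document\.write\s*\(\s*unescape\s*\(/i',        // classic JS packer pattern
];

$url  = 'http://example.com/';   // page to inspect (placeholder)
$html = @file_get_contents($url);
if ($html === false) {
    exit("Could not fetch $url\n");
}

foreach ($signatures as $pattern) {
    if (preg_match($pattern, $html)) {
        echo "Suspicious pattern found: $pattern\n";
    }
}
```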
Not really, antivirus, antispyware and that sort of software does that for you.

Protecting Wordpress Blog [closed]

I have an independent site and I am using WordPress as the content management system for it. My site has been hacked twice now. Pardon me, I am a newbie, but can anyone guide me on how to protect it from being hacked? I will be really thankful.
Here are some links, maybe they are helpful for you:
http://www.mastermindblogger.com/2011/14-ways-to-prevent-your-wordpress-blog-from-being-hacked/
http://semlabs.co.uk/journal/how-to-stop-your-wordpress-blog-getting-hacked
http://getinternetmarketingstrategies.com/2010/04/how-to-secure-wordpress-blogs-prevent-the-hacking-of-your-blog/
http://blogcritics.org/scitech/article/10-tips-to-make-wordpress-hack/
http://tek3d.org/how-to-protect-wordpress-blog-from-hacks
There is also a plugin which backs up your WordPress data into your Dropbox account!
But could you specify what you mean by hacked? Did it get deleted, or filled with spam comments?
Here are some links; check them out.
http://wordpress.org/extend/plugins/wp-security-scan/
http://www.1stwebdesigner.com/wordpress/security-plugins-wordpress-bulletproof/
http://designmodo.com/wordpress-security-plugins/
Also keep the Akismet plugin activated; it protects your WordPress site from comment spam.
Good luck.
This is a new kid on the block, but they are getting some impressive reviews.
cloudsafe365.com - a free WP plugin that prevents hacking and content scraping.
Apparently they even clean your dirty dishes.
Ensure correct file and directory permissions.
No 'admin' user.
Refresh the auth and salt values in wp-config.php.
Use complex passwords.
If you did not completely remove (or rename) your old site directories, you may be leaving the hacker's backdoor intact.
Completely delete any unused plugin and theme directories.
Check your web access logs for hackers fishing for exploits.
Cheers
Security is mandatory for every website; you can try the following for stronger protection:
Disable directory browsing via the .htaccess file (see the snippet after this list).
Use plugins like Limit Login Attempts, specifically to throttle brute-force login attempts. You can largely shut out brute-force logins by changing the WordPress login URL.
Always keep WordPress and plugins up to date.
Don't use poorly coded plugins or themes.
Use plugins like Sucuri to monitor the whole site for malware.
Don't use pirated themes or plugins.
Tons of security plugins are available in the WordPress plugin repository for every kind of security problem.
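The "disable directory browsing" item above, for example, is a one-line .htaccess change (assuming Apache and that AllowOverride permits it):

```apache
# .htaccess in the site root - stop Apache from listing the contents of directories
Options -Indexes
```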

Do form submissions by spam bots ever pose a security risk? [closed]

A spam bot has found my sign-up form and is filling my database with spam submissions. The form is a basic ASP.NET registration that creates a new membership user and captures account information such as name, address, phone, etc. Rather than implement a captcha, I plan to try a honeypot field. However, my question is not about prevention (see the links at the end for that) but rather about security. What potential risk does form spam pose? I already parameterize all of my SQL to handle the obvious SQL injection stuff. What are the other risks? Is anyone aware of how one might use a bot to attack a site through the site's form(s)? When do spam submissions represent more than just spam?
Here are some posts related to prevention, for anyone who is interested:
fighting spam bots
How to deal with botnets and automated submissions
When the bots attack!
Any security risks you may have are completely independent of whether the form is being submitted in bulk.
The only new security risk relates to what happens if the bots fill up your disk.
I guess one problem could be the kind of spam they post. If they post links to other websites which in turn try to infect visitors with malware, it doesn't pose a direct threat to your site, but it does to your visitors.
You should also make sure they can't insert scripts etc to prevent XSS.
XSS on wikipedia
From a security perspective, this is really a question about how secure your website is in general. Yes, a spambot could exploit vulnerabilities but then so could any user, be they human or robot.
You mentioned parametrisation of SQL which is a good start, try these as well:
Are you validating all input against a whitelist of trusted values?
Are you applying the principle of least privilege and not allowing the SQL account public users connect with to do more than it needs? (more on that here)
Are you output encoding every piece of data when it's presented back via the UI?
If you're doing all this then you're in good shape security wise. Dealing with the inconvenience created by bots is another issue altogether.
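The question is about ASP.NET, but the output-encoding point above is the same everywhere; a minimal PHP illustration of the idea (the 'name' field is hypothetical):

```php
<?php
// Whatever a bot (or human) submitted, encode it on the way back out,
// so any stored markup or script renders as inert text instead of executing.
$name = $_POST['name'] ?? '';
echo 'Hello, ' . htmlspecialchars($name, ENT_QUOTES, 'UTF-8');
// The ASP.NET equivalent is HttpUtility.HtmlEncode (or the auto-encoding <%: %> syntax).
```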

How would you attack a domain to look for "unknown" resources? [closed]

Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?), so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if google can find it, an attacker can.
A brute-force dictionary attack -- iterate common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension is known (xml, asp, html, php), which is fairly discoverable.
Monitor traffic via a Sniffer -- Watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given the fact the attacker has network access.
Edit: Obviously, directory listing permissions are turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) and see what comes back: a 404 (page not found) means nothing is there, while a 200, a 401/403 (unauthorized/forbidden), or a 302 redirect to a login page all tell you the resource exists.
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be trying to find well-known applications, for example /webstats/ or /phpMyAdmin/, and looking for typical files an inexperienced user might have left in the production environment (e.g. phpinfo.php). And most dangerous: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended. So imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster; it ships with a great dictionary, and it also parses responses, so it can map the application quite quickly and find resources even in deeply structured apps)
Yahoo, Google and other search engines, as you stated
robots.txt
sitemap.xml (quite common nowadays, and it has lots of stuff in it)
Web stats applications (if any are installed on the server and publicly accessible, such as /webstats/)
Brute forcing for files and directories is generally referred to as "forced browsing"; that term might help your Google searches (see the sketch below).
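A toy version of forced browsing looks something like this (a hedged PHP sketch; the path list stands in for the large dictionaries DirBuster ships with, example.com is a placeholder, and you should only probe sites you're authorized to test):

```php
<?php
// Minimal "forced browsing": request common paths and note which ones respond.
$base  = 'http://example.com';
$paths = ['/admin/', '/backup/', '/phpMyAdmin/', '/webstats/', '/phpinfo.php', '/robots.txt'];

foreach ($paths as $path) {
    $ch = curl_init($base . $path);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true,   // a HEAD request is enough to read the status code
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 5,
    ]);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status !== 404) {
        echo "$path -> HTTP $status\n";   // 200/301/302/401/403 all reveal something
    }
}
```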
The paths to resource files like CSS, JavaScript, images, video, audio, etc. can also reveal directories if they are used in public pages. CSS and JavaScript can contain telling URLs in their code as well.
If you use a CMS, some CMSs put a meta tag into the head of each page indicating that the page was generated by that CMS. If your CMS is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
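As a small illustration of mining public pages for path hints (a rough PHP sketch; the regexes are deliberately simplistic and example.com is a placeholder):

```php
<?php
// Pull directory hints out of a public page: src/href attributes plus CSS url() references.
$html = @file_get_contents('http://example.com/');
if ($html === false) {
    exit("fetch failed\n");
}

preg_match_all('/(?:src|href)\s*=\s*["\']([^"\']+)["\']/i', $html, $attrs);
preg_match_all('/url\(\s*["\']?([^"\')]+)["\']?\s*\)/i', $html, $cssRefs);

$paths = array_unique(array_merge($attrs[1], $cssRefs[1]));
foreach ($paths as $p) {
    echo dirname($p), "\n";   // the directory part alone often maps out the site structure
}
```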
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common / well-known scanners & exploits.
Try social engineering. You'll be surprised how effective it is.
Brute-force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for data inputs that can be probed further with XSS / SQL injection / vulnerability testing.
Exploit known weak applications within the domain.
Use phishing tricks (XSS/XSRF/HTML meta >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reverse engineering: What programming language is used? Are there bugs in that VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools, and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server, you could do something like this:
Any request that does not fit your known URL patterns gets redirected to a special page, where the IP (or whatever identifier) is tracked. Once you have seen a certain number of "attacks" from a user/IP, you can ban them. The most efficient way is to automatically add a special rewrite condition to your mod_rewrite rules.
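A rough sketch of that idea in Apache mod_rewrite terms (hedged; /trap.php and the hard-coded banned address are made-up placeholders, and a real setup would read banned IPs from a RewriteMap or a firewall rule instead):

```apache
RewriteEngine On

# Refuse requests from an address that has already been flagged
# (simplified to one literal IP here).
RewriteCond %{REMOTE_ADDR} ^203\.0\.113\.10$
RewriteRule ^ - [F]

# Anything that doesn't map to a real file or directory goes to a trap
# script that logs the client IP for later banning.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ /trap.php [L]
```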
A really good first step is to try a DNS zone transfer against their DNS servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary and, upon finding a live host, checks numerically close IP addresses.
To protect a site against attacks, call upper management into a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware that not every site can be trusted not to look at its users' passwords (especially when the site offers "free" stuff).
