Is it possible for a 3rd party to reliably discern your CMS? - security

I don't know much about poking at servers, etc., but in light of the (relatively) recent WordPress security issues, I'm wondering if it's possible to obscure which CMS you might be using from the outside world.
Obviously you can rename the default login page, change error messages and the favicon (I see the Joomla one everywhere), and use a non-default template, but the sorts of things I'm wondering about are watching redirects somehow and things like that. Do most CMSs leave traces?
This is not to replace other forms of security, but more of a curious question.
Thanks for any insight!

Yes, many CMSs leave traces, like the way identifiers are formed and the hierarchy of elements, that are a plain giveaway.
This is, however, not the point. The point is that there are only a few very popular CMSs. It isn't necessary to determine which one you use: it suffices to methodically try the attack techniques for the 5 to 10 biggest CMSs against your site to get a pretty good probability of success.

In the general case, security by obscurity doesn't work. If you rely on the fact that someone doesn't know something, this means you're vulnerable to certain attacks since you blind yourself to them.
Therefore, it is dangerous to follow this path. Choose a successful CMS and then install all the available security patches right away. By using a well-known CMS, you make sure that you'll get security fixes quickly. Your biggest enemy is time; attackers can find thousands of vulnerable sites with Google and attack them simultaneously using botnets. This is a completely automated process today. Trying to hide what software you're using won't stop the bots from hacking your site, since they don't check which vulnerability to expect; they just try the top 10 of the currently most successful exploits.
[EDIT] Botnets with 10,000 bots are not uncommon today. As long as installing security patches stays this hard, people won't protect their computers, and that means criminals will have lots of resources to attack. On top of that, there are sites which sell exploits as ready-to-use plugins for bots (or sell the bots, or rent out whole botnets).
So as long as the vulnerability is still there, camouflaging your site won't help.

A lot of CMSs have ID, class-name, and structure patterns that can identify them (WordPress, for example). URLs have specific patterns too. Someone experienced with the platform, or even just some casual browsing, is enough to identify which CMS a site is using.
IMHO, you can try to change all this structure in your CMS, but if you're willing to go to all that effort, I think you should just create your own CMS.
It's more important to keep everything in your platform up to date and follow some security measures than to try to change everything that could reveal the CMS you're using.

Since this question is tagged "wordpress", you can hide your WordPress version by putting this in your theme's functions.php file:
// Stop WordPress from printing its version in the <meta name="generator"> tag
add_action('init', 'removeWPVersionInfo');
function removeWPVersionInfo() {
    remove_action('wp_head', 'wp_generator');
}
But you're still going to have the usual paths, i.e., wp-content/themes/ etc... and wp-content/plugins/ etc... in the page source, unless you figure out a way to rewrite those with .htaccess, as in the sketch below.
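A minimal sketch of that approach (the /assets/ prefix is an arbitrary name chosen here for illustration, not a WordPress convention):

# .htaccess sketch: serve wp-content/ under a neutral /assets/ prefix
RewriteEngine On
RewriteRule ^assets/(.*)$ wp-content/$1 [L]

Note that the rewrite only maps incoming requests; for the page source to actually show /assets/ URLs you would also have to make WordPress emit them (for example by redefining WP_CONTENT_URL in wp-config.php), and plugins that hardcode wp-content paths will still leak the real location.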

Related

How could I protect the users of my webpage from being tracked?

This is maybe quite a broad question, and I tried to look for other Stack Exchange sites where my question would fit better, but in the end I decided that it is still a question of a technical nature, so I am posting it here:
I recently started to think more about privacy and security, and I realized that as a web user I can only do so much to stay untracked. VPNs, (slow) Tor, privacy helpers, ad-blockers, and Firefox are just a few tools to name, but I still realize that the information I normally share (like installed add-ons, browser size, IP location, etc.) can very well be fingerprinted.
Normally, as a web developer, I am told that we should add analytics and find out more about our users to «make a better service», but I think I would like to do the opposite.
So:
Are there steps I could take, when building a website, that help the visitors stay untracked? And I don't mean «not installing Google Analytics»; I mean things like actively messing with the statistics so that my hoster's server is incapable of tracking things correctly, or similar things...
Right now I can't really think of anything, but I somehow believe that, as a person who builds bricks of the internet, I could and should be able to influence these kinds of things directly...
For now I see the obvious things:
- not using statistics services
- using HTTPS
- not using any third-party tools that might include tracking or open doors for other trackers
But this still seems to just omit the bad things; it doesn't let me do anything active...
So I would be very glad to hear your thoughts about this. (Or guide me to a place where this discussion fits better.)
Cheers
merc
As a web developer, you can only control your website.
Assuming you aren't caching any data or using cookies, users shouldn't be trackable on your website through mechanisms like third-party cookies.
Here is a good article about online tracking and how it works.
As far as I know, there isn't an effective way to actively mess with tracking statistics. Your best bet is to avoid installing libraries or tools that track your users.
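One small active step (a sketch in PHP; the header itself is standard, but whether it suits your site is your call) is to tell browsers not to reveal where your visitors came from:

<?php
// Sketch: send privacy-friendly headers before any output.
// Referrer-Policy stops the browser from telling third-party sites
// which page of yours the visitor was on when following a link or
// loading an embedded resource.
header('Referrer-Policy: no-referrer');
// And simply don't start sessions or set cookies unless a feature
// genuinely needs them - an ID set "just in case" is a tracking handle.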

Is Joomla user management safe enough to handle potential data?

How difficult/easy is it to break into Joomla backend & to access the pages which are only set to be accessible by selected Joomla users of the website? Is it safe enough to rely on Joomla's management system?
Yes, Joomla is quite a secure system by itself. You do have to be careful with third-party extensions, always track update news for all the components (including the core) you have installed, and use your judgement about updating them. Usually security issues are spotted quite quickly, and you have time before a successful attack.
Another thing to keep in mind is proactive defense with all the means you have at hand; this includes .htaccess and .htpasswd (see the sketch below). It's also a good idea to restrict FTP access to local IPs only and to use SFTP instead.
Also check out the security extensions on the JED; the ones which prevent high-level DDoS and extend the admin page's access protection might be helpful too. Usually they are simple modules or plugins.
And yes, do not forget to change the default username for the superuser, and change all passwords (FTP/superusers/MySQL/htpasswd) on a regular basis.
Follow these simple rules and you will be fine, at least most of the time.
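For the .htaccess/.htpasswd part, a minimal sketch (paths and realm name are illustrative) that puts a second password prompt in front of Joomla's /administrator directory:

# administrator/.htaccess - extra HTTP Basic auth in front of the backend
AuthType Basic
AuthName "Restricted"
AuthUserFile /home/example/.htpasswd
Require valid-user

The password file is created with htpasswd -c /home/example/.htpasswd someuser and should live outside the web root.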
While Joomla security is fairly good, you need to keep up with the patches and, as dmi3y mentioned, you need to watch the third party extensions.
When it comes to information security, nothing is ever perfect. This solution may or may not be appropriate depending on the type of information that you are looking to secure, the number of users accessing it and how you manage the user rights.

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat someone can find out the URL? Don't take the employees of the hosting company and eavesdroppers on the line into account, because it is a toy site, not something important and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way of someone finding this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string, I would suggest doing usernames and passwords "properly", as HTTP servers have been written not to leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly, where it won't matter if you get it wrong? Hopefully, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
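If you do go the username/password route, a minimal sketch of doing it "properly" in PHP (the field name and the source of the stored hash are illustrative; the hash would normally come from a database):

<?php
// Sketch: verify a submitted password against a stored hash.
// password_hash()/password_verify() handle salting and a modern
// algorithm for you - never store or compare plain-text passwords.
$storedHash = '...fetched from your user table...';

if (password_verify($_POST['password'] ?? '', $storedHash)) {
    session_start();
    session_regenerate_id(true); // guard against session fixation
    $_SESSION['is_admin'] = true;
} else {
    http_response_code(401);
    exit('Invalid credentials');
}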
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoff's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working solution, security-wise, compared to implementing a proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
If you change http into https, then at least the path and query string will not be visible to anyone sniffing on the network (though the hostname still is).
(The caveat here is that a very obscure login system can also leave interesting traces: in network captures (MITM), somewhere on the site/target that can be used for privilege elevation, or on the machine you log in from if that machine is no longer secure. Some prefer the admin login to look no different from a standard user login for exactly that reason.)
You could require that some action be taken a number of times, with some number of seconds of delay between the repetitions. Once this action-delay-action-delay-action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

How would you attack a domain to look for "unknown" resources? [closed]

Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?), so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if Google can find it, an attacker can.
A brute-force dictionary attack -- iterate common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension is known (xml, asp, html, php), which is fairly discoverable.
Monitor traffic via a sniffer -- watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given that the attacker already has network access.
Edit: Obviously, directory listing is turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) and see whether they return a 302 redirect (e.g., to a login page) or a 404 (page not found).
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be to try well-known applications, for example /webstats/ or /phpMyAdmin/, and to look for typical files an inexperienced user might have left in the production environment (e.g. phpinfo.php). And most dangerous: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended. So imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it also parses responses, so it can map the application quite quickly and then find resources even in quite deeply structured apps)
Yahoo, Google and other search engines as you stated
Robots.txt
sitemap.xml (quite common nowadays, and it has lots of stuff in it)
Web stats applications (if any are installed on the server and publicly accessible, such as /webstats/)
Brute forcing for files and directories is generally referred to as "forced browsing", which might help your Google searches. A minimal probe in that spirit appears below.
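A sketch of such a probe in PHP (the target and wordlist are made up for illustration; real tools like DirBuster do this with far bigger dictionaries):

<?php
// Sketch: probe a target for common paths, including editor backup
// files, and report anything that doesn't come back as 404.
$base = 'https://example.com';
$paths = ['admin/', 'phpMyAdmin/', 'webstats/', 'phpinfo.php',
          'robots.txt', 'sitemap.xml', 'index.php~'];

foreach ($paths as $path) {
    $ch = curl_init("$base/$path");
    curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request only
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // a 302 is itself a clue
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($code !== 404) {
        echo "$code  $base/$path\n";
    }
}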
The path to resource files like CSS, JavaScript, images, video, audio, etc can also reveal directories if they are used in public pages. CSS and JavaScript could contain telling URLs in their code as well.
If you use a CMS, note that some CMSs put a meta tag into the head of each page that indicates the page was generated by that CMS. If your CMS is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common / well-known scanners & exploits.
Try social engineering. You'll be surprised how effective it is.
Brute-force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for data inputs that get processed further and test them for XSS / SQL injection / other vulnerabilities.
Exploit weak, well-known applications within the domain.
Use phishing hacks (XSS/XSRF/HTML-META >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reengineering - what programming language is used? Are there bugs in the VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server, you could do something like the following:
All requests that do not fit the known patterns can be redirected to a special page, where the IP (or whatever else) is tracked. Once you have seen a certain number of "attacks" from it, you can ban that user/IP. The most efficient way would be to automatically add a special rewrite condition to your mod_rewrite configuration.
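A sketch of the idea (the whitelist and the trap page are placeholders):

# .htaccess sketch: funnel anything that isn't a known URL to a trap page
RewriteEngine On
# Let real content (and the trap itself) through.
RewriteCond %{REQUEST_URI} !^/(index\.php|trap\.php|css/|js/|images/)
# Everything else goes to a script that logs the client IP.
RewriteRule ^ /trap.php [L]

# Past some threshold, trap.php could append a ban for that address:
#   RewriteCond %{REMOTE_ADDR} ^203\.0\.113\.7$
#   RewriteRule ^ - [F]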
A really good first step is to try a DNS zone transfer (AXFR) against their name servers, for example with dig axfr example.com @ns1.example.com. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary, as well as, upon finding a live host, checking numerically close IP addresses.
To protect a site against attacks, call upper management into a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware of the fact that not all sites take care not to look at their users' passwords (especially when the sites offer "free" stuff).

Website administration - Integrated into main website or separate section?

From a usability perspective, is it better to integrate admin section on the main website or have a separate section to manage content?
Any thoughts are greatly appreciated.
EDIT: the application is a CMS for very non-techno friendly staff.
It depends on the project and the part you want to administer, IMHO.
For example, comments on news posts should be administered on the website itself, by showing a "delete" link button next to each comment. Otherwise the mods would have to look up the comment in the admin section, which is not very user-friendly.
But in general I think a separate admin section will usually be clearer to your client. You'd want them to see the site as a normal user would see it.
At the very least I would recommend moving all your administration files into a separate folder. That way, if you're using a platform like .NET, you can very easily control folder access through role- and user-based web.config permissions (a sketch follows below).
Having your administration files all segregated allows you to do other things easily too, like deleting them if you decide to move them to another server later. You can also exclude the folder in your robots.txt file (although by putting it in robots.txt you will be telling other people this section exists, and robots don't have to obey this file).
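In ASP.NET, for example, such a folder-level restriction might look like this (a sketch; the folder and role names are illustrative):

<!-- web.config sketch: only members of the Admins role may enter /admin -->
<location path="admin">
  <system.web>
    <authorization>
      <allow roles="Admins" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>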
Edit:
I feel like my answer missed the mark a little, considering your question. Of course in-line editing is easier than going to a separate page from a usability perspective, but whenever I hear of mixing admin users with regular users, giant alarm bells go off in my head.
I think that it depends on the function of the site and how intrusive it will be to your staff. Does it make sense for them to make changes while browsing the site and will they eventually become discouraged with your system because it forces them to inject unnecessary steps into their process? How long will the edits take? Does it make sense to show a completely different interface to administrators? I think an answer to this question requires a lot more understanding of what specific function you're trying to accomplish and will vary on a case by case basis.
I have never liked adding administration pages to the main site. It seems like too much of a risk of someone accidentally getting access to the administration portion of the site. Even if you review the security over and over, there is always the possibility that something unexpected will happen, and it is a huge mess to clean up (what was modified, who accessed what, etc.). In my opinion, keeping it as a separate site is the easiest way to go.
While there is a level of convenience in being able to login to the main part of a site as a privileged user, and then clicking on parts of a page to edit/alter it, there is also a measure of security in having it in a separate area.
I have found that having a separate section of the website devoted specifically to administrative tasks makes the tasks easier to organize and use. Just look at WordPress (especially the new 2.7 release), Drupal (a very popular CMS), and Joomla (another very popular CMS). If you would like to test these features out to see why I think the separate section is better, you can go to www.opensourcecms.com and try out both Drupal and Joomla.
From a usability standpoint, the more integrated the better, but it tends to add severely to the complexity. It's always best if you can perform an action within the context in which it occurs - lock out a bad user, trim a log that's too big, etc. However, since these actions tend to have significant side effects, the security aspect trumps this a lot of the time, out of fear.
I think you need to look at doing a risk assessment with regards to integrating the administration capabilities right into the application.
What would be the implications for the system if someone were able to escalate privileges and gain access to the admin functions, or if every user were locked out maliciously? Damage to the site, the reputation, the SLA, etc.?
What destructive functions can an admin perform from this section? Delete lots of data? Crash the app? Alter costs that have a material impact on users/customers?
Are the admin functions integrated into the app, or isolated in a specific admin section?
Does the application have a public face or is it an intranet that is assumed secure?
I personally have never felt at ease integrating the admin section with the site, out of fear of a security breach caused by my own ineptness or by something beyond my control, like bad defaults or an unpatched exploit. However, when the client is writing the cheque, I tend to do what they feel is best.
I come from the school of Usability that says "minimise guess work". So, what kind of information did you get from your user group sessions?
Remember, you're not (and we're not) typical users of your system. Anything we say is going to be a guess. What's worse is that any opinion shown here is likely to be ill informed for your situation (but probably awesome at what they do). The only way you'll have an informed opinion on what to do is by going out and asking your users in intelligent ways.
Because this is a security issue, some of your user mix should be people who specialise in security. Just because something is easier to use doesn't mean it's the most appropriate.
