Are Apache Realms secure? - linux

I have been told by our senior developer that Apache Realms (using either .htaccess files or the httpd config) are insecure and therefore bad practice to use in any sort of admin backend system. Is this true?
The way I see it, people can't see any .htxxx files under Apache anyway, so how else can someone get around a Realm? Of course, if they're inside your server then they can do as they please, but at that point any security is insecure anyway, right?
Are there any ways of hacking or bypassing a Realm? Are they insecure to use?
If so, examples would be useful.

Here are some general weaknesses of HTTP Basic authentication, in no particular order and with no particular endorsement of them.
Using Apache as your security layer means ops people are responsible for a critical layer instead of developers. This might mean bypassing code reviews and other measures, depending on how your departments are set up.
Using realms means you are exposed to various well-known attacks, whereas a custom PHP login solution might be more obscure and less likely to be probed for (maybe).
Apache realms rely on specific well-known files with well-known formats, so any flaw that lets an attacker add or edit files on the system might compromise your login.
Apache realms may also be attacked through shell commands, so any flaw involving exec() or system() might compromise your login.
Apache realms are vulnerable to brute-force attacks, as they do not support CAPTCHAs or rate limiting (a custom application-level login can add these; see the sketch below).
Theft of your password file means a potential compromise of all your passwords, which may cause a lot of headaches if you have many users.
Having a sensitive password file around might complicate deployment, backups, and so on.
All of the above issues can be dealt with, and as with most security arguments there is probably no single right answer; most likely your senior dev has reasons you are not privy to.
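As an illustration of the rate-limiting point above, here is a minimal sketch of the kind of custom application-level login a PHP app might use instead of a realm. It is an assumption-laden example, not a production implementation: the attempt-log path, the thresholds, and the verify_user() stand-in are all made up for illustration.

```php
<?php
// login.php - hypothetical application-level login with naive rate limiting.
// Everything here (paths, limits, verify_user) is illustrative only.
session_start();

const MAX_ATTEMPTS = 10;
const WINDOW_SECS  = 900;                  // 15 minutes
$log = __DIR__ . '/login_attempts.json';   // keep this outside the web root in practice

$ip       = $_SERVER['REMOTE_ADDR'];
$attempts = is_file($log) ? (json_decode(file_get_contents($log), true) ?: []) : [];

// Drop entries older than the window, then see how many failures this IP has left.
$attempts[$ip] = array_values(array_filter(
    $attempts[$ip] ?? [],
    fn ($t) => $t > time() - WINDOW_SECS
));
if (count($attempts[$ip]) >= MAX_ATTEMPTS) {
    http_response_code(429);
    exit('Too many failed logins, try again later.');
}

// Stand-in for your real check against a users table (e.g. password_verify on a stored hash).
function verify_user(string $user, string $pass): bool {
    $demoHash = password_hash('change-me', PASSWORD_DEFAULT);   // demo hash only
    return $user === 'admin' && password_verify($pass, $demoHash);
}

if (verify_user($_POST['username'] ?? '', $_POST['password'] ?? '')) {
    session_regenerate_id(true);           // avoid session fixation
    $_SESSION['authenticated'] = true;
    header('Location: /admin/');
    exit;
}

// Record the failure so the next request sees it.
$attempts[$ip][] = time();
file_put_contents($log, json_encode($attempts), LOCK_EX);
http_response_code(401);
exit('Invalid credentials.');
```

Whether something like this is actually safer than a realm depends entirely on how carefully it is written and reviewed, which is exactly the trade-off described above.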

Related

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where should I look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over the years and still be hacked one day (I speak from my own experience), or you can pay a lot for a security solution and never be hacked. So you always need to weigh the effort against the impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not via HTTP: for example, through an SSH port with a weak password, by someone taking your computer or hard disk, etc. In some cases you need to properly encrypt the volume.)
1 and 4 are not specific to Prolog, but 4 is the only one that has anything specific to the local-server case.
Protecting at the HTTP protocol level means not allowing requests that could take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy such as nginx, which can prevent attacks at this level, including some types of DoS. The browser will then contact nginx, and nginx will forward the request to your server only if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL certificate and enable SSL (HTTPS) on your reverse proxy server, not on your SWI-Prolog server. HTTPS will encrypt all information on the wire, and the proxy will talk to SWI-Prolog over plain HTTP.
Think about authorization. There are methods that can be broken very easily. You need to study this topic; there is a lot of information available. I think it is the most important part.
The application input problem: the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to sanitize all possible injections. Take some existing code and rewrite it in Prolog.
Also, test all input fields with very long strings, different character sets, etc.
As you can see, security is not so easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically interested in getting your information, all of the methods mentioned apply, but that is a rare case. Most often hackers just scan the Internet and try to apply known hacks to every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of a hacker being interested in SWI-Prolog internals is extremely low (the hacker needs to study the server code well to find a door in).
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords made of combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason you shouldn't give your users access to all information; that protection should be part of the application-level design.
The concerns specific to a local server are a good firewall, a proper network setup, and encryption of the hard drive partition in case your local server can be stolen by the "hacker".
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort: mainly you need to check your router/firewall setup and the fourth door in my list.
If you have a very limited number of known users, you can simply have them use a VPN and not protect your server as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog-injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing-complete code (or code it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd also point out that the pldoc server responds only on localhost by default.
- Anne Ogborn
I think the SWI-Prolog HTTP package is an excellent choice. Jan Wielemaker has put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have Prolog's power at your fingertips...
Of course, you need to properly manage HTTP access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this topic, in which Anne Ogborn shares her experience...

PHP user management systems

I'm on the last steps before opening my website, but the one thing that has driven me crazy is the PHP user management. I found a lot of resources about building these systems, and I believe I could write one my own way. The thing is that when it comes to security I freak out about what to go with. For example, when it comes to sending sensitive information over SSL, some people suggest making sure the info is encrypted in the registration form so that an attacker can't grab it, and others suggest making sure that debugging messages don't show when an error happens so that an attacker can't retrace the links, etc.
Now, as I've read here and there that MD5 is not safe anymore, I'm wondering how I should hash new users' passwords and so on... I found a link to some programmers who already offer user management systems (CodeCanyon), but I'm not sure if they are good enough, since security is a priority for me.
So now, what are the security measures that I have to focus on?
Are there any resources related to that?
Thanks,
You don't have to (you shouldn't) choose between the different things people tell you to implement. Good security is always layered, meaning that you implement as many protections as you can. This approach has multiple purposes. Each layer can prevent different attacks. Each layer can prevent attackers with different experience. Each layer can increase the time needed for an attacker.
Here are some tips useful for authentication systems.
Don't show debugging outputs
Don't use MD5 hashes; SHA-2 or, even better, bcrypt is a much better choice
Use salts when storing passwords
Use nonces on your forms (one time tokens)
Always require SSL encryption between server and client
When accessing your database on the server, make sure that information leakage and client-side manipulation are not possible (e.g. avoid injection attacks; with database drivers, use prepared statements, etc.)
Make sure all failed logins (no matter what the reason) take the same amount of time to prevent timing attacks
When a logged-in user starts a risky operation (changing their password, making a payment, etc.), re-authenticate them
Never store passwords in cleartext, not ever, not anywhere
Require a minimum complexity for the password
Secure your PHP sessions (another large topic, worth its own discussion; see the sketch below)
As you can see, there is a lot you can do (and others will probably tell you even more); what you really should do depends on the risks you are willing to accept. But never rely on a single security measure; always take a layered approach.
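On the session point in the list above, here is a minimal hardening sketch. It assumes the site is served exclusively over HTTPS, and the on_login() helper is only an illustration.

```php
<?php
// Hypothetical session bootstrap; flag values assume an HTTPS-only site.
session_set_cookie_params([
    'lifetime' => 0,        // session cookie, expires with the browser
    'path'     => '/',
    'secure'   => true,     // never send the cookie over plain HTTP
    'httponly' => true,     // not readable from JavaScript, limiting XSS impact
    'samesite' => 'Lax',    // basic CSRF mitigation at the cookie level
]);
ini_set('session.use_strict_mode', '1');   // reject uninitialized session IDs
session_start();

// After any privilege change (login, logout, password change), issue a fresh ID
// so a fixated or leaked session ID becomes useless.
function on_login(int $userId): void {
    session_regenerate_id(true);
    $_SESSION['user_id'] = $userId;
}
```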
Answering your direct question: it has been proven that MD5 has collisions, and there are rainbow tables floating around (see Wikipedia). PHP has quite a few hash functions available, each with different advantages and disadvantages; please also see the comment section on php.net.
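For example, a small sketch of what replacing md5() looks like with PHP's built-in password API (bcrypt under the hood, available since PHP 5.5); the cost factor and variable names are just examples.

```php
<?php
// Hypothetical registration/login fragments using password_hash()/password_verify().
$password = $_POST['password'] ?? '';

// Registration: the salt is generated and embedded in the hash automatically.
$hash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);
$storedHash = $hash;   // in a real app this would be written to, and later read from, the users table

// Login: compare the submitted password against the stored hash.
if (password_verify($password, $storedHash)) {
    // credentials OK
}

// Optionally upgrade old hashes transparently when the cost (or algorithm) changes.
if (password_needs_rehash($storedHash, PASSWORD_BCRYPT, ['cost' => 12])) {
    $storedHash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);
    // ...update the stored hash in the database...
}
```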
Concerning general web application security, I'd recommend you take a look at the OWASP project, which is about making web applications more secure. A good start would be the Top Ten security vulnerabilities ("Top Ten" in the blue box).
Use a strong hash (SHA-1 at minimum; bcrypt is better) for storing passwords, prevent SQL injection, and filter XSS in input fields.
Also implement session hijacking and session fixation prevention.
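On the injection and XSS points, here is a minimal sketch using a PDO prepared statement for the query and htmlspecialchars() for output; the DSN, credentials, and table layout are made-up examples.

```php
<?php
// Hypothetical lookup: user input is bound as data, so it can never change
// the structure of the SQL statement.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false,   // use real server-side prepares
]);

$stmt = $pdo->prepare('SELECT id, username, email FROM users WHERE username = :name');
$stmt->execute([':name' => $_GET['username'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);

// Escape anything user-controlled before echoing it into HTML (the XSS half).
if ($user) {
    echo htmlspecialchars($user['username'], ENT_QUOTES, 'UTF-8');
}
```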
First, you should send your data to the server over SSL (TLS), which encrypts it in transit. You should also use CSRF protection for any form you send to the server.
When you have implemented your functions and they work, you should try to hack your site yourself. Try to inject SQL and JavaScript through the forms, try to manipulate the data after the form has been sent, and try to produce errors that get written to your PHP error log, which could even be executed if your server settings are weak. (http://en.wikipedia.org/wiki/Hardening_(computing))
When you store passwords in your database, use a salted hash function; if anyone manages to hack your database and get the hashes, they will not be able to reverse them without the salt.
You will find plenty of information about all these techniques via Google.
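As a sketch of the CSRF protection mentioned above, one common pattern is a per-session token (the "nonce" from the earlier answer) that every state-changing form has to echo back; the helper names here are made up.

```php
<?php
// csrf.php - hypothetical helpers for a per-session CSRF token.
session_start();

// Call when rendering a form: emit a hidden field carrying the session's token.
function csrf_field(): string {
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }
    return '<input type="hidden" name="csrf_token" value="'
        . htmlspecialchars($_SESSION['csrf_token'], ENT_QUOTES, 'UTF-8') . '">';
}

// Call at the top of every POST handler: reject requests whose token doesn't match.
// hash_equals() compares in constant time.
function csrf_check(): void {
    $sent = $_POST['csrf_token'] ?? '';
    if (empty($_SESSION['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $sent)) {
        http_response_code(403);
        exit('Invalid CSRF token.');
    }
}
```

A form would then include the output of csrf_field(), and its handler would call csrf_check() before doing anything else.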

Website hacking - Why is it always possible?

We know that every executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to, they will crack it; it is just a question of time.
What about websites? Can we say that a website can be completely safe from hacker attacks (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes, it is always possible. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out
May we say that website can be completely safe from attacks of hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter or compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
The web as a shared resource - websites are useful only so long as they are accessible. Render the website inaccessible and you've broken it. Denial-of-service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away: big sites find ways to distribute the load, hackers find ways to deluge it.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure - there's too much unknown code. Taking the host aside (the server, network protocols, OS, and maybe database), there's still all the great new libraries in Java EE and .Net. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, a website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user and for configuration on the one hand, and a higher risk of vulnerability on the other. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
Websites suffer greatly from injection and cross site scripting attacks
Cross-site scripting carried out on websites accounted for roughly 80% of all documented security vulnerabilities as of 2007
Also, part of a website (in some websites a great deal) is sent to the client in the form of CSS, HTML, and JavaScript, which is open for inspection by anyone.
Not to nitpick, but your assumption of "good hosting" does not mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or... I'll blow up the power plants that power your site, or do some sort of social engineering, and large-scale DDoS attacks would quite likely be effective too, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (which refers to another website, but on Schneier's blog there's a lot of interesting readings on the issue).
Assuming the server itself isn't compromised and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It works!" page.
Saying 'completely secure' is a bad sign, as it implies two things:
There has not been a proper threat analysis, because 'secure enough' would be the correct term
Since security is always a tradeoff, a system that is truly completely secure will have abysmal usability, and the site will be a huge resource hog, because security has been taken to insane levels
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up leading to situations that compromise the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that had been put into place are for naught). Hence, most of the time a balance gets struck: places ask that you put a number in your password and tell you not to do anything stupid like tell it to other people.
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
May we say that website can be completely safe from attacks of hackers (we assume that hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to ideas from computability theory that arise from considering the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit, if you could say with certainty that you'd devised a way to programmatically determine whether any particular program was secure, you would be close to disproving the undecidability of the halting problem for the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to prove securability, since the problem of security reduces to the halting problem. Even for finite machines you might be able to enumerate all of the states of the program, but Minsky would tell us that the time required to build a complete state tree for even simplistic modern-day machines and web servers would be huge. You probably know a lot about a specific piece of code, but as soon as you change or update the code, a complete retest is required. Fundamentally this is interesting because it all boils back to the concepts of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems (http://en.wikipedia.org/wiki/Automated_theorem_proving).
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security.

HTTP Authentication Security

My client needs a simple database CMS sooner than I can tackle the ins and outs and security flaws of register_globals, SQL injection, and cookie filtering.
I installed phpMyEdit and secured the edit page with .htaccess. For the security experts, does this provide at least a moderate level of security?
It is a moderate level of security, yes.
The attack you need to be aware of is a brute-force attack where a bad guy tries different username and password combinations over and over. To fix this you can lock a user out after n (10 is reasonable) failed login attempts.
There are lots of ways to configure .htaccess files as far as valid users go, but depending on the source you are using, be extra careful about any default or guest-type users that your .htaccess would let in.
It all comes down to things that no one here has a way of knowing, like whether the passwords are secure or whether you've bungled something up. If you want assurance that HTTP authentication works, then yes, it does work. There's also more than one way you can set it up, so just calling it "htaccess security" is ambiguous. All in all, simply make sure you haven't left any parts accessible to the public and that the passwords aren't "123" or "qwerty", and you'll be fine (probably).
I also recommend IP-restricting your protected admin directories or files.
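On the IP-restriction point, this is usually expressed in the Apache configuration, but a tiny application-level sketch looks like this; the address list is made up for the example.

```php
<?php
// Hypothetical allowlist check for an admin area; addresses are examples only.
$allowed = ['127.0.0.1', '192.168.1.10'];

if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
    http_response_code(403);
    exit('Access denied.');
}
```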
Also, I can't say I'm comfortable relying on off-the-shelf automated programs; you just need more practice. You have to be aware of the most commonly used hacking tricks, so read more and more about SQL injection and so forth...
Good luck

Can an Apache-served pure-HTML website be hacked?

Assume you are running a pure-HTML website on Apache. Just serving static files, nothing dynamic, nothing fancy.
Also assume all passwords are safe, and no social-hacking (i.e. phishing attacks, etc...)
Can a website of this nature basically be hacked? Can the server become compromised? Are there any examples for this?
Yes, such a server can become compromised. A very common vector, sadly, is FTPing to the server over an insecure wifi connection. Anyone listening closely can pick your password out of the air. (It's fun to be at a tech conference and have your password displayed on a screen for all to see, along with the other fools that sent their credentials in the clear over wifi.)
Another common vector is using a simple password and having it fall to a dictionary attack.
Sure it can be compromised, through security flaws in Apache itself. While it's true that adding more layers (like php, sql, etc) onto the server itself increases the potential for vulnerability, nothing is infallible.
Apache, however, is a very well-known open-source program, and the community does a good job of flushing out bugs like this.
Short answer: any internet-connected device has the potential to be "hacked"
You might be curious to read an article or two about "Securing Apache 2".
It is reasonable to say it is secure, but you may want to take note that Apache ships with some modules already enabled.
Also, the goal of securing Apache should not only be to secure the server instance itself, but also to sandbox the server in such a way that you limit the damage any intrusion could do.
All of this information is of course contingent on the web server being the only exposed component on the box.
Anything "hackable" about the website itself would have to be a hack in Apache itself. If so, you've got bigger problems than just one website.
So, on a practical level, no, given the "all passwords are safe" condition.
In theory it could be hacked, but in theory anything can be hacked. In practice, no, because Apache hasn't had any important vulnerabilities for several years.

Resources