Isn't a password a form of security through obscurity?

I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
Is it just a matter of the level of obscurity? (i.e. a good password well salted and hashed is impractical to break)
Note I'm not asking about the process of storing passwords (assume they are properly hashed and salted). I'm asking about the whole idea of using a password: a piece of information which, if known, could compromise a person's account.
Or am I misunderstanding what security through obscurity means? That's what I take it to mean: there exists some piece of information which, if known, would compromise a system (the system here being whatever the password is meant to protect).

You are right in that a password is only secure if it is obscure. But the "obscure" part of "security through obscurity" refers to obscurity of the system. With passwords, the system is completely open -- you know the exact method that is used to unlock it, but the key, which is not part of the system, is the unknown.
If we were to generalize, then yes, all security is by means of obscurity. However, the phrase "security through obscurity" does not refer to this.
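To make the distinction concrete, here is a minimal PHP sketch (illustrative only; the stored hash would normally live in a user database). Every line of it can be published without weakening the system, because the only secret is the password the user types:

<?php
// The verification procedure is completely open: password_hash() and
// password_verify() implement a documented algorithm (bcrypt by default).
$storedHash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);

function authenticate(string $submitted, string $storedHash): bool
{
    return password_verify($submitted, $storedHash);
}

var_dump(authenticate('correct horse battery staple', $storedHash)); // bool(true)
var_dump(authenticate('letmein', $storedHash));                      // bool(false)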

Maybe it's easier to understand what Security-by-Obscurity is about by looking at something that is in some sense the opposite: Auguste Kerckhoffs's second principle (now usually known simply as Kerckhoffs's Principle), formulated in 1883 in two articles on La Cryptographie Militaire:
[The cipher] must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience.
Claude Shannon reformulated it as:
The enemy knows the system.
And Eric Raymond as:
Any security software design that doesn't assume the enemy possesses the source code is already untrustworthy.
An alternative formulation of that principle is that
The security of the system must depend only on the secrecy of the key, not the secrecy of the system.
So, we can simply define Security-by-Obscurity as any system that does not follow that principle, and thus we have cleverly defined the password out of it :-)
There are two basic reasons why this Principle makes sense:
Keys tend to be much smaller than systems, therefore they are easier to protect.
Compromising the secrecy of a key compromises only the communications protected by that key; compromising the secrecy of the system compromises all communications.
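To illustrate the principle, here is a hedged PHP sketch (not a complete design): the cipher, the IV and the code can all be public, and only $key must stay secret. A real system would also authenticate the ciphertext, e.g. with AES-GCM or an HMAC.

<?php
$key = random_bytes(32);   // the only secret in the system
$iv  = random_bytes(16);   // public, but unique per message

$ciphertext = openssl_encrypt('attack at dawn', 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
$plaintext  = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);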
Note that it doesn't say anywhere that you can't keep your system secret. It just says you shouldn't depend on it. You may use Security-by-Obscurity as an additional line of defense, you just shouldn't assume that it actually works.
In general, however, cryptography is hard, and cryptographic systems are complex, so you pretty much need to publish them to get as many eyeballs on them as possible. There are only very few organizations on this planet that actually have the necessary smart people to design cryptographic systems in secrecy: in the past, when mathematicians were patriots and governments were rich, those were the NSA and the KGB; right now it's IBM; and a couple of years from now it will be the Chinese secret service and international crime syndicates.

No. Let's look at a definition of security through obscurity from Wikipedia:
a pejorative referring to a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security.
The phrase refers to the code itself, or the design of a system. Passwords on the other hand are something a user has to identify themselves with. It's a type of authentication token, not a code implementation.

I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
In order to answer this question, we really need to consider why "security through obscurity" is considered to be flawed.
The big reason that security through obscurity is flawed is that it's actually really easy to reverse-engineer a system based on its interactions with the outside world. If your computer system is sitting somewhere, happily authenticating users, I can just watch what packets it sends, watching for patterns, and figure out how it works. And then it's straightforward to attack it.
In contrast, if you're using a proper open cryptographic protocol, no amount of wire-sniffing will let me steal the password.
That's basically why obscuring a system is flawed, but obscuring key material (assuming a secure system) is not. Security through obscurity will never in and of itself secure a flawed system, and the only way to know your system isn't flawed is to have it vetted publicly.

Passwords are a form of authentication. They are meant to establish that whoever is interacting with the system is who they claim to be.
Here is a nice model of the different aspects of security (I had to memorize this in my security course)
http://en.wikipedia.org/wiki/File:Mccumber.jpg
Passwords fall under the confidentiality aspect of security.
While probably the weakest of the authentication factors (something you know, something you have, something you are), I would still say that passwords do not constitute security through obscurity. With a password, you are not trying to mask a facet of the system to keep it hidden.
Edit:
If you follow the reasoning that passwords are also a means of "security through obscurity" to its logical end, then all security, including things like encryption, is security through obscurity. By that standard, the only system not secured through obscurity would be one encased in concrete and sunk to the ocean floor, with no one ever allowed to use it. That reasoning is not conducive to getting anything done. So we use "security through obscurity" to describe practices that rely on an attacker not understanding the implementation of a system. With passwords, the implementation is known.

No, they are not.
Security through obscurity means that the process that provides the access protection is only secure because its exact details are not publicly available.
Publicly available here means that all the details of the process are known to everyone, except, of course, a randomized portion that constitutes the key. Note that the range from which keys can be chosen is still known to everyone.
The effect of this is that it can be proven that the only part that needs to be secret is the password itself, and not other parts of the process. Or conversely, that the only way to gain access to the system is by somehow getting at the key.
In a system that relies on the obscurity of its details, you cannot have such an assurance. It might well be that anyone who finds out what algorithm you are using can find a back door into it (i.e. a way to access the system without the password).

The short answer is no. Passwords by themselves are not security by obscurity.
A password can be thought of as analogous to the key in cryptography. If you have the key you can decode the message. If you do not have the key you can not. Similarly, if you have the right password you can authenticate. If you do not, you can not.
The obscurity part in security by obscurity refers to how the scheme is implemented. For example, if passwords were stored somewhere in the clear and their precise location was kept a secret that would be security by obscurity. Let's say I'm designing the password system for a new OS and I put the password file in /etc/guy/magical_location and name it "cooking.txt" where anyone could access it and read all the passwords if they knew where it was. Someone will eventually figure out (e.g. by reverse engineering) that the passwords are there and then all the OS installations in the world will be broken because I relied on obscurity for security.
Another example is if the passwords are stored where everyone can access them but encrypted with a "secret" key. Anyone who has access to the key could get at the passwords. That would also be security by obscurity.
The "obscurity" refers to some part of the algorithm or scheme that is kept secret where if it was public knowledge the scheme could be compromised. It does not refer to needing a key or a password.

Yes, you are correct and it is a very important realisation you are having.
Too many people say "security through obscurity" without having any idea of what they mean. You are correct that all that matters is the level of "complexity" involved in defeating any given implementation. Usernames and passwords are just one realisation of this: they greatly increase the amount of information required to gain access.
One important thing to keep in mind in any security analysis is the threat model: Who are you worried about, why, and how are you preventing them? What aren't you covering? etc. Keep up the analytical and critical thinking; it will serve you well.

Related

What if security through obscurity fails?

I know that security through obscurity means that a system of any sort can be secure as long as nobody outside of its implementation group is allowed to find out anything about its internal mechanisms.
But what if someone does find it? Are there still mechanisms built into the system that protect it if anyone tries to attack it? Are there any examples of systems using security through obscurity?
Security through obscurity refers to systems which are only secure inasmuch as their design and implementation remain a secret. This is like burying a treasure chest on a deserted island. If somebody finds a map, it's just a matter of time before your treasure gets dug up. It's typically hard to keep the design and implementation details secure when multiple parties will need access to the thing in order to use it. For personal standalone systems which do not require frequent access, security through obscurity is not a bad choice.
However, personal standalone systems which do not require frequent access are not the kind of system computer security typically considers. Computer security typically concerns itself mostly with multi-user connected systems which are accessed all the time. In such cases, effectively concealing all the relevant implementation details is prohibitively difficult. In the treasure chest analogy, imagine if there were a thousand people who required regular access to the treasure. Aside from it being difficult to access the treasure, any bad actor has lots of opportunities to steal the map, look at the map, follow somebody to the treasure, or just trick somebody into divulging the location.
So, what's the alternative? Imagine a safety deposit box in a vault at a large bank in the middle of a city. Everybody knows (hypothetically) everything about the security setup: there are alarm systems, cameras, guards, police, the vault itself, and then locks on the boxes. All of this can be known, and still many of the attack vectors which would succeed against the buried treasure would fail in this case: stealing one key would not be enough and, even if all keys were stolen, you'd still have to defeat surveillance, guards and police. Furthermore, access to the contents of the safety deposit box is relatively more convenient for end users: properly authenticated users (has ID, has key, etc.) just need to go to the (local) bank and present credentials and get access.
The most secure systems - imagine defense systems - use a combination of these techniques. Design and implementation details are not publicly known and are actively protected. However, knowing these details does not give you a map to the treasure, just an understanding of the various other systems protecting the loot. Governments use obscurity because no (useful) system is impervious to all attacks.

What are the security implications (if any) of allowing any password?

As Ars and many others have pointed out, even large organizations with extensive, presumably intelligent IT departments employ rules for password creation that are not always in the best interest of security.
Sometimes, though, it is still hard to understand why some applications restrict certain characters but allow others, and why different applications have rules that contradict one another. All of this makes me feel like I am missing something.
I have been taught to never trust raw input from a user, so it feels wrong to allow input from the user and not validate it, however I am conflicted. In this case I feel there is a strong argument for allowing the user to enter anything, and not validate it (because there would be no need).
For illustrative reasons, take the following hypothetical user registration system:
A Hypothetical User Registration System
A user navigates to a form for registering an account with a website.
Among other inputs, the user is prompted for a password. The user is told they may enter any desired password. There are no rules. It can be absolutely any string of characters.
Upon receiving the submitted form (or any request, legitimate or malicious), the server does not validate the password field: it simply hashes whatever was given.
The hash is stored in a database for use later.
For instance, in PHP:
// hash whatever the user submitted; only the hash is ever stored
password_hash($_POST['password'], PASSWORD_DEFAULT);
If there are any, what are the security implications of this system?
Is there any kind of specially-crafted request that someone could possibly submit that would cause unintended consequences in this system? If so, please provide examples.
Would some server-side languages be susceptible to attacks from this, but others not? Which ones?
Again, this system is illustrative. Please do not argue what merits or downfalls a system like this may have in terms of the security of its users' passwords themselves, only what security implications it may have on the system itself.
The most common reason for preventing certain characters is that the developers don't know how to correctly handle passwords in whatever language they are working in, and rather than learn to do so, they try to limit what data they accept (often incorrectly). Alternately, they rely on third party components that handle passwords incorrectly and believe that they are powerless to fix this. (This is described in the article you link.)
If the code is precisely as you describe, with no fancy JavaScript in the middle touching the input, no middleware unpacking data structures, no logging systems writing passwords, no writing raw passwords into the database, no SQL queries built up as strings that might include the password, no unhashed passwords in the database, no incorrectly encoded strings in URLs, etc., then yeah, it's great. It's almost perfect (I'd much rather you apply some hashing before posting to the server, but there are some arguments there either way).
In modern applications, developers slap together all of the things mentioned in the last paragraph, and then apply a layer of hope and prayer and then scramble to patch when someone publishes an exploit. Everything in the last paragraph is horrible, and common.
So if you see restrictions on what characters are acceptable, it means the password handling system was either built poorly or the developers don't trust it. That is common, so restrictions are common.
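For concreteness, here is a hedged sketch of the end-to-end flow the question describes, with nothing in the path that needs character restrictions. It assumes a PDO connection in $pdo and a users table with username and password_hash columns (names are illustrative):

<?php
// Registration: accept absolutely any string; no validation, no character rules.
// Note: bcrypt, the current PASSWORD_DEFAULT, only uses the first 72 bytes.
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
$stmt = $pdo->prepare('INSERT INTO users (username, password_hash) VALUES (?, ?)');
$stmt->execute([$_POST['username'], $hash]);   // parameterized: no SQL built from the password

// Login: compare the submitted string against the stored hash.
$stmt = $pdo->prepare('SELECT password_hash FROM users WHERE username = ?');
$stmt->execute([$_POST['username']]);
$row = $stmt->fetch();
$ok  = $row && password_verify($_POST['password'], $row['password_hash']);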

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
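A hedged miniature of the idea in PHP (function and file names are illustrative): instead of handing code a path plus an ACL check, hand it an already-opened handle. The callee can only do what the handle permits; it has no ambient authority to open anything else.

<?php
function appendAuditLine($logHandle, string $line): void
{
    // This function can write to the log it was given -- and nothing else.
    fwrite($logHandle, $line . PHP_EOL);
}

$log = fopen('/var/log/myapp/audit.log', 'ab');   // authority is granted here, once
appendAuditLine($log, 'user alice logged in');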
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
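In PHP terms, a small illustrative sketch of what "reuse proven code" looks like in practice (values are placeholders):

<?php
$sessionToken = bin2hex(random_bytes(32));                            // CSPRNG, not rand()/mt_rand()
$storedHash   = password_hash('example-password', PASSWORD_DEFAULT);  // maintained KDF, not md5()
$macMatches   = hash_equals(
    hash_hmac('sha256', 'message', 'shared-key'),
    hash_hmac('sha256', 'message', 'shared-key')
);                                                                     // constant-time compare, not ==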
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between components (e.g., separate processes) so that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions behind crypto building blocks is important. For example, stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (WEP being a well-known example).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
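For illustration (the numbers are invented): if a laptop theft is judged to have Pe = 0.05 per year and would cause H = $40,000 of harm in cleanup and liability, then R = 0.05 × $40,000 = $2,000 per year, which gives a rough ceiling on what it is worth spending annually to mitigate that particular threat.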
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.

Is using a GUID security though obscurity?

If you use a GUID as a password for a publicly facing application as a means to gain access to a service, is this security through obscurity?
I think the obvious answer is yes, but the level of security seems very high to me, since the chances of guessing a GUID are very, very low. Correct?
Update
The GUID will be stored on a device which, when plugged in, will send the GUID over an SSL connection.
Maybe I could generate a GUID, then do an AES-128 encryption of the GUID and store that value on the device?
In my opinion, the answer is no.
If you set a password to be a newly created GUID, then it is a rather safe password: more than 8 characters, contains numbers, letters and special characters, etc.
Of course, in a GUID the positions of '{', '}' and '-' are known, as is the fact that all letters are in uppercase. So as long as nobody knows that you use a GUID, the password is harder to crack. Once the attacker knows that he is seeking a GUID, the effort needed for a brute-force attack is reduced. From that point of view, it is security by obscurity.
Still, consider this GUID: {91626979-FB5C-439A-BBA3-7715ED647504}. If you assume the attacker knows the positions of the special characters, his problem is reduced to finding the string 91626979FB5C439ABBA37715ED647504. Brute-forcing a 32-character password? That will only happen in your lifetime if someone invents a working quantum computer.
This is security by using a very, very long password, not by obscurity.
EDIT:
After reading the answer of Denis Hennessy, I have to revise my answer. If the GUID really contains this info (specifically the MAC address) in a decryptable form, an attacker can reduce the keyspace considerably. In that case it would definitely be security by obscurity, read: rather insecure.
And of course MusiGenesis is right: there are lots of tools that generate (pseudo) random passwords. My recommendation is to stick with one of those.
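If the platform allows it, that recommendation reduces to a one-liner; a hedged PHP example:

<?php
// All 128 bits come from a cryptographic RNG. By comparison, a version-4 GUID
// carries at most 122 random bits, and other GUID versions far fewer.
$secret = bin2hex(random_bytes(16));   // 32 hex characters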
Actually, using a GUID as a password is not a good idea (compared to coming up with a truly random password of equivalent length). Although it appears long, it's actually only 16 bytes, which typically include the user's MAC address, the date/time and a smallish random element. If a hacker can determine the user's MAC address, it's relatively straightforward to guess the possible GUIDs that machine would generate.
If one can observe the GUID being sent (e.g. via HTTP Auth), then it's irrelevant how guessable it is.
Some sites, like Flickr, employ an API key and a secret key. The secret key is used to create a signature via MD5 hash. The server calculates the same signature using the secret key and does auth that way. The secret never needs to go over the network.
A GUID is designed to prevent accidental collisions, not intentional ones. In other words, you are unlikely to guess a GUID by chance, but it is not necessarily hard to find out if you really want to.
At first I was ready to give an unqualified yes, but it got me thinking about whether that meant that ALL password based authentication is security by obscurity. In the strictest sense I suppose it is, in a way.
However, assuming you have users logging in with passwords and you aren't posting that GUID anywhere, I think the risks are outweighed by the less secure passwords the users have, or even the sysadmin password.
If you had said the URL to an admin page that wasn't otherwise protected included a hard coded GUID, then the answer would be a definite yes.
I agree with most other people that it is better than a weak password but it would be preferable to use something stronger like a certificate exchange that is meant for this sort of authentication (if the device supports it).
I would also ensure that you do some sort of mutual authentication (i.e., have the device verify the server's SSL certificate to ensure it is the one you expect). It would be easy enough for me to grab the device, plug it into my system, read the GUID off of it, and then replay it back to the target system.
In general, you introduce security vulnerabilities if you embed the key in your device, or if you transmit the key during authentication. It doesn't matter whether the key is a GUID or a password, as the only cryptographic difference is in their length and randomness. In either case, an attacker can either scan your product's memory or eavesdrop on the authentication process.
You can mitigate this in several ways, each of which ultimately boils down to increasing the obscurity (or level of protection) of the key:
Encrypt the key before you store it. Of course, now you need to store that encryption key, but you've introduced a level of indirection.
Calculate the key, rather than storing it. Now an attacker must reverse-engineer your algorithm, rather than simply searching for a key.
Transmit a hash of the key during authentication, rather than the key itself, as others have suggested, or use challenge-response authentication. Both of these methods prevent the key from being transmitted in plaintext. SSL will also accomplish this, but then you're depending on the user to maintain a proper implementation; you've lost control over the security.
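As a hedged sketch of the third option (names are illustrative), a simple HMAC challenge-response keeps the device key off the wire entirely:

<?php
// Server: issue a fresh random challenge for each attempt.
$challenge = bin2hex(random_bytes(16));
// ...send $challenge to the device...

// Device: prove possession of its key without revealing it.
$deviceKey = 'per-device-secret-provisioned-at-manufacture';   // hypothetical
$response  = hash_hmac('sha256', $challenge, $deviceKey);
// ...send $response back...

// Server: recompute and compare in constant time.
$expected = hash_hmac('sha256', $challenge, $deviceKey);
$granted  = hash_equals($expected, $response);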
As always, whenever you're addressing security, you need to consider various tradeoffs. What is the likelihood of an attack? What is the risk if an attack is successful? What is the cost of security in terms of development, support, and usability?
A good solution is usually a compromise that addresses each of these factors satisfactorily. Good luck!
It's better than using "password" as the password, at least.
I don't think a GUID would be considered a strong password, and there are lots of strong password generators out there that you could use just as easily as Guid.NewGuid().
It really depends on what you want to do. Using a GUID as password is not in itself security through obscurity (but beware the fact that a GUID contains many guessable bits out of the 128 total: there is a timestamp, some include the MAC address of the machine that generated it, etc.) but the real problem is how you will store and communicate that password to the server.
If the password is stored in a server-side script that is never shown to the end user, there is not much risk. If the password is embedded in some application that the user downloads to their own machine, then you will have to obfuscate the password in the application, and there is no way to do that securely. By running a debugger, a user will always be able to access the password.
Sure it is security by obscurity. But is this bad? Any "strong" password is security by obscurity. You count on the authentication system to be secure, but in the end if your password is easy to guess then it doesn't matter how good the authentication system is. So you make a "strong" and "obscure" password to make it hard to guess.
It's only security through obscurity to the extent that that's what passwords are. Probably the primary problem with using a GUID as a password is that only letters and numbers are used. However, a GUID is pretty long compared to most passwords. No password is secure to an exhaustive search; that's pretty obvious. Simply because a GUID may or may not have some basis on some sort of timestamp or perhaps a MAC address is somewhat irrelevant.
The difference in probability between guessing it and guessing something else is pretty minimal. Some GUIDs might be "easier" (read: quicker) to break than others. Longer is better. However, more diversity in the alphabet is also better. But again, exhaustive search reveals all.
I recommend against using a GUID as a password (except maybe as an initial one to be changed later). Any password that has to be written down to be remembered is inherently unsafe. It will get written down.
Edit: "inherently" is inaccurate. see conversation in comments

Are there any studies for or against frequent password changes?

I'm looking for studies on the security effect of frequent password changes, looking at the security benefits / problems from having a mandatory password change every one or two months or similar.
Does anyone know of any?
Here is a research article on password policy. It mentions the frequency at which people should change their passwords and some other really interesting stuff. Below is an extract.
Some experts say that periodic password changes will reduce the damage if an attacker intercepts a password: once the password is changed, the attacker is locked out. This assumes that the recovered password will not give the attacker any hints about the victim's current password. In fact, periodic password changes tend to encourage people to design sequences of passwords, like secret01a, secret01b, secret01c, and so on.
This allows users to easily choose and remember a new password when the old one expires. Such sequences are usually pretty obvious to an attacker, so any one of the victim's old passwords will probably provide the attacker with a reasonably small number of passwords to guess at.
The TechReport Do Strong Web Passwords Accomplish Anything? states that "changing the password frequently helps only if the attacker is extremely slow to exploit the harvested credentials."
In my opinion, forcing people to change their password too often reduces security, because the only way people can remember so many passwords is to start using stupid passwords like Computer123, or January1 followed by February1, etc.
A better idea is to reduce the frequency and then train people how to create strong passwords.
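One hedged example of what that training can produce (PHP shown only for illustration; the short word list here is a stand-in for a real diceware-style list of several thousand words):

<?php
$words = ['correct', 'horse', 'battery', 'staple', 'orbit', 'lantern', 'velvet', 'quartz'];
$parts = [];
for ($i = 0; $i < 5; $i++) {
    // random_int() is a CSPRNG; a real list of ~7,776 words gives ~12.9 bits of entropy per word
    $parts[] = $words[random_int(0, count($words) - 1)];
}
echo implode('-', $parts);   // e.g. "orbit-velvet-horse-quartz-lantern"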
While not exactly the study you're looking for, it is closely related and might push you in the right direction. I have seen a few studies on the specific topic you're looking for, but can't find the references just yet.
Microsoft Security Guru advice: "Write down your password"
There are a number of bad things that can happen with passwords, and we want to mitigate as many of them as possible without creating new problems. The "change your password" policy is there to mitigate the damage that could be caused over time if your password gets out, by limiting the window of opportunity for an attacker. While not the end-all of security measures, it can sometimes make a huge difference. As a security consultant, I have personally made (this year alone) many tens of thousands of dollars cleaning up messes that could have been avoided entirely if the company had changed important passwords at least yearly.
The danger of changing your password frequently is that you'll pick poor passwords. This makes the situation even worse, because it now allows attacks that would have otherwise not been possible.
The new wisdom, as mentioned in the linked article, is to pick (or be assigned) a random password, possibly changed on a regular basis, and write it down somewhere that you keep safe. Obviously you don't leave it with your computer any more than you would leave your keys with your car. The justification is that people are already trained to know how to secure "things" but are naturally poor at securing information. So if you turn the password into a thing you can hold, then you can just secure it the same way you secure your keys. In practice this works very well; however, it tends to make IT departments nervous.
Update, August 2016:
This article: "Frequent password changes are the enemy of security, FTC technologist says" http://arstechnica.com/security/2016/08/frequent-password-changes-are-the-enemy-of-security-ftc-technologist-says/?imm_mid=0e6947&cmp=em-webops-na-na-newsltr_security_20160809
has been showing up everywhere this week, including Bruce Schneier's blog, O'Reilly's Security Newsletter, ArsTechnica, Slate, and NewsCombinator, and may be exactly what you asked for earlier this year.
It references:
https://www.cs.unc.edu/~reiter/papers/2010/CCS.pdf
TL;DR summary: Stop expiring/changing passwords. It's a really bad security practice.
I don't know of any studies that exist, but to get you thinking about both sides of the issue, here's a paper against forcing password changes:
Managing network security — Part 10: Change your password
And here is an instructional site for an educational institution that makes at least a somewhat compelling case (written by a Ph.D.) for forcing users to change their passwords frequently. These are the main arguments the site gives for forcing password changes, after the link to the page:
"Why Do I Have to Change My !#$%#* Password?"
If you're required to change your password at least every six months, someone who's hacked your password and has been accessing your account without your knowledge will immediately be shut out once your password is changed. Some may think this is an uncommon scenario, but people commonly sell an old computer and forget to erase passwords they may have saved for dialing in or for accessing their email.
If you change your password at least every six months, hackers who may be trying to crack your password using brute force (as described above) basically need to start over, because your password may now have been changed to some pattern they've already tried and rejected.
Forcing a password change also discourages users from using the same password on multiple accounts. (Using the same password on multiple accounts is bad because then your password is only as secure as the least secure of the systems sharing that common password, and if your account does get compromised, the bad guy suddenly has access not just to one account, but to multiple accounts, magnifying the scope of the problem.)
As far as "research" goes, these might not cut it, but seem to be at least a good introduction to both sides of the argument.
It's not a study, but Gene Spafford posted a short article that discusses the reasons why a policy of frequent password changes doesn't make much sense:
http://www.cerias.purdue.edu/site/blog/post/password-change-myths/
Passwords should be changed when you believe that your password may have been or may soon be compromised. Otherwise, it is usually a useless waste of time, and actually a security hazard. See this link:
http://old.news.yahoo.com/s/ytech_wguy/20100413/tc_ytech_wguy/ytech_wguy_tc1590
This is an excellent paper published in an ACM publication (from 2010) that discusses the cost-benefit analysis of security and user expectations.
http://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf
It does make the claim that changing passwords is certainly good advice, but it may not be worth the cost to the users since there is so little evidence to support the idea that we are safer with more frequent password changes. It really is such a good article.
