Best way to soft brute-force your own GPG/PGP passphrase? - security

I created a nice long passphrase, used it a few times, then forgot it ;) The twist is, I know the general theme and probably almost all of the characters. The perfectionist in me doesn't want to revoke the key or anything like that (and I think I need the passphrase to revoke it anyway, right?). I feel I should be able to have a good go at this by brute-forcing the likely layouts/characters that I've got wrong/mis-typed. I wrote a C program to produce such combinations. Unfortunately I don't have the code to hand (I'll go with the "it's not relevant" excuse for now ;). I also came across some code on the web using GPGME to do exactly this as a proof-of-concept. It had the comment "this could easily be 100 times faster". Problem is, profiling the code shows the bottleneck to be the GPGME call itself. Is this expected, or is it a limitation of GPGME that could be solved using the full library or a dedicated implementation?
How would you go about doing this? Obviously this method is infeasible for any decent unknown passphrase, but I think the key is that I know what I typed without knowing the exact formatting of how I typed it - should be feasible, no?

(and I think I need the passphrase to revoke it anyway, right?)
No, you need a revocation certificate, which you should have generated and printed out when you created your key. Then stored it in a safe place, not where someone could use it to revoke your key when you don't want them to.
I've tried to brute-force passwords that I almost remembered, but without success. There are still a lot of permutations, and it takes a lot of rules on what can come after what to narrow it down to a reasonable problem size. I never tried too hard on this, since I luckily have never forgotten my GPG passphrase. Mostly when I've forgotten a password it's a login to a remote machine at the university, and I've never wanted to hammer on the ssh port, or webmail, with my guesses.
Maybe the function you're calling does a lot of setup that is non-key-dependent? So you could speed it up by copying the code out of the library and putting your brute-force loop later on in it.
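For the candidate-generation side, something like the following sketch works (Python for brevity; it assumes GnuPG 2.1+ on the PATH and a small hypothetical file test.gpg encrypted to the key, and the fragments are made up). Note that each gpg call is intentionally slow: OpenPGP runs the passphrase through an iterated, salted S2K key derivation, which is likely a big part of the bottleneck you measured, and no amount of loop optimization on your side will remove it.
import itertools
import subprocess

# Each position holds the variants you think you might have typed.
# These fragments are made up for illustration.
fragments = [
    ["correct", "Correct"],
    ["-", "_", " "],
    ["horse", "h0rse"],
    ["battery", "Battery"],
    ["staple", "staple!", "staple1"],
]

def try_passphrase(pw):
    # Returns True if gpg accepts pw for decrypting the test file.
    result = subprocess.run(
        ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
         "--passphrase", pw, "--output", "/dev/null",
         "--decrypt", "test.gpg"],
        capture_output=True,
    )
    return result.returncode == 0

for parts in itertools.product(*fragments):
    candidate = "".join(parts)
    if try_passphrase(candidate):
        print("Found:", candidate)
        break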

Related

Can I encrypt a string such that it can be cracked in a calculable timeframe?

I want to hide a password to make it difficult, but not impossible, to retrieve. I want to create two programs: one hashes the password, and the other brute-forces the answer. I'm hoping I can control the length of time the decryption process takes, to something like 24 hours.
I'm most familiar with node.js.
I think you’re looking for something called time capsule cryptography, which is based on proof of work systems that require some large calculation to be done before something can be decrypted. I’m not super familiar with how this works, but perhaps knowing that term will help you find what you’re looking for!
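To make the idea concrete, here is a minimal sketch of the simplest sequential-work variant (Python for brevity; the iteration count and calibration approach are illustrative). The catch is that with a plain hash chain you have to do the full work yourself once when creating the capsule, and a faster machine solves it sooner; the classic Rivest-Shamir-Wagner time-lock puzzle uses repeated squaring with an RSA trapdoor so the creator can shortcut the work.
import hashlib
import os
import time

def slow_key(seed, iterations):
    # Derive a key by hashing sequentially; the work can't be skipped
    # or parallelized, only re-done step by step.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Calibrate: measure this machine's hash rate, then scale to ~24 hours.
seed = os.urandom(16)
start = time.time()
slow_key(seed, 1_000_000)
rate = 1_000_000 / (time.time() - start)
print("for 24h, use about", int(rate * 86400), "iterations")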

How secure is the "if" statement?

Regardless of the language, I'm always puzzled by the concept of security through an if. All the code I write relies on the success of that one line with an if statement:
user = getUserName();
password = getPassword();
if (match(user, password)) {
print secret information;
}
Since it's only one line I feel like sabotage can be relatively simple. Am I overlooking things, or is a single if really the best way to do this?
You are right, an if like this is easily hacked. If someone reverse engineers this application, they can easily modify a few instructions to skip the if.
There are various options, like obfuscating the executable or adding more complex checks in various places in your application. But whatever you do, your application can always be hacked.
The best thing is not to worry about it. By the time your application is so good and widely used that people are actually willing to put effort into cracking it, you will probably be making enough money to protect it better. Until then, it's a waste of time to even think about it.
In the specific case you are showing, if you were really worried about unauthorized people seeing the secret information output by "print secret information;" you would encrypt the "secret information" with the supplied password. This would ensure that only the person who was able to provide the proper password would be able to see the secret information.
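To make that concrete, here is a minimal sketch (Python, using the third-party cryptography package; the password, salt handling, and iteration count are illustrative, not anything from the original post):
import base64
import hashlib
import os
from cryptography.fernet import Fernet

def key_from_password(password, salt):
    # Stretch the password into a 32-byte key, then base64 it for Fernet.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)  # stored alongside the ciphertext
token = Fernet(key_from_password("hunter2", salt)).encrypt(b"secret information")

# Only someone who supplies the right password gets the plaintext back:
print(Fernet(key_from_password("hunter2", salt)).decrypt(token))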
There's one thing about ifs that is often overlooked. It's called a timing attack. Suppose you have a web application that compares the password sent directly against the password stored in the DB (yes, I know that nobody in their right mind will store plaintext passwords in the DB, but as the Cheshire Cat said, "we are all mad here"). Then the comparison takes a different amount of time depending on whether the passwords differ at the first character, the second one, or the last one. While the time difference might seem tiny, it's enough for an attacker to attempt to guess the password, even across the internet, let alone in local analysis. A timing attack is a bit more complicated than I described, but in general an if comparison is not 100% safe, at least not in all cases.
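A sketch of the difference (Python; names are mine): a naive comparison returns as soon as a character differs, leaking how much of the guess was correct, while the standard library's constant-time comparison does not.
import hmac

def naive_equals(a, b):
    # Returns as soon as a character differs, so the running time
    # leaks how far the guess matched.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equals(a, b):
    # Constant-time comparison from the standard library.
    return hmac.compare_digest(a.encode(), b.encode())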
The if statement is absolutely secure, and can never be the cause of a vulnerability. Vulnerabilities arise from nearly everything else in your code.
It is possible that the comparison operator you are using is flawed. For instance, PHP's == operator employs loose matching, where a range of possible values are accepted. This might not be good for security, but it's hard to come up with a good example; it doesn't really matter for a password. A simple $password == $_GET['password'] should work just fine.
Your if statement could also be relying on bad regular expression such as
if (preg_match('/(.+)\.js/', $_GET['file'])) {
readfile($_GET['file']);
}
In this case the regex is looking for a .js anywhere in the string, not enforcing it to be at the end.
?file=../../.js/../../../../../../../etc/passwd
(And this vulnerability won me $3,000 in the Mozilla bug bounty program ;)
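To make the fix concrete, here is the same pattern sketched in Python (the exploit string is the one above): anchoring the pattern to the end of the string is what closes the hole.
import re

evil = "../../.js/../../../../../../../etc/passwd"

print(bool(re.search(r"(.+)\.js", evil)))   # True  - ".js" found mid-string
print(bool(re.search(r"(.+)\.js$", evil)))  # False - anchored to the end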
If this is server code, this is not a problem, as long as you keep your server secure.
If this is client code, you are right: someone can manipulate your code, either the binary file or the memory image (once loaded). However, this is true for any client application. You can only make it harder (by using tools like PECompact + an anti-debug plugin, for example), but you can't achieve very strong security.
I'm not sure I understand your question.
Software security techniques are imperfect, and AFAIK they presuppose few bugs in the compiler and "perfect" hardware (that is, a processor that interprets the machine code correctly).
I am not familiar with (but am interested in) approaches for imperfect hardware (except, of course, using redundancy or other techniques, e.g. ECC, to detect hardware errors).
There is nothing insecure about one line with an if in it.
If the code is running on your server, what matters is how secure that server is. If a hacker gains access to it, it doesn't matter how complicated your code is; he will be able to circumvent it.
Similarly, if your code runs on the computer of a potential attacker (like a computer game that you want to protect), there is nothing you can do to stop the attacker. You can make his work slightly more difficult, but that's all.
You shouldn't worry about the security of one line, but of the system as a whole. If you make your code more complicated, all you did is introduce more potential for bugs. Using more complicated code is an attempt at security through obscurity, which doesn't work.
If you can't trust your computer to execute a simple if correctly, you can't trust it at all.

How can I write a program that can detect by itself that it has been changed?

I need to write a small program that can detect that it has been changed. Please give me a suggestion!
Thank you.
The short answer is to create a hash or key of the program and have the program encrypt and store that key within itself. From time to time the program would make a checksum of itself and compare it against that hash/key. If there is a difference then handle it accordingly.
There are lots and lots of ways to go about this. There are lots of very smart engineers out there that know how to work around it if that is what you are trying to avoid.
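A minimal sketch of the self-checksum idea (Python; the digest placeholder would be filled in at release time, and as noted, an attacker who can edit the program can edit this constant too):
import hashlib
import sys

EXPECTED = "replace-with-the-real-digest-at-release-time"

def self_digest():
    # Hash this program's own file on disk.
    with open(sys.argv[0], "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if self_digest() != EXPECTED:
    sys.exit("tampering detected")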
The simplest way would be to use a hash function to generate a short code which is a digest of the whole program and then check this.
It would be fairly easy to debug the code and replace the hash value to subvert this.
A better way would be to generate a digital signature using your private key, with the public key embedded in the program to check it.
This would then require changing the public key and the hash as well as understanding the program, or changing the program code itself to subvert the check.
All you can do in the case described so far is make it more difficult to subvert but it will be possible with a certain amount of effort. I'd suggest looking into cryptographic techniques and copy protection for more information to suit your specific case.
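Here's a sketch of the signature variant (Python, using the third-party cryptography package; the key bytes and signature file are placeholders):
import sys
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

PUBLIC_KEY = b"\x00" * 32  # replace with your real public key at build time

def verify_self(sig_path):
    # Check a detached signature over this program's own file.
    with open(sys.argv[0], "rb") as f:
        code = f.read()
    with open(sig_path, "rb") as f:
        sig = f.read()
    try:
        Ed25519PublicKey.from_public_bytes(PUBLIC_KEY).verify(sig, code)
        return True
    except InvalidSignature:
        return False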
Do you mean that program 'foo' should be able to tell if some part of it was modified prior to or during run time? That's not the responsibility of the program; it's the responsibility of the security hooks in the target OS.
For instance, if the installed and trusted 'foo' has signature "xyz1234", the kernel should refuse to run a modified (or completely new) 'foo'. The same goes for 'foo' while it's currently running in memory. Look up 'Trusted Path of Execution', aka TPE, to start.
A better question to ask would be how to sign your released version of 'foo', which depends upon your target platform.
try searching for "code signing"
The easiest way would be for the program to compute its own MD5 hash and store that in a separate file, but this isn't totally secure. An MD5 + CRC combination might work slightly better.
Or, as others here have suggested, SHA-1, SHA-2, or SHA-3, which are currently much more secure than MD5.
I'd ask an external tool to do the check. This problem reminds me of the challenge to write a program that prints itself. In Bash you could do something like this:
#!/bin/bash
cat "$0"
which really asks for an external tool to do the job. It's kind of solving the problem by getting away from solving the problem...
The best option is going to be code signing, either using a tool supplied by your local friendly OS (for example, if you're targeting Windows, you probably want to take a look at Authenticode, where the operating system handles the tampering), or by rolling your own scheme, storing MD5 hashes and comparing them.
It is important to remember that bets are off if someone injects a thread into your process (to potentially kill your ongoing checks, etc.), or if they tamper with your compiled application to bypass said checks.
An alternative way which wasn't mentioned is to use a binary packer such as UPX.
If the binary gets changed on the disk then the unpacking code is likely to fail.
This however doesn't protect you if someone changes the binary while it is in memory.

What real life examples of security by obscurity have you seen/worked with? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
Bonus points for explaining how you improved it.
Real life security by obscurity?
The key to the front door is stashed under a rock nearby, or under the welcome mat, or on top of a high railing.
These are all instances of security through obscurity, as in, it is right out in the open for anyone to grab, but most people won't be able to find it without huge amounts of searching. However, a dedicated attacker can walk right in.
Some people like to make their javascript difficult to read (and therefore hack) by using obfuscation. Google is among the users of this technique. At the simplest level, they change the variable and method names to a single inscrutable letter: the first variable is named "a", the second is named "b", and so on. It does succeed in making the javascript exceedingly difficult to read and follow. And it adds some protection to the intellectual property contained in the javascript code, which must be downloaded to the user's browser to be usable, therefore making it accessible to all.
In addition to making it difficult to read the code, this shortening of variable names reduces the size of the javascript code that has to be downloaded to the user's browser. Theoretically, this can reduce network traffic.
Here's an article about Google's obfuscation, and here's a list of available tools.
On a website I did some contract work on I noticed that they were storing double-hashed passwords. From memory, they were storing something like
$encrypted_password = md5( sha1( $plaintext_password ) );
When I asked what the purpose of this was, I found out that the guy who wrote the account creation/login script had been reading about dictionary attacks. He figured that no one would ever think to create a dictionary where they hash inputs with md5 and sha1.
I improved the system by adding a random salt column to their user table. I left the double-hashing in though. It doesn't do anything to hurt the security of the system, and to be honest, I thought it was pretty clever for someone who didn't really know much about security to think of this.
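For comparison, a sketch of the salted version (Python for brevity; the KDF choice and iteration count are mine, not the site's actual code):
import hashlib
import hmac
import os

def hash_password(password):
    # A per-user random salt plus an iterated KDF instead of bare md5/sha1.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both columns in the user table

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)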
Seen: Websites use a complex url to access ajax components rather than actually password protect them such as:
domain.com/3r809d8f09feefhjkdjfhjdf/delete.php?a=03809803983djfhkjsdfsadf
The long string in the path has remained constant; the query is random and is designed to stop attackers.
Improvement: Restrict the page to being accessed only from certain IP addresses. Add an authentication string to the query that is a salted hash of the access time.
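A sketch of that improvement (Python; the secret and the five-minute window are illustrative): sign the access time with a server-side secret so the query string expires and can't be replayed indefinitely.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative

def make_token():
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return ts + ":" + mac

def check_token(token, max_age=300):
    ts, mac = token.split(":", 1)
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and int(time.time()) - int(ts) <= max_age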
In a more "real life" example, I don't know if it's intentional or not, but I like the way none of the doorbells in my block have any names on them, and their numbers seem to have no correlation to the apartment numbers whatsoever. I.e., ring #25 for apartment 605, #13 for apartment 404, and so on. :)
One vendor we deal with requires us to post the username and password in the querystring in ROT-13 "encrypted" format. No joke.
Security through obscurity is a valid tactic. Plenty of people have turned off replying with version information as a best practice for ftp and apache. Honeypots can be considered an obscured practice, since the attacker doesn't know the layout of the network and gets sucked into them. One high-security site I know of assigns usernames as five-character alphanumeric strings (such as '0a3bg') instead of using logical usernames. Anything that makes breaking into a system more difficult, or makes it take longer, is a valid measure.
Security exclusively through obscurity is bad.
People writing their password on pieces of paper and putting it under their keyboard.
I solved it by logging into their computer with their account and sending out an embarrassing email to the group.
Seen: phpMyAdmin moved into the directory _phpmyadmin
Improvement: Disallowed access from outside the company's network.
Similar to #stech's solution.
Some of the admin pages in our web application check for a local IP subnet range, and otherwise display "access denied".
Improvement: access is restricted to users who are inside the network or VPNed into it.
Back in the old DBase/Clipper days I worked for a guy who developed an application for a friend of his. This friend wanted to have some "secretly" accessible program or data (I don't recall) that required a password only known to him.
The solution, I was told, was that Clipper opened a DOS prompt in the secret directory, with black text on black background colors (some ANSI control characters accomplished this).
The user had to type in the password, but this being the input line of the DOS command prompt, the "password" was really the name of a batch file that was then executed.
I once saw a photography website where you could strip some characters off the photo thumbnail's URL to get the full version.
Many professional photographer websites use Javascript to prevent people from right-clicking on images to "save as ...". Most of those sites also don't do any watermarking.
I used to surf with referer headers disabled... it's quite surprising how many websites will blow up or flat-out reject you if they don't know where you came from.
One website had a poll and used cookies to prevent you from voting multiple times. You could simply erase that cookie and keep voting. And you could script it all using wget, too.
The example I see of this all the time is something being done in source code that the developer assumes no one will ever see. You see this a lot with crypto keys in particular, embedded right in the source code. A lot of times it is not even a question of decompiling the code; they could outright just use the library.
The solution is always to teach the developer to assume that someone has the source code and can use it against you.
Going to great lengths to hide software names and version numbers.
I.e., changing the Tomcat server name and version to some quotes and random numbers (like 666), or changing the names and version numbers of regular javascript libraries like scriptaculous and prototype, and so on.
In a current project we're using Google Web Toolkit (GWT), and this sneaky little thing compiles Java to javascript (which you have little to no control over) and includes the string "GWT" and its version number. Totally unacceptable of course, so we'll need to make a script that runs after the GWT compile to remove all these references(!).
/admin without password.
Yes I've seen it, it's very real.

Why is security through obscurity a bad idea? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I recently came across a system where all of the DB connections were managed by routines obscured in various ways, including base 64 encoding, md5sums and various other techniques.
Why is security through obscurity a bad idea?
Security through obscurity would be burying your money under a tree. The only thing that makes it safe is no one knows it's there. Real security is putting it behind a lock or combination, say in a safe. You can put the safe on the street corner because what makes it secure is that no one can get inside it but you.
As mentioned by #ThomasPadron-McCarty in a comment below:
If someone discovers the password, you can just change the password, which is easy. If someone finds the location, you need to dig up the money and move it somewhere else, which is much more work. And if you use security by obscurity in a program, you would have to rewrite the program.
Security through obscurity can be said to be bad because it often implies that the obscurity is being used as the principal means of security. Obscurity is fine until it is discovered, but once someone has worked out your particular obscurity, then your system is vulnerable again. Given the persistence of attackers, this equates to no security at all.
Obscurity should never be used as an alternative to proper security techniques.
Obscurity as a means of hiding your source code to prevent copying is another subject. I'm rather split on that topic; I can understand why you might wish to do that, personally I've never been in a situation where it would be wanted.
Security through obscurity is an interesting topic. It is (rightly) maligned as a substitute for effective security. A typical principle in cryptography is that the key is unknown but the algorithm is not. Algorithms for encryption are typically widely published and analyzed by mathematicians, and, after a time, some confidence is built up in their effectiveness, but there is never a guarantee that they're effective.
Some people hide their cryptographic algorithms but this is considered a dangerous practice because then such algorithms haven't gone through the same scrutiny. Only organisations like the NSA, which has a significant budget and staff of mathematicians, can get away with this kind of approach.
One of the more interesting developments in recent years has been the rise of steganography, which is the practice of hiding messages in images, sound files, or some other medium. The biggest problem in steganalysis is identifying whether or not a message is there at all, making this security through obscurity.
Last year I came across a story that Researchers Calculate Capacity of a Steganographic Channel but the really interesting thing about this is:
Studying a stego-channel in this way leads to some counter-intuitive results: for example, in certain circumstances, doubling the number of algorithms looking for hidden data can increase the capacity of the steganographic channel.
In other words, the more algorithms you use to identify messages the less effective it becomes, which goes against the normal criticism of security through obscurity.
Interesting stuff.
The main reason it is a bad idea is that it does not FIX the underlying problems, just attempts to hide them. Sooner or later, the problems will be discovered.
Also, extra encryption will incur additional overhead.
Finally excessive obscurity (like using checksums) makes maintenance a nightmare.
A better security alternative is to eliminate potential weaknesses in your code, such as enforcing input validation to prevent injection attacks.
One factor is the ability to recover from a security breach. If someone discovers your password, just reset it. But if someone uncovers your obscure scheme, you're hosed.
Using obscurity, as all these people agree, is not security; it's buying yourself time. That said, having a decent security system implemented and then adding an extra layer of obscurity is still useful. Let's say tomorrow someone finds an unbeatable crack/hole in the ssh service that can't be patched immediately.
As a rule I've implemented in house: all public-facing servers expose only the ports needed (http/https) and nothing more. One public-facing server will then have ssh exposed to the internet on some obscure high-numbered port, with a port-scanning trigger set up to block any IPs that try to find it.
Obscurity has its place in the world of security, but not as the first and last line of defense. In the example above, I don't get any script/bot attacks on ssh because they don't want to spend the time searching for a non-standard ssh service port, and if they do, they're unlikely to find it before another layer of security steps in and cuts them off.
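For example (a sketch; the port number and username are arbitrary, but Port, PasswordAuthentication, and AllowUsers are standard sshd_config directives):
# In /etc/ssh/sshd_config:
Port 50022                   # obscure high port - buys time, nothing more
PasswordAuthentication no    # the real security layer: keys only
AllowUsers admin             # illustrative username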
All of the forms of security available are actually forms of security through obscurity. Each method increases in complexity and provides better security but they all rely on some algorithm and one or more keys to restore the encrypted data. "Security through obscurity" as most call it is when someone chooses one of the simplest and easiest to crack algorithms.
Algorithms such as character shifting are easy to implement and easy to crack, that's why they are a bad idea. It's probably better than nothing, but it will, at most, only stop a casual glance at the data from being easily read.
There are excellent resources on the Internet you can use to educate yourself about all of the available encryption methods and their strengths and weaknesses.
Security is about letting people in or keeping them out depending on what they know, who they are, or what they have. Currently, biometrics aren't good at finding who you are, and there's always going to be problems with it (fingerprint readers for somebody who's been in a bad accident, forged fingerprints, etc.). So, actually, much of security is about obfuscating something.
Good security is about keeping the stuff you have to keep secret to a minimum. If you've got a properly encrypted AES channel, you can let the bad guys see everything about it except the password, and you're safe. This means you have a much smaller area open to attack, and can concentrate on securing the passwords. (Not that that's trivial.)
In order to do that, you have to have confidence in everything but the password. This normally means using industry-standard crypto that numerous experts have looked at. Anybody can create a cipher they can't break, but not everybody can make a cipher Bruce Schneier can't break. Since there's a thorough lack of theoretical foundations for cipher security, the security of a cipher is determined by having a lot of very smart and knowledgeable people try to come up with attacks, even if they're not practical (attacks on ciphers always get better, never worse). This means the crypto algorithm needs to be widely known. I have very strong confidence in the Advanced Encryption Standard, and almost none in a proprietary algorithm Joe wrote and obfuscated.
However, there have been problems with implementations of crypto algorithms. It's easy to inadvertently leave holes whereby the key can be found, or other mischief done. It happened with an alternate signature field for PGP, and with weaknesses in SSL as implemented on Debian Linux. It's even happened to OpenBSD, which is probably the most secure operating system readily available (I think it's up to two exploits in ten years). Therefore, these should be done by a reputable company, and I'd feel better if the implementations were open source. (Closed source won't stop a determined attacker, but it'll make it harder for random good guys to find holes to be closed.)
Therefore, if I wanted security, I'd try to have my system as reliable as possible, which means as open as possible except for the password.
Layering security by obscurity on top of an already secure system might help some, but if the system's secure it won't be necessary, and if it's insecure the best thing is to make it secure. Think of obscurity like the less reputable forms of "alternative medicine" - it is very unlikely to help much, and while it's unlikely to hurt much by itself it may make the patient less likely to see a competent doctor or computer security specialist, whichever.
Lastly, I'd like to make a completely unsolicited and disinterested plug for Bruce Schneier's blog, as nothing more than an interested reader. I've learned a lot about security from it.
One of the best ways of evaluating, testing or improving a security product is to have it banged on by a large, clever peer group.
Products that rely for their security on being a "black box" can't have the benefit of this kind of test. Of course, being a "black box" always invites the suspicion (often justified) that they wouldn't stand up to that kind of scrutiny anyway.
I argued in one case that password protection is really security through obscurity. The only security I can think of that wouldn't be STO is some sort of biometric security.
Besides that bit of semantics and nitpicking, STO (security through obscurity) is obviously bad in any case where you need real security. However, there might be cases where it doesn't matter. I'll often XOR-pad a text file I don't want anyone reading. But I don't really care if they do; I'd just prefer that it not be read. In that case, it doesn't matter, and an XOR pad is a perfect example of an easy-to-crack STO.
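For what it's worth, an XOR pad is a couple of lines (Python; the key is illustrative), and applying it twice with the same key decrypts it, which is exactly why it's obscurity rather than security:
import itertools

def xor_pad(data, key):
    # XOR each byte with the repeating key; run it twice to decrypt.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

scrambled = xor_pad(b"don't read this", b"hunter2")
assert xor_pad(scrambled, b"hunter2") == b"don't read this"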
It is almost never a good idea. It's like asking: is it a good idea to drive without a seatbelt? Of course you can find some cases where it fits, but the answer, based on experience, seems obvious.
Weak encryption will only deter the least motivated hackers, so it isn't valueless, it just isn't very valuable, especially when strong encryption, like AES, is available.
Security through obscurity is based on the assumption that you are smart and your users are stupid. If that assumption is based on arrogance and not empirical data, then your users (and hackers) will determine how to invoke the hidden method, bring up the unlinked page, decompile and extract the plain-text password from the .dll, and so on.
That said, providing comprehensive metadata to users is not a good idea, and obscuring is a perfectly valid technique as long as you back it up with encryption, authorization, authentication, and all those other principles of security.
If the OS is Windows, look at using the Data Protection API (DPAPI). It is not security by obscurity, and is a good way to store login credentials for an unattended process. As pretty much everyone is saying here, security through obscurity doesn't give you much protection.
http://msdn.microsoft.com/en-us/library/ms995355.aspx
http://msdn.microsoft.com/en-us/library/ms998280.aspx
The one point I have to add which hasn't been touched on yet is the incredible ability of the internet to smash security through obscurity.
As has been shown time and time again, if your only defense is that "nobody knows the back door/bug/exploit is there", then all it takes is for one person to stumble across it and, within minutes, hundreds of people will know. The next day, pretty much everyone who wants to know, will. Ouch.
