I know this is impossible, but how close can I get?
I'm creating achievements, and when a user 'gets the achievement' his browser tells him with a JavaScript popup and sends a message to the server to update his profile.
I'd rather not have my users be able to just hit the web service and get all the achievements. Signing the requests with a private key is better, but the key would have to be stored in the .js file, where it's easily sniffed. I could obfuscate it, use a unique key per user, and timestamp the requests.
Any better suggestions?
As the original question acknowledges, I think this is basically impossible in situations where it's really hard to get the server to re-run what the client did (e.g. a platform game with close timing).
Obfuscation's probably your best bet. First off, do a bit of crypto and include timing information - use a public/private key pair per user. That gets rid of basic traffic sniffing/replay. Obfuscate the client code too, so they at least have to put some effort into decoding it. I'd say that'd probably eliminate 99% of the people trying to cheat. Until that last 1% writes a Firefox add-on to unlock achievements and gives it to the other 99%, at least.
Beyond that, well, don't have achievements reward them with anything important.
Colin
Well the problem is, how do you determine when someone has an achievement? If it's client side, something like
quest.hasGoldenRod = true;
Then yeah, you're going to have trouble stopping them from setting that themselves.
The way to do it is to have the server mirror the actions that the client takes to ensure they actually got the item legitimately.
Then, when the client says 'I got it', the server goes 'Let me check that,' and if it can also validate that the client did indeed get it, then all is well and give them the achievement.
Signing doesn't help you. You have two approaches:
You can go for complete verification. The only way to assure this completely is to involve the server, like silky said. If the achievement is that the user got to some "area" in a game - the game sends a "Hey I got to LA!" and you go "I know you did. You've been telling me all along as you enter each new area, and I've been keeping track."
You can go for "reasonable" verification using techniques like obfuscation and timing. Have the server send a calculation function userEarnedGoldBarAcheivement down to the client that is the test function - and the function changes slightly every day (so if you send me a request with {goldBar: true, key:12345} I go "Uh, uh, that key was yesterday's!")
After deciding not to use CAPTCHA for my site, I added an [input=range] to my register page in place of a submit button that must be slid to the maximum value to submit. Will a spambot be able to bypass this measure? If so, are there any other good alternatives?
Spambots in general can beat anything that requires submitting a value to the server that can be determined by examining the code. The key is to require the value to be something that a human can recognize easily but a bot could not.
For example, with your slide idea (not bad by the way), rather than having them slide it to the maximum (which would be easy for a bot to imitate), show them 3 pictures and ask them to slide it to the one that "makes them cold." Then show a picture of an arctic wasteland, the sahara desert, and a furnace (this is just an example of course). Those are the most spambot resistant controls.
The things you'll have to watch out for are predictability and repeatability. If you always ask the same question, or if you only have a few different questions, it will be easy for attackers to enumerate all the possible results into an attack script and beat your control.
The only limit is really your creativity and a requirement to be difficult for code, easy for humans.
This depends on what the result of the physical sliding is in terms of code. I have seen a JavaScript implementation of this (not sure if it's the same one) and it offers no security at all because all it does is send a request with a pre-determined token once the slide is complete. However, there is nothing to stop a spam bot from simply acquiring the token on the page and sending the request to the server, which has no idea how the "user" got the token.
In the case of CAPTCHAs, the "token" is difficult for bots to decipher at this stage, but relatively easy for humans.
I personally can't stand any of these kinds of bot prevention measures, and I think most users would agree. The only way to stop bots is to stop users too.
Even if spambots can bypass captcha, human captcha solvers can bypass it. There's no perfect solution for spam.
You can read about this issue here: http://blog.minteye.com/2013/02/26/captcha-solving-human-labor/
Here's a (simplified) example of my situation.
The user plays a game and gets a high-score of 200 points. I award high-scores with money, i.e. 1€/10 points. The user will print a "receipt" which says he won €20, then he gives it to me, I make sure the receipt is authentic and has never been used before and I hand him his prize.
My "issue" is in the bold part, obviously. I should be able to validate the "receipt" by hand, but solutions with other offline methods are welcome too (i.e. small .jar applications for my phone). Also, it must be hard to make fake receipts.
Here's what I thought so far, their pros and their cons.
Hashing using common algorithms i.e. SHA512
Pros: can easily be validated by mobile devices; has strong resistance to faking higher values (if a context-dependent salt is used, e.g. the username).
Cons: can be used multiple times, cannot be validated by hand.
Self-made hash algorithms
Pros: can be validated by hand.
Cons: might be broken easily, can be used multiple times.
Certificate codes: I have a list of codes in two databases, one on the server and one on my phone. Every time a receipt is printed, one of these is printed in it and set as "used" into the database. On my phone, I do the same: I check if the code is in the database and hasn't been used yet, then set as "used" in the database.
Pros: doesn't allow for multiple uses of the same code.
Cons: it's extremely easy to fake a receipt, cannot be validated by hand.
This sounds like a classic use case for a Hash-based Message Authentication Code (HMAC) algorithm. Since your idea of "by hand" is "using a smartphone", not "with pencil, paper, and mind", you can compute the hash and print it on the receipt, then validate it on the phone or the back-end server.
The "missing point" is to use more systems at once so that, together, they work in the needed way. In this case, we can use HMAC for authenticating the message and a list of "certificate codes" to make sure one doesn't use the same receipt over and over.
Another idea might be to hash the time at which the receipt is issued to the client and print that on the receipt. When someone shows you the code on the receipt, you make sure the hash hasn't been used yet and that it's valid (i.e. the message produces that hash), then you add it to the list of "used hashes".
Thanks to @RossPatterson for suggesting HMAC.
I want to build a system where many people can play simple browser-based (SWF or HTML5) games (like snakes, car racing, ludo, etc.). These games will be dead simple and do not require logic to be written on the server.
The person who makes the highest score will win some prize.
Now the issue is securing the workflow. My approach may be incorrect; please feel free to suggest any alternative.
1. When game begins, a game id is generated.
2. When game ends, score and game id is sent back.
Problem is, Step 2 can be spoofed. Anyone can send a similar HTTP request with a generated game ID and any score. How do I know that the score is coming from my game and not being sent directly?
I thought of encrypting the score, but again the encryption mechanism will be there in javascript, which can easily be replicated.
Please help, thanks in advance.
EDIT 1: I am not worried about sniffers, session hijacking, or man-in-the-middle attacks; HTTPS will take care of all that. I am worried about the user himself. He can just right-click > Inspect in a browser and check the request being sent, then easily replay the same request (just by opening it in a new tab).
You can add layers of submission signing/encoding to obscure the data, and JS code obfuscation to make it harder for a user to undo that level of protection.
But ultimately you cannot solve the problem. You are placing the trust to run your code in the expected way to an untrusted client, and that naturally gives untrustable output. Spoofing the score is only one point of entry - an attacker could just as easily tamper with other parts of the client-side game logic to cheat.
Trusting the client is a perennial security problem to which there is little possible solution. For a trivial high score table that no-one cares about, you can potentially get away with a quantity of obfuscation to keep off the casual attacker, coupled with manual monitoring and pruning for obviously fraudulent submissions.
If you are considering offering actual desirable prizes, you may need to move some essential part of the game onto servers which you control. What pieces you need to protect and how is highly game-dependent and there are likely to be residual attacks that are difficult to defend against. (Consider for example the lengths that commercial multiplayer server-based games go to try to defend against client hacks like aimbots.)
I guess it depends upon the value of the data, but it sounds like the value is very low.
Pick a functionally impossible to guess ID. For example, a GUID (128 bit randomly generated value). If someone were to guess it, they get the game. But it's statistically impossible that they will ever get it right. And if you send it over HTTPS then they can't sniff it out (I'd send it over HTTPS, just because).
If you want to do better you can pick a random password to go with it, but I question if the math supports this as being necessary. It sounds like your data is crazy low value. Does it really matter?
FWIW, the bigger thing I'd be worried about is the person who decides to DoS you by asking you to store tons and tons of data for games that didn't exist...
As most of you know, email is very insecure. Even with a SSL-secured connection between the client and the server that sends an email, the message itself will be in plaintext while it hops around nodes across the Internet, leaving it vulnerable to eavesdropping.
Another consideration is the sender might not want the message to be readable - even by the intended recipient - after some time or after it's been read once. There are a number of reasons for this; for example, the message might contain sensitive information that can be requested through a subpoena.
A solution (the most common one, I believe) is to send the message to a trusted third party, and a link to that message to the recipient, who then reads the message from the third party. Or the sender can send an encrypted message (using symmetric encryption) to the recipient and send the key to the third party.
Either way, there is a fundamental problem with this approach: if this third party is compromised, all your efforts are rendered useless. For a real example of an incident like this, refer to the debacles involving Crypto AG colluding with the NSA.
Another solution I've seen was Vanish, which encrypts the message, splits the key into pieces and "stores" the pieces in a DHT (namely the Vuze DHT). These values can be easily and somewhat reliably accessed by simply looking the hashes up (the hashes are sent with the message). After 8 hours, these values are lost, and even the intended recipient won't be able to read the message. With millions of nodes, there is no single point of failure. But this was also broken by mounting a Sybil attack on the DHT (refer to the Vanish webpage for more information).
So does anyone have ideas on how to accomplish this?
EDIT: I guess I didn't make myself clear. The main concern is not the recipient intentionally keeping the message (I know this one is impossible to control), but the message being available somewhere.
For example, in the Enron debacle, the courts subpoenaed them for all the email on their servers. Had the messages been encrypted and the keys lost forever, it would do them no good to have encrypted messages and no keys.
(Disclaimer: I didn't read the details on Vanish or the Sybil attack, which may be similar to what comes below.)
First of all: email messages are generally quite small, especially compared to a 50 MB YouTube vid you can download 10 times a day or more. On this I base the assumption that storage and bandwidth are not a real concern here.
Encryption, in the common sense of the word, introduces parts into your system that are hard to understand, and therefore hard to verify. (Think of the typical openssl magic everybody just performs but very few people really understand; if some step X in a HOWTO said "now go to site X and upload *.cer, *.pem and *.csr to verify steps 1 to X-1", I guess 1 in 10 people would just do it.)
Combining the two observations, my suggestion for a safe(*) and understandable system:
Say you have a message M of 10 kb. Take N times 10 kb from /dev/(u)random, possibly from hardware based random sources, call it K(0) to K(N-1). Use a simple xor operation to calculate
K(N) = M^K(0)^K(1)^...^K(N-1)
now, by definition
M = K(0)^K(1)^...^K(N)
i.e. to recover the message you need all the K's. Store the K's with N+1 different (more or less trusted) parties, using whatever protocol you fancy, under random 256-bit names.
To send a message, send the N+1 links to the K's.
To destroy a message, make sure at least one K is deleted.
(*) As regards safety, the system will be as safe as the safest party hosting a K.
Don't take a fixed N, don't have a fixed number of K's on a single node per message (i.e. put 0-10 K's of one message on the same node) to make a brute force attack hard, even for those who have access to all nodes storing keys.
NB: this of course would require some additional software, as would any solution, but the complexity of the plugins/tools required is minimal.
The self-destructing part is really hard, because the user can take a screenshot and store the screenshot unencrypted on his disk, etc. So I think you have no chance to enforce that (there will always be a way, even if you link to an external page). But you can however simply ask the recipient to delete it afterwards.
The encryption, on the other hand, is not a problem at all. I wouldn't rely on TLS because even when the sender and the recipient are using it, there might be other mail relays that don't, and they might store the message as plain text. So the best way would be to simply encrypt it explicitly.
For example, I am using GnuPG for (nearly) all mail I write, which is based on asymmetric encryption. Here I know that only those I have explicitly given permission can read the mail, and since plug-ins are available for nearly all popular MUAs, it's also quite easy for the recipient to read the mail. (So nobody has to encrypt the mail manually and might forget to delete the unencrypted message from the disk...) And it's also possible to revoke the keys, if someone has stolen your private key for example (which is normally encrypted anyway).
In my opinion, GnuPG (or alternatively S/MIME) should be used all the time, because that would also help make spamming more difficult. But that's probably just one of my silly dreams ;)
There are so many different ways of going about it which all have good and bad points, you just need to choose the right one for your scenario. I think the best way of going about it is the same as your 'most common' solution. The trusted third party should really be you - you create a website of your own, with your own authentication being used. Then you don't have to give your hypothetical keys to anyone.
You could use a two way certification method by creating your own client software which can read the emails, with the user having their own certificate. Better be safe than sorry!
If the recipient knows that the message might become unreadable later and they find the message valuable their intention will be to preserve it, so they will try to subvert the protection.
Once someone has seen the message unencrypted - which means in any perceivable form, either as text or as a screen image - they can store it somehow and do whatever they want. All the measures with keys and so on only make dealing with the message inconvenient, but don't prevent extracting the text.
One of the ways could be to use self-destructing hardware as in Mission Impossible - the hardware would display the message and then destroy it, but as you can see it is inconvenient as well - the recipient would need to understand the message from viewing it only once which is not always possible.
So given the fact that the recipient might be interested in subverting the protection and the protection can be subverted the whole idea will likely not work as intended but surely will make dealing with messages less convenient.
If HTML format is used, you can have the message reference assets that you can remove at a later date. If the message is opened after that date, the user should see broken links.
If your environment allows for it, you could use the trusted boot environment to ensure that a trusted boot loader has been used to boot a trusted kernel, which could verify that a trusted email client is being used to receive the email before sending it. See remote attestation.
It would be the responsibility of the email client to responsibly delete the email in a timely fashion -- perhaps relying on in-memory store only and requesting memory that cannot be swapped to disk.
Of course, bugs can happen in programs, but this mechanism could ensure there is no intentional pathway towards storing the email.
The problem, as you describe it, does sound very close to the problem addressed by Vanish, and discussed at length in their paper. As you note, their first implementation was found to have a weakness, but it appears to be an implementation weakness rather than a fundamental one, and is therefore probably fixable.
Vanish is also sufficiently well-known that it's an obvious target for attack, which means that weaknesses in it have a good chance of being found, publicised, and fixed.
Your best option, therefore, is probably to wait for Vanish version 2. With security software, rolling your own is almost never a good idea, and getting something from an established academic security group is a lot safer.
IMO, the most practical solution for the situation is using the Pidgin IM client with Off-the-Record (no logging) and pidgin-encrypt (end-to-end asymmetric encryption) together. The message will be destroyed as soon as the chat window is closed, and in an emergency, you can just unplug the computer to close the chat window.
How can I prevent forms from being scanned by mass vulnerability scanners like XSSME and SQLinjectMe (those two are free Firefox add-ons), Acunetix Web Scanner, and others?
These "web vulnerability scanners" work catching a copy of a form with all its fields and sending thousands of tests in minutes, introducing all kind of malicious strings in the fields.
Even if you sanitize your input very well, there is a speed/response delay on the server, and sometimes, if the form sends e-mail, you will receive thousands of emails in the receiving mailbox. I know that one way to reduce this problem is the use of a CAPTCHA component, but sometimes that kind of component is too much for some types of forms and delays the user's response (for example, a login/password form).
Any suggestion?
Thanks in advance and sorry for my English!
Hmm, if this is a major problem you could add a server-side submission-rate limiter. When someone submits a form, store some information in a database about their IP address and what time they submitted the form. Then whenever someone submits the form, check the database to see if it's been "long enough" since the last time that IP address submitted the form. Even a fairly short wait like 10 seconds would seriously slow down this sort of automated probing. This database could be automatically cleared out every day/hour/whatever, you don't need to keep the data around for long.
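A sketch of that limiter in JavaScript, with an in-memory Map standing in for the database table (the 10-second window is just the example figure from above):

```javascript
// Per-IP submission rate limiter: reject a submit if the same IP
// submitted less than WINDOW_MS ago.
const WINDOW_MS = 10 * 1000;
const lastSubmission = new Map(); // ip -> timestamp of last accepted submit

function allowSubmission(ip, now = Date.now()) {
  const last = lastSubmission.get(ip);
  if (last !== undefined && now - last < WINDOW_MS) return false;
  lastSubmission.set(ip, now);
  return true;
}
```

A periodic sweep dropping entries older than the window keeps the map from growing without bound, matching the "cleared out every day/hour" idea.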
Of course someone with access to a botnet could avoid this limiter, but if your site is under attack by a large botnet you probably have larger problems than this.
On top the rate-limiting solutions that others have offered, you may also want to implement some logging or auditing on sensitive pages and forms to make sure that your rate limiting actually works. It could be something simple like just logging request counts per IP. Then you can send yourself an hourly or daily digest to keep an eye on things without having to repeatedly check your site.
There's only so much you can do... "where there's a will there's a way". Anything you want the user to do can be automated and abused. You need to find a happy medium when developing, and toss in a few things that make abuse harder.
One thing you can do is sign the form with a hash, for example if the form is there for sending a message to another user you can do this:
hash = md5(userid + action + salt)
then when you actually process the response you would do
if (hash == md5(userid + action + salt))
This prevents the abuser from injecting thousands of user IDs and easily spamming your system. It's just another hoop for the attacker to jump through.
I'd love to hear other people's techniques. CAPTCHAs should be used on entry points like registration, and the method above should be used on actions against specific things (messaging, voting, ...).
You could also create a flagging system: anything the user does X times in X amount of time that looks fishy flags the user and makes them solve a CAPTCHA (once they enter it, they are no longer flagged).
This question is not exactly like the other questions about captchas but I think reading them if you haven't already would be worthwhile. "Honey Pot Captcha" sounds like it might work for you.
Practical non-image based CAPTCHA approaches?
What can be done to prevent spam in forum-like apps?
Reviewing all the answers I had made one solution customized for my case with a little bit of each one:
I checked again the behavior of the known vulnerability scanners. They load the page once and, with the information gathered, start submitting it over and over, changing the content of the fields to malicious scripts in order to check for certain types of vulnerabilities.
But: what if we sign the form? How? By creating a hidden field with random content stored in the Session object. If the value is submitted more than n times, we just create it again. We only have to check whether it matches, and if it doesn't, take whatever action we want.
But we can do even better: why change the value of the field, when we can change the name of the field randomly? Changing the field's name randomly and storing it in the session object is perhaps a trickier solution, but the form is then different every time, and the vulnerability scanners only load it once. If we don't receive input for a field with the stored name, we simply don't process the form.
I think this can save a lot of CPU cycles. I did some tests with the vulnerability scanners mentioned in the question and it works perfectly!
Well, thanks a lot to all of you; as I said before, this solution was made with a little bit of each answer.