I am making a browser card game. Each player has a number of purchased cards out of a big pool of available cards.
I need to make sure a player cannot tamper with the cards he uses from the browser, so the server must verify that he owns each card he uses and that it is indeed the same card.
In order to make the app faster, I want to store the card data in an external JSON file, record only that "player x owns cards y and z", and get the info on those cards from the JSON.
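For concreteness, something like this is what I have in mind on the server (names like cards.json, player, and ownedCardIds are just placeholders):

```js
// cards.json holds the canonical card definitions and lives only on the server
const cards = require('./cards.json');

// When a player tries to use a card, the server resolves the card from its own
// data and checks ownership; the client only ever names a card id.
function playCard(player, cardId) {
  if (!player.ownedCardIds.includes(cardId)) {
    throw new Error('Player does not own this card');
  }
  const card = cards[cardId];
  if (!card) {
    throw new Error('Unknown card id');
  }
  return card; // always use the server's copy of the card's stats
}
```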
Are there any security patterns that can help me here?
You can use OpenPGP for Node (openpgp.js) to create two keys:
one public key for your client,
and one private key for the server.
Using the public key for each client, you'll be able to encrypt the JSON representing the state of each player and prevent it from being circumvented by hackery.
Make sure you read the dependency section in order to properly polyfill your game for older browser versions.
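A minimal sketch with openpgp.js (assuming the v5-style API; note that here the server signs the player-state JSON rather than encrypting it, which is the variant of this idea that lets the server detect tampering when the state comes back):

```js
const openpgp = require('openpgp');

async function demo() {
  // In practice: generate once and keep the private key on the server only.
  const { privateKey, publicKey } = await openpgp.generateKey({
    type: 'ecc',
    curve: 'curve25519',
    userIDs: [{ name: 'game-server' }],
  });
  const signingKey = await openpgp.readPrivateKey({ armoredKey: privateKey });
  const verificationKey = await openpgp.readKey({ armoredKey: publicKey });

  // Server signs the player's state before handing it out.
  const state = JSON.stringify({ player: 'x', cards: ['y', 'z'] });
  const signed = await openpgp.sign({
    message: await openpgp.createCleartextMessage({ text: state }),
    signingKeys: signingKey,
  });

  // When the state comes back, verify it has not been modified.
  const result = await openpgp.verify({
    message: await openpgp.readCleartextMessage({ cleartextMessage: signed }),
    verificationKeys: verificationKey,
  });
  await result.signatures[0].verified; // throws if the signature is invalid
  console.log('authentic state:', result.data);
}

demo();
```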
I am building an application in NodeJS + Express where teams can share information with one another and chat (kind of like an internal messaging forum).
Sometimes there is a need for the team's clients to view and edit some of this stored information on a case by case basis (e.g. a client asks a question and wants to message back and forth with the team, using my app). I don't want the client to have to sign up for an account in this case.
I am thus wondering what is the most secure strategy for generating a URL where anyone with the URL can view and edit a document/POST data to my app within the confines of a single document, without signing in?
(I've seen a couple of posts on this topic but they're quite old and don't focus on this specific case.)
First of all, I can absolutely understand the benefits, but it is still not an optimal idea. However, I would like to summarize some thoughts and recommendations that will help you with the development:
A link like this should not be able to perform critical actions or read highly sensitive data.
Access should be unique and short-lived. For example, the customer could enter his e-mail address or mobile phone number and receive an access code.
If you generate random URLs, they should be generated in a secure random manner (e.g. uuid provides a way to create cryptographically-strong random values).
If I had to design this I would provide as little functionality as possible. Also, the administrator would have to enter a trusted email address and/or mobile phone number when releasing the document. The URL with a UUIDv4 is then sent to this channel and when the customer clicks on the link, he gets a short-lived access code on a separate channel if possible (on the same channel if only one was configured). This way you prevent the danger of an unauthorized person accessing the document in case a customer forwards the original URL out of stupidity.
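A rough sketch of the link-issuing part in Express (the route names, the in-memory store, and the 24-hour lifetime are illustrative assumptions):

```js
const express = require('express');
const { randomUUID } = require('crypto');

const app = express();
const shareLinks = new Map(); // token -> { documentId, expiresAt }

// A team member releases a document: mint an unguessable, short-lived token.
app.post('/documents/:id/share', (req, res) => {
  const token = randomUUID(); // cryptographically strong random UUIDv4
  shareLinks.set(token, {
    documentId: req.params.id,
    expiresAt: Date.now() + 24 * 60 * 60 * 1000,
  });
  res.json({ url: `https://example.com/shared/${token}` });
});

// Anyone with the link gets a narrowly scoped view of that one document.
app.get('/shared/:token', (req, res) => {
  const entry = shareLinks.get(req.params.token);
  if (!entry || entry.expiresAt < Date.now()) {
    return res.status(404).send('Link unknown or expired');
  }
  res.send(`Document ${entry.documentId}`);
});

app.listen(3000);
```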
The App could have a private key hard-coded into it, my server could have the public key for it, and the App could sign everything. But then a hacker could identify the private key in the object code and write a malicious App that signs everything with that same key. Then that App could use my server.
The App could do a key exchange with my server but how does the server know the App is authentic when it does the key exchange?
In essence you cannot know.
The reason is simple: since anybody can get hold of the client and, by reverse engineering it (for which they have everything they need), learn everything the client is and knows, there is nothing to prevent them from answering any challenge you might set exactly the way the real app would.
You can make it harder on fake apps, though. But they could (if done right) give the correct answer anyway.
E.g. how to make it harder:
The server sends a challenge to the client app: calculate, e.g., the CRC32 (or MD5, SHA-1, SHA-256, ... it doesn't matter much which) of the app binary itself, from a given start offset to a given end offset. If you make those start and end points fully random for every challenge you send, you essentially force a fake app to carry the real app's compiled code in full. So you place on the attacker the burden of having the real app (not forcing them to actually run the unmodified code, just to have the actual unmodified code).
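A rough sketch of that challenge on the server side (the binary path is a placeholder; SHA-256 is used here, but as said, the exact hash doesn't matter much):

```js
const crypto = require('crypto');
const fs = require('fs');

// The server keeps a pristine copy of each released client binary.
const clientBinary = fs.readFileSync('./client-1.2.3.bin');

// Pick a fresh random byte range for every challenge.
function makeChallenge() {
  const start = crypto.randomInt(0, clientBinary.length - 1);
  const end = crypto.randomInt(start + 1, clientBinary.length + 1);
  return { start, end };
}

// What an authentic client should answer for that range.
function expectedAnswer({ start, end }) {
  return crypto.createHash('sha256')
    .update(clientBinary.subarray(start, end))
    .digest('hex');
}

// The client hashes the same range of its own executable and replies.
function isAnswerValid(challenge, answer) {
  const expected = Buffer.from(expectedAnswer(challenge), 'hex');
  const given = Buffer.from(answer, 'hex');
  return given.length === expected.length &&
    crypto.timingSafeEqual(expected, given);
}
```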
Take care that the server side needs to allow for multiple versions of the client etc., or you can't upgrade the clients anymore.
Anybody distributing a fake app would hence be forced to violate your copyright on the real app (and your lawyers would maybe have an easier case).
Alternatives:
To pick an alternative, you need to figure out why it is (so) important that it is your client that is being used.
If the client contains secrets: remove them, make the client display-only, and use a 3-tier model where you only let the user run the display part and keep all secrets on your servers.
If you get your revenue from selling an app, give it away for free and sell accounts on your server instead. Use authentication to do that: you can authenticate users (login & password, real two-factor authentication, ...), and you can also disallow dramatic changes of geolocation within a short time, disallow simultaneous logins, ...
But the price is the hoops for the user to jump through. And they might use other clients nonetheless.
If you allow logic (as e.g. used in online games) to run on the user's CPU, you can still keep oversight at a logical level on the server: e.g. if it takes 5 minutes at the very minimum to complete a task in the real client, and a client reports back as "achieved" before those 5 minutes are done, you have a cheater... Similarly, make sure all important assets are only handed out by the server; don't trust the clients.
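A tiny sketch of that kind of server-side sanity check (the 5-minute figure and the in-memory storage are illustrative):

```js
const MIN_TASK_MS = 5 * 60 * 1000; // real clients cannot finish faster than this
const taskStartedAt = new Map();   // playerId -> timestamp

function startTask(playerId) {
  taskStartedAt.set(playerId, Date.now());
}

function reportTaskDone(playerId) {
  const started = taskStartedAt.get(playerId);
  if (!started || Date.now() - started < MIN_TASK_MS) {
    throw new Error('Impossible completion time: treat as a cheater');
  }
  taskStartedAt.delete(playerId);
  // ...award the result server-side...
}
```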
I'm building a desktop application that connects to a web server and communicates through a socket-based API. I want to ensure the server only talks to my application, and not to any third-party hacker. Communication is encrypted over HTTPS. In addition, a private/public key pair is used for authentication: basically, the time, the private key, and the public key are hashed together, and that hash is sent to the server along with the current time and the public key.
I'm concerned that if others reverse engineer the application, they will discover the hashing function, connecting url, and private key, as normally strings are stored in clear text in compiled applications.
I have two thoughts to mitigate this:
Create a function that generates the application-specific private key using a series of mathematical operations
Create a complex (long) secret and then take some modulo of that secret to send to the server (like the Diffie–Hellman key exchange algorithm).
Am I on the right track? How do I keep the secret key secret?
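To make the current scheme concrete, it is roughly this (key values are placeholders, and the embedded key is exactly what I worry a reverse engineer would find):

```js
const crypto = require('crypto');

const PRIVATE_KEY = 'embedded-in-the-binary'; // the secret I want to protect
const PUBLIC_KEY = 'known-to-the-server';

function buildAuthRequest() {
  const time = Date.now().toString();
  const proof = crypto.createHash('sha256')
    .update(time + PRIVATE_KEY + PUBLIC_KEY)
    .digest('hex');
  // The server recomputes the hash from its own copy of the keys and compares.
  return { time, publicKey: PUBLIC_KEY, proof };
}
```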
Encryption is not the correct solution. No matter how well you hide the implementation, a determined attacker with a sufficient amount of time can reverse-engineer it.
At the very least, an attacker can determine where the encryption/hashing is done and dump the memory of the process right before that to examine the secrets in plaintext.
Your best bet would be to a) obfuscate the code and add anti-debugging defenses (not perfect, but it will discourage script kiddies and slow down determined attackers) and b) harden as much as you can server-side.
Basically, you can never rely on the client because you don't control it. Your best bet is to make sure any critical processing is done server-side so a custom client can't do anything malicious.
For example, if you were making a multiplayer chess game, you'd want the client to just submit basic actions (a move) and you'd track board state on the server. It doesn't matter if the client is hacked because if an illegal action is submitted, you just return an error.
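A bare-bones sketch of that split (isLegalMove and applyMove stand in for a real rules engine such as chess.js; the storage is illustrative):

```js
const games = new Map(); // gameId -> { board, turn }

// The client only names a move; the server owns the board state.
function submitMove(gameId, playerColor, move) {
  const game = games.get(gameId);
  if (!game) throw new Error('Unknown game');
  if (game.turn !== playerColor) throw new Error('Not your turn');
  if (!isLegalMove(game.board, move)) {
    throw new Error('Illegal move'); // a hacked client gains nothing here
  }
  game.board = applyMove(game.board, move);
  game.turn = playerColor === 'white' ? 'black' : 'white';
  return game;
}
```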
Now that there are a couple of neat canvas demos of both classic platformers and even 3D FPS games in HTML5, the next step might be to try developing a multiplayer HTML5 game. HTML5 socket support makes this relatively straightforward, but with the client-side source being viewable by anyone in the browser, what are some solutions for basic game-security features for an HTML5-frontend multiuser game -- such as being able to prevent a faked high-score submit?
The simple answer is: You can't trust the data from client, which means that the high score submit can't come from the client.
Since the client code is available for anyone to inspect, there's no way of trusting the data that the client sends to your server. Even if you encrypt the data with a per-user encryption key (which is possible), the user can simply alter your code within the browser and change the values it's sending to the server.
Since your game is multiplayer, this is possible IF the server generates all the scoring events. In that case the client never sends score data to the server, which means the high-score data can't be faked.
You'll still have to deal with cheating, which is even more challenging, but that's another issue...
Adding on to what Larry said, you're definitely going to have to handle the scoring on the backend to really prevent cheating/fake score posting.
For an example of this in practice... The game Word Wars is a boggle-esque game where you find as many words as you can from a 4x4 grid of letters.
At the start of each game, a 4x4 board is generated server-side. A list of possible words for that board is generated, and a hashed version (MD5'd with a random salt) of each word, along with the salt, is passed to the client.
On the client side, when the letters are typed and the enter key is pressed, we md5 (with the salt from the server) the word that was entered and check that against the list of hashed words provided by the server. If it's a match, we update the client with the new score (there's a function based on letters used and their point values).
Once the game is over, the client sends the list of words they came up with to the server (NOT the score), and the server double-checks that those words existed in the board, and handles the scoring.
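Roughly, the flow looks like this (generateBoard and wordsForBoard stand in for the real board generator and dictionary lookup):

```js
const crypto = require('crypto');

// Server side: build the board, pre-hash every valid word with a random salt,
// and send only the board, the salt, and the hashes to the client.
function newGame() {
  const board = generateBoard(); // 4x4 letter grid
  const salt = crypto.randomBytes(16).toString('hex');
  const hashedWords = wordsForBoard(board).map(word =>
    crypto.createHash('md5').update(salt + word).digest('hex')
  );
  return { board, salt, hashedWords };
}

// Client side: hash the typed word with the same salt and check membership,
// updating the on-screen score for display purposes only.
function isValidGuess(word, salt, hashedWords) {
  const hash = crypto.createHash('md5').update(salt + word).digest('hex');
  return hashedWords.includes(hash);
}

// Game over: the client sends its word list (not the score); the server
// re-validates the words against the board and computes the real score.
```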
This is where Clay.io, the company I'm working at, comes in. Clay.io offers an API for high-level HTML5 game features like leaderboards, achievements, payment processing, etc. Needless to say, we needed a solution for games that have a backend to make certain things, like high scores, more secure.
The solution was to encrypt JavaScript objects on the backend (node.js, php, whatever) using JWT (JSON Web Token), and pass that encrypted object rather than the score itself. This lets us communicate both ways (game -> Clay.io and Clay.io -> game), and is pretty painless to do. The full docs on this are here: clay.io/docs/encryption (max links hit on this answer)
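With a library like jsonwebtoken, the server-side part is only a couple of lines (a standard JWT is signed rather than encrypted, but it gives exactly the tamper-evidence described here; the payload fields and secret name are placeholders):

```js
const jwt = require('jsonwebtoken');

// Sign the score on the server so the receiving service can verify it was
// not forged or altered in the browser.
const token = jwt.sign(
  { playerId: 'player-123', score: 4200 },
  process.env.SHARED_SECRET,     // shared with the verifying service
  { expiresIn: '5m' }
);
// The game then submits `token` instead of a raw score value.
```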
Back to Word Wars... from the server we generate that JWT with the user's score and pass that on to Clay.io to post the score. Voila :)
Of course, this will differ as the type of game you're developing differs, but the moral of the story is you have to get creative :)
I wrote a blog post that covers HTML5 game security in greater detail. Part 3 of a series on HTML5 Game Development Tips.
I'm writing a system where, as usual, the client is asking for a convenience "remember your credit card details" option.
I've told them that this is in all likelihood a no-go. However, I did have a good idea (tm) just now, and seeing that Good Ideas in Encryption(tm) are actually Bad Ideas (tm), I thought I'd put it up for review here and see what holes can be punched through it.
Essentially, I'm thinking of xor'ing the credit card information plus some message signature using a one time pad that's generated per client. This pad is stored as a cookie variable on the client's browser.
Next time that user tries to place a purchase, the pad is sent to the server, and if the server can properly decode its encrypted data, it shows the credit card information as already being filled. (The cc info isn't actually transmitted back). The server will never store the pad in anything more than memory or page file. In fact, I intend to have the pad be sent twice: once upon arrival on the CC page (where the server checks if it should ask for CC information), and once on CC submission to get the actual information.
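In code, the core of what I'm proposing is roughly this (not claiming it's sound, that's what I'm asking):

```js
const crypto = require('crypto');

// Per-client pad, generated once and stored in the client's cookie.
function makePad(length) {
  return crypto.randomBytes(length);
}

function xorBuffers(a, b) {
  return Buffer.from(a.map((byte, i) => byte ^ b[i]));
}

const cardDetails = Buffer.from('4111111111111111|12/27|123');
const pad = makePad(cardDetails.length);          // sent to the browser as a cookie
const ciphertext = xorBuffers(cardDetails, pad);  // kept on the server

// Next visit: the cookie comes back and the server recovers the details.
const recovered = xorBuffers(ciphertext, pad).toString();
```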
The user will also be instructed that their information is "partially stored" in their cookie cache, meaning that they will expect that if their cookies are flushed, their CC information is lost.
Let me know where you think this scheme is horribly failing.
Sounds sketchy, and I'm pretty sure you're misusing the term "one time pad."
Instead of going this route, look into using a service like Authorize.net's Customer Information Management. Basically, you give them the card info, and they give you back an ID that you can use to charge the card. The ID is linked to the website's merchant account, and can't be used to charge the card with any other merchant.
It's much, much safer, and should get you the same results.
Note: I'm not endorsing Auth.net or its CIM. It's just the example I'm most familiar with.
Storing the pad client-side leaves it vulnerable to XSS, I would think.
Technologically: flawed.
Legally: probably flawed. Talk to a lawyer.
A one-time pad only works if the pad is kept securely secret. Storing it in a cookie definitely doesn't count as secure or secret (it's sent to and from the server, and it's dropped onto the user's machine, which might be a public terminal or shared machine). This is a really bad idea: clever, but ultimately very flawed. I suggest you read the PCI compliance documentation and do what other people do, which is (generally speaking):
Don't do it.
Use a payment processor that will securely store the CC and handle billing (e.g. PayPal).
Set up a separate and strongly secured payment gateway; this machine only processes credit card transactions, and it in turn accesses a secured machine that stores the credit card data.
Remember that storing credit card numbers will basically violate PCI and will probably violate any merchant agreements and might even be illegal in your jurisdiction (privacy laws, etc.), consult a lawyer please.
Don't do it. Seriously. Find a payment processor who will handle this for you.
If the credit card is being stored client side then you're storing it with the key which means it's vulnerable.
If you are storing the credit card server-side, then you don't need a copy of the encryption key stored on the client.
It sounds like a very dangerous situation if what you are describing is a case where the user is not only not given the option of whether they want to store their details, but will also have them re-populated without having to authenticate in any way. I'd be pretty happy if I came along to an internet cafe and got the credit card detail fields pre-populated for me!