We have a specific scenario where our client (our website) holds an encrypted string (consider it an encrypted token). When the client makes a request to the server, it includes the encrypted token in the request. (The servers are also operated by us.) The server then decrypts the token and carries out the action the token describes.
Is it good practice to decrypt the token every time a request is made, or will this decryption become a heavy job on the server side? Consider that it is done on every request and that the client's requests are frequent.
Details: We're using Node on the server end and we'll be using AES-256 encryption/decryption.
First, consider what the token is actually for: is it a credential, or does it just carry information? If it is used as a credential, a hash algorithm is more appropriate (don't forget to add a salt). If you just want to carry some information, then a symmetric encryption algorithm is fine; AES is fast (faster than DES), and decrypting a short token on every request is a cheap operation.
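To give a feel for the cost, here is a rough sketch in Python (the question is about Node, but the cost profile is comparable). It assumes the third-party cryptography package and uses AES-256-GCM, which also authenticates the token; the token contents are made up for illustration.

```python
# Sketch: decrypting a small token on every request with AES-256-GCM.
# Assumes the "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # server-side secret key
aesgcm = AESGCM(key)

def encrypt_token(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per encryption
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_token(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

token = encrypt_token(b"user=42;scope=read")
# Decrypting a token of this size takes on the order of microseconds,
# so doing it once per request is negligible next to network I/O.
assert decrypt_token(token) == b"user=42;scope=read"
```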
I'm working on implementing secure CSRF tokens in my Node.js backend and React frontend app, which uses express-sessions. I have created a module to generate, validate, and store CSRF tokens in Redis, with some extra security measures such as a separate secret per token (for BREACH protection), per-feature tokens, and support for multiple tabs of the same feature (hence the token IDs and keys).
I have read that CSRF tokens are encrypted with a key only the server knows, so that when the browser sends the token back, the server can validate it using that secret key. My question is: why are they encrypted?
To my understanding, if an attacker somehow manages to steal the encrypted token, the encryption is useless: when they submit a request with the token, the server will accept it, since it is still the same valid encrypted token. If that is the case, wouldn't it be more performant to just store the token on the server as well and check whether the one the client submits matches? (Accounting for timing attacks in the comparison, of course.)
Thank you
CSRF tokens are not normally encrypted. In a textbook implementation using, for instance, the synchronizer token pattern in a normal web app, a CSRF token is just a sufficiently large random value, stored on the server and also given to the client upon form generation. The client can then send it back with the form, proving that the form it is submitting was actually generated by the server, and not by somebody else. (Even with other patterns such as double posting, where the token is sent as something like a header field and a cookie, the base case is still just a random token.)
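As a minimal illustration of that base case (a Python sketch using only the standard library; the dict stands in for whatever session store is actually used):

```python
# Sketch of the synchronizer token pattern with purely random tokens.
import hmac
import secrets

sessions: dict[str, str] = {}   # session_id -> CSRF token (stand-in for real session storage)

def issue_csrf_token(session_id: str) -> str:
    token = secrets.token_urlsafe(32)       # sufficiently large random value
    sessions[session_id] = token            # stored server-side
    return token                            # also embedded in the generated form

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    expected = sessions.get(session_id, "")
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(expected, submitted)
```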
However, there are two things to note.
The synchronizer token pattern (a classic CSRF token) needs a server-side lookup. This is not a big deal if there is a user session anyway, but that's not always the case: some applications are designed to be stateless. In that case you can't just use a random token, because you can't decide whether it's valid without checking server-side state.
The other thing to note is that you can increase security even further if the token contains some information about the client. For example, a stolen CSRF token is less useful if it is tied to the client in some way (say, associated with the current client IP address in a naive implementation). Again, you could store this additional information server-side, but that is state again, which some applications want to avoid to make things like load balancing easier.
So it comes down to stateless CSRF tokens, ones that you can just check as they are, without state (database) lookups on the backend.
What you can do (and what some frameworks do for you) is create a structured token with some data embedded in it, encrypted with a key known only to the server. The server then sends this as the CSRF token and expects to receive it back on state-changing requests. When it receives the token back, the server doesn't need a database lookup: it can just decrypt the token and check that it is a valid one that the server itself created.
Note that purely for this purpose you don't actually need encryption; the more suitable crypto primitive would be a message authentication code, because you only care about the authenticity of the token, i.e. that the server itself created it and not somebody else. However, the data some frameworks include in the token is often further protected by encryption (with implicit message authentication provided by a suitable authenticated encryption algorithm). In a very basic implementation, you could just include a timestamp and a user ID protected by an HMAC as a stateless CSRF token (including more information, perhaps even about the generated form fields, would further increase security).
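A very rough sketch of such a stateless token (Python standard library only, purely illustrative; real frameworks add more fields and often encrypt the payload as described above):

```python
# Sketch: stateless CSRF token = payload + HMAC over the payload,
# verifiable without any server-side lookup.
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-only-secret"          # known only to the server
MAX_AGE = 3600                              # token lifetime in seconds

def make_token(user_id: str) -> str:
    payload = base64.urlsafe_b64encode(f"{user_id}:{int(time.time())}".encode()).decode()
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{mac}"

def verify_token(token: str, user_id: str) -> bool:
    try:
        payload, mac = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return False                        # not created by this server
    decoded_user, timestamp = base64.urlsafe_b64decode(payload).decode().rsplit(":", 1)
    return decoded_user == user_id and time.time() - int(timestamp) < MAX_AGE
```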
So in short, unencrypted random tokens are considered sufficient for CSRF, and in case of double posting, they can also be stateless (because of the same origin policy, an attacker cannot post the same random token to a different origin both as a cookie and as a header). But an encrypted, more information rich token can provide more security if that's needed, potentially even somewhat mitigating the stolen CSRF token threat too.
I am making a little script in Python in which a client has to authenticate to the server. The idea is that an attacker listening to the network cannot authenticate himself without knowing the password.
I know this goes against best practice, but I am trying to make my own secure authentication (it is only for personal use).
In my current algorithm, the client and the server share:
the password that authenticates the client
an encryption key
the encryption algorithm (AES with pycrypto)
It works as follows:
The server generates a token
The server encrypts the token
The encrypted token is sent to the client
The client decrypts the token
The client encrypts the set (password + token)
The encrypted set (password + token) is sent to the server
The server decrypts (password + token)
If the received information corresponds to the shared password and the token sent by the server, then the client is successfully authenticated.
In this algorithm, the client and the server share two secrets: the password and the encryption key.
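For concreteness, a rough sketch of this first scheme might look like the following (using AES-GCM from the cryptography package rather than pycrypto; names and values are just placeholders):

```python
# Sketch of the first scheme, purely to make the steps concrete.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=256)   # shared encryption key
PASSWORD = b"shared-password"               # shared password

def encrypt(data: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(KEY).encrypt(nonce, data, None)

def decrypt(blob: bytes) -> bytes:
    return AESGCM(KEY).decrypt(blob[:12], blob[12:], None)

# Server: generate a token, encrypt it and send it to the client.
token = os.urandom(16)
challenge = encrypt(token)

# Client: decrypt the token, encrypt (password + token) and send it back.
response = encrypt(PASSWORD + decrypt(challenge))

# Server: decrypt the response and check both the password and the token.
authenticated = decrypt(response) == PASSWORD + token
```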
I am wondering if it would be secure to do it like this:
The server generates a token
The server sends the token to the client (in plain text)
The client encrypts the token, and returns it to the server
If the decrypted token is correct, the client is successfully authenticated.
In this case, the server and the client share only one secret (the encryption key). From my (limited) knowledge of AES, I think that an attacker should not be able to derive the key from the token and the encrypted token, nor to produce the encrypted token without knowing the key.
So my questions are: do you see any flaws in my algorithms? Is the second as secure as the first?
Thanks for your help
I am not a crypto expert (shout out to https://crypto.stackexchange.com), but AES is meant to ensure confidentiality, and your method does not prevent replay. In other words, I can't read the contents of the token, but I can intercept your message and send the same one to the server to "authenticate" myself, right? (https://en.wikipedia.org/wiki/Replay_attack) Additionally, someone in the middle could modify your message and potentially cause problems, since again, AES ensures confidentiality, but not the integrity of the message. Aside from those core issues, there are subtle implementation mistakes that are very difficult for you (and me) to detect, but possible for attackers to sniff out.
Perhaps when combined with an HMAC you can overcome these weaknesses... but I would have to encourage you not to "roll your own" crypto scheme; perhaps all you need is HTTPS to secure the communication between the two devices (and a pre-shared token/key/password to prove identity). If you do decide to continue down this route, I would also encourage you to do significant research and have a security expert review your code/implementation before using it in any sort of production environment. If this is just for fun/research, that's another story.
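If you do keep experimenting, a challenge-response built on an HMAC (rather than on encrypting the token) is roughly what I had in mind; a standard-library Python sketch, with a made-up pre-shared key:

```python
# Sketch: challenge-response with HMAC-SHA256.
# The shared key never crosses the wire, and each challenge is fresh,
# which blocks straightforward replay of an old response.
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-shared-secret"           # known only to client and server

# Server: generate a fresh, single-use challenge per authentication attempt.
challenge = secrets.token_bytes(32)

# Client: prove knowledge of the key by returning HMAC(key, challenge).
client_proof = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

# Server: recompute the expected proof and compare in constant time.
expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
is_authenticated = hmac.compare_digest(expected, client_proof)
```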
I'm new to cryptography and I'm trying to protect against man-in-the-middle attacks in a web service I'm developing. The way the web service works is that a user registers on the service using his email address and password and creates an application. Each application is given an application id and an application key. The application id is public (that's how the public communicates with that application) but the application key is private. The user credits his application by loading a pin (a 16-digit numeric string). Loading the pin is done via an HTTP GET request.
Now here is my question: how can the user do an HTTP GET request with his application id (the way the server identifies the application) and his application key (the way the server authenticates him) without compromising his application key?
Because our server has SSL (and I read that SSL protects against man-in-the-middle attacks), I was thinking about simply having users submit their application id and application key as parameters in the GET request, but after reading around, I decided this may not be secure. This is also because, after doing the HTTP GET request to load the pin, the user may configure his account so that we submit the server response via another HTTP GET request to a URL of his choice. And since we want to echo back his application id and application key so he can verify that the request really came from us, I was worried his key might be compromised.
So I decided we should have the user compute an MD5 hash of his app id and app key to produce a hash parameter, and submit that instead of his app key in the GET request. Then on our server, since we already know the user's app id and app key, we can simply compute the same MD5 hash and compare it with the hash parameter the user submitted. But then I also thought that may be insecure, because if someone intercepts the hash parameter, the attacker can use that same hash parameter to submit several requests, since the app id and app key are static. So in the long run, the hash parameter is no different from the app key.
Now I'm thinking we should have the user compute an MD5 hash of his app id, his app key and the pin he wants to load to get the hash parameter. This way, since the pin is different each time, even if an attacker intercepts a request, the authentication process would not be compromised for other requests, because the attacker would not be able to reuse that hash with other requests.
For example, if a user has the following credentials:
1. app_id: 1234
2. app_key: bghuTHY678KIjs78
And a user wants to load the pin: 1234567890123456
He generates the hash by computing an MD5 hash of "1234:bghuTHY678KIjs78:1234567890123456". That gives him 210a4c92d85473af9d5f48b4ee182ddd. Then he does an HTTP GET request to the address below:
https://example.com/process?app_id=1234&pin=1234567890123456&hash=210a4c92d85473af9d5f48b4ee182ddd
Is this method secure? Or should I simply just have the users submit their app id and app key in the HTTP GET request since we have SSL?
The user secret should never be sent over the network. Instead, ask the user to sign his requests using his secret. HMAC is the relevant algorithm.
By the way, MD5 is obsolete and insecure for all crypto needs.
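For example (a Python sketch with placeholder values; the exact message format is up to you), the client signs the request parameters with the app key and the server recomputes the same signature:

```python
# Sketch: signing request parameters with HMAC-SHA256 instead of an MD5 hash.
import hashlib
import hmac

APP_KEY = b"bghuTHY678KIjs78"               # never sent over the network

def sign(app_id: str, pin: str) -> str:
    # In practice, also include a timestamp or nonce in the signed message
    # so that an intercepted signature cannot be replayed later.
    message = f"{app_id}:{pin}".encode()
    return hmac.new(APP_KEY, message, hashlib.sha256).hexdigest()

# Client: https://example.com/process?app_id=1234&pin=...&sig=<signature>
signature = sign("1234", "1234567890123456")

# Server: look up the app key by app_id, recompute, compare in constant time.
valid = hmac.compare_digest(signature, sign("1234", "1234567890123456"))
```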
Use Secure Remote Password (SRP6a) and register a password verifier and salt for the 16-digit pin. The pin itself is never sent to the server (you can store it in browser local storage for the user's convenience). Then authenticate the client using SRP6a, which results in a strong shared secret session key for each successful authentication. Then use HMAC-SHA256 to sign API calls with the session key. See the thinbus-srp JavaScript library and its demos of using SRP6a to authenticate and derive a session key. See the JWS "HS256" (HMAC with SHA-256, 256+ bit secret) algorithm and any library implementing it as an example of signing a web API with a shared secret key.
The SRP6a authentication protocol is a secure zero-knowledge password proof where the server does not know the password. The server issues a random challenge to the client, which generates a password proof based on the challenge. The server uses the verifier the client registered for their password to check the password proof. If the 16-digit pin also uses uppercase letters, like a standard software license key, it is infeasible to run a dictionary attack on the verifier. Use the modern browser WebCrypto secure random number generator to generate the pin in the browser. Even you won't be able to obtain the password.
The overhead of using SRP6a to authenticate is that the client first needs to fetch the server challenge. The benefit for your use case is that if the client provides a good password proof based on the challenge, then both the client and the server share a secure session key. No one intercepting the traffic can learn the session key. With that shared secret you can sign every API call and verify the signature at the server. No one intercepting any part of any exchange between you and the client, end to end from registration through to usage, can gain any advantage.
I'm implementing a REST web service using C# which will be hosted on Azure as a cloud service. Since it is a REST service, it is stateless and therefore no cookies or session states.
The web service can only be accessed over HTTPS (Certificate provided by StartSSL.com).
Upon a user successfully logging into the service they will get a security token. This token will provide authentication in future communications.
The token will contain a timestamp, userid and ip address of the client.
All communication will only happen over HTTPS so I'm not concerned about the token being intercepted and used in replay attacks; the token will have an expiry anyway.
Since this is a public-facing service, I am however concerned that someone could register with the service, log in, and then modify the token that they receive to access the accounts of other users.
I'm wondering how best to secure the content of the token and also verify that it hasn't been tampered with.
I plan on doing the following to secure the token:
The client successfully logs into the service and the service does:
Generate a random value and hash it with SHA256 1000 times.
Generate a one-time session key from private key + hashed random value.
Hash the session key with SHA256 1000 times and then use it to encrypt the token
Use private key to sign the encrypted token using RSA.
Send the encrypted token + the signature + the hashed random value to the client in an unencrypted JSON package.
When the client calls a service, it sends the encrypted token and signature in an unencrypted JSON package to the service. The service will:
Recreate the session key from the private key + the hashed random value
Use the private key to verify the signature
Use the hashed session key to decrypt the token
Check that the token hasn't expired
Continue with the requested operation...
I don't really know anything about encryption so I have some questions:
Is this sufficient or is it overkill?
I read that to detect tampering I should include an HMAC with the token. Since I am signing with the private key, do I still need an HMAC?
Should I be using Rijndael instead of RSA?
If Rijndael is preferred, is the generated IV required for decryption? I.e. can I throw it away, or do I need to send it with the encrypted token? E.g. encrypted token + HMAC + IV + hashed random value.
Since all communication happens over HTTPS the unencrypted JSON package isn't really unencrypted until it reaches the client.
Also I may want to re-implement the service in PHP later so this all needs to be doable in PHP as well.
Thanks for your help
You are really over-thinking the token. Truthfully, the best token security relies on randomness, or more accurately on unpredictability. The best tokens are completely random. You are right that a concern is that a user will modify his/her token and use it to access the accounts of others. This is a common attack known as "session stealing." This attack is nearly impossible when the tokens are randomly generated and expired on the server side. Using the user's information, such as IP and/or a timestamp, is bad practice because it increases predictability. I did an attack in college that successfully guessed active tokens that were based on server timestamps in microseconds. The author of the application thought microseconds would change fast enough to be unpredictable, but that was not the case.
You should be aware that when users are behind proxy servers, the proxy will sometimes view their SSL requests in plain text (for security reasons, many proxies will perform deep packet inspection). For this reason it is good that you expire the sessions. If you didn't, your users would be vulnerable to attacks such as this, and possibly also XSS and CSRF.
RSA or Rijndael should be plenty sufficient, provided a reasonable key length. Also, you should use an HMAC with the token to prevent tampering, even if you're signing it. In theory it would be redundant, since you're signing with a private key. However, HMAC is very well tested, and your implementation of the signing mechanism could be flawed. For that reason it is better to use HMAC. You'd be surprised how many "roll your own" security implementations have flaws that lead to compromise.
You sound pretty savvy on security. Keep up the good work! We need more security conscious devs in this world.
EDIT:
It is considered safe to include timestamps/user IDs in the token as long as they are encrypted with a strong symmetric cipher (AES, Blowfish, etc.) under a secret key that only the server has, and as long as the token includes a tamper-proof MAC such as an HMAC, encrypted along with the user ID/timestamp. The MAC guarantees integrity, and the encryption guarantees confidentiality.
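For example (a Python sketch using the cryptography package; the field names are just an illustration), an authenticated-encryption mode such as AES-GCM gives you the encryption and the integrity check in one step, so a tampered token simply fails to decrypt:

```python
# Sketch: an encrypted, tamper-evident token using AES-256-GCM
# (authenticated encryption: confidentiality + integrity in one pass).
import json
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SERVER_KEY = AESGCM.generate_key(bit_length=256)    # kept server-side only

def issue_token(user_id: str) -> bytes:
    payload = json.dumps({"uid": user_id, "iat": int(time.time())}).encode()
    nonce = os.urandom(12)
    return nonce + AESGCM(SERVER_KEY).encrypt(nonce, payload, None)

def read_token(token: bytes) -> dict:
    # Raises cryptography.exceptions.InvalidTag if the token was modified.
    payload = AESGCM(SERVER_KEY).decrypt(token[:12], token[12:], None)
    return json.loads(payload)
```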
If you don't include the HMAC (or other hash) in the encryption, then it is possible for users to tamper with the encrypted token and have it decrypt to something valid. I did an attack on a server in which the User ID and time stamp were encrypted and used as a token without a hash. By changing one random character in the string, I was able to change my user ID from something like 58762 to 58531. While I couldn't pick the "new" user ID, I was able to access someone else's account (this was in academia, as part of a course).
An alternative to this is to use a completely random token value, and map it on the server side to the stored User ID/time stamp (which stays on the server side and is thus outside of the clients control). This takes a little more memory and processing power, but is more secure. This is a decision you'll have to make on a case by case basis.
As for reusing/deriving keys from the IV and other keys, this is usually OK, provided that the keys are only valid for a short period of time. Mathematically it is unlikely that someone can break them, though it is possible. If you want to go the paranoid route (which I usually do), generate all new keys randomly.
I'm implementing the provider side of a two-legged OAuth protocol for API authentication. We will provide the consumer with a consumer key and secret, which they will use to sign requests. The 2-legged OAuth is dictated by an interoperability standard, and thus a requirement.
The secret is sort of akin to a password, and I would never normally store a password as plain text (bCrypt or similar would be my normal choice). But because my provider needs access to the plain-text secret to verify the signature, it has to be either in some plain-text or reversible form.
I've considered the following options:
Store the secret as plain text
It's the most obvious solution, but if the database is compromised somehow, then all of the secrets will have to be changed. To me this solution is not ideal because it has all of the problems of storing a password in plain-text.
Apply reversible encryption (e.g. AES) with an encryption key stored elsewhere
This will provide some security, because if the database is compromised then the secrets will still be safe. But reversible encryption requires an encryption key, and the key has to be stored on the server. It means that if an attacker compromises the machine, then the encryption can be circumvented.
Is there something I haven't thought of?
Clarification: Effectively it's using 2-legged OAuth as a single sign-on system. The 'consumer' creates a request including the consumer key, a nonce, and several other parameters. The whole request is then signed by computing an HMAC-SHA1 with the consumer secret. When the request reaches our provider system, the process is repeated, and if the signatures match then the request processing continues. We therefore need the plain-text secret to compute the HMAC-SHA1 signature on our side too. Unfortunately this mechanism is dictated by the industry-standard protocol that we need to comply with.
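In code terms, the verification on our side is essentially the following (a Python sketch; the real implementation signs the OAuth 1.0 signature base string, and the key construction here is only indicative), which is why a one-way, bcrypt-style hash of the secret would not work:

```python
# Sketch: to verify an HMAC-SHA1 signature, the provider must recompute it
# with the very same secret the consumer used, so the secret must be
# recoverable server-side (plain text or reversibly encrypted).
import hashlib
import hmac

def oauth_signature(consumer_secret: str, base_string: str) -> bytes:
    key = f"{consumer_secret}&".encode()    # two-legged: empty token secret
    return hmac.new(key, base_string.encode(), hashlib.sha1).digest()

def verify(consumer_secret: str, base_string: str, received_sig: bytes) -> bool:
    expected = oauth_signature(consumer_secret, base_string)
    return hmac.compare_digest(expected, received_sig)
```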
Take a look at this previous question. I'm not an expert on the topic, but I think you're missing part of the equation. In addition to the consumer key and secret, you'll be verifying the application that's sending the request (using an x509 certificate if you're using RSA-SHA1).
Are you sure that the provider needs the plain text password?
If this is the case then you simply can't have a 'consumer secret'. As soon as the consumer discloses this secret to someone else (including you), it ceases to be a secret.
Maybe if you explained more of what you are trying to do, we could come up with a more elegant approach.