Safe to cache unencrypted API keys in Google Cloud Functions? - node.js

Our Google Cloud Function has access to an encrypted API key, which it can decrypt by calling an external service. Once the API key is decrypted, is it then safe to cache it as a global variable, so that when a Cloud Function instance is reused the decrypted value can be used instead of contacting the decryption service again?
EDIT:
Our thinking is that the function will use a decrypted version of the API key when running anyway (i.e. hold it in memory for use), and that its cache, I believe, is in memory and per function. To the best of my knowledge that would make caching the decrypted API key per function no less safe than fetching and decrypting it on every function invocation?
'Safe' was a bad word choice - there is no such thing as safe, everything is, to an extent, a balancing act.

Statistically speaking, the longer you hold sensitive information in memory, the easier it is for a bad actor to get hold of it. But you can never really eliminate the chance of this happening. The real issue is how a bad actor could get into Cloud Functions at all. The moment that becomes possible, you've got a problem. It can happen by pulling untrusted third-party code into your deployment, by someone getting hold of your project's admin credentials, or by a lapse in security at Google.
But if you assume that there is no possibility of a bad actor entering the system, it doesn't really matter how long you hold on to something in memory, since you trust every bit of code that could access it (and of course Google for providing that memory).
The memory isn't held strictly "per function". It would be held "per function per instance". Depending on the load, you could have many server instances all decrypting and holding sensitive information. But the code running on the instance would only be triggered from the one function and never others.
Caching API keys in memory in this manner does make changing them a bit more complex if you ever need to rotate them quickly, e.g. due to a leak. One way around this is to also store a timestamp in a global variable and invalidate the key after a set amount of time has passed; another is to restart all of the functions so the memory cache is cleared and fresh versions of the API keys are fetched, which happens automatically when you push a new version of the function to Google Cloud Functions.
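As a rough illustration of that approach, here is a minimal sketch of a Node.js Cloud Function that caches the decrypted key in a global with a timestamp. The decryptViaService() helper and the one-hour lifetime are assumptions standing in for your actual decryption service and whatever rotation policy you settle on.

    const MAX_KEY_AGE_MS = 60 * 60 * 1000; // invalidate the cached key after 1 hour

    let cachedApiKey = null;
    let cachedAt = 0;

    // Stand-in for the call to the external decryption service.
    async function decryptViaService(ciphertext) {
      throw new Error('replace with the real decryption call');
    }

    async function getApiKey() {
      const now = Date.now();
      if (cachedApiKey && now - cachedAt < MAX_KEY_AGE_MS) {
        return cachedApiKey; // warm instance: reuse the in-memory copy
      }
      cachedApiKey = await decryptViaService(process.env.ENCRYPTED_API_KEY);
      cachedAt = now;
      return cachedApiKey;
    }

    exports.handler = async (req, res) => {
      const apiKey = await getApiKey();
      // ...call the downstream API with apiKey...
      res.status(200).send('ok');
    };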

Related

How to prevent snooping by user of Mac app?

I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices, so it is not possible to reliably prevent them from proxying or inspecting what your client-side app does. Obfuscation might seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can observe it there, if nowhere else then in the network packets (and usually much more easily than that).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
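For what it's worth, a hypothetical minimal version of such a backend proxy might look like the sketch below (Express, Node 18+ for the global fetch). The endpoint name and UPSTREAM_URL are made up for illustration; the point is only that the real URL and credentials live on the server, never in the Electron client.

    const express = require('express');
    const app = express();
    app.use(express.json());

    // The real URL (and any cookies/tokens) never ships to the Electron client.
    const UPSTREAM_URL = 'https://third-party.example.com/api';

    app.post('/proxy/search', async (req, res) => {
      // The client only ever sees /proxy/search.
      const upstream = await fetch(`${UPSTREAM_URL}/search`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(req.body),
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(3000);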
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective only against wannabes, I'm afraid - is to first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or properly fails if the server never existed. Anything else would be a telltale sign that someone has taken over the network layer, and at that point you could connect to a different server, make different calls, and lament that the server isn't answering properly.
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time and otherwise stored in a different form - you can, for example, take a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it out of any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
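A throwaway sketch of that XOR trick in Node.js, purely illustrative (the URL is made up); this is obfuscation, not real protection:

    const crypto = require('crypto');

    // Mask a string with a random one-time pad of the same length.
    function maskString(plain) {
      const data = Buffer.from(plain, 'utf8');
      const pad = crypto.randomBytes(data.length);
      const masked = Buffer.alloc(data.length);
      for (let i = 0; i < data.length; i++) masked[i] = data[i] ^ pad[i];
      return { pad, masked }; // store these two, not the plaintext
    }

    // Reconstruct the plaintext only at the moment it is needed.
    function unmaskString({ pad, masked }) {
      const out = Buffer.alloc(masked.length);
      for (let i = 0; i < masked.length; i++) out[i] = masked[i] ^ pad[i];
      return out.toString('utf8');
    }

    const secret = maskString('https://real-service.example.com/endpoint');
    const url = unmaskString(secret); // just before making the request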
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.

JWT with Node & Passport: Restarting server

I am new to Node and trying to setup Node & Passport to create JWTs upon authentication.
I am hoping to build a "stateless authentication mechanism" to reduce the need to go back and forth to the database.
By going "stateless", if none of the shared secrets or JWTs are saved in the DB, I am assuming that if the server restarts, all the issued JWTs (logged-in users) are invalidated, thereby requiring every user to obtain a new JWT to access protected routes. I do not want users to have to log back in each time the server restarts or a new instance is spun up.
I believe I can pass static shared secret(s) into the Node environment and use them each time to sign the JWTs, so that server restarts don't affect the tokens.
Questions:
If passing in the shared secrets is good practice, where and how should I create them, and which shared secret(s) will I have to pass in?
However, if passing shared secret(s) into the Node environment is not a good strategy, I am all ears for suggestions.
Update
I meant shared secrets when I said "key(s)". I'll update the question so it's not confusing.
Actually, passing the keys as environment variables is the recommended way for this kind of application. The environment is only visible to the running application, which reduces the chances of leaking the keys (compared to something like a config file shipped with the rest of the application code).
Normally you don't rotate the keys that often; it's usual to rotate them once a month, assuming you control your environment.
But keep in mind that the key is only used to prove that the token was signed by you; it's normally good practice to include only a tiny bit of information in the token (for performance reasons). So you still need to go to the database to retrieve extra information about the user. You can put all the user information inside the token, but remember that the token is sent with every request, and that adds overhead.
If you use a process manager like supervisord you can set the environment variables there and give the appropriate permissions on the config file to avoid key leakage.
I normally use environment variables to pass that kind of information to my Node applications; I use them for JWT secrets, AWS keys, SMTP credentials, etc. It keeps your code decoupled and avoids mistakes like pushing private keys to a public version control system like GitHub.
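As a concrete sketch of that setup, here is roughly how a secret supplied via the environment could be used with the jsonwebtoken package; the variable names and the one-hour expiry are assumptions:

    const jwt = require('jsonwebtoken');

    // Secret supplied from outside the codebase, e.g. via supervisord or the shell.
    const secret = process.env.JWT_SECRET;
    if (!secret) throw new Error('JWT_SECRET is not set');

    // Keep the payload small; look up the rest of the user's data per request.
    function issueToken(userId) {
      return jwt.sign({ sub: userId }, secret, { expiresIn: '1h' });
    }

    function verifyToken(token) {
      return jwt.verify(token, secret); // throws if the signature or expiry is invalid
    }

Because the secret lives outside the code, restarting the server or spinning up new instances does not invalidate previously issued tokens, which is exactly the behaviour asked about above.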

How to manage setting and storing a PIN in local storage in javascript-based application (phonegap)

I'm building a Sencha Touch 2 application that retrieves data from a webservice. It's been decided that it'd be a good idea to use an optional PIN setting on the app for extra security when you launch the app from idle. I'm not really sure what the best way to manage this is.
The web service isn't capable of storing the PIN itself, and the app is also designed to be used offline as well as online, so the number needs to be stored on the device itself in local storage. I'm not convinced this is providing any level of security, and I'm also concerned that on iOS local storage is apparently treated as temporary so setting the number in local storage doesn't necessarily mean it's always going to be there.
The webservice already returns an expiring auth token to the device, which is required for all requests to the API. To my mind that's secure enough, but the idea of the PIN seems to be important to the client.
How would you manage this requirement?
Unlike cookies, which are passed between server and client (and can be accessed by both of them), sessionStorage / localStorage are stored 100% on the client by a particular browser: sessionStorage temporarily stores data for one session, while localStorage stores data persistently on the client's disk. The advantages are obvious:
Data isn't passed with every HTTP request/response, so bandwidth is saved.
There is no 4 KB limitation; the web site has much more flexibility to store larger data on the client.
Note: the W3C "recommended 5 megabytes localStorage size limitation per domain" and "welcome feedback"; this is much larger than the 4 KB limitation of cookies.
Since the data isn't passed over the network, this is relatively more secure.
Following on from #3, a number of existing HTTPS connections could in theory switch to plain HTTP by adopting Web Storage, because there is no need to encrypt data that never leaves the client. Since HTTPS typically gives only around 10% of HTTP's performance, either performance is improved or cost is saved (cheaper server hardware can be procured).
So I (personally) would give localStorage a try :). If you want more security you can additionally encrypt your PIN before storing it in localStorage, e.g. with http://point-at-infinity.org/jsaes/
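If you do go down that road, one alternative to jsaes worth considering is to store only a salted hash of the PIN rather than the PIN itself. A rough sketch using the Web Crypto API (assuming the webview supports it; the storage key names are made up):

    // Hash helper: SHA-256 of salt + PIN, returned as a hex string.
    async function hashPin(pin, saltHex) {
      const data = new TextEncoder().encode(saltHex + pin);
      const digest = await crypto.subtle.digest('SHA-256', data);
      return Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, '0'))
        .join('');
    }

    async function savePin(pin) {
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const saltHex = Array.from(salt).map(b => b.toString(16).padStart(2, '0')).join('');
      localStorage.setItem('pinSalt', saltHex);
      localStorage.setItem('pinHash', await hashPin(pin, saltHex));
    }

    async function checkPin(pin) {
      const saltHex = localStorage.getItem('pinSalt');
      const stored = localStorage.getItem('pinHash');
      return stored !== null && (await hashPin(pin, saltHex)) === stored;
    }

This way the PIN itself never sits in local storage, though a determined attacker with access to the device could still brute-force a four-digit PIN offline.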

Implementing SCRAM - nonce validation and server/client keys

Technically two questions - but they are so heavily related I didn't want to split them up; but if the community feels I should, I will.
Following a recent question I am implementing SCRAM for a website login and web service API. Client environments will be .Net and Javascript (with Java likely in the future).
My first issue is basic: The protocol utilises a client and server key as key steps in the authentication process; and yet in order to be validated, both need to be known by both parties in advance since the protocol doesn't allow for exchange of these (to do so would result in a bit of a chicken and egg scenario). If you consider a Javascript client, for example, this means both keys are likely to be constants defined in the source - thus making them easy to fetch. So: why bother? Is it just to mitigate against 'Eve' where that 'Eve', for some reason, hasn't bothered to get the JS or client source code, which will necessarily be public!?
Secondly, like practically any other authentication mechanism it requires a client + server nonce.
Given that the authentication nonce, by definition, should never be used more than once (at least by the same user), this presumably means that a server must maintain a record of all nonce values used by all users forever. Unlike other data that we regularly archive off, such a table is only ever going to get bigger, and queries against it are likely to get slower and slower!
If that's correct, then it's technically unfeasible to implement this or almost any other authentication mechanism! Since I know that's plainly ridiculous, it must be common to define some additional scope that factors in a reasonable timescale as well.
As always with authentication and encryption; despite being a very experienced software developer I feel like I'm going back to school! What am I missing!?
both need to be known by both parties in advance since the protocol doesn't allow for exchange of these (to do so would result in a bit of a chicken and egg scenario).
Yes, that's correct. Challenge-response isn't a key-exchange protocol. It only specifies how, once client and server share a key, both sides can compute the same value from that key without transmitting the key in the clear over the network.
If you consider a Javascript client, for example, this means both keys are likely to be constants defined in the source - thus making them easy to fetch.
That's not a good idea. Instead, client and server can agree on a key during a preliminary registration process.
Given that the authentication nonce, by definition, should never be used more than once (at least by the same user), this presumably means that a server must maintain a record of all nonce values used by all users forever.
NO. A new nonce should be generated for each new session using (cryptographically secure) pseudo-random number generation. It's very improbable that you will get the same nonce twice, and anyway it doesn't matter if a nonce has already been used, as long as the attacker doesn't know that.
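In Node.js that could be as simple as the following sketch (crypto.randomBytes is the standard library call; the 16-byte length is an arbitrary but typical choice):

    const crypto = require('crypto');

    // 16 random bytes gives 2^128 possible values, so collisions are
    // negligible and there is no need to keep a permanent record of
    // past nonces.
    function newNonce() {
      return crypto.randomBytes(16).toString('base64');
    }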

Best way to limit (and record) login attempts

Obviously some sort of mechanism for limiting login attempts is a security requisite. While I like the concept of an exponentially increasing time between attempts, what I'm not sure about is how to store the information. I'm also interested in alternative solutions, preferably not involving captchas.
I'm guessing a cookie wouldn't work, since users can block cookies or clear them automatically, but would sessions work? Or does it have to be stored in a database? Not knowing what methods can be or are being used, I simply don't know what's practical.
Use a couple of columns in your users table, 'failed_login_attempts' and 'failed_login_time'. The first increments on each failed login and resets on a successful login. The second lets you compare the current time with the time of the last failure.
Your code can use this data in the DB to determine how long to lock out users, the time between allowed logins, etc., as in the sketch below.
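For example, a rough sketch of that check in Node.js. The column names follow the answer above, db.query is a placeholder for whatever database layer you use, and the 30-second base delay is an arbitrary choice:

    const BASE_DELAY_MS = 30 * 1000; // 30s, doubled on each further failure

    async function canAttemptLogin(userId) {
      // db.query is a placeholder for your own database layer.
      const [row] = await db.query(
        'SELECT failed_login_attempts, failed_login_time FROM users WHERE id = ?',
        [userId]
      );
      if (!row || row.failed_login_attempts === 0) return true;
      const waitMs = BASE_DELAY_MS * 2 ** (row.failed_login_attempts - 1);
      return Date.now() - new Date(row.failed_login_time).getTime() > waitMs;
    }

    async function recordLoginResult(userId, success) {
      if (success) {
        await db.query('UPDATE users SET failed_login_attempts = 0 WHERE id = ?', [userId]);
      } else {
        await db.query(
          'UPDATE users SET failed_login_attempts = failed_login_attempts + 1, failed_login_time = NOW() WHERE id = ?',
          [userId]
        );
      }
    }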
Assuming Google has done the necessary usability testing (not an unfair assumption) and decided to use captchas, I'd suggest going along with them.
Increasing timeouts are frustrating when I'm a genuine user and have forgotten my password (with so many websites and their associated passwords, that happens a lot, especially to me).
Storing attempts in the database is the best solution IMHO, since it gives you an audit record of attempted security breaches. Depending on your application this may or may not be a legal requirement.
By recording all bad attempts you can also gather higher-level information, such as whether the requests are coming from one IP address (i.e. someone or something is attempting a brute-force attack), so you can block that IP address. This can be VERY useful information.
Once a threshold has been reached, why not force them to request an email to be sent to their address (similar to 'I have forgotten my password'), or go for the CAPTCHA approach.
Answers in this post prioritize database-centered solutions because they provide a record structure that makes auditing and lockout logic convenient.
While the answers here address guessing attacks on individual users, a major concern with this approach is that it leaves the system open to denial-of-service attacks: not every request from the outside world should be able to trigger database work.
An alternative (or additional) layer of security should be implemented earlier in the req/res cycle to protect the application and database from performing lockout operations that can be expensive and are unnecessary.
Express-Brute is an excellent example that utilizes Redis caching to filter out malicious requests while allowing honest ones.
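Based on its documented usage, wiring it in looks roughly like the sketch below; the Redis store package name and options are as I recall them and worth double-checking against the library's README:

    const express = require('express');
    const ExpressBrute = require('express-brute');
    const RedisStore = require('express-brute-redis');

    const app = express();
    const store = new RedisStore({ host: '127.0.0.1', port: 6379 });
    const bruteforce = new ExpressBrute(store, { freeRetries: 3 });

    // Abusive callers are throttled here, before any database work happens.
    app.post('/login', bruteforce.prevent, (req, res) => {
      // ...normal credential check goes here...
      res.send('checked');
    });

    app.listen(3000);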
You know which userid is being hit: keep a counter, and when it reaches a threshold value simply stop accepting anything for that user. But that means storing an extra data value for every user.
I like the concept of an exponentially increasing time between attempts, [...]
Instead of using exponentially increasing time, you could actually have a randomized lag between successive attempts.
Maybe if you explain what technology you are using people here will be able to help with more specific examples.
A lockout policy is all well and good, but there is a balance to strike.
One consideration is the construction of usernames - are they guessable? Can they be enumerated at all?
I was on an external app pen test for a dotcom with an employee portal that served Outlook Web Access, intranet services, and certain apps. It was easy to enumerate users (the exec/management team on the web site itself, and through the likes of Google, Facebook, LinkedIn, etc.). Once you worked out the format of the logon username (first name then surname entered as a single string), I had the capability to lock hundreds of users out thanks to their three-strikes-and-out policy.
Store the information server-side. This would allow you to also defend against distributed attacks (coming from multiple machines).
You may want to block the login for some time, say 10 minutes after 3 failed attempts. Exponentially increasing the time sounds good to me. And yes, store the information server-side, in the session or in a database; the database is better. No cookie business, as cookies are easy for the user to manipulate.
You may also want to map such attempts against the client IP address, as it is quite possible that a valid user will see a blocked message while someone else is racking up failed attempts trying to guess that user's password.
