Node.js/MEAN.io/Passport - are API keys secure?

I want to develop a simple web app using Node.js (the MEAN.io full stack). I am using Passport as authentication middleware, and in particular I want users to be able to log in to my app with their Twitter accounts.
Are the API key and API secret that I define in the config/production.js file "secure"? Can someone see their values and misuse them?

They are as secure as your server is. If someone breaks into your server, they have full access to the source code, and therefore also to the API keys.
If you trust your code to store passwords for databases, salts (e.g. for session cookies), etc., then you can trust it with your API keys too.
Please note that it's pretty standard to store API keys inside source/config files, as long as they live in a folder that is not publicly accessible (unlike "public/", for example).
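As an illustration, here's a minimal sketch of wiring Passport's Twitter strategy to keys loaded from config/production.js - the config property names are my assumption, not necessarily MEAN.io's exact shape:

```js
// A minimal sketch (config property names are assumptions): load the Twitter
// keys from config/production.js instead of hard-coding them in source.
const passport = require('passport');
const TwitterStrategy = require('passport-twitter').Strategy;
const config = require('./config/production');

passport.use(new TwitterStrategy({
  consumerKey: config.twitter.clientID,        // kept out of any public/ folder
  consumerSecret: config.twitter.clientSecret,
  callbackURL: config.twitter.callbackURL
}, (token, tokenSecret, profile, done) => {
  // look up or create the local user for this Twitter profile
  done(null, profile);
}));
```

Keeping config/production.js out of version control (or reading the values from environment variables) adds one more layer on top of the server's own security.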

Related

How to properly store client secret for Google Drive API on Electron app?

I have an Electron app that requires access to the user's Google Drive, and I want to implement the API functionality without exposing the client secret. From my understanding this is impossible in certain scenarios, like mobile applications, but what is the proper way of going about this in a local app?
When trying to follow Google's web-app OAuth instructions, it looks like you can't use this method in a local application. Setting up the OAuth process this way doesn't even let you whitelist localhost as a domain on which to authenticate users (which breaks the process, since this is a local app running on Electron). Add to that this paper that Google released, and it also seems you can't trick the auth process into thinking it's not running on localhost, nor can you run Node.js in the browser (I'm using Electron, so this is impossible to do).
I then tried following their Mobile and Desktop app workflow, which seemed promising. The issue arises when you need to exchange the authorization code for refresh and access tokens: this again requires that you expose your client secret in your main app. I then thought of splitting this up, doing some of it locally and having an auth server that holds the client secret, exchanges the authorization code from the client, and returns a refresh and access token. But the diagram that Google provides for visualizing this process clearly shows that your app needs to do both parts of the authorization process, so that idea was also out.
One application that I personally use and looked at was rclone, and from the looks of it they just list their client ID and secret directly in their code. The client secret is encrypted, but if you follow the workflow it gets revealed with a key that is also just stored locally in the app. So its plain text is obscured, but nothing prevents anyone from getting hold of the client secret by slightly modifying the code.
I should also mention this app is in a public repo on GitHub and will stay that way.
This is my first time using OAuth so I may be misunderstanding something, but I tried following the documentation as closely as I could and can't shake the feeling that I'm overlooking a piece of this process.
And if the only way to solve this problem is to expose both the client ID and secret, is there any way this could lead to users' data being compromised? Since the Google Drive API is free to use, I don't really mind if others use some of my quota. I'm more worried about security.
For public clients like the desktop app you're developing, you'll need to use the PKCE flow. You're right that Google's documentation seems off here - you shouldn't need to pass the client_secret as part of the authorization code exchange.
That's supported by the documentation here: https://www.oauth.com/oauth2-servers/pkce/authorization-code-exchange/
It's possible that Google requires the client_secret but doesn't treat the parameter as a real "secret" for public clients - rather as an additional identifier that is not sensitive and is not sufficient on its own to do anything on behalf of your application. Section 8.5 of the specification reads:
Secrets that are statically included as part of an app distributed to multiple users should not be treated as confidential secrets, as one user may inspect their copy and learn the shared secret. For this reason, and those stated in Section 5.3.1 of [RFC6819], it is NOT RECOMMENDED for authorization servers to require client authentication of public native apps clients using a shared secret, as this serves little value beyond client identification which is already provided by the "client_id" request parameter.
Authorization servers that still require a statically included shared secret for native app clients MUST treat the client as a public client (as defined by Section 2.1 of OAuth 2.0 [RFC6749]), and not accept the secret as proof of the client's identity. Without additional measures, such clients are subject to client impersonation (see Section 8.6).
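For reference, here's a minimal sketch of generating the PKCE code_verifier and code_challenge in Node.js per RFC 7636 - it's generic, not Google-specific:

```js
// A minimal PKCE sketch: generate a random code_verifier and derive the
// S256 code_challenge from it, per RFC 7636.
const crypto = require('crypto');

function base64url(buf) {
  return buf.toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

const codeVerifier = base64url(crypto.randomBytes(32));
const codeChallenge = base64url(
  crypto.createHash('sha256').update(codeVerifier).digest()
);

// Send code_challenge (with code_challenge_method=S256) on the authorization
// request, then send code_verifier on the token exchange, instead of relying
// on a confidential client_secret.
```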
You might also look into standalone OAuth service providers, like Xkit where I work. That would let you keep the secret confidential while still going through an OAuth flow.

How to Secure REST API with API Keys

My architecture consists of a front end server and a backend/API server. The API is accessible to the end user, however I want the front end server to be able to access certain routes of the API that aren't accessible to the end user (higher privilege).
This question has 2 parts:
(1) I need to use API keys for the end user. What's the best practice to do this?
(2) How does the front end play into the API key system? The client will need to log into their account to access the elevated privileges available from the front end (such as enabling webhooks).
My application is hosted on Google Cloud App Engine (standard environment) and I'm using Node.js 10. It would be awesome if anyone had suggestions relating to this architecture.
I know this question is somewhat general, but I've spent a few hours looking around online, and my question isn't so much how to use API keys, nor how to authenticate the front end, but rather: what is the best practice for doing these two together?
Thanks,
Nikita
JWT (https://jwt.io/introduction/) can help with this. You can include the API key as well as a JWT in the request headers. Some services accept the API key as a URL parameter, but putting such sensitive data in a header is the better approach.
The JWT can be stored in a cookie when the user authenticates and transferred with each request to the server.
The server can use the API key for authentication and decode the JWT, using the key available in the server environment, for authorization: decoding the JWT reveals the type of user and hence helps in figuring out what level of access is required.
It's a straightforward approach, used with multiple variations. You can start with the basic version and keep adding layers/features progressively.
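As a minimal sketch of how the two checks can sit together in an Express middleware (the header names, the isValidApiKey lookup and the JWT_SECRET variable are assumptions for illustration):

```js
// A minimal sketch: require a valid API key on all /api routes, then verify
// the JWT to learn who the caller is. isValidApiKey is a hypothetical helper.
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();

app.use('/api', (req, res, next) => {
  const apiKey = req.header('x-api-key');
  if (!isValidApiKey(apiKey)) {   // hypothetical lookup, e.g. a datastore query
    return res.status(401).send('invalid API key');
  }
  const token = (req.header('authorization') || '').replace(/^Bearer /, '');
  try {
    // the decoded payload can carry the user's role for authorization decisions
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(401).send('invalid or missing token');
  }
});
```

Routes mounted after this middleware can then check req.user (e.g. a role claim) to decide whether the caller gets the elevated front-end-only privileges.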

Creating a login page in Angular 5

How do I create a login page and check the data in a database using a REST API in Angular 5? I am using Node.js, with MySQL as my database.
There are many moving parts to creating an authentication system, and there are simply too many ways to get it wrong: storing plain-text passwords, not salting hashes, not rate-limiting queries, not having a properly configured TLS certificate, et cetera...
As you are not very familiar with these important concepts, it is highly advisable to use a 3rd-party OAuth2 provider in order to provide user authentication.
I repeat: I highly discourage implementing your own login page/fields and authentication methods for limiting database access.
As an example, take a look at the following option of using Google as an OAuth2 provider in your NodeJS application.
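For instance, a minimal sketch using the passport-google-oauth20 strategy - the route paths and the verify-callback body are assumptions:

```js
// A minimal sketch of delegating login to Google via Passport; routes and
// the verify callback are assumptions, and the credentials come from env vars.
const express = require('express');
const passport = require('passport');
const GoogleStrategy = require('passport-google-oauth20').Strategy;

passport.use(new GoogleStrategy({
  clientID: process.env.GOOGLE_CLIENT_ID,
  clientSecret: process.env.GOOGLE_CLIENT_SECRET,
  callbackURL: '/auth/google/callback'
}, (accessToken, refreshToken, profile, done) => {
  // find or create the user in your MySQL database here
  done(null, profile);
}));

const app = express();

// The Angular login page redirects the browser here to start the flow
app.get('/auth/google',
  passport.authenticate('google', { scope: ['profile', 'email'] }));

app.get('/auth/google/callback',
  passport.authenticate('google', { failureRedirect: '/login', session: false }),
  (req, res) => res.redirect('/'));

app.listen(3000);
```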

How to restrict Firebase data modification?

Firebase provides the database back end so that developers can focus on the client-side code.
So if someone takes my Firebase URI (for example, https://firebaseinstance.firebaseio.com), they could develop against it locally.
Would they then be able to create another app on top of my Firebase instance, sign up and authenticate themselves, and read all the data of my Firebase app?
@Frank van Puffelen,
You mentioned the phishing attack. There actually is a way to protect against that.
If you log in to the Google APIs API Manager console, you have the option to lock down which HTTP referrers your app will accept requests from.
1. Visit https://console.developers.google.com/apis
2. Go to your Firebase project
3. Go to Credentials
4. Under API keys, select the Browser key associated with your Firebase project (it should be the same key as the API key you use to initialize your Firebase app)
5. Under "Accept requests from these HTTP referrers (web sites)", simply add the URL of your app
This should only allow the whitelisted domain to use your app.
This is also described in the Firebase launch checklist: https://firebase.google.com/support/guides/launch-checklist
Perhaps the Firebase documentation could make this more visible, or automatically lock down the domain by default and require users to allow access?
The fact that someone knows your URL is not a security risk.
For example: I have no problem telling you that my bank hosts its web site at bankofamerica.com and it speaks the HTTP protocol there. Unless you also know the credentials I use to access that site, knowing the URL doesn't do you any good.
To secure your data, your database should be protected with:
- validation rules that ensure all data adheres to a structure that you want
- authorization rules that ensure each bit of data can only be read and modified by the authorized users
This is all covered in the Firebase documentation on Security & Rules, which I highly recommend.
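As a minimal sketch of both kinds of rules for the Realtime Database (the users/$uid structure and the name field are assumptions):

```json
{
  "rules": {
    "users": {
      "$uid": {
        // authorization: each user can only read and write their own node
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid",
        // validation: every user record must at least contain a name
        ".validate": "newData.hasChildren(['name'])"
      }
    }
  }
}
```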
With these security rules in place, the only way somebody else's app can access the data in your database is if they copy the functionality of your application, have your users sign in to their app instead of yours, and read from/write to your database with those credentials; essentially a phishing attack. In that case there is no security problem in the database, although it's probably time to get some authorities involved.
Update May 2021: Thanks to the new feature called Firebase App Check, it is now actually possible to limit access to your Realtime Database to only requests coming from the iOS, Android and Web apps that are registered in your Firebase project.
You'll typically want to combine this with the user-authentication-based security described above, so that you have another shield against abusive users that do use your app.
By combining App Check with security rules you have both broad protection against abuse, and fine-grained control over what data each user can access.
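A minimal sketch of activating App Check in a web app with the modular Firebase SDK and a reCAPTCHA v3 provider (the config object and site key are placeholders):

```js
// A minimal App Check sketch for the Web; config and site key are placeholders.
import { initializeApp } from 'firebase/app';
import { initializeAppCheck, ReCaptchaV3Provider } from 'firebase/app-check';

const app = initializeApp({ /* your Firebase config */ });

initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider('your-recaptcha-v3-site-key'),
  isTokenAutoRefreshEnabled: true  // keep the App Check token refreshed
});
```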
Regarding the Auth whitelisting for mobile apps, where a domain name is not applicable, Firebase has
- the SHA-1 fingerprint for Android apps, and
- the App Store ID, Bundle ID and Team ID (if necessary) for iOS apps,
which you will have to configure in the Firebase console.
With this protection, validation is not just a matter of someone having a valid API key, Auth domain, etc., but also of the request coming from your authorized apps (and, in the case of the Web, from your whitelisted domain name/HTTP referrer).
That said, you don't have to worry if these API keys and other connection params are exposed to others.
For more info, see https://firebase.google.com/support/guides/launch-checklist

Restricting Access to local PouchDB

I would like to use PouchDB in a web-app desktop client. I work in an environment where the computer account is generic and different persons use the same account; however, when using my app they must log in with individual user names granting them their corresponding privileges. The system works offline, with periodic replication to the server.
Browsing through the PouchDB documentation and searching the Internet, I have come to understand that there is no access restriction on a local PouchDB: anyone who has access to the client/browser has, in principle, access to the cached data. Implementing any sort of user access control in my web app therefore seems pointless, as the code could simply be altered to allow access.
I came to the following possible solution and would like to know if that could work:
First contact with the central server
The app sends the user's credentials to the server. The server encrypts a special databaseKey with the user credentials and sends this encryptedDatabaseKey back to the client app. The client app stores the encryptedDatabaseKey in localStorage, decrypts the contained databaseKey, and creates and encrypts the local PouchDB using this databaseKey (e.g. with crypto-pouch).
Offline usage
The user logs into the app; his credentials are used to decrypt the encryptedDatabaseKey in localStorage, and only then does he have access to the stored data. If someone alters the code of the app, they still cannot gain access to the encrypted PouchDB.
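A minimal sketch of that decrypt step in Node.js - the wrapped-object layout, scrypt and AES-256-GCM are my assumptions, not a prescribed scheme:

```js
// A minimal sketch of unwrapping the databaseKey with a key derived from the
// user's credentials; the field layout of `wrapped` is an assumption.
const crypto = require('crypto');

function decryptDatabaseKey(password, wrapped) {
  // wrapped = { salt, iv, tag, ciphertext }, hex-encoded, as kept in localStorage
  const key = crypto.scryptSync(password, Buffer.from(wrapped.salt, 'hex'), 32);
  const decipher = crypto.createDecipheriv(
    'aes-256-gcm', key, Buffer.from(wrapped.iv, 'hex'));
  decipher.setAuthTag(Buffer.from(wrapped.tag, 'hex'));
  return Buffer.concat([
    decipher.update(Buffer.from(wrapped.ciphertext, 'hex')),
    decipher.final()   // throws if the credentials (and thus the key) are wrong
  ]).toString('hex');  // the databaseKey handed to e.g. crypto-pouch
}
```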
I see the following advantages:
- Without the correct credentials there is no access to the local data.
- Multiple users can have access to the same local PouchDB, since the databaseKey is identical for all of them.
- The databaseKey could even be changed regularly (during a connection, the app compares the local encryptedDatabaseKey with the one received from the server; if they differ, the app decrypts the database with the old key and re-encrypts it with the new one).
Does this seem like a viable solution? Are there any other/better methods of securing a local PouchDB?
crypto-pouch is indeed the best method to encrypt a local PouchDB. However, where you say
Offline usage: the user logs into the app; his credentials are used to decrypt the encryptedDatabaseKey in localStorage, and only then does he have access to the stored data
I think it's pointless to decrypt the key and use that to decrypt the database; you might as well just ask the user to create and memorize a password. Then you can use that as the key for crypto-pouch.
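A minimal sketch of that simpler approach, assuming a recent crypto-pouch where db.crypto() returns a promise (the database name and password are placeholders):

```js
// A minimal crypto-pouch sketch: the password the user memorized is the key.
const PouchDB = require('pouchdb');
PouchDB.plugin(require('crypto-pouch'));

async function openEncryptedDb(userPassword) {
  const db = new PouchDB('mydb');
  await db.crypto(userPassword);  // documents written from here on are encrypted
  return db;
}

// usage: without the same password, the documents cannot be decrypted
openEncryptedDb('correct horse battery staple')
  .then((db) => db.put({ _id: 'doc1', secret: 'only readable with the password' }));
```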
