Firebase provides a database back-end so that developers can focus on the client-side code.
So suppose someone takes my Firebase URI (for example, https://firebaseinstance.firebaseio.com) and develops against it locally.
Would they then be able to create another app on top of my Firebase instance, sign up, authenticate themselves, and read all the data of my Firebase app?
@Frank van Puffelen,
You mentioned the phishing attack. There actually is a way to guard against that.
If you log in to the Google APIs API Manager console, you have the option to lock down which HTTP referrers your app will accept requests from.
Visit https://console.developers.google.com/apis
Go to your Firebase project
Go to Credentials
Under API keys, select the browser key associated with your Firebase project (it should be the same key as the API key you use to initialize your Firebase app).
Under "Accept requests from these HTTP referrers (web sites)", simply add the URL of your app.
This should only allow the whitelisted domain to use your app.
This is also described in the Firebase launch checklist: https://firebase.google.com/support/guides/launch-checklist
Perhaps the Firebase documentation could make this more visible, or lock down the domain by default and require users to explicitly allow access?
The fact that someone knows your URL is not a security risk.
For example: I have no problem telling you that my bank hosts its web site at bankofamerica.com and it speaks the HTTP protocol there. Unless you also know the credentials I use to access that site, knowing the URL doesn't do you any good.
To secure your data, your database should be protected with:
validation rules that ensure all data adheres to a structure that you want
authorization rules to ensure that each bit of data can only be read and modified by the authorized users
This is all covered in the Firebase documentation on Security & Rules, which I highly recommend.
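As a minimal sketch of what that can look like (the users path and the name/email fields are placeholder assumptions, not your actual schema), Realtime Database rules can combine both kinds of protection:

```json
{
  "rules": {
    "users": {
      "$uid": {
        // only the signed-in owner may read or write their own node
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid",
        // enforce the expected structure on writes
        ".validate": "newData.hasChildren(['name', 'email'])"
      }
    }
  }
}
```

With rules like these, even someone who knows your database URL can only read or write the data their own authenticated uid is entitled to.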
With these security rules in place, the only way somebody else's app can access the data in your database is if they copy the functionality of your application, get your users to sign in to their app instead of yours, and then sign in to, read from, and write to your database: essentially a phishing attack. In that case there is no security problem in the database itself, although it's probably time to get some authorities involved.
Update May 2021: Thanks to the new feature called Firebase App Check, it is now actually possible to limit access to your Realtime Database to only those coming from iOS, Android and Web apps that are registered in your Firebase project.
You'll typically want to combine this with the user authentication based security described above, so that you have another shield against abusive users that do use your app.
By combining App Check with security rules you get both broad protection against abuse and fine-grained control over what data each user can access.
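For a web app, enabling App Check is a small amount of code. A hedged sketch using the modular SDK and the reCAPTCHA v3 provider (all config values and the site key are placeholders you would copy from the console):

```js
import { initializeApp } from 'firebase/app';
import { initializeAppCheck, ReCaptchaV3Provider } from 'firebase/app-check';

// Firebase config copied from the console (placeholder values).
const app = initializeApp({
  apiKey: '<API_KEY>',
  authDomain: '<PROJECT>.firebaseapp.com',
  databaseURL: 'https://<PROJECT>.firebaseio.com',
  projectId: '<PROJECT>',
});

// Requests to Firebase back-ends now carry an App Check token.
initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider('<RECAPTCHA_V3_SITE_KEY>'), // placeholder
  isTokenAutoRefreshEnabled: true, // keep the attestation token fresh
});
```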
Regarding auth whitelisting for mobile apps, where a domain name is not applicable, Firebase uses
the SHA-1 fingerprint for Android apps, and
the App Store ID, Bundle ID, and Team ID (if necessary) for iOS apps,
which you will have to configure in the Firebase console.
With this protection, validation is not just a matter of whether someone has a valid API key, auth domain, etc., but also whether the request comes from one of your authorized apps (or, for the web, from your authorized domain name / HTTP referrer).
That said, we don't have to worry if these API keys and other connection parameters are exposed to others.
For more info, see https://firebase.google.com/support/guides/launch-checklist
I have developed a backend app with some important functionality. I want to consume my backend endpoints from my frontend, but I want to be sure that only my frontend calls the backend endpoints and nothing else. Currently anyone who accesses my web app can take advantage of that functionality (I do not require any user registration or authentication).
How can I be sure that my backend is not being called by other, possibly malicious, attackers who may try to steal my backend's functionality?
I have read some other posts about securing a backend app that does not require user authentication, but none offers a precise and secure way to do it. Some say to enable CORS, but in my experience CORS can be manipulated easily with the help of a simple browser plugin (not to speak of mobile apps, which do not consider it at all).
I would really appreciate some opinions on how a web frontend app, a mobile app, or another backend system could try to call my API, and how I can stop them.
Typical front-end authentication would be best (OpenID, ...).
If you want something different, you could check on your backend whether a specific header with a specific token is sent with the request. If it is not, send back a 401 HTTP code.
This requires that your customers somehow get that token (through some registration process, probably) and then keep it long-term (it can be stored in localStorage, but it is lost when the browser data is cleaned up).
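A minimal sketch of that check as Express middleware (the header name X-Api-Token and the environment variable are illustrative assumptions, not a standard):

```js
const express = require('express');
const app = express();

// Token handed to customers during some registration process.
const API_TOKEN = process.env.API_TOKEN;

app.use((req, res, next) => {
  // Reject any request that doesn't carry the expected header token.
  if (req.get('X-Api-Token') !== API_TOKEN) {
    return res.status(401).send('Unauthorized');
  }
  next();
});

app.get('/data', (req, res) => res.json({ ok: true }));
app.listen(3000);
```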
OWASP Authentication is a good source of information.
I want to design a SPA with a frontend (React) and a backend REST API (Node.js, Express, MongoDB). I am planning to have single sign-on in my application, where users authenticate against MS Azure AD: a call goes from the frontend to Azure AD, and in return I get a token for that user, which is stored locally. After that, I want to call my REST API for multiple GET, POST, and PUT operations in the context of the user currently logged in on the UI. I plan to deploy the frontend and backend on different servers, so I have two questions about securing my REST API.
CORS Implementation
User-Authentication on BE
Given the above requirements, is it enough to have just CORS implemented, or do I need to authenticate the user again on the backend?
Can somebody provide some best practices or experiences? Are there gaps in my architecture?
While CORS is definitely a consideration, it isn't authentication (AuthN) or authorization (AuthZ), which is what you need.
Depending on the number of users your application will have and how the back end will scale, you might want to look at OAuth 2.0 or stick with simpler session-based auth, but you will need something.
CORS on your back end limits whether a browser running an app on a domain other than yours can call your web services (it won't stop API requests from other tools).
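As an illustration (assuming an Express backend and the cors package; the origin is a placeholder), restricting browser origins looks like this:

```js
const express = require('express');
const cors = require('cors');

const app = express();

// Browsers will only let pages served from this origin read responses.
// Note: this does nothing against curl, Postman, or server-side callers.
app.use(cors({ origin: 'https://app.example.com' }));

app.get('/api/ping', (req, res) => res.json({ ok: true }));
app.listen(3000);
```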
AuthN - You're not logged in: go get logged in and come back to me.
AuthZ - Controls what your users can and can't do. You might want to enforce this at the resource level, but you absolutely need to within your business logic.
Further reading: https://auth0.com/docs/authorization/concepts/authz-and-authn
Philippe from Pragmatic Web Security has a free online course to get you started: https://pragmaticwebsecurity.com/courses/introduction-oauth-oidc.html It's very well paced and should give you some foundational knowledge. (It might let you write off OAuth for this use case, but give it a go.)
CORS will not perform any user authentication. You need CORS only when your client code is served from a different domain than the backend you are talking to. If the same server hosts both the static client files and the backend's REST endpoints, you don't need CORS. If you are unsure, don't consider CORS at all at first and see if it works.
But you need authentication to know which user is which.
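To make that concrete, here is a hedged sketch of backend user authentication for the Azure AD setup described in the question, using the jsonwebtoken and jwks-rsa packages (the tenant ID and audience are placeholders):

```js
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

// Azure AD publishes its signing keys at a well-known JWKS endpoint.
const client = jwksClient({
  jwksUri: 'https://login.microsoftonline.com/<TENANT_ID>/discovery/v2.0/keys',
});

// Resolve the signing key that matches the token's key id (kid).
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

// Express middleware: every API call must carry a valid bearer token.
function requireUser(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  jwt.verify(token, getKey, { audience: '<API_CLIENT_ID>' }, (err, claims) => {
    if (err) return res.status(401).send('Invalid or missing token');
    req.user = claims; // now the backend knows which user is which
    next();
  });
}
```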
I'm using React Native and Node.js to deliver a product for my company.
I've set up the steps on the backend to retrieve a password, validate it, and respond with a token. The only problem is that the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it cannot be sniffed out by a hacker and used to compromise the backend?
My research so far:
Embedded in strings.xml
Hidden in Source Code
Hidden in BuildConfigs
Using Proguard
Disguised/Encrypted Strings
Hidden in Native Libraries
http://rammic.github.io/2015/07/28/hiding-secrets-in-android-apps/
These methods are basically useless because hackers can easily circumvent these protections.
https://github.com/oblador/react-native-keychain
Although this may obfuscate keys, they still have to be hardcoded, making this kind of useless, unless I'm missing something.
I could use a .env file
https://github.com/luggit/react-native-config
Again, I feel like a hacker can still view secret keys, even if they are saved in a .env file.
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
What suggestions do you have to protect the world (React Native apps) from pesky hackers when they're stealing keys and using them inappropriately?
Your Question
I've set up the steps on the backend to retrieve a password, validate it, and respond with a token. The only problem is that the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it cannot be sniffed out by a hacker and used to compromise the backend?
The cruel truth is... you can't!!!
It seems that you have already done some extensive research on the subject, and in my opinion you mentioned the one effective way of shipping your app with an embedded secret:
Hidden in Native Libraries
But as you also say:
These methods are basically useless because hackers can easily circumvent these protections.
Some are useless and others make reverse engineering the secret from the mobile app a lot harder. As I wrote here, the approach of using the native interfaces to hide the secret requires expertise to reverse engineer, but then, if the binary is hard to reverse engineer, an attacker can always resort to a man-in-the-middle (MitM) attack to steal the secret, as I show here by retrieving a secret that is hidden in the mobile app binary behind the native interfaces, JNI/NDK.
To protect your mobile app from a MitM attack you can employ certificate pinning:
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taken from Jon Larimer and Kenny Root's Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
You can read this series of React Native articles that shows how to apply certificate pinning to protect the communication channel between your mobile app and the API server.
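For instance, with the react-native-ssl-pinning package (an assumption on my part; the URL and the bundled certificate name are placeholders), a pinned request looks roughly like this:

```js
import { fetch } from 'react-native-ssl-pinning';

// The request fails if the server's certificate doesn't match the
// .cer file(s) bundled with the app under the given name.
fetch('https://api.example.com/v1/data', {
  method: 'GET',
  timeoutInterval: 10000,
  sslPinning: { certs: ['api-example-com'] }, // bundled certificate name
  headers: { Accept: 'application/json' },
})
  .then((response) => console.log(response.bodyString))
  .catch((error) => console.log('Pinning or network failure', error));
```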
In case you don't know it yet, certificate pinning can also be bypassed by using tools like Frida or xPosed.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
So now you may be wondering how you can protect against a certificate pinning bypass.
Well, it's not easy, but it is possible, by using a mobile app attestation solution.
Before we go further, I would like to clarify a common misconception among developers regarding WHO and WHAT is accessing the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between WHO and WHAT are accessing an API server, consider the intended versus the actual communication channel:
The intended communication channel represents the mobile app being used as you expected: by a legit user without malicious intentions, running an untampered version of the mobile app, and communicating directly with the API server without being attacked by a man in the middle.
The actual channel may represent several different scenarios, like a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it to understand how the communication between the mobile app and the API server is done, in order to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app, whom we can authenticate, authorize, and identify in several ways, for example with OpenID Connect or OAuth2 flows.
OAuth
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around the API with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of the mobile app, or an automated script trying to game the service and take advantage of the application.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code into their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, so that it becomes a runtime secret, as opposed to the former approach, where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first in a series of articles about API keys.
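To illustrate those two approaches with a toy example (the key value is a made-up placeholder):

```js
// 1) Static secret: sits verbatim in the bundle, trivially extracted
//    with `strings` or by decompiling the app.
const API_KEY = 'aBcD-1234-placeholder';

// 2) "Runtime secret": assembled only when the app runs, so it never
//    appears as one contiguous string in the binary...
const parts = ['aBcD', '1234', 'placeholder'];
const apiKey = parts.join('-');

// ...but an attacker with an instrumentation tool (e.g. Frida) can still
// hook the code and read the assembled value at run-time.
```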
Mobile App Attestation
The use of a Mobile App Attestation solution enables the API server to know WHAT is sending the requests, allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app was not tampered with and is not running on a rooted device, and that it is not the target of a MitM attack. This is done by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on. The cloud service also verifies that the TLS certificate provided to the mobile app on the handshake with the API server is indeed the one in use by the original and genuine API server, not one from a MitM attack.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud are aware of. If the mobile app attestation fails, the JWT token is signed with a secret that the API server does not know.
The app must now send the JWT token in the headers of every API call. This allows the API server to serve only the requests for which it can verify the signature and expiration time of the JWT token, and to refuse those that fail verification.
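On the API server side, that check is a standard JWT verification. A hedged sketch with the jsonwebtoken package (the header name and the secret source are assumptions, not a specific vendor's API):

```js
const jwt = require('jsonwebtoken');

// Secret shared only between the API server and the attestation service.
const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET;

// Express middleware: serve the request only if the attestation token
// carries a valid signature and has not expired.
function requireAttestation(req, res, next) {
  const token = req.get('X-Attestation-Token'); // assumed header name
  try {
    jwt.verify(token, ATTESTATION_SECRET); // checks signature and exp claim
    next();
  } catch (err) {
    res.status(401).send('App attestation failed');
  }
}
```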
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run-time, even when the app is tampered with, running on a rooted device, or communicating over a connection that is the target of a man-in-the-middle attack.
So this solution works in a positive detection model without false positives, not blocking legit users while keeping the bad guys at bay.
What suggestions do you have to protect the world (React Native apps) from pesky hackers when they're stealing keys and using them inappropriately?
I think you should really go with a mobile app attestation solution, which you can roll yourself if you have the expertise, or use a solution that already exists as a SaaS at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native, and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
Summary
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
Don't go down the route of storing keys in the mobile app because, as you already know from your extensive research, they can be bypassed.
Instead, use a mobile app attestation solution in conjunction with OAuth2 or OpenID Connect, which you can bind to the mobile app attestation token. An example of this token binding can be found in this article, in the check of the custom payload claim in the /forms endpoint.
Going the Extra Mile
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
I'm considering using Firebase with my Electron app. Specifically, I'd like to begin by using Firebase Authentication to sign users in to my app. I've done tons of research on the subject so far, and my biggest remaining concern is that the domain I'd require for an authorized redirect would need to be localhost (please correct me if I'm wrong). The Firebase interface sets localhost as an allowed domain by default, I assume just for developer testing, not for a live production environment (again, please correct me if I'm wrong). The part of the Firebase Authentication interface I'm referring to is the section for setting up authorized domains.
My question is this: in order to distribute an Electron application with access to Firebase, do I have to have the localhost domain authorized? And if I do have the localhost domain authorized, is this secure? Couldn't malicious users set up their own localhost and redirect to an unintended page, giving them the ability to freely add data to Firebase databases?
If there's an alternate, more secure option than authorizing the localhost, what are my options?
I've read in plenty of places that the bulk of Firebase security comes in the form of setting applicable rules on who can read and write to the database. Namely, this post gives a good overview of the topic. But I'm a firm believer that if there are extra security measures that can be taken, they should always be taken, so long as they don't diminish the quality of the application.
Am I being too paranoid, or is this the right approach? Thanks in advance! Any advice or guidance is appreciated.
I see many questions in your post, but I'll answer these two:
My question is this, in order to distribute an Electron application with access to Firebase, do I have to have the localhost domain authorized?
That's not something implied by Firebase. Having the localhost domain authorized is just for testing purposes when you're running your app locally before deploying to another domain.
If I do have the localhost domain authorized, is this secure, in the context of couldn't malicious users set up their own localhost and redirect to an unintended page, giving them the ability to freely add data to Firebase databases?
Yes, it is secure. You said you haven't started with Firebase yet, but when you do, you'll find that you need to download the service credentials to be able to use the SDK in your app. Only you have access to these credentials. That's why other users can't set up their own localhost and access your authentication system.
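For context, the credentials in question are the app config values you copy from the Firebase console into the SDK initialization. A minimal sketch (all values are placeholders):

```js
import { initializeApp } from 'firebase/app';
import { getAuth, signInWithEmailAndPassword } from 'firebase/auth';

// Config copied from the Firebase console for your registered app.
const app = initializeApp({
  apiKey: '<API_KEY>',
  authDomain: '<PROJECT>.firebaseapp.com',
  projectId: '<PROJECT>',
});

const auth = getAuth(app);

// Example sign-in; the account must exist in your Firebase project.
signInWithEmailAndPassword(auth, 'user@example.com', 'password')
  .then((cred) => console.log('Signed in as', cred.user.uid))
  .catch((err) => console.error('Sign-in failed', err.code));
```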
We have one web application (SharePoint) that collects information from disparate sources. We would like to be able to link users to the main websites of those various sources and have them pre-authenticated, i.e. they enter their credentials for the other sources (which are of a number of different types: LDAP, AD, and home-grown!), we retrieve some information for them, and we remember their details (possibly single sign-on to keep them nice and safe). The user can then click a link that opens the full app in another window, already authenticated.
Is this even likely to be possible?
Office Server has a single sign-on API as a built-in feature; you may want to look into that. It enables you to register user credentials securely and to access them at runtime.
You need to act as a web browser does toward different sites, storing credentials (usually in cookies) locally. Therefore, use a proper client library with cookie support. This should work for most sites. Some sites use HTTP authentication, which is also easy to access from appropriate client libraries. The most demanding case can be access to SSL websites, but again, most HTTP client libraries cover that nowadays as well.
All you need then is to prepare your web application to act as a proxy to all those separate web resources. How exactly this is done in SharePoint, well, I hope others will answer that...
True Single Sign-on is a big task. Wikipedia describes common methods and links to a few SSO projects.
If you want something lighter, I've used this approach in the past:
Create a table to store temporary security tokens somewhere that all apps can access.
From the source app (Sharepoint in your case), on request of an external app, save a security token (maybe a guid, tight expiration, and userid) in the token table.
Redirect to a request broker page/handler in the destination app. Include the final page requested and the guid in the request.
In the broker, look up the security token. If it exists and hasn't expired, authenticate, authorize, and redirect to the final page if everything is good. If not, send a permissions error.
Security-wise, a GUID should be nearly impossible to guess. You can shrink the risk by letting the tokens expire very quickly; it shouldn't take more than a few seconds to call the broker.
If the destination app uses Windows Auth and doesn't have role-based logic, you shouldn't have to do much. Just redirect and let your File/UrlAuthorization handle it. You can handle role-based permissions with the security token db if required.
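A hedged sketch of the broker handoff described above, in Node.js (endpoint names, the in-memory table, and the 5-second expiry are illustrative; in practice the token table lives somewhere all apps can reach):

```js
const crypto = require('crypto');

// Stand-in for the shared security-token table.
const tokens = new Map();

// Source app (SharePoint side): mint a short-lived token and build the
// redirect to the destination app's broker.
function createHandoffUrl(userId, finalPage) {
  const guid = crypto.randomUUID();
  tokens.set(guid, { userId, expires: Date.now() + 5000 }); // ~5s lifetime
  return `https://destination.example/broker?token=${guid}` +
         `&next=${encodeURIComponent(finalPage)}`;
}

// Destination app: broker endpoint validates the token, then redirects.
function broker(req, res) {
  const entry = tokens.get(req.query.token);
  tokens.delete(req.query.token); // single use
  if (!entry || entry.expires < Date.now()) {
    return res.status(403).send('Permission error');
  }
  // ...authenticate and authorize entry.userId in this app's session...
  res.redirect(req.query.next);
}
```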