PayPal API credentials, security issue

Consider a situation where multiple developers cooperate on a project.
A project that is tested on a development server and then moved to a web server when ready.
The project talks to the PayPal API, so in dev mode it uses the Sandbox credentials and, once online, the LIVE API credentials.
The problem is security: I want only the team leader to have access to the file that contains the live API data.
The only solution I've reached so far is to limit FTP access to the web server to one person, who is then the only one who can access the credentials file. But this isn't very practical, since there would be no sync with the dev server...
I guess this is a common pattern: sensitive data has to be kept in a secure place and accessed only by the project leader and by the live web application.
I need an idea... any suggestion?
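One common way to realise the pattern described above (just a sketch, not tied to any particular framework; the variable names are illustrative) is to keep the credentials out of the files that get synced between servers and read them from per-server environment variables or a non-versioned config file, so the live values only ever exist on the production box:

```typescript
// Illustrative sketch: each server defines its own values, outside version control / FTP sync.
// Dev machines and the dev server only carry Sandbox values; the team leader sets the
// live values on the production server (e.g. in the web server's environment).
const mode = process.env.PAYPAL_MODE ?? "sandbox"; // "live" only on the production server

const paypalConfig = {
  mode,
  clientId: process.env.PAYPAL_CLIENT_ID ?? "",
  clientSecret: process.env.PAYPAL_CLIENT_SECRET ?? "",
};

// Fail fast if the production box is missing its credentials instead of
// silently falling back to the sandbox.
if (mode === "live" && (!paypalConfig.clientId || !paypalConfig.clientSecret)) {
  throw new Error("Live PayPal credentials are not configured on this server");
}

export default paypalConfig;
```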

Related

How do I secure a REST-API?

I've set up an API with authentication but I want to only allow certain applications and websites to access it. What do I do?
I've got authentication set up so that only logged-in users can access the API; however, how do I prevent them from just logging in from anywhere?
Before I address your questions, I think it's important that we first clear up a common misconception among developers regarding WHO and WHAT is accessing an API.
THE DIFFERENCE BETWEEN WHO AND WHAT IS COMMUNICATING WITH YOUR API SERVER
To better understand the difference between the WHO and the WHAT accessing your mobile app, let's use this picture:
The Intended Communication Channel represents your mobile app being used as you expected: a legit user without any malicious intentions, using an untampered version of your mobile app, and communicating directly with your API server without being man-in-the-middle attacked.
The Actual Channel may represent several different scenarios, like a legit user with malicious intentions using a repackaged version of your mobile app, or a hacker using the genuine version of your mobile app while man-in-the-middle attacking it to understand how the communication between the mobile app and the API server is done, in order to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAuth2 flows.
OAuth
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
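To make the last step of that flow concrete, here is a minimal sketch (URL and token are placeholders) of a client presenting an access token to the resource server:

```typescript
// Once the authorization server has issued an access token, the client simply
// presents it as a Bearer credential on each request to the protected resource.
async function getProfile(accessToken: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/v1/me", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Resource server rejected the token: ${response.status}`);
  }
  return response.json();
}
```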
While user authentication may let your API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, your mobile app.
Now we need a way to identify WHAT is calling your API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script, or an attacker manually poking around your API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of your mobile app, or an automated script trying to game and take advantage of your service.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, so it becomes a runtime secret, as opposed to the former approach where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series about API keys.
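To make the API key approach concrete, here is a minimal sketch (not from the article; names are illustrative) of a client sending a hard-coded key and an API server checking it, which is exactly the mechanism the sections below show to be extractable:

```typescript
import express from "express";

// Client side (mobile or web): the key ends up embedded in the shipped code.
const API_KEY = "my-hard-coded-api-key"; // static secret, visible to anyone who inspects the app

async function callApi() {
  return fetch("https://api.example.com/data", { headers: { "x-api-key": API_KEY } });
}

// Server side: the check only tells you the caller KNOWS the key, not WHAT the caller is.
const app = express();
app.use((req, res, next) => {
  if (req.header("x-api-key") !== process.env.EXPECTED_API_KEY) {
    return res.status(401).json({ error: "invalid API key" });
  }
  next();
});
app.get("/data", (_req, res) => res.json({ ok: true }));
app.listen(3000);
```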
YOUR QUESTIONS
I've got authentication set up so that only logged-in users can access the API; however, how do I prevent them from just logging in from anywhere?
If by logging in from anywhere you mean any physical location, then you can block by IP address, as already suggested by @hanshenrik. But if you mean blocking logins from applications other than the ones you have issued API keys for, then you have a very hard problem on your hands, which leads to your first question:
I've set up an API with authentication but I want to only allow certain applications and websites to access it. What do I do?
This will depend on whether the WHAT accessing the API is a web or a mobile application.
Web application
In a web app, anyone can inspect the source code with the browser dev tools, or right-click and view the page source, search for the API key, and then use it in any tool (like Postman) or in any kind of automation, just by replicating the calls as seen in the network tab of the browser.
For an API serving a web app you can employ several layers of defense, starting with reCAPTCHA V3, followed by a Web Application Firewall (WAF) and finally, if you can afford it, a User Behavior Analytics (UBA) solution.
Google reCAPTCHA V3:
reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive challenges to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease.
...helps you detect abusive traffic on your website without any user friction. It returns a score based on the interactions with your website and provides you more flexibility to take appropriate actions.
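As a rough sketch of the server-side half of reCAPTCHA V3 (the score threshold and how you wire it into your API are your choice), the backend forwards the token the page obtained to Google's siteverify endpoint and looks at the returned score:

```typescript
// The page calls grecaptcha.execute() and sends the resulting token with the request;
// the API server then asks Google how human that interaction looked.
async function verifyRecaptcha(token: string, remoteIp?: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET_KEY ?? "",
    response: token,
  });
  if (remoteIp) params.set("remoteip", remoteIp);

  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: params,
  });
  const result = (await res.json()) as { success: boolean; score?: number };

  // Score ranges from 0.0 (likely a bot) to 1.0 (likely a human); 0.5 is an arbitrary cut-off.
  return result.success && (result.score ?? 0) >= 0.5;
}
```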
WAF - Web Application Firewall:
A web application firewall (or WAF) filters, monitors, and blocks HTTP traffic to and from a web application. A WAF is differentiated from a regular firewall in that a WAF is able to filter the content of specific web applications while regular firewalls serve as a safety gate between servers. By inspecting HTTP traffic, it can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations.
UBA - User Behavior Analytics:
User behavior analytics (UBA) as defined by Gartner is a cybersecurity process about detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns—anomalies that indicate potential threats. Instead of tracking devices or security events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA functionality by allowing them to analyze petabytes worth of data to detect insider threats and advanced persistent threats.
All these solutions work on a negative identification model; in other words, they try their best to differentiate the bad from the good by identifying WHAT is bad, not WHAT is good, thus they are prone to false positives, despite the advanced technology some of them use, like machine learning and artificial intelligence.
So, more often than not, you may find yourself having to relax how you block access to the API server in order not to affect the good users. This also means these solutions require constant monitoring to validate that false positives are not blocking your legit users and that, at the same time, they are properly keeping the unauthorized ones at bay.
Mobile Application
From your reply to a comment:
What about for mobile applications?
Some may think that once a mobile app is released in binary form their API key will be safe, but it turns out that is not true: extracting it from the binary is sometimes almost as easy as extracting it from a web application.
Reverse engineering a mobile app is made easy by a plethora of open source tools, like the Mobile Security Framework (MobSF), Frida, Xposed, mitmproxy and many more, but as you can see in this article, it can be done with MobSF or with the strings utility that comes installed on a normal Linux distribution.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Xposed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
mitmproxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
Regarding APIs serving mobile apps, a positive identification model can be used by employing a Mobile App Attestation solution that guarantees to the API server that WHAT is making the request can be trusted, without the possibility of false positives.
The Mobile App Attestation
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with and is not running on a rooted device, by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. In the case of failure in the mobile app attestation, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time in the JWT token, and to refuse them when the verification fails.
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the app is tampered with, running on a rooted device, or communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work there), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to decide which requests to serve and which ones to deny.
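That server-side check can be as small as verifying the signature and expiry of the token on each request. A minimal sketch with the jsonwebtoken package (the header name, route and environment variable are illustrative; the real secret would be provisioned by the attestation service):

```typescript
import express from "express";
import jwt from "jsonwebtoken";

const app = express();

// Only requests carrying a token signed with the attestation secret get through.
app.use((req, res, next) => {
  const header = req.header("Authorization") ?? "";
  const token = header.replace(/^Bearer\s+/i, "");
  try {
    // verify() checks both the signature and the "exp" claim of the short-lived token.
    jwt.verify(token, process.env.ATTESTATION_SHARED_SECRET as string);
    next();
  } catch {
    res.status(401).json({ error: "request did not come from a trusted app instance" });
  }
});

app.get("/api/orders", (_req, res) => res.json({ orders: [] }));
app.listen(3000);
```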
CONCLUSION
In the end, the solution used to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
So using API keys may sound like locking the door of your home and leaving the key under the mat, but not using them is like leaving your car parked with the door closed but the key in the ignition.

Do I need to host the backend server for Stripe/Braintree payment gateway after I move the app to production?

If anyone could give me a clear high-level answer that would be great. I want to integrate a payment gateway into my app, e.g. Stripe/Braintree, and I have gotten it all working up to the testing part, but now I am wondering: to move it to production, do I need to host the back-end server for retrieving the tokens myself?
Currently I host the test server locally to check that it works. But what now? Do I need to host this on a server permanently so my app can get its tokens?
Please help.
Yes, you have to.
You can start with a Virtual Machine at DigitalOcean or Vultr. Replicate your test environment there, then harden the server, etc.
If you're new to that, then I recommend finding someone who has experience setting up servers in production environments.
Thanks for your help. I spoke with Stripe and below was their response. They confirmed that you do need a server back-end all the time.
--
Unfortunately, we don’t provide any hosted solutions when working with app based payment flows—you would need to have a back-end setup in place or use a serverless solution such as Heroku, both for your eventual move to a production environment and also while in development to test your back-end.
Generally speaking, you’ll use our SDKs when building your app to implement our client-side framework enabling you to securely collect and tokenize payment details from customers from within your app. However, the back-end server is where you’ll actually make requests to Stripe when you need to create a charge, refund a payment or take some other API related action.
Additionally, your back-end server will play a critical role as that’s where you’ll need to generate the ephemeral keys that will be used as the client-side session credentials for the app’s user. The use of ephemeral keys will facilitate the retrieval and updating of customer objects in Stripe for a given user (the persistent creation and use of individual customer objects is a default behavior for our mobile SDKs), but will ensure that your Stripe account’s secret API keys remain protected (public API keys are still used in the client).
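If it helps to picture what that back end is, here is a minimal sketch of a server the app could call, using Stripe's Node SDK (the endpoint path and amount handling are illustrative, not Stripe's prescribed layout):

```typescript
import express from "express";
import Stripe from "stripe";

// The secret API key only ever lives on this server, never inside the app binary.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);
const app = express();
app.use(express.json());

// The mobile app tokenizes card details with the client SDK, then asks this
// endpoint to do the privileged work of actually creating the payment.
app.post("/create-payment-intent", async (req, res) => {
  try {
    const intent = await stripe.paymentIntents.create({
      amount: req.body.amount, // smallest currency unit, e.g. 1099 = $10.99
      currency: "usd",
    });
    // Only the client secret is returned; the app uses it to confirm the payment.
    res.json({ clientSecret: intent.client_secret });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});

app.listen(4242);
```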

Firebase with Electron, How to secure host?

I'm considering using Firebase with my Electron App. Specifically I'd like to begin by using Firebase Authentication to sign users in to my app. I've done tons of research regarding the subject thus far, and my biggest concern remains that the domain I'd require for an authorized redirect would need to be localhost (Please correct me if I'm wrong). The Firebase interface sets localhost as an allowed domain by default, I assume just for developer testing, not for a live production environment (Again, please correct me if I'm wrong). The image below is the section of the Firebase Authentication interface that I'm referring to, for setting up authorized domains.
My question is this, in order to distribute an Electron application with access to Firebase, do I have to have the localhost domain authorized? As well, if I do have the localhost domain authorized, is this secure, in the context of couldn't malicious users set up their own localhost and redirect to an unintended page, giving them the ability to freely add data to Firebase databases?
If there's an alternate, more secure option than authorizing the localhost, what are my options?
I've read in plenty of places that the bulk of Firebase security comes in the form of setting applicable rules on who can read and write to the database. Namely, this post gives a good overview of the topic. But I'm a firm believer that if there are extra security measures that can be taken, then always take them, so long as they don't diminish the quality of the application.
Am I being too paranoid, or is this the right approach? Thanks in advance! Any advice or guidance is appreciated.
I see many questions in your post, but I'll answer these two:
My question is this: in order to distribute an Electron application with access to Firebase, do I have to have the localhost domain authorized?
That's not something implied by Firebase. Having the localhost domain authorized is just for testing purposes when you're running your app locally, before deploying to another domain.
If I do have the localhost domain authorized, is this secure, in the context of couldn't malicious users set up their own localhost and redirect to an unintended page, giving them the ability to freely add data to Firebase databases?
Yes, it is secure. You said you haven't started with Firebase yet, but when you do, you'll find out that you need to download the service credentials to be able to use the SDK in your app. Only you have access to these credentials. That's why other users can't set up their own localhost and access your authentication system.
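For a picture of what those service credentials are used for, here is a minimal sketch of the Admin SDK running on a trusted server (the file name and helper are illustrative; this key must never be bundled into the distributed app):

```typescript
import * as admin from "firebase-admin";

// The service account JSON downloaded from the Firebase console is the secret part;
// it stays on infrastructure you control.
import serviceAccount from "./serviceAccountKey.json";

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount as admin.ServiceAccount),
});

// With it, your own backend can verify the ID tokens that signed-in users of the
// Electron app present, before granting access to anything sensitive.
export async function uidFromIdToken(idToken: string): Promise<string> {
  const decoded = await admin.auth().verifyIdToken(idToken);
  return decoded.uid;
}
```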

How to restrict Firebase data modification?

Firebase provides a database back-end so that developers can focus on the client-side code.
So if someone takes my Firebase URI (for example, https://firebaseinstance.firebaseio.com) and develops on it locally,
would they be able to create another app off my Firebase instance, sign up, authenticate themselves and read all the data of my Firebase app?
@Frank van Puffelen,
You mentioned the phishing attack. There actually is a way to protect against that.
If you log in to your Google APIs API Manager console, you have an option to lock down which HTTP referrers your app will accept requests from.
Visit https://console.developers.google.com/apis
Go to your Firebase project
Go to Credentials
Under API keys, select the Browser key associated with your Firebase project (it should be the same key as the API key you use to initialize your Firebase app)
Under "Accept requests from these HTTP referrers (web sites)", simply add the URL of your app.
This should only allow the whitelisted domain to use your app.
This is also described in the Firebase launch checklist here: https://firebase.google.com/support/guides/launch-checklist
Perhaps the Firebase documentation could make this more visible, or automatically lock down the domain by default and require users to allow access?
The fact that someone knows your URL is not a security risk.
For example: I have no problem telling you that my bank hosts its web site at bankofamerica.com and it speaks the HTTP protocol there. Unless you also know the credentials I use to access that site, knowing the URL doesn't do you any good.
To secure your data, your database should be protected with:
validation rules that ensure all data adheres to a structure that you want
authorization rules to ensure that each bit of data can only be read and modified by the authorized users
This is all covered in the Firebase documentation on Security & Rules, which I highly recommend.
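For example, a minimal sketch of Realtime Database rules combining both ideas, so each user can only read and write their own node and a field must match a simple shape (paths and field names are illustrative):

```json
{
  "rules": {
    "users": {
      "$uid": {
        // authorization: only the signed-in owner of this node may read or write it
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid",
        "email": {
          // validation: the stored value has to look like a short string
          ".validate": "newData.isString() && newData.val().length < 256"
        }
      }
    }
  }
}
```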
With these security rules in place, the only way somebody else's app can access the data in your database is if they copy the functionality of your application, have the users sign in to their app instead of yours and sign in/read from/write to your database; essentially a phishing attack. In that case there is no security problem in the database, although it's probably time to get some authorities involved.
Update May 2021: Thanks to the new feature called Firebase App Check, it is now actually possible to limit access to your Realtime Database to only those coming from iOS, Android and Web apps that are registered in your Firebase project.
You'll typically want to combine this with the user authentication based security described above, so that you have another shield against abusive users that do use your app.
By combining App Check with security rules you have both broad protection against abuse, and fine-grained control over what data each user can access.
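For the Web side, enabling App Check is a few lines on top of the normal SDK initialization (the config values and reCAPTCHA site key below are placeholders you get when registering the web app):

```typescript
import { initializeApp } from "firebase/app";
import { initializeAppCheck, ReCaptchaV3Provider } from "firebase/app-check";

// The usual (non-secret) web config for your Firebase project.
const app = initializeApp({
  apiKey: "...",
  authDomain: "your-project.firebaseapp.com",
  projectId: "your-project",
});

// App Check attests that requests come from your registered web app before
// Realtime Database / Firestore will serve them (once enforcement is turned on).
initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider("your-recaptcha-v3-site-key"),
  isTokenAutoRefreshEnabled: true,
});
```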
Regarding the Auth white-listing for mobile apps, where the domain name is not applicable, Firebase has
SHA1 fingerprint for Android apps and
App Store ID and Bundle ID and Team ID (if necessary) for your iOS apps
which you will have to configure in the Firebase console.
With this protection, validation is not just whether someone has a valid API key, Auth domain, etc., but also whether the request is coming from your authorized apps and, in the case of the Web, from your domain name/HTTP referrer.
That said, you don't have to worry if these API keys and other connection params are exposed to others.
For more info, https://firebase.google.com/support/guides/launch-checklist

How do I securely connect a Backbone.js app to a database?

I am starting to plan a web-app and Backbone.js will be a perfect fit for the client side. I have been planning on using node for the backend but this is open for the time being.
I need a way to secure the front-end app's connection to a database. I have had discussions with others on Quora but I think the thought process was too abstracted from the core problem.
I would prefer to be accessing the data by RESTful end-points, but I need to ensure only my app can talk to the API. I will have full control over both the front-end and back-end of the application. There is a possibility of other apps being built around the database (in a year or two), however they will be developed by me (i.e. not a public API) and these will probably use separate OAuth end-points.
Some notes on the app (may or may not be useful):
The app is planned to be offered in a SaaS model where companies subscribe and are allowed multiple users.
The data for each company needs to be secure and only accessible to members of that company.
All traffic (front-end and app to API) will be sent through SSL.
Any advice on the best way to do this will be greatly appreciated.
We have the exact same setup as you - SaaS model, multiple apps (mobile, web, etc.) - and when I followed your link, Miguel's answer there is the exact solution we use.
A token that is time-stamped and sent to the client on auth. We store that hashed token in a User Model, and then on every subsequent request we validate that token.
You can extend Backbone.Model with a BaseModel that appends the token to every server request by overriding Backbone.sync.
See here for how they extended a BaseView; you can apply the same thing to a BaseModel.
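A rough sketch of that BaseModel idea (where the token is stored and the field names are illustrative), so that every fetch/save made through it carries the token:

```typescript
import * as Backbone from "backbone";

// Wherever your login flow put the server-issued, time-stamped token.
const sessionToken = window.localStorage.getItem("sessionToken") ?? "";

const BaseModel = Backbone.Model.extend({
  // Backbone.sync ends up calling jQuery.ajax, which honours options.headers,
  // so the token rides along on every request made through this model.
  sync(method: string, model: Backbone.Model, options: any = {}) {
    options.headers = { ...(options.headers || {}), Authorization: "Bearer " + sessionToken };
    return Backbone.Model.prototype.sync.call(this, method, model, options);
  },
});

// API-backed models extend BaseModel instead of Backbone.Model.
const Invoice = BaseModel.extend({ urlRoot: "/api/invoices" });
// new Invoice({ id: 1 }).fetch() now sends the Authorization header,
// and the server validates the token before returning company-scoped data.
```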

Resources