How to create private api that can only be accessed by the front-end? - node.js

I've been looking around the web for a little while but couldn't grasp the concept of making a private API that exists only between the front-end and back-end. What I essentially want to do is have an API that's only accessible through the front-end, not through curl, Postman or anything else.
I have the following setup:
The app is hosted on Heroku; the backend is in Node.js
I use an HTTPS connection with a certificate I generated via the Let's Encrypt tool.
I have a public API atm that returns a string 'Hello world'
Currently, you can access it either via the front-end or by going to www.example.com/api/test. What I would like to do is stop users from manually visiting the link or hitting it with curl or Postman, and instead make it accessible only through the front-end.
The front-end is written in Angular 2 (if it matters at all)
Note that I am not planning to have any user sign-in on the website; I simply want to restrict access to the API from the outside world so that only my front-end can reach it.
UPDATE USE CASE
The use case in the future is simple. I have a basic sign-up form which asks for an email address and a text description. I then use nodemailer on the backend to send that information to Gmail, triggered by a POST request from Angular 2. I access the data sent through req.on('data') and req.on('end') and process it. My fear is: how do I make sure I'm not going to get spammed through that API and receive 10k emails? Hence my wish to somehow make the API accessible only through the front-end.

While you cannot prevent a REST service from being called by the whole internet, you can still prevent spamming:
Whether your service requires authentication or not, the mechanism is always the same: use a CAPTCHA (the most important part) and rate-limit your API.
1. CAPTCHA:
The best way to ensure that the client making a request to a server is driven by a human being is a CAPTCHA.
CAPTCHA:
A CAPTCHA (a backronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used in computing to determine whether or not the user is human.
You can find plenty of services, or libraries that will create captchas, like Google's reCAPTCHA.
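For example, with Google's reCAPTCHA, the front-end obtains a token and the server verifies it against Google's siteverify endpoint before doing any work. A minimal sketch for the sign-up form described above, assuming Node 18+ (for the global fetch), a secret key kept in a RECAPTCHA_SECRET environment variable, and a hypothetical /api/contact route:

// Hypothetical Express route for the sign-up form: verify the reCAPTCHA token first.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/contact', async (req, res) => {
  const { captchaToken, email, description } = req.body;

  // Ask Google's siteverify endpoint whether the token the browser obtained is valid.
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET, // assumption: secret key stored in an env variable
    response: captchaToken,
    remoteip: req.ip,
  });
  const verification = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: params,
  }).then((r) => r.json());

  if (!verification.success) {
    return res.status(400).json({ error: 'captcha verification failed' });
  }

  // ...only now hand email + description to nodemailer...
  res.json({ ok: true });
});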
2. Rate limiting:
For a public service, you can rate-limit access by IP: if the same IP makes 10, 100, or even 1000 requests (depending on the purpose of the service), that's suspicious, so you can refuse to serve it by sending an error status and logging that behavior to the application logs, so that the sysadmin can ban the IP at the firewall level with a tool like fail2ban.
For an authenticated service it's much the same, except you may want to rate-limit based on identity as well as IP, and you might not want to ban an authenticated user outright...
Note that for a public API you don't really have to handle the rate limit yourself: preventing the same IP from making 1000 POST requests to the same URL in 10 seconds is something that can and should be done by a sysadmin.
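If you also want a safety net inside the Node process itself, a minimal sketch using the express-rate-limit package is below; the 10-minute window and the limit of 20 requests per IP are arbitrary example values:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Allow at most 20 requests per IP per 10-minute window on the API routes.
const apiLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 20,                  // requests per window, per IP
  standardHeaders: true,    // report the limit in RateLimit-* response headers
  legacyHeaders: false,
});

app.use('/api/', apiLimiter);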

Related

How do I properly setup and deploy a private API exclusively for my frontend?

I am currently working on a web application. The client is designed in Vue.js and the server application is made with node.js and express.
As of now I plan to deploy both the client-website and the node.js-app on the same server. Both will be addressed via two different, unique domains. The server will be set up manually with nginx.
The problem now is that this solution won't prevent a user from being able to send requests to the server outside the client that was made for it. Someone will be able to call the /register route (with postman, curl etc.) to create an account in an 'unofficial' way. I think the only clean solution is that only my Vue.js-app would be able to perform such actions. However, since both the server and the client are two different environments/applications, some sort of cross-origin-request mechanism (cors for instance) must be set up.
So I'm wondering, is this bad by design or is it usual that way? If I wanted this not to be possible, should I see to that issue and try to make the express-API as private as possible? If so, what are usual best practices for development and deployment / things to consider? Should I change my plan and work on a completely different architecture for my expectations instead / How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
I think the only clean solution is that only my Vue.js-app would be able to perform such actions.
An API that is usable from a browser-based application is just open to the world. You cannot prevent use from other places. That's just how the WWW works. You can require that a user in your system is authenticated and that the auth credential is provided with each request (such as an auth cookie) before the API will provide any data. But, even then, any hacker can sign up for your system, take the auth credential and use your API for their own purposes. You cannot prevent that.
If I wanted this not to be possible, should I see to that issue and try to make the express-API as private as possible?
There is no such thing as a private API that is used from a browser-based application. Nothing that runs in a browser is private.
If you were thinking of using CORS protections to limit the use of your API, that only limits it from other browser-based applications, as CORS protections are enforced inside the browser. Any outside script using your API is not subject to CORS at all.
How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
Bigger sites (such as Google) have APIs that require some sort of developer credential and that credential comes with particular usage rules (max number of requests over some time period, max data used, storage limits, etc...). These sites implement code in their API servers to verify that only an authorized client (one with the proper developer credential) is using the API and that the usage stays within the bounds that are afforded that developer credential. If not, the API will return some sort of 4xx or 5xx error.
Someone will be able to call the /register route (with postman, curl etc.) to create an account in an 'unofficial' way.
Yes, this will likely be possible. Many sites nowadays use something like a captcha to require human intervention before a request to create an account can succeed. This can be successful at preventing entirely automated creation of accounts. But, it still doesn't stop some developer from manually creating an account, then grabbing that account's credentials and using them with your API.
When talking about web applications, the only truly private APIs are APIs that are entirely within your server (one part of your server calling something in another part of your server). These private APIs can even be http requests, but they must either not be accessible to the outside world or they must require credentials that are never available to the outside world. Since they are not available to the outside world, they cannot be used from within a browser application.
OK, that was a lot of things you cannot do, what CAN you do?
First and foremost, an application design that keeps private APIs internal to the server (not sent from the client) is best. So, if you want to implement a piece of functionality that needs to call several APIs you would like to be private, then don't implement that functionality on the client. Implement that functionality on the server. Have the client make one request and get some data or HTML back that it can then display. Keep as much of the internals of the implementation of that feature on the server.
Second, you can require auth credentials for a user in your system for all API usage. While this won't prevent rogue usage, it will give you a bit more control because you can track usage, suspend user accounts when you find abuse, etc...
Third, you can implement usage rules for your public-facing APIs such as requests per minute, amount of data, etc... that your actual web application would never exceed so if they are exceeded, then it must be some unintended usage of the API. And, you could go further than that and detect usage patterns that do not happen in your client. For example, if you see an API user cycling through dozens of users, requesting all their profiles and you know that is something your regular client never does, you could detect that type of usage and block it.
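As a rough illustration of that last idea, here is a hypothetical sketch that counts how many distinct profiles a single client requests within an hour and rejects clients that exceed a threshold; the in-memory Map, the profileId route parameter, and the threshold of 50 are all assumptions made up for this example:

// Hypothetical middleware: flag clients that request an unusual number of distinct
// profiles within one hour, something the site's own front-end never does.
const seenProfiles = new Map(); // key: client IP, value: Set of requested profile ids

function detectProfileScraping(req, res, next) {
  const key = req.ip;
  if (!seenProfiles.has(key)) {
    seenProfiles.set(key, new Set());
    setTimeout(() => seenProfiles.delete(key), 60 * 60 * 1000); // forget the client after an hour
  }
  const profiles = seenProfiles.get(key);
  profiles.add(req.params.profileId);

  if (profiles.size > 50) { // arbitrary threshold for the example
    return res.status(429).send('Unusual usage detected');
  }
  next();
}

// app.get('/api/profiles/:profileId', detectProfileScraping, profileHandler);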

Restrict api access in Node JS express

I have an express server with a few API routes like this:
server.post("/api/send-email", (req, res) => {
});
});
You don't need an auth token to access the API, but I only want my website mydomain.com to be able to use it.
I have tried enabling restricting access like this:
function restrictAccess(req, res, next) {
  if (req.headers['origin'] !== 'http://localhost:3000') {
    res.sendStatus(403);
  } else {
    next();
  }
}
And I then passed restrictAccess into my route as middleware.
When I make a POST request with Postman I can't reach the API anymore, but if I just change the origin header I am able to access it again.
How can I allow only requests from mydomain.com? I have searched the internet for a long time now, but couldn't find anything. Is it even possible?
How can I allow only requests from my own webpages at mydomain.com?
In a nutshell, you can't. Any tool like postman or any script (such as node.js, PHP, Perl, etc...) can send whatever headers or other request parameters it wants so headers by themselves are useless for restricting access to only a web page in your domain.
That's just how the web works.
Restricting access would more commonly require a user login or some credential like that (probably setting a cookie that you can check) and then if you see that your APIs are being abused, you can ban/remove a specific user account that is doing it.
There are other techniques that may make it more work for scripts or tools to use your API, but even they are not immune to a hacker that wants to put in the work. For example, your server can generate a token, embed it in your web page and then whenever you make an API request from your web page, you include the token from the web page. Your server, then checks for the presence of a valid token. You make sure that tokens expire in some reasonable amount of time so a hacker can't just get one and use it for a long time.
A determined hacker can still scrape a token out of your web page whenever they want to use it so this is only an obstacle, not something that stops a determined hacker.
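For what it's worth, here is a minimal sketch of that expiring-token idea using Node's built-in crypto module; the x-api-token header name, the TOKEN_SECRET environment variable, and the five-minute lifetime are assumptions, and as noted above this only raises the bar rather than stopping a determined attacker:

const crypto = require('crypto');

const TOKEN_SECRET = process.env.TOKEN_SECRET; // known only to the server
const TOKEN_TTL_MS = 5 * 60 * 1000;            // tokens expire after five minutes

// Issue a token of the form "<timestamp>.<hmac>" and embed it in the page you serve.
function issueToken() {
  const ts = Date.now().toString();
  const sig = crypto.createHmac('sha256', TOKEN_SECRET).update(ts).digest('hex');
  return `${ts}.${sig}`;
}

// Express middleware: reject API calls whose token is missing, forged, or expired.
function requireToken(req, res, next) {
  const [ts, sig] = (req.get('x-api-token') || '').split('.');
  const expected = crypto.createHmac('sha256', TOKEN_SECRET).update(ts || '').digest('hex');
  const fresh = Number(ts) > 0 && Date.now() - Number(ts) < TOKEN_TTL_MS;
  if (!fresh || sig !== expected) return res.sendStatus(403);
  next();
}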
The only real solution here and what someone like Google uses for their APIs is to require some sort of credential with each API call and then instrument your server for abuse of the APIs (rate limiting, unintended use, etc...) and then revoke that credential if it is being misused. The credential can be a developer token (as with some Google APIs) or it can be some sort of authentication credential from a user login (like perhaps a cookie).
There are other tricks I've seen used before where an API only works properly if a sequence of requests came before it that would normally be coming from your web page. This is a lot more work to implement and maintain, but if your web page would normally issue a request for the page, then make two ajax calls, then request five images and then call the API, you can have your server track this sequence of events from a specific browser, and only when you see the expected sequence that looks like it's coming from a real browser page do you allow the API call to work. Again, this is a lot of work and still not infallible, because a determined hacker can just use something like Puppeteer to automate an actual browser.
Major browsers send along the origin header without permitting any browser Javascript to modify it.
Non-browser API clients, like Postman and anything else, can set the origin header, and any other headers, to whatever they choose. Non-browser API clients can easily call your API while pretending to be browsers.
Therefore, as a security tip: using the origin header's value to decide whether to grant access to your API offers you no security whatsoever.
You really do need some kind of token access mechanism. Especially for an API that sends email. If a cybercreep finds your API, your hosting service will accuse you of sending spam.
Sorry about that. :-( Security is a pain in the neck.

Nodejs Express, How to rate limit each user of my website when calling my API?

I have cors installed and only my website is whitelisted, how reliable is this? Can bad actors still call my api if they are not calling it from my website?
Next, I want to rate-limit each user on my website (the users are not registered or signed in):
I want to restrict each user to no more than 1 request per second.
How can each user be identified? and then how can each user be limited?
Too many separate questions packaged together here. I'll tackle the ones I can:
I have cors installed and only my website is whitelisted, how reliable is this? Can bad actors still call my api if they are not calling it from my website?
CORS only works with cooperating clients. That means browsers. Your API can be used by anybody else with a scripting tool or any programming language or even a tool like CURL. So, CORS does not prevent bad actors at all. The only thing it prevents is people embedding calls to your API in their own web page Javascript. It doesn't prevent anyone from accessing your API programmatically from whatever tool they want. And, they could even use your API in their own web-site via a proxy. It's not much protection.
How can each user be identified? and then how can each user be limited?
Rate limiting works best when there's an authentication credential with each request because that allows you to uniquely identify each request and/or ban or delay credentials that misbehave. If there are no credentials, you can try to cookie them to track a given user, but cookies can be blocked or thrown away even in browsers to defeat that. So, without any sort of auth credential, you're stuck with just the requesting IP address. For some users (like home users), that's probably sufficient. But, for corporate users, many, many users may present as the same corporate IP address (due to how their NAT or proxy works), thus you can't tell one user at a major company from another purely by IP address. If you had a lot of users from one company simultaneously using the site, you could falsely trigger rate limiting.
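A sketch of what that could look like with the express-rate-limit package, keying each client on a visitor cookie when one exists and falling back to the IP address otherwise (the visitorId cookie name is an assumption, and cookie-parser is assumed to be installed so req.cookies is populated):

const rateLimit = require('express-rate-limit');

// Roughly "1 request per second per user": a 1-second window holding a single request.
const perUserLimiter = rateLimit({
  windowMs: 1000,
  max: 1,
  // Identify a "user" by a visitor cookie when one exists, otherwise fall back to the IP.
  keyGenerator: (req) => (req.cookies && req.cookies.visitorId) || req.ip,
  standardHeaders: true,
  legacyHeaders: false,
});

// app.use(cookieParser());          // needed so req.cookies is populated
// app.use('/api/', perUserLimiter);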

Block http/https requests executed from scripts or any other external sources in node server

In node, if I use a library like axios and a simple async script, I can send unlimited post requests to any web server. If I know all parameters, headers and cookies needed for that url, I'll get a success response.
Also, anyone can easily make those requests using Postman.
I already use CORS in my node servers to block requests coming from different origins, but that works only for other websites triggering requests in browsers.
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, Postman, software like LOIC, etc...) in a node server using express.
thanks!
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, Postman, software like LOIC, etc...) in a node server using express.
No, it is not possible. A well formed request from postman or coded with axios in node.js can be made to look exactly like a request coming from a browser. Your server would not know the difference.
The usual scheme for an API is that you require some sort of developer credential in order to use your API. You apply terms of service to that credential that describe what developers are allowed or are not allowed to do with your API.
Then, you monitor usage programmatically and you slow down or ban any credentials that are misusing the APIs according to your terms (this is how Google does things with its APIs). You may also implement rate limiting and other server protections so that a run-away developer account can't harm your service. You may even black list IP addresses that repeatedly abuse your service.
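A bare-bones sketch of that credential check as Express middleware; the x-api-key header name, the in-memory Set, and the example keys are assumptions, and a real system would keep the keys in a database:

// Hypothetical key store; in practice the keys would live in a database.
const activeKeys = new Set(['key-for-my-own-frontend', 'key-issued-to-partner-x']);

function requireApiKey(req, res, next) {
  const key = req.get('x-api-key');
  if (!key || !activeKeys.has(key)) {
    return res.status(401).json({ error: 'missing or revoked API key' });
  }
  req.apiKey = key; // keep it around so later middleware can log or rate-limit per key
  next();
}

// Revoking a misbehaving client is then just: activeKeys.delete('key-issued-to-partner-x');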
For APIs that you wish for your own web pages to use (to make Ajax calls to), there is no real way to keep others from using those same APIs programmatically. You can monitor their usage and attempt to detect usage that is out-of-line of what your own web pages would do. There are also some schemes where you place a unique, short-use token in your web page and require your web pages to include the token with each request of the API. With some effort, that can be worked around by smart developers by regularly scraping the token out of your web page and then using it programmatically until it expires. But, it is an extra obstacle for the API thief to get around.
Once you have identified an abuser, you can block their IP address. If they happen to be on a larger network (like say a university), their public IP address may be shared by many via NAT and you may end up blocking many more users than you want to - that's just a consequence of blocking an IP address that might be shared by many users.

How to make Node API only accessible by web app?

I'm developing a web app with React and a GraphQL API with Node.js / Express. I would like to make the API more secure so that it's harder for API requests that don't come from the web app in the browser to get data. I know how to do it with registered users. But how do I make it so non-registered users can still access some basic data needed for the app?
Is it possible to put some kind of key in the web app, so the API call can't be replicated by others by sniffing it in the browser's network dev tool and replaying it in Postman? Does SSL/TLS also secure requests from being seen in that browser tool? Or should I use something like a "standard" user for non-registered visitors?
It's a server-side web app with Next.js.
I know there's no 100% secure API, but maybe it's possible to make unauthorized access harder.
Edit:
I'm not sure if this is a problem about CSRF, because it's not about accessing user data or changing data through malicious websites etc. It's about other people trying to use the website data (all GET requests to the API), who could easily build their own web app on top of my API. I want to make it so no one can easily query my API through simple Postman requests.
The quick answer is no, you can't.
If you are trying to prevent what can be described as legitimate users from accessing your API, you can't really do it. They can always mimic the same logic and hit your web page first before abusing the API. If this is what you are trying to prevent, your best bet is to add rate limiting to the API to prevent a single user from making too many requests (I'm the author of ralphi, and express-rate-limit is also very popular).
But if you are actually trying to prevent another site from leeching off you and serving your content to their users, that is actually easier to solve.
Most browsers send a Referer header with the request; you can check this header and see that requests are actually coming from users on your own site (this technique is called leech protection; see the middleware sketch below).
A leeching site can try to proxy requests to your API, but since they are all going to come from the same IP, they will hit your rate limiting and will only be able to serve a few users before being blocked.
One thing the leeching site can do is try to cache your API responses so it won't have to make so many requests. If that is a possible case you are back to square one, and you might need to manually block its IP once you notice such abuse. I would also check whether it's legal, because it might be breaking the law.
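A sketch of the Referer check mentioned above as Express middleware; the mysite.example host name is a placeholder, and as discussed elsewhere on this page, non-browser clients can forge the header, so treat this as leech protection rather than authentication:

// Only serve API requests whose Referer points back at our own site.
function leechProtection(req, res, next) {
  let host = '';
  try {
    host = new URL(req.get('referer') || '').hostname;
  } catch (e) {
    // missing or malformed Referer header: host stays empty and the request is rejected
  }
  if (host !== 'mysite.example') {
    return res.sendStatus(403);
  }
  next();
}

// app.use('/api/', leechProtection);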
Another option, similar to the Referer check, is to use SameSite cookies. They will only be sent if the request is coming directly from your site. They are probably more reliable than the Referer header, but not all browsers actually respect them.
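Setting such a cookie from an existing Express app could look roughly like this; the site-session cookie name is a placeholder, and with sameSite: 'strict' the browser withholds the cookie from requests triggered by other sites:

const crypto = require('crypto');

// Issued when the visitor first loads the site; sameSite: 'strict' means the browser
// will not attach this cookie to requests triggered from other sites.
app.get('/', (req, res) => {
  res.cookie('site-session', crypto.randomUUID(), {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
  });
  res.sendFile('index.html', { root: __dirname });
});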

Resources