I'm developing a web app with React and a GraphQL API with Node.js/Express. I would like to make the API more secure, so that it's harder for API requests that don't come from the web app in the browser to get data. I know how to do it with registered users. But how can a non-registered user still access the basic data the app needs?
Is it possible to put some kind of key in the web app, so the API call can't be replicated by sniffing the network tab in the browser dev tools and replaying it in Postman? Does SSL/TLS also secure requests from that browser tool? Or should I use a "standard" user for non-registered visitors?
It's a server-side web app with Next.js.
I know there's no 100% secure API, but maybe it's possible to make unauthorized access harder.
Edit:
I'm not sure if this is a CSRF problem, because it's not about accessing user data or changing data through malicious websites etc. It's about other people using the website data (all GET requests to the API) to easily build their own web app on top of my API. The goal is that no one can easily query my API through simple Postman requests.
The quick answer is: no, you can't.
If you are trying to prevent what can be described as legitimate users from accessing your API, you can't really do it. They can always fake the same logic and hit your web page first before abusing the API. If this is what you are trying to prevent, your best bet is to add rate limiting to the API so a single user can't make too many requests (I'm the author of ralphi, and express-rate-limit is also very popular).
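For example, a minimal sketch of the express-rate-limit approach (the window and limit values are placeholders you would tune to your own traffic):

```js
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Limit each IP to 100 API requests per 15-minute window.
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
});

app.use('/api/', apiLimiter);
```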
But if you are actually trying to prevent another site from leeching off you and serving your content to their users, that is actually easier to solve.
Most browsers send a Referer header with the request. You can check this header and verify that requests are actually coming from users on your own site (this technique is called leech protection).
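A minimal sketch of such a check as Express middleware, assuming your site lives at www.example.com (note that non-browser clients can spoof the header, so this only deters casual leeching):

```js
const express = require('express');
const app = express();

// Reject API requests whose Referer does not point at our own site.
app.use('/api/', (req, res, next) => {
  const referer = req.get('Referer') || '';
  if (!referer.startsWith('https://www.example.com/')) {
    return res.status(403).send('Forbidden');
  }
  next();
});
```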
A leeching site can try to proxy requests to your API, but since they will all come from the same IP, they will hit your rate limiting, and the leecher can only serve a few users before being blocked.
One thing the leecher site can do is cache your API responses so it won't have to make so many requests. If this is a possible case, you are back to square one, and you might need to manually block its IP once you notice such abuse. I would also check whether it's legal, because the leecher might be breaking the law.
Another option, similar to the Referer check, is to use SameSite cookies. They will only be sent if the request is coming directly from your site. They are probably more reliable than the Referer header, but not all browsers actually respect them.
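Issuing such a cookie from Express looks roughly like this (the route, cookie name, and value are placeholders):

```js
const express = require('express');
const app = express();

app.post('/login', (req, res) => {
  // Issue a cookie that browsers will only send on same-site requests.
  res.cookie('session', 'opaque-session-token', {
    sameSite: 'strict',
    httpOnly: true, // not readable from page JavaScript
    secure: true,   // only sent over HTTPS
  });
  res.sendStatus(204);
});
```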
Related
I am currently working on a web application. The client is designed in Vue.js and the server application is made with node.js and express.
As of now I plan to deploy both the client website and the Node.js app on the same server. Both will be addressed via two different, unique domains. The server will be set up manually with nginx.
The problem now is that this solution won't prevent a user from being able to send requests to the server outside the client that was made for it. Someone will be able to call the /register route (with Postman, curl etc.) to create an account in an 'unofficial' way. I think the only clean solution is that only my Vue.js app would be able to perform such actions. However, since the server and the client are two different environments/applications, some sort of cross-origin-request mechanism (CORS, for instance) must be set up.
So I'm wondering: is this bad by design, or is it usual that way? If I wanted this not to be possible, should I address that issue and try to make the Express API as private as possible? If so, what are the usual best practices for development and deployment, and what should I consider? Should I change my plan and work on a completely different architecture instead? How do 'bigger' sites manage to allow no requests outside their official, public developer APIs?
I think the only clean solution is that only my Vue.js app would be able to perform such actions.
An API that is usable from a browser-based application is just open to the world. You cannot prevent use from other places; that's just how the web works. You can require that a user in your system is authenticated and that an auth credential is provided with each request (such as an auth cookie) before the API will provide any data. But even then, any hacker can sign up for your system, take the auth credential, and use your API for their own purposes. You cannot prevent that.
If I wanted this not to be possible, should I address that issue and try to make the Express API as private as possible?
There is no such thing as a private API that is used from a browser-based application. Nothing that runs in a browser is private.
If you were thinking of using CORS protections to limit the use of your API, those only limit it from other browser-based applications, since CORS protections are enforced inside the browser. Any outside script using your API is not subject to CORS at all.
How do 'bigger' sites manage to allow no requests outside their official, public developer APIs?
Bigger sites (such as Google) have APIs that require some sort of developer credential and that credential comes with particular usage rules (max number of requests over some time period, max data used, storage limits, etc...). These sites implement code in their API servers to verify that only an authorized client (one with the proper developer credential) is using the API and that the usage stays within the bounds that are afforded that developer credential. If not, the API will return some sort of 4xx or 5xx error.
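As a rough sketch, such a developer-credential check in Express might look like this (the header name and key store are illustrative assumptions, not any particular provider's scheme):

```js
const express = require('express');
const app = express();

// In a real system this would be a database of issued developer keys.
const issuedKeys = new Set(['dev-key-abc123']);

app.use('/api/', (req, res, next) => {
  const key = req.get('X-API-Key');
  if (!key || !issuedKeys.has(key)) {
    return res.status(401).json({ error: 'missing or invalid API key' });
  }
  next();
});
```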
Someone will be able to call the /register route (with Postman, curl etc.) to create an account in an 'unofficial' way.
Yes, this will likely be possible. Many sites nowadays use something like a captcha to require human intervention before a request to create an account can succeed. This can be successful at preventing entirely automated creation of accounts. But it still doesn't stop some developer from manually creating an account, then grabbing that account's credentials and using them with your API.
When talking about web applications, the only truly private APIs are APIs that are entirely within your server (one part of your server calling something in another part of your server). These private APIs can even be http requests, but they must either not be accessible to the outside world or they must require credentials that are never available to the outside world. Since they are not available to the outside world, they cannot be used from within a browser application.
OK, that was a lot of things you cannot do, what CAN you do?
First and foremost, an application design that keeps private APIs internal to the server (not called from the client) is best. So, if you want to implement a piece of functionality that needs to call several APIs you would like to keep private, don't implement that functionality on the client. Implement it on the server: have the client make one request and get some data or HTML back that it can then display. Keep as much of the implementation of that feature on the server as possible.
Second, you can require auth credentials for a user in your system for all API usage. While this won't prevent rogue usage, it will give you a bit more control because you can track usage, suspend user accounts when you find abuse, etc.
Third, you can implement usage rules for your public-facing APIs (requests per minute, amount of data, etc.) that your actual web application would never exceed, so that if they are exceeded, it must be some unintended usage of the API. And you could go further than that and detect usage patterns that do not happen in your client. For example, if you see an API user cycling through dozens of users, requesting all their profiles, and you know your regular client never does that, you can detect that type of usage and block it.
In Node, if I use a library like axios and a simple async script, I can send unlimited POST requests to any web server. If I know all the parameters, headers and cookies needed for that URL, I'll get a success response.
Also, anyone can easily make those requests using Postman.
I already use CORS in my node servers to block requests coming from different origins, but that works only for other websites triggering requests in browsers.
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, postman, softwares like LOIC, etc...) in a node server using express.
thanks!
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, postman, softwares like LOIC, etc...) in a node server using express.
No, it is not possible. A well-formed request from Postman, or coded with axios in Node.js, can be made to look exactly like a request coming from a browser. Your server would not know the difference.
The usual scheme for an API is that you require some sort of developer credential in order to use your API. You apply terms of service to that credential that describe what developers are allowed or are not allowed to do with your API.
Then, you monitor usage programmatically and you slow down or ban any credentials that are misusing the APIs according to your terms (this is how Google does things with its APIs). You may also implement rate limiting and other server protections so that a runaway developer account can't harm your service. You may even blacklist IP addresses that repeatedly abuse your service.
For APIs that you wish your own web pages to use (to make Ajax calls to), there is no real way to keep others from using those same APIs programmatically. You can monitor their usage and attempt to detect usage that is out of line with what your own web pages would do. There are also schemes where you place a unique, short-use token in your web page and require your web pages to include that token with each API request. With some effort, smart developers can work around that by regularly scraping the token out of your web page and then using it programmatically until it expires. But it is an extra obstacle for the API thief to get around.
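One way such a short-use token could be built, as a sketch, is an HMAC over an expiry timestamp; the secret, TTL, and token format below are illustrative assumptions, not a standard scheme:

```js
const crypto = require('crypto');

const SECRET = process.env.TOKEN_SECRET; // assumed to be set in the environment
const TTL_MS = 5 * 60 * 1000; // tokens live for 5 minutes

// Embed the returned token in the rendered page; the client echoes it back.
function issueToken() {
  const expires = Date.now() + TTL_MS;
  const sig = crypto.createHmac('sha256', SECRET).update(String(expires)).digest('hex');
  return `${expires}.${sig}`;
}

// The API verifies the token before serving the request.
function verifyToken(token) {
  const [expires, sig] = String(token).split('.');
  if (!expires || !sig || Date.now() > Number(expires)) return false;
  const expected = crypto.createHmac('sha256', SECRET).update(expires).digest('hex');
  return sig.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```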
Once you have identified an abuser, you can block their IP address. If they happen to be on a larger network (like say a university), their public IP address may be shared by many via NAT and you may end up blocking many more users than you want to - that's just a consequence of blocking an IP address that might be shared by many users.
I've been looking around the web for a little while but couldn't grasp the concept of making an API private so that only the front-end and back-end can talk to each other. What I essentially want is an API that's only accessible through the front-end, not through curl, Postman or anything else.
I have the following setup:
The app is hosted on Heroku; the backend is in Node.js
I use an HTTPS connection with a certificate I generated via the Let's Encrypt tool
I have a public API at the moment that returns the string 'Hello world'
Currently, you can access it either via the front-end or by going to www.example.com/api/test, but what I would like is to not let the user manually visit the link or use curl or Postman to get it; instead it should only be accessible through the front-end.
The front-end is written in Angular 2 (if it matters at all)
Note that I am not planning to have any user sign-in on the website. I simply want to restrict access to the API from the outside world so that only my front-end can get it.
UPDATE USE CASE
The use case in the future is simple. I have a basic sign-up form which asks for an email address and a text description. I then use nodemailer on the backend to send that information to Gmail, using a POST request from Angular 2. I access the data sent through req.on('data') and req.on('end') and process it. My fear is: how do I make sure I am not going to get spammed through that API and receive 10k emails? Hence my wish to somehow make the API only accessible through the front-end.
While you cannot prevent a REST service from being called by the whole internet, you can still prevent spamming :
Whether your service requires authentication or not, it's always the same mechanism: use a captcha (the most important part) and rate-limit your API.
1. CAPTCHA:
The best way to ensure that the client making the request to a server is driven by a human being is a captcha.
CAPTCHA:
A CAPTCHA (a backronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used in computing to determine whether or not the user is human.
You can find plenty of services, or libraries that will create captchas, like Google's reCAPTCHA.
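Server-side verification of a reCAPTCHA token, for example, is a single POST to Google's siteverify endpoint. A rough sketch (the secret is assumed to be provided via the environment):

```js
const https = require('https');

// Returns a promise resolving to true if Google confirms the captcha token.
function verifyRecaptcha(token) {
  return new Promise((resolve, reject) => {
    const body =
      `secret=${encodeURIComponent(process.env.RECAPTCHA_SECRET)}` +
      `&response=${encodeURIComponent(token)}`;
    const req = https.request(
      'https://www.google.com/recaptcha/api/siteverify',
      { method: 'POST', headers: { 'Content-Type': 'application/x-www-form-urlencoded' } },
      (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve(JSON.parse(data).success === true));
      }
    );
    req.on('error', reject);
    req.end(body);
  });
}
```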
2. Rate limiting:
For a public service, you can rate-limit access by IP: if the same IP makes 10, 100, or even 1000 requests (depending on the purpose of that service), that's a bit suspicious, so you can refuse to serve it by sending an error status and logging that unfair behavior to the application logs, so that the sysadmin can ban the IP at the firewall level with a tool like fail2ban.
For an authenticated service, it's the same, except you might want to rate-limit based on both the IP and the identity, and you might not want to outright ban an authenticated user...
Note that you don't really have to handle the rate limit yourself for a public API: preventing the same IP from making 1000 POST requests to the same URL in 10 seconds is something that can, and should, be done by a sysadmin.
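At the web-server level, that kind of limit can be expressed in a few lines of nginx configuration; a sketch, with placeholder zone name, rate, and upstream port:

```nginx
# Goes in the http context: allow each client IP 10 requests/second.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        # Queue up to 20 extra requests, reject the rest with 503.
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```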
The situation
I am writing a single-page web app (using Angular). Let's call it SPA.
Another team-mate is writing some APIs (using Node.js). Let's call it Server.
My SPA logs in to the Server using login/password, and does some stuff.
My team-mate has decided to use cookies to track the session. Hence, upon a successful login, an HTTP-only cookie is set in the web browser the SPA is loaded in.
The problem
If we put the SPA in the Server's public_html dir, all works well. This, however, makes the SPA part of the API code. It breaks our build process, since every version upgrade to the SPA now requires upgrading the API too.
If we host the SPA on a separate web server that only serves the static SPA files, I run into CORS issues. Since the SPA comes from a different origin than the APIs it is trying to access, the browser blocks the Ajax calls. To overcome this, we will have to set Access-Control-Allow-Origin appropriately on the server side. I also understand that Access-Control-Allow-Credentials: true needs to be set to instruct the browser to set/send the cookies.
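With the cors middleware for Express, that header setup looks roughly like this (the SPA origin is a placeholder):

```js
const express = require('express');
const cors = require('cors');

const app = express();

app.use(cors({
  origin: 'https://spa.example.com', // exact origin; a wildcard is not allowed with credentials
  credentials: true,                 // emits Access-Control-Allow-Credentials: true
}));
```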
Possible solutions
We create a build process which does a git pull into the Server's public_html dir every time the SPA is upgraded. I am trying to avoid this, to keep the client and server upgrades separate.
We create a proxy kind of situation, where the Server doesn't store the SPA files but collects them on demand from another server that hosts them. In this case, the web browser will see the SPA files and the subsequent Ajax calls as coming from the same origin.
We code the server to set Access-Control-Allow-Origin: * in its responses. Firstly, this is too open and looks insecure. Is it really insecure, or is that just my perception? Also, since we are setting Access-Control-Allow-Credentials: true, Chrome complains: Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true. To overcome this, we would have to put exact origins (perhaps using a regex) in Access-Control-Allow-Origin, which may seriously restrict us from distributing our SPA to users in unknown domains.
For a Server API designer, is cookie-based authentication the recommended way to handle authentication for SPAs? OAuth 2.0 and JWT-based authentication seem to suggest that cookie-based authentication is not right for SPAs. Any pros/cons?
Kindly comment on the above options, or suggest any others that you may have used. Thanks in advance.
I think the issue is that your terminology is confusing. An API is not a server; it's an application that lives on a machine that can also be a server. If you make a Node.js API, I suggest you put an nginx server in front of it as a reverse proxy. Assuming you want the nginx server, the API, and the SPA files all on the same machine, you can deploy your API to one directory and your SPA to another, and have nginx route the requests accordingly.
So I believe solution 2 is the way to go. From there you can easily scale by increasing the number of instances (if you use AWS) and load-balancing them, or by separating your API into its own application server.
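A minimal sketch of that nginx layout (the domain, paths, and port are placeholder assumptions):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity.

    # API requests are proxied to the Node.js application server.
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }

    # Everything else is served from the built SPA files.
    location / {
        root /var/www/spa/dist;
        try_files $uri $uri/ /index.html;
    }
}
```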
As far as authentication goes, I have always preferred using an Authorization header with access tokens over cookies, for SPA or API requests. The idea that each request is self-contained and does not require a persistent string kept in the browser is more appealing to me, though you can save the access token in local storage.
I would go with either solution 2 or 3.
2: You could set up both (web page and API) on the same server (or use reverse proxies) so that from an outside perspective they share the same origin.
3: In the case of an API, the same-origin policy becomes less important. The API is to be consumed by clients that are not part of your web application anyway, no?
I would not see any issue in setting a more lax allow-origin header. And by more lax I don't mean a wildcard; just add the origin of your web page. Why do you want to wildcard it?
For my current side project, which is a modular web management system (which could contain modules for database management, CMS, project management, resource management, time tracking, etc.), I want to expose the entire system as a RESTful API, as I think that will make the system more usable. The system itself is going to be coded in ASP.NET MVC3; however, if I make all the data/actions available through a RESTful API, that should make the system very easy to use with PHP, Ruby, Python, etc. (they could even build their own interface to manage certain data if they wanted).
However, the one thing that seems hard to do easily (from the point of view of the user of the RESTful API) is security with Ajax functionality. If I wanted something that was complex to set up and use, I would just create SOAP services, but the whole drive for using a RESTful API is that it is very easy. The most common way of securing a RESTful API is with a key that is associated with a user. This works fine when all the calls are done on the server side; however, once you start doing Ajax functionality, that changes. I would want the RESTful API to be callable directly from JavaScript, but anyone using Firebug would easily be able to see the key the user is using, allowing that person access to the system. Is there a better way to secure a RESTful API that does not make the user of the RESTful API do complex things just to set it up?
For one thing, you can't prevent the user of your API from exposing his key.
But if you are writing a client for your API, I would suggest using your server side to make any requests to the API, while your HTML pages provide the data from the user. If you absolutely must use JavaScript to make calls to the API and you still have a server side that populates the page in question, then you can obscure the actual key via a one-way digest algorithm in a timestamp-dependent way while generating the page, and make your API check that digest in a time-dependent way too.
Also, I'd suggest that you take a deeper look into OAuth nonces and timestamps. Twitter and other API providers obviously have this problem too, so they must be doing something with the nonce values.
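An illustrative shape of such a nonce-and-timestamp signature, as a sketch (this is not the OAuth signing algorithm itself):

```js
const crypto = require('crypto');

// Sign a request with the API secret, a timestamp, and a random nonce.
// The server recomputes the HMAC, rejects stale timestamps, and remembers
// recent nonces so the same signed request cannot be replayed.
function signRequest(apiSecret, method, url) {
  const timestamp = Math.floor(Date.now() / 1000);
  const nonce = crypto.randomBytes(16).toString('hex');
  const base = [method.toUpperCase(), url, timestamp, nonce].join('\n');
  const signature = crypto.createHmac('sha256', apiSecret).update(base).digest('base64');
  return { timestamp, nonce, signature };
}
```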
It is possible to add some signature to requests made from JavaScript. But I'm not sure how 'RESTful' the URLs would be with this extra info. And there you have the same problem: anyone who can see your signature-making algorithm can craft his own signature, which your server will accept as well.
SSL stands for Secure Sockets Layer. It is crucial for security in REST API design: it will secure your API and make it less vulnerable to malicious attacks.
Other security measures you should take into consideration include making the communication between server and client private, and ensuring that anyone consuming the API doesn't get more than what they request.
SSL certificates are not hard to load onto a server, and they are mostly available for free during the first year. Where they are not free, they are not expensive to buy.
The clear difference between the URL of a REST API that runs over SSL and one that does not is the "s" in https:
https://mysite.com/posts runs on SSL.
http://mysite.com/posts does not run on SSL.
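On an Express app behind a TLS-terminating proxy (such as Heroku or nginx), a common sketch for forcing HTTPS on every request looks like this (the proxy setup is an assumption):

```js
const express = require('express');
const app = express();

// Trust the proxy's X-Forwarded-Proto header so req.secure works.
app.enable('trust proxy');

// Redirect any plain-HTTP request to its HTTPS equivalent.
app.use((req, res, next) => {
  if (req.secure) return next();
  res.redirect(301, `https://${req.get('Host')}${req.originalUrl}`);
});
```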