I have a Cloudflare Worker that presents a registration form, accepts the input the user posts back to the Worker, and forwards it to a Node HTTP API hosted elsewhere (on DigitalOcean, if that matters) that inserts the data into MongoDB (though it could be any database). I control the code in both the CF Worker and the API.
I am looking for the best way to secure this. My current plan is to include a pre-shared secret key in the API request headers, and I have locked down what this particular API can do via database access control. Is there an additional way for me to confirm that only the CF Worker can call the API?
If this is obvious to some, I apologize. I have always been of the mind that unless you are REALLY good at security, it is best to consult those who are.
You can research the OAuth 2.0 standard, which is the authorization standard for third-party clients. Here is a link: https://oauth.net/2/
This solution is the most professional. There are other, less secure ways to do it that are easier to implement: username and password, an x-api-key header, etc.
It also sounds like you could block all IPs and allow only requests from that specific source (the CF Worker).
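For what it's worth, the pre-shared-secret plan from the question might look roughly like this, assuming an Express API and a secret named API_SECRET configured on both sides (all names and URLs here are made up); a timing-safe comparison avoids leaking the key through response timing:

```js
// Node/Express API side: accept the request only if the shared secret matches.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

function requireSharedSecret(req, res, next) {
  const provided = Buffer.from(req.get('x-api-secret') || '');
  const expected = Buffer.from(process.env.API_SECRET || '');
  // timingSafeEqual throws on length mismatch, so check lengths first
  if (provided.length !== expected.length || !crypto.timingSafeEqual(provided, expected)) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  next();
}

app.post('/register', requireSharedSecret, (req, res) => {
  // ...insert req.body into MongoDB here...
  res.status(201).json({ ok: true });
});

app.listen(3000);
```

On the Worker side, the same secret would be attached to the outgoing request (stored with `wrangler secret put API_SECRET` rather than hard-coded):

```js
// Cloudflare Worker side: forward the submitted form to the API with the secret header.
export default {
  async fetch(request, env) {
    const form = await request.formData();
    return fetch('https://api.example.com/register', {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        'x-api-secret': env.API_SECRET,
      },
      body: JSON.stringify(Object.fromEntries(form)),
    });
  },
};
```

This only proves the caller knows the secret, so it complements (rather than replaces) the database access controls already mentioned.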
Related
I am currently working on a web application. The client is designed in Vue.js and the server application is made with node.js and express.
As of now I plan to deploy both the client website and the Node.js app on the same server. Both will be addressed via two different, unique domains. The server will be set up manually with nginx.
The problem now is that this solution won't prevent a user from sending requests to the server outside the client that was made for it. Someone will be able to call the /register route (with Postman, curl, etc.) to create an account in an 'unofficial' way. I think the only clean solution is that only my Vue.js-app would be able to perform such actions. However, since the server and the client are two different environments/applications, some sort of cross-origin-request mechanism (CORS, for instance) must be set up.
So I'm wondering: is this bad by design, or is it usual that way? If I wanted this not to be possible, should I address that issue and try to make the Express API as private as possible? If so, what are the usual best practices for development and deployment / things to consider? Should I change my plan and work on a completely different architecture instead? How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
I think the only clean solution is that only my Vue.js-app would be able to perform such actions.
An API that is usable from a browser-based application is simply open to the world. You cannot prevent use from other places; that's just how the WWW works. You can require that a user in your system is authenticated and that the auth credential is provided with each request (such as an auth cookie) before the API will provide any data. But even then, any hacker can sign up for your system, take the auth credential, and use your API for their own purposes. You cannot prevent that.
If I wanted this not to be possible, should I address that issue and try to make the Express API as private as possible?
There is no such thing as a private API that is used from a browser-based application. Nothing that runs in a browser is private.
If you were thinking of using CORS protections to limit the use of your API, those only limit it from other browser-based applications, because CORS protections are enforced inside the browser. Any outside script using your API is not subject to CORS at all.
How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
Bigger sites (such as Google) have APIs that require some sort of developer credential and that credential comes with particular usage rules (max number of requests over some time period, max data used, storage limits, etc...). These sites implement code in their API servers to verify that only an authorized client (one with the proper developer credential) is using the API and that the usage stays within the bounds that are afforded that developer credential. If not, the API will return some sort of 4xx or 5xx error.
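Sketched very roughly (the key store here is an in-memory Map and all the names are invented; a real API would keep keys and counters in a database), such a check might look like:

```js
// Hypothetical developer-credential check with a daily quota, as Express middleware.
const express = require('express');
const app = express();

// stand-in for a real key store
const keys = new Map([['demo-key-123', { dailyQuota: 1000, usedToday: 0 }]]);

app.use('/api', (req, res, next) => {
  const record = keys.get(req.get('x-api-key'));
  if (!record) return res.status(401).json({ error: 'invalid API key' });
  if (++record.usedToday > record.dailyQuota) {
    return res.status(429).json({ error: 'quota exceeded' });
  }
  next();
});

app.get('/api/profile', (req, res) => res.json({ name: 'example' }));
app.listen(3000);
```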
Someone will be able to call the /register route (with Postman, curl, etc.) to create an account in an 'unofficial' way.
Yes, this will likely be possible. Many sites nowadays use something like a captcha to require human intervention before a request to create an account can succeed. This can be effective at preventing entirely automated creation of accounts. But it still doesn't stop some developer from manually creating an account, then grabbing that account's credentials and using them with your API.
When talking about web applications, the only truly private APIs are APIs that are entirely within your server (one part of your server calling something in another part of your server). These private APIs can even be http requests, but they must either not be accessible to the outside world or they must require credentials that are never available to the outside world. Since they are not available to the outside world, they cannot be used from within a browser application.
OK, that was a lot of things you cannot do, what CAN you do?
First and foremost, an application design that keeps private APIs internal to the server (not sent from the client) is best. So, if you want to implement a piece of functionality that needs to call several APIs you would like to be private, then don't implement that functionality on the client. Implement that functionality on the server. Have the client make one request and get some data or HTML back that it can then display. Keep as much of the internals of the implementation of that feature on the server.
Second, you can require auth credentials for a user in your system for all API usage. While this won't prevent rogue usage, it will give you a bit more control because you can track usage, suspend user accounts when you find abuse, etc.
Third, you can implement usage rules for your public-facing APIs such as requests per minute, amount of data, etc... that your actual web application would never exceed so if they are exceeded, then it must be some unintended usage of the API. And, you could go further than that and detect usage patterns that do not happen in your client. For example, if you see an API user cycling through dozens of users, requesting all their profiles and you know that is something your regular client never does, you could detect that type of usage and block it.
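For the requests-per-minute part, something like the express-rate-limit package keyed on the authenticated user (or the IP) is one common way to encode "my real client never exceeds N per minute"; this is only a sketch under that assumption, not a full abuse-detection system:

```js
// Per-user/IP rate limit that the legitimate client should never hit.
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const limiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 60,             // anything above this is not normal client behaviour
  keyGenerator: (req) => (req.user && req.user.id) || req.ip,
});

app.use('/api', limiter);
app.get('/api/profile', (req, res) => res.json({ ok: true }));
app.listen(3000);
```

Pattern detection ("this caller just walked through dozens of user profiles") would sit on top of this, typically as logging plus scheduled analysis.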
I'm developing both the server and client side of a web application, and it is almost finished. Now it is time to secure it.
I read lots of articles and Q-A sites to understand the principles of the concept. But there are still question marks on my mind.
There is a similar question here:
How do I secure REST API calls?
They suggested using a token-based security system, which is a very common and practical approach. Services like Firebase and Auth0 also provide this kind of security system.
And this is about "how and where to store token": https://auth0.com/docs/security/store-tokens
If so, how can a token protect the server from fake calls while we are storing it in the browser's local storage?
Explaining it with an example in order to be clear:
My client-side code has a form with options. One of them is selected via a drop-down, and the only options are "1, 2, 3, 4", so the client can never send a form with the value "5" to the server. But what if someone uses an API tool (for example, Postman) to send a form with a value of 5? The attacker can still add a token to that request: first log in to the system as a normal user, then open the browser's developer console, copy your token, and paste it into the header of your fake request.
Not allowing cross-origin calls might solve the problem, but I am not sure whether this means the server and client should run on the same domain (or host)?
Bonus from stackoverflow: Stackoverflow's use of localstorage for Authorization seems unsafe. Is this correct else how do we strengthen it?
They are also discussing a similar question from another angle (not server security but the user's security).
Not related, but in case it is needed: the front end is developed with Angular 5, and the server with Java and the Spring Framework.
I'm developing my website project based on Node.js/Express, and for some UI parts I'm using jQuery AJAX requests to fetch secondary data.
How can we apply some basic control to our REST API endpoints that are used for AJAX calls by the browser?
I was thinking about some kind of token authorization, but it can also be used by other clients (scripts, etc.) once it has been intercepted, so how can we protect our server from unwanted requests? What other controls should be used in these cases (recognizing too many requests from the same client, client blacklists, etc.)?
There are three main topics: authentication, authorization, and security. I will give links and only short answers; the subject is big enough to fill a few books.
Authentication - who is making the request. There are many 'strategies' for authenticating a user. Please check the most popular module for this: http://passportjs.org/docs.
Of course, you can implement one or more of these strategies yourself.
For stateless authentication, JWT tokens are very convenient. If you want to code it yourself (Passport has this strategy too), check this link (one of many on the web): https://scotch.io/tutorials/authenticate-a-node-js-api-with-json-web-tokens.
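If it helps, a minimal sketch with the jsonwebtoken package (the user check is faked and the secret is illustrative):

```js
// Issue a short-lived JWT at login and verify it on protected routes.
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());
const SECRET = process.env.JWT_SECRET || 'dev-only-secret'; // keep the real one out of source control

app.post('/login', (req, res) => {
  // ...check req.body.username / req.body.password against your user store here...
  const token = jwt.sign({ sub: req.body.username }, SECRET, { expiresIn: '15m' });
  res.json({ token });
});

function requireJwt(req, res, next) {
  const raw = (req.get('authorization') || '').replace(/^Bearer /, '');
  try {
    req.user = jwt.verify(raw, SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or expired token' });
  }
}

app.get('/me', requireJwt, (req, res) => res.json(req.user));
app.listen(3000);
```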
How do you prevent token interception? Always use HTTPS and keep the token expiration time short.
Where do you store your token client side? For details, look at https://stormpath.com/blog/where-to-store-your-jwts-cookies-vs-html5-web-storage/. In short, don't store it in web storage because of XSS attacks. Use cookies: when they are correctly configured they are safe (more in the attached link); if not configured, they are very exposed to threats.
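As a rough illustration of "correctly configured", setting the token cookie from Express might look something like this (issueTokenSomehow is a hypothetical helper, and the option values are illustrative):

```js
// Inside the login route, after the credentials have been checked.
app.post('/login', (req, res) => {
  const token = issueTokenSomehow(req); // hypothetical helper that builds the JWT/session token
  res.cookie('session', token, {
    httpOnly: true,          // not readable from page JavaScript, which blunts XSS token theft
    secure: true,            // only ever sent over HTTPS
    sameSite: 'strict',      // not attached to cross-site requests, which helps against CSRF
    maxAge: 15 * 60 * 1000,  // match the token's expiration
  });
  res.json({ ok: true });
});
```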
Authorization - we know the user, but he has access only to some resources. Please check https://github.com/OptimalBits/node_acl.
There is a gist combining node_acl and Passport: https://gist.github.com/danwit/e0a7c5ad57c9ce5659d2
In short, Passport authenticates the user, so we know who wants what. We set up roles and resources and define the relations between them, then assign roles to each user. The module will check the user's permissions for us.
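A rough sketch of that flow with node_acl's in-memory backend (a real app would use the Redis or MongoDB backend, and the user id would come from Passport):

```js
// roles -> resources -> permissions, then check a user's permission.
const Acl = require('acl');
const acl = new Acl(new Acl.memoryBackend());

acl.allow('admin', '/blogs', ['get', 'post', 'delete']);
acl.allow('guest', '/blogs', ['get']);

acl.addUserRoles('joed', 'admin'); // assign a role to the authenticated user

acl.isAllowed('joed', '/blogs', 'post', (err, allowed) => {
  console.log(allowed ? 'permitted' : 'forbidden');
});
```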
Security - please look up this subject in the documentation of the Sails framework (http://sailsjs.org/documentation/concepts/security); it describes the attacks and how the framework prevents them. I'll write about Express:
DDoS (the "too many requests from the same client" part of your question): "At the API layer, there isn't much that can be done in the way of prevention." This is mostly a subject for server admins; in short, use a load balancer. If it is one IP (not hundreds), then blacklist it or delay the response (for a start, look at https://www.npmjs.com/package/delayed-request, but I think the solution must be more sophisticated).
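A very naive per-IP sketch of the "blacklist or delay one noisy client" idea; as said above, real DDoS protection belongs in layers in front of the app server, not in Express:

```js
// In-memory per-IP counter: delay a noisy client, refuse an abusive one.
const express = require('express');
const app = express();

const hits = new Map();
app.use((req, res, next) => {
  const count = (hits.get(req.ip) || 0) + 1;
  hits.set(req.ip, count);
  if (count > 1000) return res.status(429).send('too many requests');
  if (count > 100) return void setTimeout(next, 2000); // slow the client down
  next();
});
setInterval(() => hits.clear(), 60 * 1000); // forget the counts every minute

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);
```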
CSRF: "type of attack which forces an end user to execute unwanted actions on a web application backend". Look at this module https://www.npmjs.com/package/csrf
XSS: "type of attack in which a malicious agent manages to inject client-side JavaScript into your website" don't trust any data from user. Always validate, filter, santize. Look at this https://www.npmjs.com/package/xss
The Sails documentation lists more attack types, but the above are the most popular.
Use express session + passport (http://passportjs.org/)
Basically, you should have a login for the website, and then only authenticated users can call the REST APIs.
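A bare-bones sketch of that setup with express-session and Passport's local strategy (the user check is faked):

```js
const express = require('express');
const session = require('express-session');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;

passport.use(new LocalStrategy((username, password, done) => {
  // replace with a real lookup against your user store
  if (username === 'demo' && password === 'secret') return done(null, { id: 1, username });
  return done(null, false);
}));
passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser((id, done) => done(null, { id, username: 'demo' }));

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));
app.use(passport.initialize());
app.use(passport.session());

app.post('/login', passport.authenticate('local'), (req, res) => res.json({ ok: true }));

function requireLogin(req, res, next) {
  if (req.isAuthenticated()) return next();
  res.status(401).json({ error: 'login required' });
}
app.get('/api/data', requireLogin, (req, res) => res.json({ secret: 42 }));
app.listen(3000);
```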
Now, if you don't want a login, then you can't really protect the APIs, as the website is open by design.
You didn't specify much info so it is hard to say more than that.
Also, DoS attacks can't be prevented by your code, and it's usually not the responsibility of the app server (in your case, Node.js/Express) to provide such protection. If someone wants to take your website down with DoS attacks, you need other layers to defend against it (see https://en.wikipedia.org/wiki/Denial-of-service_attack#Defense_techniques), which mostly means it is up to a router, switch, and so on to implement.
Let's say I have 2 servers (server and authenticator), and I have a client. My end goal here is to be able to identify the client on server. My solution was to come up with a token/secret system like OAuth: client has a token and secret. It passes it to server. Server passes it to authenticator. If valid, server allows the request.
Obviously, this is suboptimal, if only for the number of requests being made. The reason the authenticator and server are separated is that this is for a decentralised service: any number of servers may be used, and it's impractical to ask client libraries to register on each server.
So, the question remains, what's the best/correct way to do this? The goal is to create a system that is decentralised, but can still have clients identify themselves in a relatively secure fashion to the server.
Disclaimer: I'm not a security expert, so I could be off base here, and in an actual implementation there would be a number of security issues to iron out.
In the broadest sense, could you have the client supply credentials to the authenticator, and then, upon verification, the authenticator supplies both the client and the server with matching security tokens so the client and server can communicate directly?
Just curious: is there a reason you don't want to implement OAuth and run your own OAuth server?
Additional reference: http://groups.google.com/group/37signals-api/msg/aeb0c8bf67a224cc
Turns out the solution was to define my problem a bit better. As I'm only trying to create a way to block applications, I only need to store their name and key when they request the server. Then, as long as they're not blocked and the key matches the one in the datastore, they'll be identified. So I'm not trying to authenticate so much as identify. Thanks for the input!
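In case it's useful to anyone, a sketch of that identify-and-block check as Express middleware (the in-memory Map stands in for the real datastore, and the header names are made up):

```js
// Identify the calling application by name + key and reject blocked or unknown ones.
const apps = new Map([['my-client', { key: 'abc123', blocked: false }]]);

function identifyApp(req, res, next) {
  const record = apps.get(req.get('x-app-name'));
  if (!record || record.blocked || record.key !== req.get('x-app-key')) {
    return res.status(403).json({ error: 'unknown or blocked application' });
  }
  next();
}
```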
For my current side project, a modular web management system (which could contain modules for database management, CMS, project management, resource management, time tracking, etc.), I want to expose the entire system as a RESTful API, as I think that will make the system more usable. The system itself is going to be coded in ASP.NET MVC 3, but if I make all the data/actions available through a RESTful API, that should make the system very easy to use from PHP, Ruby, Python, etc. (they could even build their own interface to manage certain data if they wanted).
However, the one thing that seems hard to do easily with a RESTful API (from the point of view of the user of the RESTful API) is securing AJAX functionality. If I wanted something that was complex to set up and use, I would just create SOAP services, but the whole drive for using a RESTful API is that it is very easy. The most common way of securing a RESTful API is with a key that is associated with a user. This works fine when all the calls are done on the server side, but once you start doing AJAX functionality, that changes. I would want the RESTful API to be callable directly from JavaScript, but anyone with Firebug would easily be able to see the key the user is using, allowing that person access to the system. Is there a better way to secure a RESTful API that does not make the user of the API do complex things just to set it up?
For one thing, you can't prevent the user of your API from exposing his key.
But if you are writing a client for your API, I would suggest using your server side to do any requests to the API, while your HTML pages provide the data from the user. If you absolutely must use JavaScript to make calls to the API and you still have a server side that populates the page in question, then you can obscure the actual key via a one-way digest algorithm in a timestamp-dependent way while generating the page, and have your API check that digest in a time-dependent way too.
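One way to read that suggestion, sketched with an HMAC over the current time window (all names are invented, and note that a captured digest can still be replayed until the window rolls over):

```js
// The page is rendered with an HMAC of the current time window instead of the raw key;
// the API recomputes the digest for the current and previous window and compares.
const crypto = require('crypto');
const API_KEY = process.env.API_KEY || 'dev-key'; // never sent to the browser

const WINDOW_MS = 5 * 60 * 1000; // 5-minute windows
const digestFor = (window) =>
  crypto.createHmac('sha256', API_KEY).update(String(window)).digest('hex');

// server side, while generating the page:
const clientDigest = digestFor(Math.floor(Date.now() / WINDOW_MS)); // embed this in the page

// API side, when a request arrives carrying `digest`:
function digestIsValid(digest) {
  const now = Math.floor(Date.now() / WINDOW_MS);
  return digest === digestFor(now) || digest === digestFor(now - 1);
}
```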
Also, I'd suggest that you take a look into OAuth Nonces and timestamps a bit more deeply. Twitter and other API providers obviously have this problem too, so they must be doing something with the Nonce values.
It is possible to add some signature to the request from JavaScript, but I'm not sure how 'RESTful' the URLs would be with this extra info. And there you have the same problem: anyone who can see your signature-making algorithm can make his own signature, which your server will accept as well.
SSL stands for Secure Sockets Layer. It is crucial for security in REST API design. It will secure your API and make it less vulnerable to malicious attacks.
Other security measures you should take into consideration include: making the communication between server and client private and ensuring that anyone consuming the API doesn’t get more than what they request.
SSL certificates are not hard to install on a server and are often available for free, at least for the first year. They are not expensive to buy in cases where they are not available for free.
The clear difference between the URL of a REST API that runs over SSL and one that does not is the "s" in "https":
https://mysite.com/posts runs on SSL.
http://mysite.com/posts does not run on SSL.