I'm currently developing an Angular 6 page where we make some HTTP POST calls, sending the authentication as a header. The header is static (a fixed password).
Are there any security differences between sending it from the Angular frontend with HttpClient, and sending the request to an endpoint on our Node.js backend (hosted in the cloud), which attaches the header and forwards it? Our thinking is that the header will be "hidden" from the client, since we are sending it through our backend instead.
Another note: the entire site will be behind authentication, so logged-in clients arguably have the right to see the authentication header, but we would prefer that they didn't.
Any thoughts and suggestions?
It depends on what you are trying to do with your POST request. In previous projects I have worked on, we used your second approach: a backend that validates requests before sending them on. I have also worked with secure systems, and as a rule of thumb, don't trust the client.
Here is some information from Angular's website on security with HttpClient: https://angular.io/guide/http#security-xsrf-protection
I hope it helps.
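To illustrate the backend approach, here is a minimal sketch of an Express proxy endpoint. The target URL, header name, and port are placeholders, not your actual setup; it assumes Node 18+ for the built-in fetch. The static credential stays in the server's environment and never reaches the browser:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/proxy', async (req, res) => {
  // The secret header is attached here, server-side, so the browser never sees it.
  // 'https://third-party.example.com/endpoint' and 'X-Api-Key' are placeholders.
  const upstream = await fetch('https://third-party.example.com/endpoint', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.API_KEY, // static credential kept in server config
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000); // port is arbitrary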
I'm developing a web app with React and a GraphQL API built on Node.js / Express. I would like to make the API more secure, so that it's harder for requests that don't come from the web app in the browser to get data. I know how to do this for registered users, but how can non-registered users still access the basic data the app needs?
Is it possible to put some kind of key in the web app, so the API calls can't be replicated by sniffing the network tab in the browser dev tools and replaying them in Postman? Does SSL/TLS also hide requests from that browser tool? Or should I use a "standard" user account for non-registered visitors?
It's a server-side web app built with Next.js.
I know there's no 100% secure API, but maybe it's possible to make unauthorized access harder.
Edit:
I'm not sure this is a CSRF problem, because it's not about accessing user data or changing data through malicious websites. It's about other people using the website's data (all GET requests to the API) to easily build their own web app on top of my API. I want to make sure no one can easily query my API through simple Postman requests.
The quick answer is: no, you can't.
If you are trying to prevent what could be described as legitimate users from accessing your API, you can't really do it. They can always fake the same logic and hit your webpage first before abusing the API. If this is what you're trying to prevent, your best bet is to add rate limiting to the API, to stop a single user from making too many requests (I'm the author of ralphi, and express-rate-limit is very popular).
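For example, a minimal express-rate-limit setup might look like this (the window and limit values are illustrative):

const rateLimit = require('express-rate-limit');

app.use('/api/', rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100,                 // each IP gets at most 100 requests per window
}));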
But if you are actually trying to prevent another site from leeching off you and serving your content to their users, that is actually easier to solve.
Most browsers send a Referer header with the request. You can check this header and verify that requests are actually coming from users on your own site (this technique is called leech protection).
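A minimal sketch of such a check as Express middleware, assuming your site is served from www.mysite.com (a placeholder):

app.use('/api/', (req, res, next) => {
  // The HTTP header is historically spelled "Referer"; Express reads it case-insensitively.
  const referer = req.get('referer') || '';
  if (!referer.startsWith('https://www.mysite.com/')) { // placeholder origin
    return res.status(403).send('forbidden'); // not coming from our own pages
  }
  next();
});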
A leeching site can try to proxy requests to your API, but since those requests will all come from the same IP, they will hit your rate limiting, and the site will only be able to serve a few users before being blocked.
One thing the leeching site can do is try to cache your API so it won't have to make so many requests. If that is a possible case, you are back to square one, and you might need to manually block its IP once you notice such abuse. I would also check whether it's legal, because the operator might be breaking the law.
Another option, similar to the Referer check, is to use SameSite cookies. They will only be sent if the request comes directly from your site. They are probably more reliable than the Referer header, but not all browsers actually respect them.
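For example, setting such a cookie from Express (the cookie name and value are illustrative):

res.cookie('session', sessionId, {
  sameSite: 'strict', // browsers that honour it won't send this cookie cross-site
  httpOnly: true,     // not readable from client-side JS
  secure: true,       // only sent over HTTPS
});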
I've been looking around the web for a little while but couldn't grasp the concept of making an API private between the front-end and the back-end only. What I essentially want is an API that's accessible only through the front-end, not through curl, Postman, or anything else.
I have the following setup:
The app is hosted on Heroku; the backend is in Node.js.
I use an HTTPS connection with a certificate I generated via the Let's Encrypt tool.
For now, I have a public API that returns the string 'Hello world'.
Currently, you can access it either via the front-end or by going to www.example.com/api/test, but I would like to prevent users from visiting the link manually or using curl or Postman to reach it, and instead make it accessible only through the front-end.
The front-end is written in Angular 2 (if it matters at all).
Note that I am not planning to have any user sign-in on the website; I simply want to restrict access to the API from the outside world so that only my front-end can reach it.
UPDATE: USE CASE
The future use case is simple. I have a basic sign-up form which asks for an email address and a text description. Angular 2 sends that information to the backend in a POST request, and I use nodemailer to forward it to a Gmail account. I read the posted data through req.on('data') and req.on('end') and process it. My fear is: how do I make sure I won't get spammed through that API and receive 10k emails? Hence my wish to somehow make the API accessible only through the front-end.
While you cannot prevent a REST service from being called by the whole internet, you can still prevent spamming.
Whether your service requires authentication or not, the mechanism is the same: use a CAPTCHA (the most important part) and rate-limit your API.
1. CAPTCHA:
The best way to ensure that the client making a request to your server is driven by a human being is a CAPTCHA.
CAPTCHA:
A CAPTCHA (a backronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used in computing to determine whether or not the user is human.
You can find plenty of services or libraries that will create CAPTCHAs for you, like Google's reCAPTCHA.
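As an illustration, server-side verification of a reCAPTCHA token boils down to one POST to Google's siteverify endpoint. This sketch assumes Node 18+ for the built-in fetch:

async function verifyCaptcha(token, ip) {
  // token is the g-recaptcha-response value posted by the browser form
  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET, // secret key from the reCAPTCHA admin console
      response: token,
      remoteip: ip, // optional
    }),
  });
  const body = await res.json();
  return body.success === true;
}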
2. Rate limiting:
For a public service, you can rate-limit access by IP: if the same IP makes 10, 100, or even 1000 requests (depending on the purpose of the service), that's suspicious, so you can refuse to serve it by sending an error status and logging that behavior to the application logs. The sysadmin can then ban the IP at the firewall level with a tool like fail2ban.
For an authenticated service, it's the same, except you might also want to rate-limit the API based on both the IP and the identity, and you might not want to ban an authenticated user outright...
Note that for a public API you don't really have to handle the rate limit yourself: preventing the same IP from making 1000 POST requests to the same URL in 10 seconds is something that can and should be done at the infrastructure level by a sysadmin.
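For instance, a sysadmin could enforce this directly in nginx with the limit_req module (the zone name, rate, and burst values here are examples, not recommendations):

# In the http block: track clients by IP; allow an average of 10 requests/second.
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

# In the server block: apply the limit to the API location.
location /api/ {
    limit_req zone=api burst=20 nodelay;
    limit_req_status 429;
}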
I'm trying to build a server with some security. Let's say I have a file-list component and an image viewer that I don't have access to, both of which make constant requests to my server. I'd like to filter those requests based on who is making them. Is there any way to check server-side whether the client making the request is authorized, while keeping the server RESTful and without modifying the requests themselves?
tl;dr
I am considering a web service design which consists of several services/subdomains, each of which may be implemented on a different platform and hosted on a different server.
The main issue is authentication. If a request for jane's resources comes in, can a split system authenticate that request as hers?
All services access the same DB layer, of course. So I have in mind a single point of truth each service can use to authenticate each request.
For example, jane accesses www.site.com, which renders stuff in her browser. The browser may then send client-side requests to different subdomains of site.com, like:
from internalapi.site.com fetch /user/users_secret_messages.json
from imagestore.site.com fetch /images/list_of_images
The authentication issue is: another user (or an outsider) could craft a request that fools a subdomain into giving them information they should not be able to access.
So I have in mind a single point of truth: a central resource accessible by each service that can be used to authenticate each request.
In this pseudocode, AuthService.verify_authentication() consults the central resource:
# server-side code:
def get_user_profile():
    auth_token = request.cookie['auth_token']
    user = AuthService.verify_authentication(auth_token)
    if user is None:
        response.write("you are unauthorized / not logged in")
    else:
        response.write(json.dumps(fetch_profile(user)))
Question: What existing protocols, software or even good design practices exist to enable flawless authentication across multiple subdomains?
I've seen how OAuth takes the headache out of managing third-party access, and I wonder whether something similar exists for this kind of authentication. I also drew ideas from Kerberos and TACACS.
This idea was the result of teamthink, as a way to simplify architecture (rather than handle heavy loads).
I built a system that did this a little while ago. We were building shop.megacorp.com, and had to share a login with www.megacorp.com, profile.megacorp.com, customerservice.megacorp.com, and so on.
The way it worked was in two parts.
First, all sign-on was handled through a set of pages on accounts.megacorp.com. The signup link from our pages went there, with a return URL as a parameter (e.g. https://accounts.megacorp.com/login?return=http://shop.megacorp.com/cart). The login process there would redirect back to the return URL after completion. The login page also set an authentication cookie, scoped to the whole of the megacorp.com domain.
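A minimal sketch of that login completion step as an Express handler; the cookie name, helper function, and parameter names are illustrative, not the actual megacorp code:

app.post('/login', (req, res) => {
  const token = createSessionToken(req.body.username); // hypothetical helper
  res.cookie('auth_token', token, {
    domain: '.megacorp.com', // visible to shop., profile., customerservice., ...
    httpOnly: true,
    secure: true,
  });
  // Assuming the form action preserved the ?return= parameter from the login link.
  res.redirect(req.query.return);
});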
Second, authentication was handled on the various sites by grabbing the cookie from the request, then forwarding it via an internal web service to accounts.megacorp.com. We could have done this as a straightforward SOAP or REST query with the cookie as a parameter, but what we actually did was send an HTTP request with the cookie added to the headers (sort of as if the user had sent the request directly). That URL would then return a 200 if the cookie was valid, serving up some information about the user, or a 401 or similar if it wasn't. We could then deal with the user accordingly.
Needless to say, we didn't want to make a request to accounts.megacorp.com for every user request, so after a successful authentication, we would mark the user's session as authenticated. We'd store the cookie value and a timestamp, and if subsequent requests had the same cookie value, and were within some timeout of the timestamp, we'd treat them as authenticated without passing them on.
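Roughly, that verify-and-cache logic could look like the sketch below. It assumes Express with cookie-parser and express-session, plus Node 18+ fetch; the accounts URL and timeout are made up:

const AUTH_TIMEOUT_MS = 5 * 60 * 1000; // illustrative timeout

async function isAuthenticated(req) {
  const cookie = req.cookies.auth_token;
  const cached = req.session.auth;
  if (cached && cached.cookie === cookie &&
      Date.now() - cached.checkedAt < AUTH_TIMEOUT_MS) {
    return true; // same cookie, recently verified: skip the round trip
  }
  // Forward the user's cookie to the accounts service, as if they sent it themselves.
  const upstream = await fetch('https://accounts.megacorp.com/whoami', { // hypothetical URL
    headers: { cookie: 'auth_token=' + cookie },
  });
  if (upstream.status !== 200) return false; // 401 etc. means not logged in
  req.session.auth = { cookie, checkedAt: Date.now() };
  return true;
}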
Note that because we passed the cookie as a cookie in the authentication request, the code to validate it on accounts.megacorp.com was exactly the same as the code handling a direct request from a user, so it was trivial to implement correctly. So, in response to your desire for "existing protocols [or] software", I'd say that the protocol is HTTP, and the software is whatever you can use to validate cookies (a standard part of any web container's user handling). The authentication service is as simple as a web page which prints the user's name and details, and which is marked as requiring a logged-in user.
As for "good design practices", well, it worked, and it decoupled the login and authentication processes from our site pretty effectively. It did introduce a runtime dependency on a service on accounts.megacorp.com, which turned out to be somewhat unreliable. That's hard to avoid.
And actually, now I think back, the request to accounts.megacorp.com was a SOAP request, and we got a SOAP response back with the user details, but the authentication was handled with a cookie, as I described. It would have been simpler and better to make it a REST request, where our system just did a GET on a standard URL and got some XML or JSON describing the user in return.
Having said all that, if you share a database between the applications, you could just have a table in which you record (username, cookie, timestamp) tuples and do lookups directly against it, rather than making a request to a service.
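A sketch of that lookup, assuming node-postgres (pg) and a hypothetical sessions table with username, cookie, and created_at columns:

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the environment

async function lookupSession(cookie) {
  const { rows } = await pool.query(
    "SELECT username FROM sessions " +
    "WHERE cookie = $1 AND created_at > now() - interval '30 minutes'",
    [cookie]
  );
  return rows.length ? rows[0].username : null; // null means not authenticated
}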
The only other approach I can think of is to use public-key cryptography. The application handling login could use a private key to sign a token and use that as the cookie. The other applications would have the corresponding public key and use it to verify the signature. The keys could be per-user, or there could be just one. That would not involve any communication between the applications, or a shared database, after the initial key distribution.
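A sketch of that signature scheme using Node's built-in crypto module; the payload shape and cookie format are illustrative:

const crypto = require('crypto');

// Key generation, done once; the public key is distributed to the other apps.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Login application: sign the payload and use it as the cookie value.
const payload = Buffer.from(JSON.stringify({ user: 'jane', ts: Date.now() }));
const signature = crypto.sign('sha256', payload, privateKey);
const cookieValue = payload.toString('base64') + '.' + signature.toString('base64');

// Any other application: verify the cookie with the public key alone.
const [data64, sig64] = cookieValue.split('.');
const valid = crypto.verify(
  'sha256',
  Buffer.from(data64, 'base64'),
  publicKey,
  Buffer.from(sig64, 'base64')
);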