Below is the file structure of my MERN project.
|- Project
   |- client
   |- server
The client folder contains the React app.
The client runs at localhost.client.com.
The server folder contains the code for the Node.js server.
The server runs at localhost.server.com.
Whenever I make a request from the client to the server, how can I mitigate CSRF attacks?
I want to make sure that a request made to the server comes from my client and not from any other source.
Your issue might be covered in React frontend and REST API, CSRF.
There is an excellent article about CSRF and countermeasures (written with Angular in mind, but it is the same problem). TL;DR:
use the same-origin policy, or set the Access-Control-Allow-Origin header when needed
save the XSRF token as a secure cookie (unfortunately this usually requires an extra request). Only code from your domain can access this value.
send that token as the X-XSRF-TOKEN header value with your requests to authorize them
To make sure only your application can use the server API, you can set the Access-Control-Allow-Origin value in the CORS / OPTIONS response headers.
During development it is usually set to
Access-Control-Allow-Origin: *
For production you specify your domain / server name:
Access-Control-Allow-Origin: localhost.client.com
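For example, here is a minimal sketch of restricting the allowed origin with the cors middleware in an Express server; the origin is taken from your question, while the port and route are only placeholders:

// server/app.js - sketch only, assuming Express and the cors package
const express = require('express');
const cors = require('cors');

const app = express();

// Only the client origin may call the API from a browser.
// credentials: true lets the browser include session / CSRF cookies.
app.use(cors({
  origin: 'http://localhost.client.com',
  credentials: true,
}));

app.get('/api/data', (req, res) => {
  res.json({ ok: true });
});

app.listen(4000); // port is an assumption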
To prevent spoofing of the origin, you can use (anti-)CSRF tokens. These are extra values attached to your request which authenticate it.
This value can/should be saved in a secure cookie. csurf or JSON Web Tokens might be relevant for you. In your case, obtaining the CSRF token might require an extra request to your API.
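Here is a minimal sketch of that token flow, assuming Express with csurf on the server and fetch on the client; the route names and the XSRF-TOKEN cookie name are assumptions, not something prescribed by your project:

// server: issue and validate a CSRF token with csurf (sketch, not production config)
const express = require('express');
const cookieParser = require('cookie-parser');
const csurf = require('csurf');

const app = express();
app.use(cookieParser());
app.use(express.json());
app.use(csurf({ cookie: true })); // keep the token secret in a cookie

// the extra request the client makes once to obtain a token
app.get('/api/csrf-token', (req, res) => {
  res.cookie('XSRF-TOKEN', req.csrfToken()); // readable by client-side JS
  res.json({ ok: true });
});

app.post('/api/things', (req, res) => {
  // csurf has already validated the X-XSRF-TOKEN header at this point
  res.json({ created: true });
});

app.listen(4000);

On the React side you would read that cookie and echo it back as a header, roughly like this:

// client: read the XSRF-TOKEN cookie and send it back as a header
const token = document.cookie
  .split('; ')
  .find((c) => c.startsWith('XSRF-TOKEN='))
  ?.split('=')[1];

fetch('http://localhost.server.com/api/things', {
  method: 'POST',
  credentials: 'include', // send the session / csurf cookies cross-origin
  headers: { 'Content-Type': 'application/json', 'X-XSRF-TOKEN': token },
  body: JSON.stringify({ name: 'example' }),
});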
Related
I'm using express-session to initialize a session and save the cookie. But the process of how the cookie is saved on the browser side is abstracted away and something of a black box to me; it just happens automatically. Can anyone point to a resource that explains how the client takes the cookie from the response and saves it? My front-facing stack is composed of React, Next.js and urql client.
When you use express-session to initialize a session on the server, the server sends the session ID to the client in a Set-Cookie response header. The browser stores that cookie in its own cookie store (not in local storage) and automatically includes it in the Cookie request header of any subsequent request to the same domain; the server then uses the cookie to identify the user's session.
The saving of the cookie and its inclusion in later request headers are part of the underlying mechanics of the HTTP protocol and are handled automatically by the browser. It is not something that you need to worry about or configure when using express-session.
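As a minimal sketch of the server side, assuming Express with express-session (the secret and cookie options are placeholders):

// server: express-session adds a Set-Cookie: connect.sid=... header to the first response
const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: true,
  cookie: { httpOnly: true, sameSite: 'lax' },
}));

app.get('/', (req, res) => {
  // the browser stores connect.sid and sends it back on every later request,
  // so req.session refers to the same session object across requests
  req.session.views = (req.session.views || 0) + 1;
  res.json({ views: req.session.views });
});

app.listen(3000); // port is an assumption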
If you want to learn more about how cookies work in general, you can check out the following resources:
The official documentation for cookies on the Mozilla Developer Network: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies
A tutorial on cookies from the W3Schools website: https://www.w3schools.com/js/js_cookies.asp
I'm currently developing an Angular 6 page where we are doing some HTTP POST calls and sending the authentication as a header. The header is static (a fixed password).
Is there any security difference between sending it from the Angular frontend with HttpClient and sending the request to an endpoint in our Node.js backend (on cloud premises), which then adds the header? Our thinking is that the header will be "hidden" from the client since we are sending it through our backend instead.
Another note: the entire site will be behind authentication, and logged-in clients obviously have the right to see the authentication, but we would prefer that they don't.
Any thoughts and suggestions?
It depends on what you are trying to do with your POST request. In previous projects I have worked on, we used your second approach: the backend validates requests before sending them on. Having worked with secure systems, my rule of thumb is not to trust the client.
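A minimal sketch of such a proxy endpoint, assuming Express; the upstream URL, route name and environment variable are placeholders:

// backend: the static secret header never reaches the browser
const express = require('express');
const fetch = require('node-fetch'); // or the built-in global fetch on Node 18+

const app = express();
app.use(express.json());

app.post('/api/proxy', async (req, res) => {
  const upstream = await fetch('https://third-party.example.com/endpoint', { // placeholder URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': process.env.THIRD_PARTY_SECRET, // kept server-side only
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);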
Here is some information from Angular's website on security with HttpClient https://angular.io/guide/http#security-xsrf-protection
I hope it helps.
Suppose I have a client/server application working over HTTP. The server provides a RESTy API and the client calls the server over HTTP using regular GET requests.
The server requires no authentication. Anyone on the Internet can send a GET request to my server. That's OK. I just wonder how I can distinguish the requests coming from my client from all other requests from the Internet.
Suppose my client sent a request X. A user recorded this request (including the agent, headers, cookies, etc.) and sent it again, with wget for example. I would like to distinguish between these two requests on the server side.
There is no exact solution other than authentication. On the other hand, you do not need to implement username & password authentication for this basic requirement. You could simply generate a random string for your "client" and send it to the API in a custom HTTP header, like:
GET /api/ HTTP/1.1
Host: www.backend.com
My-Custom-Token-Dude: a717sfa618e89a7a7d17dgasad
...
You could distinguish the requests by the presence and validity of this custom header value. That said, "security through obscurity" is not a real solution.
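A minimal sketch of that check as Express middleware; the header name and token value are just the illustrative ones from the request above:

// server: reject requests that don't carry the agreed-upon header value
const express = require('express');

const app = express();
const CLIENT_TOKEN = 'a717sfa618e89a7a7d17dgasad'; // shared with your client out of band

app.use('/api', (req, res, next) => {
  if (req.get('My-Custom-Token-Dude') !== CLIENT_TOKEN) {
    return res.status(403).json({ error: 'unknown client' });
  }
  next();
});

app.get('/api/', (req, res) => res.json({ ok: true }));

app.listen(3000);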
You cannot know for sure if it is your application or not. Anything in the request can be made up.
But you can make sure that your API is not being used from somebody else's browser application. For example, somebody may create a JavaScript application and point it at your REST API. The browser sends the Origin header, indicating the origin from which the request was generated. You can use this header to filter out calls from applications that are not yours.
However, that somebody may use his own web server as a proxy to your application, which then allows him to craft HTTP requests in full detail. In that case, at some point you would be able to pinpoint his IP address and block it.
But the best solution would be to add some degree of authorization. For example, the UI part can ask for authentication via login/password, or just a captcha to ensure the caller is a person, then generate a token and associate that token with the user's session. From that point on, calls to the API have to provide that token, otherwise you must reject them.
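A rough sketch of that flow, assuming Express with express-session; the route names and the X-Api-Token header are placeholders:

// server: issue a per-session API token after login and require it on API calls
const crypto = require('crypto');
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({ secret: 'placeholder-secret', resave: false, saveUninitialized: false }));

app.post('/login', (req, res) => {
  // ...verify credentials or a captcha here (omitted)...
  req.session.apiToken = crypto.randomBytes(24).toString('hex');
  res.json({ token: req.session.apiToken });
});

app.use('/api', (req, res, next) => {
  if (!req.session.apiToken || req.get('X-Api-Token') !== req.session.apiToken) {
    return res.status(401).json({ error: 'missing or invalid token' });
  }
  next();
});

app.get('/api/data', (req, res) => res.json({ ok: true }));

app.listen(3000);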
Most CSRF solutions seem to insist that the CSRF token is sent as part of the POST data.
In my situation the data being sent is JSON, and I don't control what is sent (and I don't want to start messing with the JSON). So I'm thinking of sending the CSRF token as a header. However, there are legacy parts of my application that would still need to be able to send the token in the body (e.g. submits from HTML forms).
So my CSRF protection would have to allow the request if a valid CSRF token appeared in the body OR a header. Is this a security risk, compared with insisting that the token is in the body?
CSRF is about making an unsuspecting user post data to a server where the attacker believes the user is logged in.
The idea behind the protection is that the server associates a token with your session and sends it to you both as a cookie and as a payload requirement. Then, when posting something, you send the token in the payload and in the cookie. The attacker cannot guess what token is in the cookie or the session, so if the server receives a post with two different tokens, it will be rejected.
I think it would be fine to put the payload token in a header, as long as it is not "Cookie" or any other header that is remembered and sent automatically by the browser.
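A minimal sketch of that comparison in Express, accepting the token from either a header or the body; the cookie, header and field names are assumptions:

// server: compare the token cookie against the token sent in a header OR in the body
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());
app.use(express.json());
app.use(express.urlencoded({ extended: false })); // for legacy HTML form posts

app.use((req, res, next) => {
  if (req.method === 'GET') return next();
  const fromCookie = req.cookies['CSRF-TOKEN'];
  const fromClient = req.get('X-CSRF-TOKEN') || (req.body && req.body._csrf);
  if (!fromCookie || fromCookie !== fromClient) {
    return res.status(403).json({ error: 'CSRF token mismatch' });
  }
  next();
});

app.post('/api/things', (req, res) => res.json({ created: true }));

app.listen(3000);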
There won't be any security risk if you send the CSRF token in a header. Just make sure that the value of this header changes every time the client requests a page, i.e. it should be a random number. Also, your web application on the client side should send this header back to the server, so that the server can match the value it sent to the client with the value it receives back from the client.
Sending the CSRF token in a request header is more secure.
The same-origin policy is not enforced for form-tag submissions, which means that if somebody manages to get the CSRF token, he can send the POST request using a form tag from a different domain (origin).
But if the CSRF token is sent in a request header, a form tag cannot set a request header; the attacker has to use JavaScript (fetch() or XMLHttpRequest()), and in that case CORS will stop him because he is sending from a different domain (origin).
This defense relies on the same-origin policy (SOP) restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross-origin requests with custom headers.
The text below is quoted from https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.
This technique obviously works for AJAX calls, but you still need to protect form tags with approaches described in this document such as tokens. Also, CORS configuration should also be robust to make this solution work effectively (as custom headers for requests coming from other domains trigger a pre-flight CORS check).
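For illustration, here is a sketch of the legitimate client-side call; the URL and the X-Requested-With header are assumptions. A plain form tag cannot set this header, and a cross-origin fetch that sets it triggers a CORS preflight, which your CORS configuration can refuse:

// client: only JavaScript running on your own origin can add this custom header
fetch('https://api.example.com/api/things', { // placeholder URL
  method: 'POST',
  credentials: 'include',
  headers: {
    'Content-Type': 'application/json',
    'X-Requested-With': 'XMLHttpRequest', // the custom header the server checks for
  },
  body: JSON.stringify({ name: 'example' }),
});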
I'm writing a unit test for a middleware that relies on persistent sessions in connect. (namely connect-mongo).
I'd like to create a fake session, but can't seem to figure out how.
I have a connect.sid cookie in my browser that I assume correlates to the _id in my sessions collection in some encrypted manner.
Here's what I tried:
I added the cookieParser middleware and a session store to a server, then used the following request to send the cookie up to the server (I copied the key from Chrome's dev tools panel):
var jar = request.jar(),
    cookie = request.cookie('connect.sid=<REALLYLONGKEY>');
jar.add(cookie);
request({ url: 'http://localhost:8585/', jar: jar }, this.callback);
That correctly set the cookie on the server side, and I have verified that sessions are working.
However, the magic conversion from cookie to session didn't happen as I had hoped. What's the correct way to do this?
Setting the cookie on the server would only work if a session with that ID exists. Who created the session in the first place?
I can tell you what I did on my server. I wanted to create tests that simulate the client side and send requests to the server. I needed a way to authenticate the clients. My server allowed authentication based on Google OAuth. However, I did not want to go through the trouble of teaching the clients to sign into a Google account.
My solution was to implement an alternative method for signing in to my server - using nothing but a username. This feature is only enabled during testing and disabled for production. My test clients can now sign in without a problem. They receive the cookie 'connect.sid' as a result of the sign-in and send it back to the server in subsequent requests.
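A minimal sketch of such a test-only sign-in route, assuming Express with express-session; the route name and the environment check are placeholders:

// server: username-only sign-in, enabled only while testing
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({ secret: 'placeholder-secret', resave: false, saveUninitialized: false }));

if (process.env.NODE_ENV === 'test') {
  app.post('/test-login', (req, res) => {
    // no password check here: this route must never be enabled in production
    req.session.user = { name: req.body.username };
    res.json({ ok: true }); // this response carries the connect.sid Set-Cookie header
  });
}

app.get('/me', (req, res) => {
  if (!req.session.user) return res.status(401).end();
  res.json(req.session.user);
});

app.listen(8585); // port taken from the question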
I too used request.jar() to create a cookie jar for my requests. I should note, however, that this is only necessary if you are simulating more than one client at the same time and need a separate cookie jar for each client.