Are there any Firefox add-ons/extensions that would create the correct value for the Authorization HTTP header, given a username/password pair?
I have some code that calculates the value for the Authorization header, but the server is rejecting it. I'm looking to compare the value I've created against what a browser would issue, hence the need to watch the request data as it goes out to the server.
I don't have a login field to work with -- I just want to send a plain HTTP request from the URL bar of the browser.
I don't know of a plug-in specifically for Authorization headers, but I can recommend the "Tamper Data" plug-in. It lets you see and modify HTTP headers before they get sent to the server.
For Basic authentication, you need to include a header of the form
Authorization: Basic xxxxx
where xxxxx is replaced by "username:password" base64-encoded. If this is what you're doing but you're still having problems, I'd expect the problem to be with the base64 encoding. Note that base64 output is padded with one or two = signs (not zeros) so that its length is a multiple of four; a value with the wrong length or incorrect padding will be rejected.
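If it helps to sanity-check your own encoder, here is a rough sketch in Node.js of how the value can be produced (the credentials here are placeholders):

    // Build a Basic Authorization header value (Node.js).
    // "username" and "password" are placeholders for your real credentials.
    const username = 'username';
    const password = 'password';
    const encoded = Buffer.from(`${username}:${password}`, 'utf8').toString('base64');
    console.log(`Authorization: Basic ${encoded}`);
    // -> Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=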
I'm a bit naive about how to send cookie data between servers. I am aware that a server sets a cookie with the Set-Cookie header in the HTTP response.
Right now, I am sending a header between the servers for authorization purposes, so that one server is authorized with the other. But I am wondering if there is some advantage to using cookies, or if cookies act differently than headers in this case. From what I have read, cookies and headers are one and the same for most purposes?
Using two Node.js servers, one being the web server, the other being the API server, is there any reason why sending a cookie vs a regular non-cookie header is better?
A "cookie" represents shared state between the client and the server. As was mentioned, the way to set cookie values is to use the Set-Cookie response header, and the way to communicate values that have already been set is to use the Cookie request header.
Cookies are typically associated with web browsers, as a tool to track and identify existing users. I've never seen cookies used for server-to-server communication.
The Authorization header is good for passing encoded or encrypted strings.
So for example you might see:
Authorization: "Basic dXNlcm5hbWU6cGFzc3dvcmQ="
The value in this case is the base64-encoded string of "username:password".
I wouldn't worry too much about what header you use. You can make up your own, e.g. x-my-awesome-auth-header: it's conventional to prefix a custom header with an "x".
An important thing to consider is what the header value contains. If you are communicating over plain HTTP, make sure you encrypt.
Also consider using open standards for passing signed or encrypted data, such as JWT.
Edit: To answer your question, is there any reason why sending a cookie is better? When it comes to server-to-server communication, it's actually much worse to use cookies, because those servers have to maintain state with other servers, e.g. when A talks to B, A has to remember what B said when they talk again. You typically want server-to-server communication to be stateless, meaning you can throw away data pertaining to authorization and permission after each transaction. Each transaction has to go through the full authorization and permission resolution process. It's much easier to code, and there is no penalty in terms of security as long as you are protected via encryption.
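To make the stateless point concrete, here is a rough sketch of what the receiving (API) server could do in Node.js; the x-api-token header name and the HMAC token scheme are made up for illustration, not a standard:

    // Sketch: the API server authorizes every request on its own, keeping no
    // per-client state between calls. Names and scheme are illustrative only.
    const crypto = require('crypto');
    const http = require('http');

    const SHARED_SECRET = 'change-me'; // known to both servers

    // token format: "<payload>.<hex hmac of payload>"
    function verify(token) {
      const idx = token.lastIndexOf('.');
      if (idx < 0) return false;
      const payload = token.slice(0, idx);
      const mac = token.slice(idx + 1);
      const expected = crypto.createHmac('sha256', SHARED_SECRET)
                             .update(payload).digest('hex');
      return mac.length === expected.length &&
             crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
    }

    http.createServer((req, res) => {
      const token = req.headers['x-api-token'] || '';
      if (!verify(token)) {
        res.writeHead(401);
        return res.end('unauthorized');
      }
      res.end('ok');
    }).listen(3001);

The web server would sign each outgoing request the same way; because the check depends only on the request itself, neither server has to remember anything between calls.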
Yes, "cookies" is just jargon for the Cookie: HTTP header and corresponding Set-Cookie: header. So they are ultimately the same basic thing. Many APIs use the slightly more semantic Authorization: header, so that would be a good place to start.
What is the difference between sending an X-CSRF-Token as an HTTP header and sending the token in a hidden field?
When should I use the hidden field, when should I use the header, and why?
I think the X-CSRF-Token header is for when I'm using JavaScript/AJAX, but I'm not sure.
CSRF protection comes in a number of methods.
The traditional way (the "Synchronizer token" pattern) usually involves setting a unique token value for each request and then verifying that unique value when the request is subsequently sent in. It is usually done by setting a hidden form field. The token value is usually short-lived and associated with that session, so if a hacker tries to reuse a value they saw previously on the page, or tries to guess the value, they will likely fail. So only requests from your application will work, and forged requests from outside your application/domain (aka cross-site request forgery) will fail.
The downside of that is it requires your application to set this hidden token on all HTML forms. These pages now have to be dynamically generated by an application, when perhaps previously they were static HTML. It can also break the back button (as you need to refresh the form to regenerate another unique CSRF value). You also now need to keep track of valid tokens on the server side and check any requests use a valid token. This can take quite a bit of extra effort to implement and maintain going forward.
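For illustration only, a bare-bones sketch of the synchronizer token pattern using Express (the express and express-session packages, and names like csrf_token, are assumptions here, not something from the question):

    // Sketch of the synchronizer token pattern (assumes express + express-session).
    const express = require('express');
    const session = require('express-session');
    const crypto = require('crypto');

    const app = express();
    app.use(express.urlencoded({ extended: false }));
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

    app.get('/form', (req, res) => {
      // Generate a per-session token and embed it in a hidden field.
      req.session.csrfToken = crypto.randomBytes(32).toString('hex');
      res.send(`
        <form method="POST" action="/submit">
          <input type="hidden" name="csrf_token" value="${req.session.csrfToken}">
          <button type="submit">Send</button>
        </form>`);
    });

    app.post('/submit', (req, res) => {
      // Reject the request unless the submitted token matches the session's token.
      if (!req.body.csrf_token || req.body.csrf_token !== req.session.csrfToken) {
        return res.status(403).send('Invalid CSRF token');
      }
      res.send('OK');
    });

    app.listen(3000);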
An alternative approach (called the "Cookie-to-header token" pattern) is to set a cookie once per session and then have JavaScript read that cookie and set a custom HTTP header (often called X-CSRF-TOKEN or X-XSRF-TOKEN or just XSRF-TOKEN) with that value. Any requests will send both the header (set by JavaScript) and the cookie (set by the browser as a standard HTTP header), and then the server can check that the value in the X-CSRF-TOKEN header matches the value in the cookie header. The idea is that only JavaScript running on the same domain has access to the cookie, so JavaScript from another domain couldn't set this header to the right value (assuming the page is not vulnerable to XSS that would give access to this cookie). Even fake links (e.g. in a phishing email) would not work either, as even though they would appear to come from the right domain, only the cookie would be sent but not the X-CSRF-TOKEN header.
This can be MUCH easier to implement than the Synchronizer token pattern as you don't need to set the token for each call to each form, and the check is relatively simple too (just check the cookie matches the header) rather than tracking CSRF token validity. All you need is to set a cookie to a random value for each session. Some front-end frameworks will even automatically generate the header for you if they see the cookie (AngularJS does this, for example).
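A rough sketch of the client side of that pattern; the cookie and header names follow the conventions mentioned above, and the readCookie helper is made up:

    // Read the CSRF cookie set by the server and echo it back in a custom header.
    function readCookie(name) {
      const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
      return match ? decodeURIComponent(match[1]) : null;
    }

    fetch('/api/orders', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-XSRF-TOKEN': readCookie('XSRF-TOKEN') // server compares this to the cookie value
      },
      body: JSON.stringify({ item: 42 })
    });

On the server, the check is then just a comparison of the X-XSRF-TOKEN header with the XSRF-TOKEN cookie on the incoming request.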
The downside is that it requires JavaScript to work (but that may not be an issue if your app basically doesn't work without JavaScript anyway), and it will only work for requests the JavaScript makes (e.g. XHR requests) - regular HTML form requests would not set the header. A variation on this (the "Double Submit Cookie" pattern) puts the X-CSRF-TOKEN value in a hidden form field rather than in an HTTP header to get around this, but still keeps the server-side logic simpler than the traditional Synchronizer token pattern. It should be noted, however, that OWASP describes some weaknesses with the Double Submit method when the attacker is able to set the cookie (which is often easier than reading the cookie), and so recommends additional validation of the CSRF token in this case.
Additionally the Synchronizer token pattern can allow extra controls to enforce flow (e.g. the hidden field CSRF token will only be set when the application thinks you have sent a valid request in to get that form).
Oh, and some security scans will pick up the fact that the cookie is not set with the HttpOnly flag and so can be read by JavaScript - but that's deliberate, as it needs to be readable by JavaScript! False alert. You'd think that as long as you are using a common name like X-CSRF-TOKEN they would know not to flag this, but I've seen it flagged often.
All of them are for cross-site request forgery protection, and you need to use just one of them when sending a request to the backend. The different names come from different frameworks.
It's all about sending a CSRF value to the backend. The backend will then compare it with the CSRF value stored on the server (e.g. in the session) for that specific user.
csrf:
Is used in HTML forms (not AJAX).
Produced in the backend while rendering the HTML form.
We cannot set a request header from an HTML form directly, so we have to send the token via a hidden form input.
You can name this hidden input whatever you want.
E.g.: <input type="hidden" name="my_csrf_input" value="a_hashed_string (the csrf value)">
X-CSRF-TOKEN:
It is sent as an HTTP request header for AJAX requests.
To use it, we can put the CSRF value in a <meta> tag while rendering the HTML; then, in the front end, we can read the value from that <meta> tag and send it to the backend (see the sketch below).
Laravel specific:
When using Laravel as the backend, Laravel checks this header automatically and compares it to the valid CSRF value stored in the session (Laravel has a middleware for this).
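A small sketch of that meta-tag approach; the meta name and endpoint are conventional/illustrative (in Laravel Blade the content would typically be rendered with csrf_token()):

    <!-- Rendered by the backend into the page's <head>: -->
    <meta name="csrf-token" content="(the csrf value)">

    // Front end: read the meta tag and send the value as a header on AJAX calls.
    const token = document.querySelector('meta[name="csrf-token"]').getAttribute('content');

    fetch('/profile', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-CSRF-TOKEN': token
      },
      body: JSON.stringify({ name: 'new name' })
    });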
X-XSRF-TOKEN:
It is sent as an HTTP request header for AJAX requests.
Popular libraries like Angular and Axios automatically read the value for this header from the XSRF-TOKEN cookie and add it to every request.
To use it, we should create a cookie named XSRF-TOKEN in the backend; a front end that uses Angular or Axios will then pick it up automatically (a small configuration sketch follows below).
Laravel specific:
Because this convention is popular, Laravel sets this cookie on every response.
So when you're using, for example, Axios with Laravel, you don't need to do anything: just log the user in and the 'auth' middleware will do the job.
It's a longer string than the X-CSRF-Token value because cookies are encrypted in Laravel.
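For reference, a sketch of making the axios side of this explicit; the option names and values below are axios's own defaults, shown only to clarify what happens automatically:

    // axios reads the XSRF-TOKEN cookie and sends it back as X-XSRF-TOKEN.
    // These values are axios's defaults, spelled out here for clarity.
    import axios from 'axios';

    const api = axios.create({
      xsrfCookieName: 'XSRF-TOKEN',   // cookie the backend sets
      xsrfHeaderName: 'X-XSRF-TOKEN'  // header axios adds to each request
    });

    api.post('/profile', { name: 'new name' });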
I want to secure my application using an access token while communicating with the server. I am using an nginx server, which logs all the headers present in the request. It is a security threat if we are logging the header: if somebody can access the log files, they can easily manipulate the data. Then why do people put the token in a header?
In this case, can we consider the payload the right choice?
What is the best place (or standard way) to put an access token: in a header or in the payload?
What are the pros and cons of both?
IMHO, it is mainly a matter of taste, and of the tools you use for testing ...
If you use elaborate tools that allow you to set custom headers, a header is nice because it does not clutter the payload. But if you want to be able to test with a simple browser, it is easier to add a request parameter than a header ...
You can even accept both: look first in the header, and then in the payload if the token was not found in the header.
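A rough sketch of that "accept both" idea as Express-style middleware; the header scheme, field names, and validateToken are all made up for illustration:

    // Look for the access token in the Authorization header first, then fall
    // back to the request payload. All names here are illustrative only.
    function extractToken(req) {
      const auth = req.headers['authorization'] || '';
      if (auth.startsWith('Bearer ')) {
        return auth.slice('Bearer '.length);
      }
      return (req.body && req.body.access_token) || req.query.access_token || null;
    }

    function requireToken(req, res, next) {
      const token = extractToken(req);
      if (!token || !validateToken(token)) { // validateToken: whatever check your app does (hypothetical)
        return res.status(401).json({ error: 'invalid or missing token' });
      }
      next();
    }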
Most CSRF solutions seem to insist that the CSRF token is sent as part of the POST data.
In my situation the data being sent is json, and I don't control what is sent (and I don't want to start messing with the json). So, I'm thinking of sending the CSRF token as a header. However, there are legacy parts of my application that would still need to be able to send the token in the body (e.g. submits from html forms).
So my CSRF protection would have to allow the request if a valid CSRF token appeared in the body OR a header. Is this a security risk, compared with insisting that the token is in the body?
CSRF is about making an unsuspecting user post data to a server where the attacker believes the user is logged in.
The idea behind the protection is that the server associates a token with your session and sends it to you both as a cookie and as a required payload value. Then, when posting something, you send the token in the payload and as a cookie. The attacker cannot guess what token is in the cookie or the session, so if the server receives a post with two different tokens, it will be rejected.
I think it would be fine to put the payload token in a header, as long as it is not "Cookie" or any other header that is "remembered" and sent automatically by the browser.
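As a sketch of the "header or body" check the question describes, in Express-style pseudocode (the header, field, and session property names are illustrative):

    // Accept the CSRF token from either a custom header or the request body,
    // then compare it to the token tied to the session. Names are illustrative.
    function checkCsrf(req, res, next) {
      const sent = req.headers['x-csrf-token'] ||
                   (req.body && req.body.csrf_token);
      if (!sent || sent !== req.session.csrfToken) {
        return res.status(403).send('CSRF check failed');
      }
      next();
    }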
There won't be any security risk if you send a CSRF token in a header. Just make sure that the value of this header changes every time the client requests a page, i.e. it should be a random value. Also, the client side of your web application should send this header back to the server, so that the server can match the value it sent to the client against the value it receives back in the client's request.
Sending the CSRF token in a request header is more secure.
The same-origin policy does not apply to form-tag submissions, which means that if somebody manages to get the CSRF token, they can send the POST request using a form tag from a different domain (origin).
But when the CSRF token is sent in a request header, a form tag cannot set that header; the attacker has to use JavaScript (fetch() or XMLHttpRequest), and in that case CORS will prevent the request because it is coming from a different domain (origin).
This defense relies on the same-origin policy (SOP) restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests with custom headers.
The following is quoted from https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.
This technique obviously works for AJAX calls, but you still need to protect form tags with approaches described in this document such as tokens. Also, CORS configuration should also be robust to make this solution work effectively (as custom headers for requests coming from other domains trigger a pre-flight CORS check).
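A minimal version of the presence check described in that quote might look like this (the header name is one common convention, not a requirement):

    // Reject AJAX endpoints unless a custom request header is present; a form
    // tag cannot add this header, and cross-origin JavaScript would trigger a
    // CORS preflight. Header name is illustrative.
    function requireCustomHeader(req, res, next) {
      if (!req.headers['x-requested-with']) {
        return res.status(403).send('Missing custom header');
      }
      next();
    }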
I'm a little confused about Basic authentication in regards to web browsers. I had thought that the web browser would only send an Authorization header after having received an HTTP 401 status in the previous response. However, it appears that Chrome sends the Authorization header with every request thereafter. It has the data that I entered once upon a time in response to a 401 from my website and sends it with every message (according to the developer tools that ship with Chrome and my webserver). Is that expected behavior? Is there some header I should use with my 401 to indicate that the Authorization data should not be cached? I'm currently using the WWW-Authenticate header.
This is the expected behavior of the browser as defined in RFC 2617 (Section 2):
A client SHOULD assume that all paths at or deeper than the depth of
the last symbolic element in the path field of the Request-URI also
are within the protection space specified by the Basic realm value of
the current challenge. A client MAY preemptively send the
corresponding Authorization header with requests for resources in
that space without receipt of another challenge from the server.
Similarly, when a client sends a request to a proxy, it may reuse a
userid and password in the Proxy-Authorization header field without
receiving another challenge from the proxy server. See section 4 for
security considerations associated with Basic authentication.
To my knowledge, Basic HTTP authentication has no ability to perform a logout / re-authentication. This, along with the lack of security of HTTP Basic authentication, is why most websites now use forms and cookies for auth solutions.
From RFC 2617:
If a prior request has been authorized, the
same credentials MAY be reused for all other requests within that
protection space for a period of time determined by the
authentication scheme, parameters, and/or user preference.
From my experience it is quite common to see browsers automatically sending the Basic credentials for subsequent requests. It prevents having to do an extra round trip for additional resources.