Understanding Basic authentication with a 401 - IIS

I'm a little confused about Basic authentication with regard to web browsers. I had thought that the web browser would only send an Authorization header after having received an HTTP 401 status in the previous response. However, it appears that Chrome sends the Authorization header with every request thereafter. It has the data that I entered once upon a time in response to a 401 from my website and sends it with every message (according to the developer tools that ship with Chrome, and to my web server). Is that expected behavior? Is there some header I should use with my 401 to indicate that the Authorization data should not be cached? I'm using the WWW-Authenticate header currently.

This is the expected behavior of the browser as defined in RFC 2617 (Section 2):
A client SHOULD assume that all paths at or deeper than the depth of
the last symbolic element in the path field of the Request-URI also
are within the protection space specified by the Basic realm value of
the current challenge. A client MAY preemptively send the
corresponding Authorization header with requests for resources in
that space without receipt of another challenge from the server.
Similarly, when a client sends a request to a proxy, it may reuse a
userid and password in the Proxy-Authorization header field without
receiving another challenge from the proxy server. See section 4 for
security considerations associated with Basic authentication.
To my knowledge, Basic HTTP authentication has no mechanism for logout / re-authentication. This, along with the weak security of HTTP Basic authentication, is why most websites now use forms and cookies for their auth solutions.

From RFC 2617:
If a prior request has been authorized, the
same credentials MAY be reused for all other requests within that
protection space for a period of time determined by the
authentication scheme, parameters, and/or user preference.
In my experience it is quite common to see browsers automatically send the Basic credentials for subsequent requests. It avoids an extra round trip for additional resources.
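For illustration, the cached header is nothing more than the Base64 of user:pass, replayed on each request. A minimal Python sketch (the credentials are placeholders):

# What a preemptively sent Basic Authorization header looks like.
import base64

credentials = base64.b64encode(b"alice:secret").decode()  # placeholder credentials
print(f"Authorization: Basic {credentials}")
# -> Authorization: Basic YWxpY2U6c2VjcmV0
# The browser caches this value and attaches it to every request in the
# protection space without waiting for another 401 challenge.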


CSRF tokens for browser and non-browser clients

We can implement CSRF token protection fine - for example, via a hidden field in the page that is sent along with JavaScript web service / REST requests and checked on the server against a token in a cookie.
This can be done fairly painlessly across our internal and external web applications using some standard server code, JavaScript, etc.
This would seem to work fine and make sense, as the token on the page validates the origin of the request.
All good.
The problem is that we also use the same REST / SOAP endpoints from non-browser clients, i.e. other services within the enterprise network.
These clients are not vulnerable to CSRF because they don't execute JavaScript.
However, short of some form of IP whitelisting - which in an enterprise environment can be very problematic - CSRF tokens break non-browser clients.
Any thoughts?
You do not need to have JS enabled in order to have CSRF vulnerabilities. If you have a URL such as
http://example.com/site?logout=1
which logs you out, this URL can be triggered outside the user's normal flow of activity. JS is just one of the enablers.
That said, your CSRF tokens should go into standard HTTP headers, and these are not likely to break anything (unless your clients are very peculiar and rely on an exact set of headers to work).
OK - we solved this by issuing / checking for headers that cannot be manipulated:
https://fetch.spec.whatwg.org/#forbidden-header-name
So either check for the presence of one of these forbidden headers (or a few of them), or add your own header beginning with Sec- (e.g. Sec-IsBrowser), then apply the following logic:
IF (a forbidden header is present and not null/empty - or check for a few of them)
{
    // The caller is a browser.
    // OWASP recommends: if the Origin header is present, check that it is OK;
    // otherwise, if the Referer header is present, check that it is OK
    // (OK meaning an expected value, e.g. www.mysite).
    // Reject the request if either of these checks fails.
    // (If you can find neither header, OWASP says choose to reject or proceed.)
    // Then check your CSRF token.
}
ELSE
{
    // Hopefully the caller was not a browser, so we are OK: nothing will
    // automatically attach our authentication token.
}
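As a rough sketch of that logic in, say, a Flask hook (Sec-Fetch-Site is a real forbidden header that browsers set automatically; ALLOWED_ORIGIN and the hook itself are illustrative assumptions, not framework-specific code):

# Hypothetical sketch: reject cross-site browser requests before the CSRF token check.
from flask import Flask, request, abort

app = Flask(__name__)
ALLOWED_ORIGIN = "https://www.mysite.example"  # assumed origin of our own pages

@app.before_request
def reject_cross_site_browser_requests():
    # Browsers set Sec-Fetch-Site themselves; scripts cannot forge it
    # (it is a forbidden header name per the Fetch spec).
    if request.headers.get("Sec-Fetch-Site"):  # caller is a browser
        origin = request.headers.get("Origin")
        referer = request.headers.get("Referer")
        if origin is not None and origin != ALLOWED_ORIGIN:
            abort(403)
        elif origin is None and referer is not None and not referer.startswith(ALLOWED_ORIGIN):
            abort(403)
        # If neither header is present, OWASP says choose to reject or proceed.
        # ...then validate the CSRF token as usual.
    # else: a non-browser client; nothing attaches our session cookie automatically.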

How to deny outside POST requests?

In Liferay, I would like to allow only logged-in users to make POST requests, and at the same time to deny POST requests from other sources, like Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User-Agent and Referer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
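As a sketch of what "cryptographically signed" can mean here (the cookie format and secret are assumptions; real frameworks do this for you):

# Hypothetical sketch: HMAC-signing and verifying a session cookie value.
import hmac, hashlib

SECRET_KEY = b"server-side secret"  # assumed key, never sent to the client

def sign(session_id):
    mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{mac}"

def verify(cookie_value):
    session_id, _, mac = cookie_value.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)  # constant-time comparison

Note that this only proves the server issued the cookie; anyone who copies the cookie value passes the check, which is exactly the problem described next.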
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application.
At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because the client can always read whatever is sent from the server and can always craft its own requests, it can always produce a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form) or are you trying to prevent post requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role you can simply go into the portal's control panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions, post your exact Liferay version.

How to distinguish between HTTP requests sent by my client application and other requests from the Internet

Suppose I have a client/server application working over HTTP. The server provides a RESTy API and the client calls the server over HTTP using regular HTTP GET requests.
The server requires no authentication. Anyone on the Internet can send a GET HTTP request to my server. That's OK. I just wonder how I can distinguish the requests from my client from other requests from the Internet.
Suppose my client sent a request X. A user recorded this request (including the agent, headers, cookies, etc.) and sent it again with wget, for example. I would like to distinguish between these two requests on the server side.
There is no exact solution other than authentication. On the other hand, you do not need to implement username & password authentication for this basic requirement. You could simply generate a random string for your "client" and send it to the API in a custom HTTP header variable, like:
GET /api/ HTTP/1.1
Host: www.backend.com
My-Custom-Token-Dude: a717sfa618e89a7a7d17dgasad
...
You could distinguish the requests by this custom header variable and by the existence and validity of its value. But, as I said, "security through obscurity" is not a solution.
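A minimal sketch of that server-side check, reusing the token from the example above (the Flask endpoint is an illustrative assumption):

# Hypothetical sketch: filtering requests by a shared custom header value.
from flask import Flask, request, abort

app = Flask(__name__)
EXPECTED_TOKEN = "a717sfa618e89a7a7d17dgasad"  # the shared secret shown above

@app.route("/api/")
def api():
    if request.headers.get("My-Custom-Token-Dude") != EXPECTED_TOKEN:
        abort(403)  # not our client, or the token has leaked
    return "response for our own client"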
You cannot know for sure if it is your application or not. Anything in the request can be made up.
But, you can make sure that nobody is using your application inadvertently. For example, somebody may create a JavaScript application and point it at your REST API. The browser sends the Origin header (draft) indicating in which application the request was generated. You can use this header to filter out calls from applications that are not yours.
However, that somebody may use his own web server as a proxy to your application, allowing him to craft HTTP requests in more detail. In this case, at some point you would be able to pinpoint his IP address and block it.
But the best solution would be to add some degree of authorization. For example, the UI part can ask for authentication via login/password, or just a captcha to ensure the caller is a person, then generate a token and associate that token with the user session. From that point on, calls to the API have to provide that token; otherwise you must reject them.
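A rough sketch of that token flow (the names and in-memory store are assumptions; a real application would use a session framework with expiry):

# Hypothetical sketch: issue a random token after login/captcha, require it on API calls.
import secrets

sessions = {}  # token -> user; assumed in-memory store

def login(user, credentials_ok):
    if not credentials_ok:
        return None
    token = secrets.token_urlsafe(32)  # unguessable random token
    sessions[token] = user
    return token  # the client must send this with every API call

def authorize(token):
    return token in sessions  # reject API calls that lack a valid token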

Is it a security risk to allow CSRF token to be sent in body OR header?

Most CSRF solutions seem to insist that the CSRF token is sent as part of the POST data.
In my situation the data being sent is JSON, and I don't control what is sent (and I don't want to start messing with the JSON). So I'm thinking of sending the CSRF token as a header. However, there are legacy parts of my application that would still need to be able to send the token in the body (e.g. submits from HTML forms).
So my CSRF protection would have to allow the request if a valid CSRF token appeared in the body OR a header. Is this a security risk, compared with insisting that the token is in the body?
CSRF is about making an unsuspecting user post data to a server where the attacker believes the user is logged in.
The idea behind the protection is that the server associates a token with your session and sends it to you both as a cookie and as a required part of the payload. When posting something, you send the token in the payload and as a cookie. The attacker cannot guess what token is in the cookie or the session, so if the server receives a post with two different tokens, it is rejected.
I think it would be fine to put the payload token in a header, as long as it is not Cookie or any other header that is "remembered" and sent automatically by the browser.
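A minimal sketch of that double-submit check, accepting the token from either the body or a header (Flask; the field and header names are assumptions):

# Hypothetical sketch: double-submit CSRF check; token accepted from body OR header.
import hmac
from flask import Flask, request, abort

app = Flask(__name__)

@app.before_request
def check_csrf():
    if request.method != "POST":
        return
    cookie_token = request.cookies.get("csrf_token")
    # Accept the token from a form field (legacy HTML forms) or a custom header (JSON calls).
    submitted = request.form.get("csrf_token") or request.headers.get("X-CSRF-Token")
    if not cookie_token or not submitted or not hmac.compare_digest(cookie_token, submitted):
        abort(403)  # token missing or mismatched

Accepting either location does not weaken the scheme: the attacker still has to produce a value matching the cookie, which the same-origin policy prevents him from reading.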
There won't be any security risk if you send a CSRF token in a header. Just make sure that the value of this header changes every time the client requests a page, i.e. it should be a random number. Also, your web application on the client side should send this header back to the server, so that the server can match the value it sent to the client against the value it receives back from the client.
Sending the CSRF token in a request header is more secure.
The same-origin policy does not block form-tag submissions, which means that if somebody managed to get the CSRF token, he could send the POST request using a form tag from a different domain (origin).
But when the CSRF token is sent in a request header, a form tag cannot set that header; the attacker has to use JavaScript (fetch() or XMLHttpRequest()), in which case CORS will stop him because he is sending from a different domain (origin).
This defense relies on the same-origin policy (SOP) restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests with custom headers.
The following is quoted from https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers:
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.
This technique obviously works for AJAX calls, but you still need to protect form tags with approaches described in this document such as tokens. Also, CORS configuration should also be robust to make this solution work effectively (as custom headers for requests coming from other domains trigger a pre-flight CORS check).

HTTP Referer for Single Sign On

As part of a project with a partner, we are required to provide single-sign-on service on our app. Basically, people will log in through our partner's website, then they are redirected to ours. The redirected request will have the user's data in the HTTP header fields.
Here's where it gets "iffy". The process of authenticating if this request is valid or not is dependent on the value of the HTTP Referer field. Our partner tells us to check this field to see that the source is a legitimate one.
Now I know (and I'm glad to be proven wrong) that this field is easy enough to forge, and since no other method of authentication is given to us, a malicious user could easily construct a false HTTP request and gain access to our web app.
I'm a programmer first, and admittedly know very little about the intricacies of HTTP. So are my concerns real? Would using SSL (somehow) void this concern?
Remember that rule number one is never trust client input. Like any other client input, the Referer header is trivial to forge. SSL does nothing for you because you still rely on client input. Also, note that browsers SHOULD NOT send Referer to http pages when referred by https pages.
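To see how trivial forging is, here is a sketch using the Python requests library (both URLs are placeholders):

# Hypothetical sketch: any client can claim any Referer it likes.
import requests

resp = requests.get(
    "https://ourapp.example/sso-landing",                  # placeholder target
    headers={"Referer": "https://partner.example/login"},  # forged at will
)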
Additionally, consider that many privacy-conscious people and proxies (that individuals may not have any control over) might strip Referer headers from their requests, breaking your scheme.
To do this properly, you need to use something like OAuth or OpenID, where the protocols have been designed to be secure.
The HTTP Referer header is unreliable: depending on the browser used, it may not be sent (see also: Does http-equiv="refresh" keep referrer info and metadata?).
Yes - your concern is real: the header is forgeable.
No - SSL does not void the concern. A client can just as easily send a (fake) HTTPS request as a (fake) HTTP request; the only difference is that the connection is encrypted, which says nothing about the data transmitted.
That being said, checking the Referer is another precaution that can be used. It should not be relied upon for security, however.
I would look at Microsoft Federation -- it's likely overkill, but it shows one way to implement SSO securely.
