Suppose I have the following URL route:
app.post('/upvote', function(req, res) {
  // make a database call to increase the vote count
});
What can I do to prevent others from opening up a console and sending an AJAX POST request to www.mysite.com/upvote? I'd like it so that only www.mysite.com itself is allowed to make that POST request and no one else.
What can I do to prevent others from opening up a console and sending an AJAX POST request
Who is "others"?
If others==users of the site... there is nothing you can do to stop them sending whatever requests they like, using the JavaScript console or any other means. You can't trust the client, so you have to have server-side authorisation: requiring that the user be logged into an account, and registering that the account has upvoted so can't vote again.
If others==admins of other sites (by serving script to their users that causes submissions to your site)... JavaScript on another site can't read responses from your site or make non-simple AJAX requests to it, unless you deliberately opt into that using CORS. But it's quite possible for them to cause a POST by simply creating a <form> pointing to your address and submitting it.
This is a classic Cross Site Request Forgery problem. The widely-accepted solution to XSRF issues is to include a secret token as a parameter in each POST (form or AJAX) submission. That secret token is associated with the logged-in state of the user, either by being stored in the server-side session, or replicated in a client-side cookie. Either way an attacker from another site isn't capable of getting hold of a token that is valid for the logged-in user, so they can't fake the request.
You need XSRF protection on all actions that have a direct effect, whether AJAX or form-based POSTs.
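For the first case (others==users of the site), here is a minimal sketch of that server-side authorisation, assuming an Express app where session/body-parsing middleware is configured, req.user is set for logged-in users, and hasVoted/recordVote are hypothetical database helpers:

app.post('/upvote', function (req, res) {
  if (!req.user) {                                   // not logged in: reject
    return res.status(401).send('Login required');
  }
  // hasVoted / recordVote are hypothetical database helpers
  hasVoted(req.user.id, req.body.postId, function (err, alreadyVoted) {
    if (err) return res.status(500).end();
    if (alreadyVoted) return res.status(409).send('Already voted');
    recordVote(req.user.id, req.body.postId, function () {
      res.send('vote counted');
    });
  });
});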
I agree with bobince. others is a very general term.
If others belong to other sites (malicious sites on the net):
Express has CSRF middleware to protect against Cross-Site Request Forgery. You can use it to prevent such a scenario. See the API docs.
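A minimal sketch of wiring that up, assuming Express 4.16+ with the standalone csurf package (the CSRF middleware that used to ship with Connect/Express) plus cookie-parser:

var express = require('express');
var cookieParser = require('cookie-parser');
var csrf = require('csurf');

var app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));   // parse the form-encoded _csrf field
var csrfProtection = csrf({ cookie: true });

app.get('/post/:id', csrfProtection, function (req, res) {
  // hand req.csrfToken() to the view so it can go into a hidden field (or an AJAX header)
  res.render('post', { csrfToken: req.csrfToken() });
});

app.post('/upvote', csrfProtection, function (req, res) {
  // csurf has already rejected the request with a 403 if the token was missing or invalid
  res.send('vote counted');
});

csurf looks for the token in the _csrf body or query field, or in headers such as csrf-token, so the same middleware covers both form and AJAX POSTs.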
If others are users of your own site:
Then that is an authentication issue. Every request must be checked before it is served or executed. You should implement user authentication to prevent this situation. I use passport, and ensure that the user is authenticated before the app.post handler actually runs.
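For illustration, a sketch of that guard, assuming passport with session support is already configured (ensureAuthenticated is just a hypothetical name for the middleware):

// runs before the route handler; passport adds req.isAuthenticated() to each request
function ensureAuthenticated(req, res, next) {
  if (req.isAuthenticated()) return next();
  res.status(401).send('Please log in first');
}

app.post('/upvote', ensureAuthenticated, function (req, res) {
  // req.user is populated by passport for authenticated sessions
  res.send('vote counted for user ' + req.user.id);
});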
Related
I am trying to wrap my head around csrf protection and there is something I have trouble understanding. Maybe someone can give me the insight I need :).
What I understand
Say we have no CSRF protection. Someone logs in to website A with his/her credentials. After a valid login a session cookie is stored in the browser. The user POSTs some data through a form and the server accepts it without trouble. Since we have no CSRF protection, this leaves the system open to a vulnerability.
The user visits another website B, a malicious website like a phishing attempt. This website posts to website A in the background, with a JavaScript XHR request for example. The browser has the cookie stored for website A, and since the user is already logged in this is a valid session. Therefore website A will accept the POST without any trouble.
To solve this, CSRF protection comes in. When the page with the form on website A is loaded, the server generates a nonce (one-time code). This code must be submitted with the form so the server can check whether the POST came from the same session that requested the form. If the code matches the one that was just generated, the form is accepted. If the code is missing or incorrect, the server says no.
Question
If malicious website B first makes a GET request to the page that renders the form, it would be able to fetch the token and send it along with the POST request afterwards, right? Am I missing something obvious?
Thanks!
I understand that your concern is that a malicious website could request your anti-CSRF token.
You would need to prevent cross-origin reads or embedding of the pages or endpoints that return CSRF tokens. One important thing to keep in mind is that CORS doesn't provide CSRF protection, as preflight CORS requests are not always executed by the browser, for example when regular HTML forms are used.
Most modern browsers block cross-origin reads by default (the Same-Origin Policy). When you do need cross-origin requests between your own domains, you can allow them by setting the appropriate headers, like Access-Control-Allow-Origin: sub.domain.com.
To prevent embedding in an iframe, you can set the X-Frame-Options header to DENY or SAMEORIGIN.
You can find more information on https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
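In an Express app, for example, those headers could be set with a small middleware; the origin value below is only a placeholder:

app.use(function (req, res, next) {
  // refuse to be embedded in frames on other origins
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  // only needed if another of your own origins must read responses cross-origin
  res.setHeader('Access-Control-Allow-Origin', 'https://sub.domain.com');
  next();
});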
I searched a lot about this topic but didn't find any useful solutions.
How does Facebook detect that the host isn't Facebook, even though the referrer and host can be faked in the request headers using curl or an HTML form on another website?
If you send the login POST parameters to https://m.facebook.com/login/, Facebook will display a message ("For security reasons, don't log in from websites other than Facebook") and block the login.
So how can they be 100% sure that the request was made from Facebook.com?
Thank you.
They probably use a version of CSRF protection. Looking at the actual login form, there are 14 (fourteen!) hidden HTML fields in addition to the username and password. At least 4 of them look to me like a CSRF token. You would need to pull them all out of the home page and send them in with your login request. Tools like curl can send such a complete request, but you will still need to retrieve all the fields yourself.
CSRF protection is a way of preventing (or at least making very difficult) a website being POSTed to from a page that is not on that website. The usual way of implementing it is to create a one-time-use token, store it server-side in the session and also put it on the web page. If the HTTP POST omits the token, the POST is not accepted. Reload the page with the form and the token is re-generated. Some sites also put an expiry on the token - that is, it will only work for a short time, such as 10 minutes.
BTW, this has nothing to do with PHP. CSRF support is in many many frameworks in many languages and it's not difficult to build it yourself.
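As a rough illustration of building it yourself (shown here in Node/Express with express-session configured; the same pattern applies in PHP or any other stack):

var crypto = require('crypto');

app.get('/login', function (req, res) {
  // generate a one-time token and remember it (plus when it was issued) in the session
  var token = crypto.randomBytes(32).toString('hex');
  req.session.csrfToken = token;
  req.session.csrfIssuedAt = Date.now();
  res.render('login', { csrfToken: token });   // rendered into a hidden field
});

app.post('/login', function (req, res) {
  var fresh = Date.now() - req.session.csrfIssuedAt < 10 * 60 * 1000;  // 10-minute expiry
  if (!fresh || req.body.csrfToken !== req.session.csrfToken) {
    return res.status(403).send('Invalid or expired token');
  }
  delete req.session.csrfToken;   // one-time use: reloading the form regenerates it
  // ...check the credentials as usual...
  res.send('logged in');
});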
I would like Liferay to allow only logged-in users to make POST requests, and at the same time to deny POST requests from other sources, such as Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
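A rough sketch of that check, using cookie-parser's signed cookies in Express (the secret and cookie name are assumptions):

var cookieParser = require('cookie-parser');
app.use(cookieParser('some-long-secret'));       // cookies are signed with this secret

app.post('/api/action', function (req, res) {
  // a missing or tampered cookie fails signature verification and comes back falsy
  var userId = req.signedCookies.userId;
  if (!userId) return res.status(401).send('Not logged in');
  res.send('ok, user ' + userId);
});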
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form), or are you trying to prevent POST requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role you can simply go into the portal's control panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions post your exact Liferay version.
Most frameworks I've looked at will insert into forms a hidden input element with the value being a CSRF token. This is designed to prevent user Bob from logging in on my site and then going to http://badsite.com which embeds img tags or JS that tell my site to execute requests using Bob's still logged in session.
What I'm not getting is what stops JS on badsite.com from AJAX requesting a URL with a form on my site, regex-ing the CSRF token from the hidden input element, and then AJAX posting to my site with that valid CSRF token?
It seems to me that you'd want to use JS to insert the CSRF token into the form at runtime, pulling the value from a cookie (which is inaccessible to badsite.com). However, I've not heard this approach mentioned, and since so many frameworks use the simple hidden input with the CSRF token, I'm wondering whether my solution is over-engineered and I'm missing some part of what makes the hidden-input method secure.
Can anyone provide some clarity? Thanks!
what stops JS on badsite.com from AJAX requesting a URL with a form on my site
The Same Origin Policy (unless you subvert it with overly liberal CORS headers). JavaScript running on a site can't read data from a site hosted on a different host without permission from that host.
There are workarounds to the SOP, but they all either require the co-operation of the host the data is being read from (JSON-P, CORS), or don't pass any data that identifies a specific user (so can't access anything that requires authorisation).
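To make that concrete, here is roughly what a script on badsite.com would try, and why it fails (a sketch; the URL is a placeholder):

// The browser may send this request, but without permissive CORS headers from
// www.mysite.com it refuses to let badsite.com read the response, so the hidden
// CSRF token in the page can never be extracted.
fetch('https://www.mysite.com/page-with-form', { credentials: 'include' })
  .then(function (res) { return res.text(); })
  .catch(function (err) {
    console.log('blocked by the Same Origin Policy:', err);
  });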
tl;dr
I am considering a webservice design model which consists of several services/subdomains, each of which may be implemented on different platforms and hosted on different servers.
The main issue is authentication. If a request for jane's resources comes in, can a split system authenticate that request as hers?
All services access the same DB layer, of course. So I have in mind a single point of truth each service can use to authenticate each request.
For example, jane accesses www.site.com, which renders stuff in her browser. The browser may send a client-side request to different domains of site.com, with requests like:
from internalapi.site.com fetch /user/users_secret_messages.json
from imagestore.site.com fetch /images/list_of_images
The authentication issue is: another user (or an outsider) can craft a request that can fool a subdomain into giving them information they should not access.
So I have in mind a single point of truth: a central resource accessible by each service that can be used to authenticate each request.
In this pseudocode, AuthService.verify_authentication() refers to the central resource:
# server-side code
def get_user_profile():
    auth_token = request.cookie['auth_token']
    user = AuthService.verify_authentication(auth_token)
    if user is None:
        response.write("you are unauthorized / not logged in")
    else:
        response.write(json.dumps(fetch_profile(user)))
Question: What existing protocols, software or even good design practices exist to enable flawless authentication across multiple subdomains?
I've seen how OAuth takes the headache out of managing 3rd-party access and wonder if something similar exists for this kind of authentication. I also got the idea from Kerberos and TACACS.
This idea was the result of teamthink, as a way to simplify architecture (rather than handle heavy loads).
I built a system that did this a little while ago. We were building shop.megacorp.com, and had to share a login with www.megacorp.com, profile.megacorp.com, customerservice.megacorp.com, and so on.
The way it worked was in two parts.
Firstly, all signon was handled through a set of pages on accounts.megacorp.com. The signup link from our pages went there, with a return URL as a parameter (so https://accounts.megacorp.com/login?return=http://shop.megacorp.com/cart). The login process there would redirect back to the return URL after completion. The login page also set an authentication cookie, scoped to the whole of the megacorp.com domain.
Secondly, authentication was handled on the various sites by grabbing the cookie from the request, then forwarding it via an internal web service to accounts.megacorp.com. We could have done this as a straightforward SOAP or REST query, with the cookie as a parameter, but actually what we did was send an HTTP request with the cookie added to the headers (sort of as if the user had sent the request directly). That request would then come back with a 200 if the cookie was valid, serving up some information about the user, or a 401 or something if it wasn't. We could then deal with the user accordingly.
Needless to say, we didn't want to make a request to accounts.megacorp.com for every user request, so after a successful authentication, we would mark the user's session as authenticated. We'd store the cookie value and a timestamp, and if subsequent requests had the same cookie value, and were within some timeout of the timestamp, we'd treat them as authenticated without passing them on.
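A sketch of that caching step, assuming an Express app with cookie and session middleware; verifyWithAccountsService stands in for the internal web-service call described above:

var AUTH_CACHE_TTL = 5 * 60 * 1000;   // re-check against accounts.megacorp.com every 5 minutes

function cachedAuth(req, res, next) {
  var cookie = req.cookies.megacorpAuth;          // the domain-wide login cookie
  var cached = req.session.auth;
  if (cached && cached.cookie === cookie && Date.now() - cached.checkedAt < AUTH_CACHE_TTL) {
    return next();                                // validated recently, skip the remote call
  }
  verifyWithAccountsService(cookie, function (err, user) {   // hypothetical helper
    if (err || !user) return res.status(401).send('Not logged in');
    req.session.auth = { cookie: cookie, checkedAt: Date.now(), user: user };
    next();
  });
}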
Note that because we pass the cookie as a cookie in the authentication request, the code to validate it on accounts.megacorp.com is exactly the same as for handling a direct request from a user, so it was trivial to implement correctly. So, in response to your desire for "existing protocols [or] software", I'd say that the protocol is HTTP, and the software is whatever you can use to validate cookies (a standard part of any web container's user handling). The authentication service is as simple as a web page which prints the user's name and details, and which is marked as requiring a logged-in user.
As for "good design practices", well, it worked, and it decoupled the login and authentication processes from our site pretty effectively. It did introduce a runtime dependency on a service on accounts.megacorp.com, which turned out to be somewhat unreliable. That's hard to avoid.
And actually, now I think back, the request to accounts.megacorp.com was actually a SOAP request, and we got a SOAP response back with the user details, but the authentication was handled with a cookie, as I described. It would have been simpler and better to make it a REST request, where our system just did a GET on a standard URL and got some XML or JSON describing the user in return.
Having said all that, if you share a database between the applications, you could just have a table, in which you record (username, cookie, timestamp) tuples, and do lookups directly in that, rather than making a request to a service.
The only other approach i can think of is to use public-key cryptography. The application handling login could use a private key to make a signature, and use that as the cookie. The other applications could have the corresponding public key, and use that to verify it. The keys could be per-user or there could just be one. That would not involve any communication between applications, or a shared database, following the initial key distribution.
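A sketch of that last approach using Node's one-shot crypto.sign/crypto.verify (Node 12+; key loading and cookie formatting are simplified assumptions):

var crypto = require('crypto');

// login application: sign the username with the private key and use the result as the cookie
function makeAuthCookie(username, privateKeyPem) {
  var signature = crypto.sign('sha256', Buffer.from(username), privateKeyPem);
  return username + '.' + signature.toString('base64');
}

// every other application: verify the signature with the shared public key
function verifyAuthCookie(cookie, publicKeyPem) {
  var dot = cookie.lastIndexOf('.');
  if (dot < 0) return null;
  var username = cookie.slice(0, dot);
  var signature = Buffer.from(cookie.slice(dot + 1), 'base64');
  var ok = crypto.verify('sha256', Buffer.from(username), publicKeyPem, signature);
  return ok ? username : null;    // the username, if and only if the cookie is genuine
}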