I have a front-end HTML website that posts requests to a WebApp hosted online. These HTML pages run locally on any machine, so they are making cross-site requests.
I am thinking of saving cookies and validating them in each request that is sent to my WebApp. Is that the best way to go?
Also, when the user goes back and forth from one page to another (and it is required that he is logged in), do I send a request with the cookie from the HTML at the start of the page and redirect him to the login page if his cookie is not valid?
Thanks in advance,
The way you are planning to go about it seems all right. Of course, you will have to think carefully about how you are going to store your access tokens. I would use an access token with the IP, username and session expiration date encoded in it.
However, just checking on page load would not be the best idea, since the result of that check could be spoofed. You should send this access token with every request for information, so your web app can decide whether or not to reply with information.
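For illustration, here is a minimal sketch of such a token (Python is used only as an example language; SECRET_KEY, issue_token and check_token are names made up for this answer). The username, client IP and expiry are encoded in the payload, and an HMAC signature lets the web app detect tampering before it trusts any of those fields:

    import base64, hashlib, hmac, json, time

    SECRET_KEY = b"change-me"  # server-side secret; never sent to the client

    def issue_token(username, client_ip, ttl_seconds=3600):
        # Encode the claims, then sign them so the client cannot alter them.
        payload = base64.urlsafe_b64encode(json.dumps({
            "user": username,
            "ip": client_ip,
            "exp": int(time.time()) + ttl_seconds,
        }).encode())
        sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
        return (payload + b"." + sig).decode()

    def check_token(token, client_ip):
        # Run this check on every request, not just on page load.
        try:
            payload, sig = token.encode().rsplit(b".", 1)
        except ValueError:
            return False
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(sig, expected):
            return False
        claims = json.loads(base64.urlsafe_b64decode(payload))
        return claims["ip"] == client_ip and claims["exp"] > time.time()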
Related
I am trying to wrap my head around csrf protection and there is something I have trouble understanding. Maybe someone can give me the insight I need :).
What I understand
Say we have no CSRF protection. Someone logs in to website A with his/her credentials. After a valid login, a session cookie is stored in the browser. The user POSTs some data through a form and the server accepts it without trouble. Since we have no CSRF protection, this opens the system up to a vulnerability.
The user visits another website B, a malicious website such as a phishing page. This website posts to website A in the background, with a JavaScript XHR request for example. The browser has the cookie stored for website A, and since the user was already logged in this is a valid session. Therefore website A will accept the post without any trouble.
To solve this, CSRF protection comes in. When the page with the form on website A is loaded, the server generates a nonce (one-time code). This code must be submitted with the form so the server can check whether the post came from the same session that requested the form. If the code matches the one that was just generated, the form is accepted. If the code is missing or incorrect, the server says no.
Question
If malicious website B first makes a GET request to the page that renders the form, it would be able to fetch the token and send it along with the POST request afterwards, right? Am I missing something obvious?
Thanks!
I understand that your concern is that a malicious website could request your anti-CSRF token.
You would need to prevent cross-origin reads or embedding of the pages or endpoints that return the CSRF tokens. One important thing to keep in mind is that CORS doesn't provide CSRF protection, as preflight CORS requests are not always executed by the browser, for example when using regular HTML forms.
Most modern browsers block cross-origin reads by default. When you do need cross-origin requests for your own domains, you can allow them by setting the correct CORS headers, like Access-Control-Allow-Origin: sub.domain.com.
To prevent embedding in an iframe you can set the X-Frame-Options header to DENY or SAMEORIGIN.
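For example, in a small Flask app (Flask is only an assumption here to keep the sketch short; any framework lets you set response headers the same way), the two headers could be attached to every response like this:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_security_headers(response):
        # Only this explicitly allowed origin may read responses cross-origin.
        response.headers["Access-Control-Allow-Origin"] = "https://sub.domain.com"
        # Refuse to be embedded in frames on other sites (use SAMEORIGIN to allow your own pages).
        response.headers["X-Frame-Options"] = "DENY"
        return response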
You can find more information on https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
I have searched a lot about this topic but didn't find any useful solutions.
How does Facebook detect that the host isn't Facebook, even though the referrer and host can be faked in the request headers using curl or an HTML form on another website?
If you send the login POST parameters to https://m.facebook.com/login/, Facebook will display a message ("For security reasons, don't log in from websites other than Facebook") and block the login.
So how can they be 100% sure that the request was made from Facebook.com?
Thank you.
They probably use a version of CSRF. Looking at the actual login form, there are 14 (fourteen!) hidden HTML fields in addition to the username and password. At least 4 of them look to me like a CSRF token. You would need to pull them all out of the home page and send them in with your login request. Tools like curl can send such a complete request, but you will still need to retrieve all the fields yourself.
CSRF protection is a way of preventing (or at least making very, very difficult) a website being POSTed to from a page not on its own site. The usual way of implementing it is to create a one-time-use token, store it server-side in the session and also put it on the web page. If the HTTP POST omits the token, the POST is not accepted. Reload the page with the form and the token is regenerated. Some sites have an expiry for the token, too - that is, it will only work for a short time, such as 10 minutes.
BTW, this has nothing to do with PHP. CSRF support exists in many frameworks in many languages, and it's not difficult to build it yourself.
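As a rough sketch of that scheme (Python here only for brevity; "session" stands for whatever server-side session store your framework provides, and the function names are invented for this answer):

    import secrets, time

    TOKEN_TTL = 600  # token expiry in seconds, mirroring the 10-minute example above

    def issue_csrf_token(session):
        # Called when the form page is rendered; the token goes into a hidden field.
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        session["csrf_expires"] = time.time() + TOKEN_TTL
        return token

    def check_csrf_token(session, submitted):
        # Called when the form is POSTed; pop() makes the token one-time-use.
        expected = session.pop("csrf_token", None)
        expires = session.pop("csrf_expires", 0)
        return (
            expected is not None
            and time.time() < expires
            and secrets.compare_digest(expected, submitted)
        )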
I would like Liferay to allow only logged-in users to make POST requests, and at the same time deny POST requests from other sources, like Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
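To illustrate the point, here is a hypothetical replay (the URL, cookie name and form fields are placeholders) using a session cookie copied out of the browser's developer tools; the server has no way to tell it apart from a request made by the page itself:

    import requests

    response = requests.post(
        "https://example.com/app/submit",
        cookies={"JSESSIONID": "value copied from the browser"},
        headers={
            "User-Agent": "Mozilla/5.0",            # spoofed browser user agent
            "Referer": "https://example.com/app/",  # spoofed referrer
        },
        data={"field": "value"},
    )
    print(response.status_code)  # indistinguishable from the browser's own request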
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form) or are you trying to prevent post requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role, you can simply go into the portal's control panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions, post your exact Liferay version.
First, I know that there are two components to handling user access to restricted pages in web applications.
Authentication is about identifying a user.
Authorization is about determining what parts of the application an authenticated user has access to.
I believe this is done with a session ID.
But does the client have to send the session ID with every request he makes? If not, how can he be authenticated? Or is a cookie used for this?
Sessions exist on the server. They are sometimes (usually) identified by a cookie.
The session can contain a multitude of information that is relevant to the session. E.g. shopping basket.
The server gets the cookie, looks up the session, and checks: has it timed out? Is it from the same IP address? From the same browser, perhaps? It then uses the stored information to generate the web page.
The session persists across pages, but it is destroyed when you close your browser.
Cookies persist until their expiration time.
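A rough sketch of that lookup, with an invented in-memory store and cookie name (in practice your framework does this for you):

    import time

    SESSION_TIMEOUT = 1800  # 30 minutes
    SESSIONS = {}           # session_id -> {"user": ..., "ip": ..., "last_seen": ..., "basket": [...]}

    def load_session(request_cookies, client_ip):
        session_id = request_cookies.get("SESSIONID")
        session = SESSIONS.get(session_id)
        if session is None:
            return None                                      # no such session: not logged in
        if time.time() - session["last_seen"] > SESSION_TIMEOUT:
            del SESSIONS[session_id]
            return None                                      # timed out
        if session["ip"] != client_ip:
            return None                                      # optional same-IP check
        session["last_seen"] = time.time()
        return session                                       # use stored data to build the page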
I wish to use my Stellaris LM3S8962 microcontroller as a bridge between internet and a bunch of sensors. I will be using Zigbee nodes for communication from the sensors to the microcontroller. I have been able to use the lwIP TCP/IP stack (for LM3S8962) to access HTML pages stored in the controller's flash.
Now, I want to add a secure login system for the same. What I basically want is that - when I enter the IP of the controller in the browser, it should prompt me for a username and a password. I want to make this system as secure as possible using the lwIP TCP/IP stack.
FYI, the stack does not support PHP or any other scripts. CGI feature (in C) is supported but I don't know how to implement the security part. Please guide.
There are basically two ways you could implement user authentication over HTTP on your platform:
"classic" Basic HTTP authentication (see RFC2616 for exact specification),
a login form, creating a session ID, and returning that to the browser, to be stored either in a cookie, or in the URL.
Basic HTTP authentication works by inserting a check into your web page serving routine to see if there is an Authorization HTTP header. If there is, you should decode it (see the RFC) and check the specified username/password. If the credentials are OK, then proceed with serving the web page. If the credentials are incorrect, or there is no Authorization header, you should return a 401 status code together with a WWW-Authenticate: Basic response header; the browser will then prompt the user for the credentials. This solution is rather simple, but you'll get the browser's login dialog, not a nice HTML page. Also, the microcontroller will have to check the credentials for every page request.
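The check itself would live in your C request handler; the following Python fragment only sketches the logic (the expected username/password and realm string are placeholders):

    import base64

    EXPECTED_USER, EXPECTED_PASS = "admin", "secret"  # placeholders

    def check_basic_auth(headers):
        auth = headers.get("Authorization", "")
        if auth.startswith("Basic "):
            try:
                # The header carries base64("user:password").
                user, _, password = base64.b64decode(auth[6:]).decode().partition(":")
            except Exception:
                user = password = None
            if user == EXPECTED_USER and password == EXPECTED_PASS:
                return None  # credentials OK: go on and serve the page
        # Missing or bad credentials: 401 plus WWW-Authenticate makes the browser prompt.
        return ("401 Unauthorized", {"WWW-Authenticate": 'Basic realm="Sensor gateway"'})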
Authentication via a login form works by your server maintaining a list of properly authenticated user sessions (probably in memory) and checking each request against this list. Upon logging in, a unique, random ID should be generated for that session, and the session data should be entered into your list. The new session ID should be returned to the browser to be included in upcoming HTTP requests. You may choose to put the session ID into a browser cookie, or you can embed it in the URL as a URL parameter (?id=xxxxx). The URL parameter is embedded by sending a 302 redirection response to the browser with the new URL. If you want the cookie solution, you can send the response to the login request with a Set-Cookie HTTP response header. In the login form solution, the actual credentials (username/password) are only checked once, at login. After that, only the session ID is checked against the list of active sessions. The session ID has to be extracted from the request for every page served (from the URL, or from the Cookie HTTP header). Also, you'll have to implement some kind of session timeout, both as a security measure and to bound the size of the list of active sessions.
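A sketch of the cookie variant of this flow, again in Python only to show the logic your C CGI handler would implement (all names and the timeout value are illustrative):

    import secrets, time

    ACTIVE_SESSIONS = {}    # session_id -> {"user": ..., "expires": ...}
    SESSION_LIFETIME = 600  # arbitrary timeout in seconds

    def credentials_ok(username, password):
        return username == "admin" and password == "secret"  # placeholder check

    def handle_login(username, password):
        if not credentials_ok(username, password):   # credentials checked once, at login
            return "401 Unauthorized", {}
        session_id = secrets.token_hex(16)           # unique, random session ID
        ACTIVE_SESSIONS[session_id] = {"user": username,
                                       "expires": time.time() + SESSION_LIFETIME}
        # Cookie variant; the URL variant would instead send a 302 redirect to ?id=<session_id>.
        return "200 OK", {"Set-Cookie": f"id={session_id}; HttpOnly"}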
Now comes the harder part: in both solutions, the username/password travels in some requests (in every request for Basic HTTP authentication, and in the login POST for the login form solution) and can be extracted from the network traffic. Also, the information necessary to hijack the session is present in every request in both solutions. If this is a problem (the system works on a freely accessible LAN segment, or over network links beyond your control), you'll probably need SSL to hide this sensitive data. How to get a reasonable SSL implementation for your platform is another question.