Hey, so I'm new to pentesting. I learned that HTTPS encrypts traffic, so attackers cannot decipher credentials passed in a request (for example, on a login page) or read the traffic. I was practicing with both GET and POST requests against a login page over HTTPS, and in both cases the credentials are visible when I intercept the requests with Burp Suite: in GET the parameters appear in the URL, and in POST they appear in the body. Can someone explain how the privacy of credentials is maintained if they show up in plaintext in the request? Won't everyone be able to read them?
Testing:
Submitted credentials through a login page application over HTTPS.
Passed them through both GET and POST methods.
Result:
The credentials were visible in both types of calls when intercepted.
Burp Suite sees them because it runs as a proxy for the browser. Over HTTPS, both GET and POST parameters are encrypted once they leave the browser.
Note that the URL as such doesn’t leave the browser (or, more generally, user agent). A connection is made using the hostname and port (those will still be visible to a man-in-the-middle) but the GET parameters are sent using HTTPS once the connection is established.
The reason people say not to pass sensitive information in GET requests is that the URL may end up in log files or browser history, or be seen over someone's shoulder. GET requests can also be cached.
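To see the difference concretely, here is a small sketch using Python's third-party requests library; the URL and credentials are made up:

    import requests

    creds = {"username": "alice", "password": "s3cret"}  # placeholders

    # GET: the credentials become part of the URL...
    r = requests.get("https://example.com/login", params=creds)
    print(r.url)  # https://example.com/login?username=alice&password=s3cret
    # ...and that full URL is what typically ends up in access logs,
    # proxy logs, browser history, and caches.

    # POST: the credentials travel in the request body instead,
    # which servers and browsers do not normally log.
    r = requests.post("https://example.com/login", data=creds)
    print(r.url)  # https://example.com/login -- no credentials in the URL

Both requests are equally protected on the wire; only the GET leaves the credentials lying around at the endpoints.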
Related
I am trying to wrap my head around CSRF protection and there is something I have trouble understanding. Maybe someone can give me the insight I need :).
What I understand
Say we have no CSRF protection. Someone logs in to website A with their credentials. After a valid login, a session cookie is stored in the browser. The user POSTs some data through a form and the server accepts it without trouble. Since we have no CSRF protection, this opens the system up to a vulnerability.
The user visits another website B, a malicious website such as a phishing page. This website POSTs to website A in the background, with a JavaScript XHR request for example. The browser has the cookie stored for website A, and since the user was already logged in, this is a valid session. Therefore website A will accept the POST without any trouble.
To solve this, CSRF protection comes in. When website A serves the page with the form, it generates a nonce (one-time code). This code must be submitted with the form so the server can check whether the POST came from the same session that requested the form. If the code matches the one that was just generated, the form is accepted. If the code is missing or incorrect, the server says no.
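In code, that flow looks roughly like this. A minimal sketch using Python and Flask; the routes, field names, and secret are all made up for illustration:

    import secrets
    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

    @app.get("/form")
    def show_form():
        # Generate a fresh random token and tie it to this user's session.
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return (f'<form method="post" action="/submit">'
                f'<input type="hidden" name="csrf_token" value="{token}">'
                f'<input type="text" name="comment"><button>Send</button></form>')

    @app.post("/submit")
    def submit():
        # Reject the POST unless it carries the token stored in this session.
        sent = request.form.get("csrf_token", "")
        if not secrets.compare_digest(sent, session.get("csrf_token", "")):
            abort(403)  # missing or wrong token: the server says no
        return "accepted"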
Question
If malicious website B first makes a GET request to the page that renders the form, it would be able to fetch the token and send it along with the POST request afterwards, right? Am I missing something obvious?
Thanks!
I understand that your concern is that a malicious website can request your anti-CSRF token.
You would need to prevent cross-origin reads or embedding of the pages or endpoints that return the CSRF tokens. One important thing to keep in mind is that CORS doesn't provide CSRF protection, as preflight CORS requests are not always executed by the browser, for example when using regular HTML forms.
Most modern browsers block cross-origin reads by default: this is the same-origin policy, and it is exactly what stops website B from reading the response that contains your token. When you do need cross-origin requests between your own domains, you can allow them by setting the correct CORS headers, like Access-Control-Allow-Origin: sub.domain.com.
To prevent embedding in an iframe, you can set the X-Frame-Options header to DENY or SAMEORIGIN.
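A minimal sketch of setting both headers on every response, assuming a Python/Flask application; the origin value is a placeholder:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_security_headers(resp):
        # Only this origin may read our responses cross-origin via XHR/fetch.
        resp.headers["Access-Control-Allow-Origin"] = "https://sub.domain.com"
        # Refuse to be embedded in frames on other origins.
        resp.headers["X-Frame-Options"] = "SAMEORIGIN"
        return resp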
You can find more information at https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
In Liferay, I would like to allow only logged-in users to make POST requests, and at the same time deny POST requests from other sources, such as Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User-Agent and Referer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
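For illustration, here is a minimal Python sketch of what "cryptographically signed" means for a session cookie; the key and session ID are made up:

    import hmac, hashlib

    SECRET_KEY = b"server-side-secret"  # known only to the server

    def sign(session_id: str) -> str:
        # Cookie value = session id plus an HMAC over it.
        mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
        return f"{session_id}.{mac}"

    def verify(cookie_value: str) -> bool:
        try:
            session_id, mac = cookie_value.rsplit(".", 1)
        except ValueError:
            return False
        expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected)

    print(verify(sign("user42")))    # True: untampered cookie
    print(verify("user42.forged"))   # False: signature does not match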
Now let's consider a Postman request. What exactly is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and to include those headers in a faked Postman request.
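For example, with the cookie copied out of the browser, a short script can replay the request exactly. Everything below (URL, cookie, header values) is a placeholder, and the sketch assumes Python's third-party requests library:

    import requests

    resp = requests.post(
        "https://example.com/app/endpoint",
        headers={
            "User-Agent": "Mozilla/5.0",            # spoofed, see point 3 below
            "Referer": "https://example.com/form",  # spoofed, see point 4 below
            "Cookie": "JSESSIONID=abc123",          # copied from a real browser session
        },
        data={"field": "value"},
    )
    print(resp.status_code)  # the server cannot tell this apart from the browser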
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User-Agent to ensure the request was sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referer header to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form), or are you trying to prevent POST requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role, you can simply go into the portal's Control Panel, then into Configuration, and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions, post your exact Liferay version.
Suppose I have a client/server application working over HTTP. The server provides a RESTy API and the client calls the server using regular HTTP GET requests.
The server requires no authentication. Anyone on the Internet can send a GET request to my server; that's OK. I just wonder how I can distinguish the requests made by my client from all other requests from the Internet.
Suppose my client sent a request X. A user recorded this request (including the agent, headers, cookies, etc.) and sent it again with wget, for example. I would like to distinguish between these two requests on the server side.
There is no exact solution other than authentication. On the other hand, you do not need to implement username/password authentication for this basic requirement. You could simply generate a random string for your client and send it to the API in a custom HTTP header, like:
GET /api/ HTTP/1.1
Host: www.backend.com
My-Custom-Token-Dude: a717sfa618e89a7a7d17dgasad
...
You could distinguish the requests by checking this custom header's existence and the validity of its value. But I'm saying that "security through obscurity" is not a solution.
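On the server side, the check could look like this minimal Python/Flask sketch (the header name and token match the example request above):

    from flask import Flask, abort, request

    app = Flask(__name__)
    KNOWN_TOKEN = "a717sfa618e89a7a7d17dgasad"  # shared with your client out of band

    @app.before_request
    def require_custom_token():
        # Reject any request that lacks the expected custom header value.
        if request.headers.get("My-Custom-Token-Dude") != KNOWN_TOKEN:
            abort(403)

    @app.get("/api/")
    def api():
        return "hello, known client"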
You cannot know for sure if it is your application or not. Anything in the request can be made up.
But you can make sure that nobody is using your application from another web application inadvertently. For example, somebody may create a JavaScript application and point it at your REST API. The browser sends the Origin header (draft) indicating which application generated the request. You can use this header to filter out calls from applications that are not yours.
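A minimal sketch of such a filter, assuming Python/Flask; the allowed origin is a placeholder:

    from flask import Flask, abort, request

    app = Flask(__name__)
    ALLOWED_ORIGINS = {"https://myapp.example.com"}  # placeholder

    @app.before_request
    def filter_origin():
        origin = request.headers.get("Origin")
        # Browsers set Origin automatically; reject other browser apps' calls.
        if origin is not None and origin not in ALLOWED_ORIGINS:
            abort(403)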
However, that somebody may use his own web server as a proxy to your application, which lets him craft HTTP requests in full detail. In that case, at some point you would be able to pinpoint his IP address and block it.
But the best solution would be to add some degree of authorization. For example, the UI part can ask for authentication via login/password, or just a CAPTCHA to ensure the caller is a person, then generate a token and associate it with the user session. From that point on, calls to the API have to provide that token; otherwise you must reject them.
When you use the Disqus API on the server side, you have to put 'api_secret' in the URL of every API request. Here is what the Disqus doc says:
If you are using the server-side API, you will need to send api_secret with your secret API key value.
(https://disqus.com/api/docs/requests/)
When I call URL like this:
https://disqus.com/api/3.0/threads/list.json?access_token={ACCESS_TOKEN}
I get this error:
{"code":5,"response":"Invalid API key"}
When I change the URL to this:
https://disqus.com/api/3.0/threads/list.json?access_token={ACCESS_TOKEN}&api_secret={API_SECRET}
It works ok.
I think it is very, very dangerous to use a secret key in plain GET requests like this. I don't know of any other API that requires a secret key in GET requests.
What do you think about it?
It's a server-side request, so from your server to Disqus.com. The client will never see the URL.
As you are using HTTPS, all that is visible in plaintext is that you made a request to a server at a specific IP address. However, an attacker monitoring your DNS requests or using a reverse DNS lookup can easily determine that the IP address belongs to the server at disqus.com.
So in short: It's safe. An attacker can see that you talk to disqus.com, but everything else is encrypted.
I wish to use my Stellaris LM3S8962 microcontroller as a bridge between the Internet and a bunch of sensors. I will be using ZigBee nodes for communication from the sensors to the microcontroller. I have been able to use the lwIP TCP/IP stack (for the LM3S8962) to serve HTML pages stored in the controller's flash.
Now I want to add a secure login system on top of that. What I basically want is this: when I enter the IP of the controller in the browser, it should prompt me for a username and a password. I want to make this system as secure as possible using the lwIP TCP/IP stack.
FYI, the stack does not support PHP or any other scripting. The CGI feature (in C) is supported, but I don't know how to implement the security part. Please guide.
There are basically two ways you could implement user authentication over HTTP on your platform:
"classic" Basic HTTP authentication (see RFC 2617 for the exact specification),
a login form, creating a session ID and returning that to the browser, to be stored either in a cookie or in the URL.
Basic HTTP authentication works by inserting a check into your web page serving routine to see if there is an Authorization HTTP header. If there is, you should decode it (see the RFC) and check the specified username/password. If the credentials are OK, proceed with serving the page. If the credentials are incorrect, or if there is no Authorization header at all, you should return a 401 status code together with a WWW-Authenticate: Basic response header; that header is what makes the browser prompt the user for credentials. This solution is rather simple, but you'll get the browser's login dialog, not a nice HTML page. Also, the microcontroller will have to check the credentials for every page request.
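The check itself is small. Here is a sketch in Python for readability (on the LM3S8962 the same logic would live in your C request handler); the credentials are placeholders:

    import base64

    USER, PASSWORD = "admin", "secret"  # placeholders; store these safely

    def check_auth(headers: dict) -> tuple:
        """Return (status code, extra response headers) for a request."""
        auth = headers.get("Authorization", "")
        if auth.startswith("Basic "):
            decoded = base64.b64decode(auth[6:]).decode()  # "user:password"
            if decoded == f"{USER}:{PASSWORD}":
                return 200, {}
        # Missing or wrong credentials: this header makes the browser prompt.
        return 401, {"WWW-Authenticate": 'Basic realm="Controller"'}

    print(check_auth({}))  # (401, ...) -> browser shows its login dialog
    good = "Basic " + base64.b64encode(b"admin:secret").decode()
    print(check_auth({"Authorization": good}))  # (200, {})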
Authentication via a login form works by your server maintaining a list of properly authenticated user sessions (probably in memory) and checking each request against this list. Upon logging in, a unique, random ID should be generated for that session, and the session data should be entered into your list. The new session ID should be returned to the browser to be included in upcoming HTTP requests. You may choose to put the session ID into a browser cookie, or you can embed it into the URL as a URL parameter (?id=xxxxx). The URL parameter is embedded by sending a 302 redirection response to the browser with the new URL. If you want the cookie solution, you can answer the login request with a Set-Cookie HTTP response header. In the login-form solution, the actual credentials (username/password) are only checked once, at login. After that, only the session ID is checked against the list of active sessions, so it has to be extracted from every request (from the URL, or from the Cookie HTTP header). Also, you'll have to implement some kind of session timeout, both as a security measure and to bound the size of the list of active sessions.
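The bookkeeping for the second solution can be as small as this Python sketch (the timeout value and names are illustrative):

    import secrets, time

    SESSION_TIMEOUT = 15 * 60   # seconds
    sessions = {}               # session id -> last-seen timestamp

    def create_session() -> str:
        sid = secrets.token_hex(16)  # unpredictable, unlike a simple counter
        sessions[sid] = time.time()
        return sid                   # return via Set-Cookie or as ?id=...

    def validate_session(sid: str) -> bool:
        last_seen = sessions.get(sid)
        if last_seen is None or time.time() - last_seen > SESSION_TIMEOUT:
            sessions.pop(sid, None)  # drop stale or unknown sessions
            return False
        sessions[sid] = time.time()  # sliding expiration
        return True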
Now comes the harder part: in both solutions, the username/password travels in some requests (in every request for Basic HTTP authentication, and in the login POST for the login-form solution) and can be extracted from the network traffic. Likewise, the information necessary to hijack the session is present in every request in both solutions. If this is a problem (the system works on a freely accessible LAN segment, or over network links beyond your control), you'll probably need SSL/TLS to hide this sensitive data. How to get a reasonable SSL implementation for your platform is another question.