Can't see all response header cookies in JMeter - Azure

I am trying to load test an application which uses Azure AD B2C authentication. I have replicated all the requests. However, there is an authorize endpoint which, when inspected in the browser, shows a few cookies in the response headers.
This has an openidconnect cookie and a couple of other cookies. However, when running this request in JMeter, I can see only the openidconnect cookie but not the others.
I need to send the other cookies in subsequent requests. I have an HTTP Cookie Manager and have also turned on the save-cookies flag in the user.properties and jmeter.properties files.
Any help is much appreciated.

Most probably this indicates a problem with the cookies themselves, i.e. a domain/path mismatch, an expired cookie or something like that, so it sounds like an issue with your system under test.
If you don't care about the correctness of your application's functional behaviour and just want to send all the incoming cookies regardless of their validity, you can try the following workarounds:
Disable JMeter's mechanism for checking cookies so everything will be stored in the HTTP Cookie Manager no matter whether there are any issues. It can be done by adding the next line to the user.properties file:
CookieManager.check.cookies=false
Switch to a less restrictive implementation, i.e. netscape, in the HTTP Cookie Manager itself.
More information: HTTP Cookie Manager Advanced Usage - A Guide
You can also enable debug logging for the Cookie Manager by adding the next line to the log4j2.xml file:
<Logger name="org.apache.jmeter.protocol.http.control" level="debug" />
Once done you will be able to see what's going on with each and every incoming/outgoing cookie in the jmeter.log file.

Related

Do I need to use CSRF tokens in a cookie-based API?

I have an API in Next.js (NextAuth.js) that only the frontend will be using. It uses cookies for authentication. My question is: could a malicious website change the user's data using CSRF? Should I implement CSRF tokens, or can I prevent malicious websites from changing the data without them?
If authentication is based on something that the browser sends automatically with requests (like cookies), then yes, you most likely need protection against CSRF.
You can try it yourself: set up a server on one origin (e.g. localhost:3000), and an attacker page on another (e.g. localhost:8080; it's the same as a different domain, controlled by an attacker). Now log in to your app on :3000, and on your attacker origin make a page that will post to :3000 something that changes data. You will see that while :8080 will not receive the response (because of the same-origin policy), :3000 will indeed receive and process the request. It will also receive cookies set for :3000, regardless of where the user is making the request from.
For mitigation, you can implement the synchronizer token pattern (CSRF tokens), double submit, or you can decide to rely on the SameSite property of cookies, which is not supported by old browsers but is supported by fairly recent ones, so there is some risk, depending on who your users are.
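As an illustration of the SameSite option, here is a minimal sketch of a Next.js pages-API route; the file path, cookie name and value are placeholders for this example and not what NextAuth.js actually sets:
// pages/api/login.ts - illustrative only; NextAuth.js manages its own cookies,
// but the SameSite idea is the same.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") {
    res.status(405).end();
    return;
  }
  // SameSite=Lax tells the browser not to attach this cookie to cross-site
  // POST/PUT/DELETE requests, which blocks the classic CSRF case above.
  res.setHeader(
    "Set-Cookie",
    "session=opaque-session-id; HttpOnly; Secure; Path=/; SameSite=Lax"
  );
  res.status(200).json({ ok: true });
}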

JMeter load test for Application authenticated with Azure AD

I'm new to JMeter and trying to create a test case. The application is using OAuth 2.0 Azure Active Directory authentication. I followed the post https://blog.pnop.co.jp/jmeter-webapps-azuread-auth_en/ and was able to make the HTTP request to the app, but in return I'm getting the error below:
We can't sign you in
Your browser is currently set to block cookies. You need to allow cookies to use this service.
Cookies are small text files stored on your computer that tell us when you're signed in. To learn how to allow cookies, check the online help in your web browser.
I took care of CookieManager.save.cookies=true in user.properties but the cookies are still giving me a hard time, even though I can see cookies being populated in the request header that is sent.
If someone has cracked a similar scenario, that would be a great help.
Thanks
Have you added HTTP Cookie Manager to your test plan?
If yes and you're still experiencing problems, try:
Checking the jmeter.log file for any suspicious entries
Choosing a more "relaxed" implementation for the Cookie Manager, i.e. netscape
Adding the next line to the user.properties file:
CookieManager.check.cookies=false
A JMeter restart will be required to pick up the change.
More information: HTTP Cookie Manager Advanced Usage - A Guide

How to deny outside POST requests?

I would like Liferay to allow only logged-in users to make POST requests, and at the same time deny POST requests from other sources, such as Postman.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
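To make "cryptographically signed" concrete, here is a minimal TypeScript/Node sketch of a signed session cookie value; the secret, cookie format and function names are illustrative and not Liferay's actual mechanism:
import { createHmac, timingSafeEqual } from "crypto";

const SECRET = "server-side-secret"; // never sent to the client

// A signed cookie value of the form "<userId>.<hmac>"
function sign(userId: string): string {
  const mac = createHmac("sha256", SECRET).update(userId).digest("hex");
  return `${userId}.${mac}`;
}

// The server recomputes the HMAC and compares; a cookie forged without the
// secret fails this check, but a *copied* valid cookie still passes.
function verify(cookieValue: string): string | null {
  const [userId, mac] = cookieValue.split(".");
  if (!userId || !mac) return null;
  const expected = createHmac("sha256", SECRET).update(userId).digest("hex");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b) ? userId : null;
}

console.log(verify(sign("alice"))); // "alice"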
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
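To make that concrete, here is a hedged sketch of replaying such a request from a plain script instead of Postman; the URL, cookie value and form fields are placeholders:
// Node 18+ ships a global fetch. The cookie value was simply copied from the
// browser's developer tools, so the server cannot tell this apart from a
// request made by the "real" page.
async function replay(): Promise<void> {
  const res = await fetch("https://example.com/some/form-handler", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Cookie: "JSESSIONID=value-copied-from-devtools",
      Referer: "https://example.com/some/page", // even this can be spoofed
    },
    body: "field=value",
  });
  console.log(res.status);
}
replay();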
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form) or are you trying to prevent post requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role you can simply go into the portal's control panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions post your exact Liferay version.

Understanding Basic authentication with a 401

I'm a little confused about Basic authentication in regards to web browsers. I had thought that the web browser would only send an Authorization header after having received an HTTP 401 status in the previous response. However, it appears that Chrome sends the Authorization header with every request thereafter. It has the data that I entered once upon a time in response to a 401 from my website and sends it with every message (according to the developer tools that ship with Chrome and my webserver). Is that expected behavior? Is there some header I should use with my 401 to indicate that the Authorization data should not be cached? I'm currently using the WWW-Authenticate header.
This is the expected behavior of the browser as defined in RFC 2617 (Section 2):
A client SHOULD assume that all paths at or deeper than the depth of the last symbolic element in the path field of the Request-URI also are within the protection space specified by the Basic realm value of the current challenge. A client MAY preemptively send the corresponding Authorization header with requests for resources in that space without receipt of another challenge from the server. Similarly, when a client sends a request to a proxy, it may reuse a userid and password in the Proxy-Authorization header field without receiving another challenge from the proxy server. See section 4 for security considerations associated with Basic authentication.
To my knowledge, Basic HTTP authentication has no ability to perform a logout / re-authentication. This, along with the lack of security of HTTP Basic authentication, is why most websites now use forms and cookies for authentication.
From RFC 2617:
If a prior request has been authorized, the same credentials MAY be reused for all other requests within that protection space for a period of time determined by the authentication scheme, parameters, and/or user preference.
From my experience it is quite common to see browsers automatically sending the Basic credentials for subsequent requests. It prevents having to do an extra round trip for additional resources.
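For reference, here is a minimal TypeScript/Node sketch of the challenge/response flow discussed above; the realm, user name and password are placeholders:
import { createServer } from "http";

const server = createServer((req, res) => {
  const auth = req.headers.authorization;
  if (!auth || !auth.startsWith("Basic ")) {
    // First visit: challenge the browser. Once the user has entered
    // credentials, the browser keeps sending them preemptively on
    // subsequent requests in the same protection space.
    res.statusCode = 401;
    res.setHeader("WWW-Authenticate", 'Basic realm="example"');
    res.end();
    return;
  }
  const decoded = Buffer.from(auth.slice("Basic ".length), "base64").toString();
  const [user, password] = decoded.split(":");
  if (user === "alice" && password === "secret") {
    res.end(`hello ${user}`);
  } else {
    res.statusCode = 401;
    res.setHeader("WWW-Authenticate", 'Basic realm="example"');
    res.end();
  }
});

server.listen(3000);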

Is CSRF possible with PUT or DELETE methods?

Is CSRF possible with PUT or DELETE methods? Or does the use of PUT or DELETE prevent CSRF?
Great question!
In a perfect world, I can't think of a way to perform a CSRF attack.
You cannot make PUT or DELETE requests using HTML forms.
Images, script tags, CSS links, etc. all send GET requests to the server.
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets will block cross-domain requests.
So, in general, it shouldn't be possible to make a CSRF attack to a resource that supports PUT/DELETE verbs.
That said, the world isn't perfect. There may be several ways in which such an attack can be made possible:
Web frameworks such as Rails have support for a "pseudo method". If you put a hidden field called _method and set its value to PUT or DELETE, then submit a GET or POST request, it will override the HTTP verb. This is a way to support PUT or DELETE from browser forms. If you are using such a framework, you will have to protect yourself from CSRF using standard techniques; a sketch of such an override form follows this answer.
You may accidentally set up lax CORS response headers on your server. This would allow arbitrary websites to make PUT and DELETE requests.
At some point, HTML5 had planned to include support for PUT and DELETE in HTML forms, but that support was later removed. There is no guarantee that it won't be added back, and some browsers may actually support these verbs, which can work against you.
There may just be a bug in some browser plugin that could allow the attacker to make PUT/DELETE requests.
In short, I would recommend protecting your resources even if they only support PUT and DELETE methods.
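To illustrate the first point above, here is a browser-side TypeScript sketch of a cross-site form that abuses a Rails-style _method override; the target URL is a placeholder, and this only matters if the target framework honours the override and has no CSRF token check:
// Runs on the attacker's page: an ordinary cross-site POST form, but a
// framework that honours the "_method" field will treat it as DELETE.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://victim.example/items/42";
const override = document.createElement("input");
override.type = "hidden";
override.name = "_method";
override.value = "DELETE";
form.appendChild(override);
document.body.appendChild(form);
form.submit(); // the browser attaches the victim's cookies automatically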
Yes, CSRF is possible with the PUT and DELETE methods, but only with CORS enabled with an unrestrictive policy.
I disagree with Sripathi Krishnan's answer:
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets will block cross-domain requests
Nothing stops the browser from making a cross-domain request. The Same Origin Policy does not prevent a request from being made - all it does is prevent the request from being read by the browser.
Because PUT and DELETE are not simple requests (only HEAD, GET and POST qualify), the browser sends a CORS preflight request first; unless the server opts in via its CORS response headers, the actual request is never made. This is the mechanism that prevents a PUT or DELETE from being used, assuming a properly locked-down CORS policy of course (or none at all, which is secure by default).
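For illustration, a browser-side sketch of what happens when an attacker's page tries anyway; the URL is a placeholder:
// Because PUT is not a simple method, the browser first sends an OPTIONS
// preflight. Unless the victim server answers with permissive CORS headers,
// the actual PUT is never sent and the promise rejects.
fetch("https://victim.example/api/items/42", {
  method: "PUT",
  credentials: "include", // ask the browser to attach the victim's cookies
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ owner: "attacker" }),
}).catch((err) => console.log("blocked before reaching the API:", err));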
No. Relying on an HTTP verb is not a way to prevent a CSRF attack. It's all in how your site is created. You can use PUTs as POSTs and DELETEs as GETs - it doesn't really matter.
To prevent CSRF, take some of the steps outlined here:
Web sites have various CSRF countermeasures available:
Requiring a secret, user-specific token in all form submissions and side-effect URLs prevents CSRF; the attacker's site cannot put the right token in its submissions
Requiring the client to provide authentication data in the same HTTP request used to perform any operation with security implications (money transfer, etc.)
Limiting the lifetime of session cookies
Checking the HTTP Referer header and/or checking the HTTP Origin header[16]
Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls[17]
Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies[18]
Verifying that the request's headers contain X-Requested-With, as used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure[19] under a combination of browser plugins and redirects which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
In theory it should not be possible as there is no way to initiate a cross-domain PUT or DELETE request (except for CORS, but that needs a preflight request and thus the target site's cooperation). In practice I would not rely on that - many systems have been bitten by e.g. assuming that a CSRF file upload attack was not possible (it should not be, but certain browser bugs made it possible).
CSRF is indeed possible with PUT and DELETE depending on the configuration of your server.
The easiest way to think about CSRF is to think of having two tabs open in your browser, one open to your application with your user authenticated, and the other tab open to a malicious website.
If the malicious website makes a JavaScript request to your application, the browser will send the standard cookies with the request, thus allowing the malicious website to 'forge' the request using the already authenticated session. That website can make any type of request that it wants to, including GET, PUT, POST, DELETE, etc.
The standard way to defend against CSRF is to send something along with the request that the malicious website cannot know. This can be as simple as the contents of one of the cookies. While the request from the malicious site will have the cookies sent with it, it cannot actually access the cookies, because it is being served from a different domain and browser security prevents it from accessing the cookies for another domain.
Call the cookie content a 'token'. You can send the token along with requests, and on the server, make sure the 'token' has been correctly provided before proceeding with the request.
The next question is how you send that value with all the different requests, with DELETE specifically difficult since it is not designed to have any kind of payload. In my opinion, the cleanest way is to specify a request header with the token, something like x-security-token = token. That way, you can look at the headers of incoming requests and reject any that are missing the token.
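As a sketch of that check on the server side (plain Node/TypeScript here rather than any particular framework; the cookie name csrfToken is illustrative, and the header name is taken from the paragraph above):
import { createServer } from "http";

const server = createServer((req, res) => {
  // Double-submit check: the token lives in a cookie the attacker's page
  // cannot read, and must be echoed back in the x-security-token header.
  const cookies = req.headers.cookie ?? "";
  const cookieToken = /(?:^|;\s*)csrfToken=([^;]+)/.exec(cookies)?.[1];
  const headerToken = req.headers["x-security-token"];
  if (!cookieToken || Array.isArray(headerToken) || headerToken !== cookieToken) {
    res.statusCode = 403;
    res.end("missing or mismatched CSRF token");
    return;
  }
  res.end("ok"); // safe to proceed with the state-changing operation
});

server.listen(3000);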
In the past, standard ajax security restricted what could be done via ajax on the malicious server; however, nowadays the vulnerability depends on how you have your server set up with regards to access-control configurations. Some people open up their server to make it easier to make cross-domain calls or for users to make their own RESTful clients or the like, but that also makes it easier for a malicious site to take advantage unless CSRF prevention methods like the ones above are put in place.
