CouchDB Access Denied Redirect - security

When I punch in the URL for a secured database it displays the following message on the page:
{"error":"unauthorized","reason":"You are not authorized to access this db."}
Although this message certainly gets the point across, I would prefer to redirect the user to a different page, such as a login page. I've tried changing the authentication_redirect option in the CouchDB config, but with no success. How would I go about doing this?

The authentication redirect only works if the client explicitly accepts the text/html content type (i.e. sends an Accept: text/html header):
GET /db HTTP/1.1
Accept: text/html
Host: localhost:5984
In this case, CouchDB will send an HTTP 302 response instead of HTTP 401, which redirects to the authentication form specified by the authentication_redirect configuration option:
HTTP/1.1 302 Moved Temporarily
Cache-Control: must-revalidate
Content-Length: 78
Content-Type: text/plain; charset=utf-8
Date: Tue, 24 Sep 2013 01:32:40 GMT
Location: http://localhost:5984/_utils/session.html?return=%2Fdb&reason=You%20are%20not%20authorized%20to%20access%20this%20db.
Server: CouchDB/1.4.0 (Erlang OTP/R16B01)
{"error":"unauthorized","reason":"You are not authorized to access this db."}
Otherwise, CouchDB has no way to know whether the request was sent by a human in a browser or by an application library. In the latter case, redirecting to an HTML form instead of returning an HTTP error would not be a suitable response.
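For reference, a minimal sketch of the relevant setting in local.ini; the section name matches CouchDB 1.x, and /login.html is a placeholder for your own page:
[couch_httpd_auth]
; Where unauthorized text/html requests get redirected
; (the default is Futon's session form).
authentication_redirect = /login.html
You can verify the behaviour from the command line by asking for HTML explicitly:
curl -i -H 'Accept: text/html' http://localhost:5984/db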

Where can I see the Server HTTP header?

One of our applications is tested by Whitehat Sentinel, and one of their findings was that in some cases our Server response header is set to:
Microsoft-HTTPAPI/2.0
I have tried accessing the URL they identified with Postman and Fiddler, but I do not see the Server header. I have also tried an online web sniffer, http://web-sniffer.net/.
Can someone advise how I can see this header?
In the Chrome Network tab I see these headers:
HTTP/1.1 404 Not Found
Content-Type: text/html
Strict-Transport-Security: max-age=300
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Date: Thu, 13 Jul 2017 13:59:15 GMT
Content-Length: 1245
No Server header.
The URL reported by Whitehat was not working for me, so I changed the target URL to domain.com/%%; this caused the request to be handled by http.sys, which returned the Server header.
That is not the name of the header. That is the value found in the Server header when an application serves files over HTTP via http.sys, which is the kernel-mode HTTP server built into Windows.
For example, when serving a file via a C# HttpListener, I get this header:
Server: Microsoft-HTTPAPI/2.0
This header can be disabled by setting the following registry value:
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters
Name: DisableServerHeader
Type: DWORD
Value: 1
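For example, from an elevated command prompt (a sketch of the same setting; a reboot, or at least a restart of the HTTP service, is typically needed for it to take effect):
reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v DisableServerHeader /t REG_DWORD /d 1 /f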

Why does the Node.js Postgres Wiki example insert multiple records per HTTP request?

We are using node-postgres (pg on npm) for our app but were having issues, so we decided to go back to the example on the Wiki:
https://github.com/brianc/node-postgres/wiki/Example
When we run the example, each HTTP request inserts two records into the Postgres "visit" table. Is this the desired behaviour?
We published the example code to Heroku: https://node-postgres-example.herokuapp.com
(Note: Visit using Google Chrome)
Note: we made three changes to the server.js code from the Wiki to make it run on Heroku; the code is on GitHub: https://github.com/dwyl/postgres-connection-pool-test
(The changes to server.js are purely to (1) create the visit table if it does not already exist, (2) get the Postgres connection string from process.env.DATABASE_URL, and (3) listen on process.env.PORT on Heroku. All the rest of the code is as per the Wiki example.)
Your client (browser) seems to make two requests. If you use curl from the command line, the example works as advertised and returns a continuous visit counter:
→ curl -i https://node-postgres-example.herokuapp.com/
HTTP/1.1 200 OK
Server: Cowboy
Connection: keep-alive
Content-Type: text/plain
Date: Sun, 13 Mar 2016 14:19:42 GMT
Transfer-Encoding: chunked
Via: 1.1 vegur
You are visitor number 40
→ curl -i https://node-postgres-example.herokuapp.com/
HTTP/1.1 200 OK
Server: Cowboy
Connection: keep-alive
Content-Type: text/plain
Date: Sun, 13 Mar 2016 14:20:00 GMT
Transfer-Encoding: chunked
Via: 1.1 vegur
You are visitor number 41
The second request is almost certainly the browser requesting /favicon.ico, which is an anomaly in the web stack in that it's a request the browser makes implicitly, without any explicit reference in the containing HTML document. If you handle the favicon request separately, perhaps using express-favicon or a check like the sketch below, you will only log one visit per page load.
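A minimal sketch of that check with Node's bare http module (the visit-counting body from the Wiki example is elided):
var http = require('http');

http.createServer(function (req, res) {
  // Browsers implicitly request /favicon.ico on every page load;
  // answer it here so it never reaches the visit-counting code.
  if (req.url === '/favicon.ico') {
    res.writeHead(204); // No Content
    return res.end();
  }
  // ... the pg visit-counting logic from the Wiki example goes here ...
  res.end('You are visitor number N');
}).listen(process.env.PORT || 3000);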

How to connect node.js to the QuickBooks v3 REST API

I am trying to connect to Intuit's v3 REST API using node.js. I am using SuperAgent and superagent-oauth to make the requests. I generated the access tokens using Intuit's OAuth playground, but I keep getting "ApplicationAuthenticationFailed; errorCode=003200; statusCode=401".
This is what I am using.
var OAuth = require('oauth'),
    request = require('superagent');
require('superagent-oauth')(request);
var oauth = new OAuth.OAuth('', '', consumerKey, consumerSecret, '1.0A', null, 'HMAC-SHA1');
request.get("https://quickbooks.api.intuit.com/v3/company/672198300/customer/102")
  .set('Content-Type', 'text/plain')
  .accept('json')
  .sign(oauth, accessToken, accessTokenSecret)
  .end(function (err, res) {
    console.log(res.text);
  });
and here is the response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<IntuitResponse time="2014-06-14T18:33:49.228-07:00" xmlns="http://schema.intuit.com/finance/v3">
  <Fault type="AUTHENTICATION">
    <Error code="3200">
      <Message>message=ApplicationAuthenticationFailed; errorCode=003200; statusCode=401</Message>
    </Error>
  </Fault>
</IntuitResponse>
Can anyone shed any light on what is happening?
You could use the node.js client library
Like most other clients, that would save you from manually building http requests. Just provide the application credentials and the individual user credentials and you can simply call methods on a Javascript object. All of the REST endpoints have corresponding methods on the QuickBooks object, which follows node.js convention and takes an optional callback as the last argument.
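Assuming the library in question is node-quickbooks, usage looks roughly like this (a sketch; the constructor arguments and method name follow that package's README, so double-check them):
var QuickBooks = require('node-quickbooks');

// Application keys plus the user's access token/secret,
// all obtained through the usual OAuth 1.0a dance.
var qbo = new QuickBooks(consumerKey, consumerSecret,
                         accessToken, accessTokenSecret,
                         '672198300', // realm (company) id from the question
                         false,       // sandbox: false = production
                         true);       // turn on request debugging

// Fetch the same customer as the hand-rolled request above.
qbo.getCustomer('102', function (err, customer) {
  if (err) return console.error(err);
  console.log(customer);
});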
Solved!
I used Postman to create the request, and it worked. Then I checked the OAuth header Postman had generated against the one I was generating with node (I used RequestBin to see the headers of my request). I discovered that the only real difference was that I was using "1.0A" as the version. Changing that to "1.0" worked!
var oauth = new OAuth.OAuth('', '', consumerKey, consumerSecret, '1.0', null, 'HMAC-SHA1');
I do not have anything for you in node.js, but I can provide the raw request and response for the calls. Compare your raw requests against these. The signature should be double-encoded.
Get request token call:
GET https://oauth.intuit.com/oauth/v1/get_request_token?oauth_callback=oob&oauth_nonce=34562646-ab97-46e1-9aa7-f814d83ef9d1&oauth_consumer_key=qyprd7I5WvVgWDFnPoiBh1ejZn&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1392306961&oauth_version=1.0&oauth_signature=0EtvSnzsuumeyib2fiEcnSyu8%3D HTTP/1.1
Host: oauth.intuit.com
HTTP/1.1 200 OK
Date: Thu, 13 Feb 2014 15:56:03 GMT
Server: Apache
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Length: 150
Connection: close
Content-Type: text/plain
oauth_token_secret=dXhHHMS1EfdrQ32UabOMscIRWt5bLJNX3ZKljjBc&oauth_callback_confirmed=true&oauth_token=qyprdbwXdWrAt0xM2NgkLlJ79yCp4I2SmDg7tahDBPjA6Wti
Get access token call:
GET https://oauth.intuit.com/oauth/v1/get_access_token?oauth_verifier=b4skra3&oauth_token=qyprde5fvI7WNOQjTKYLDzTVxJ2dLPTgQEQSPlDVGxEy9wZX&oauth_nonce=f20a5a4b-3635-40a8-92cf-697dfdb07b9d&oauth_consumer_key=qyprd7I5WvVgJZUvWDFnPoiBh1ejZn&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1392397399&oauth_version=1.0&oauth_signature=gEVHttlM8IBAAkmi1dSNJgkKGsI%3D HTTP/1.1
Host: oauth.intuit.com
HTTP/1.1 200 OK
Date: Fri, 14 Feb 2014 17:03:20 GMT
Server: Apache
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Length: 120
Connection: close
Content-Type: text/plain
oauth_token_secret=474gtp6xsFzNJ1EhrrjiHrTH96xXieaRLinjPomA&oauth_token=qyprdNIpWn2oYPupMpeH8Byf9Bhun5rPpIZZtTbNsPyFtbT4

Self-Coded Proxy cannot retrieve image from Wikipedia

I'm trying to write a small proxy server in C#. It is working nicely for many web pages I tested (including google.com and microsoft.com). For testing, I started my proxy server and configured IE 10 on Windows 8 to use it.
But when I try wikipedia.org, it loads only the main page and no pictures. I tried to load a single picture (http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png). When I use IE without the proxy it works, but with the proxy I get a 404 response.
This is the GET request which IE issues (my proxy just forwards it):
GET http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: de-CH
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: upload.wikimedia.org
DNT: 1
Proxy-Connection: Keep-Alive
IMHO it looks correct. This is the response I get (some HTML tags omitted):
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=UTF-8
X-Varnish: 1427845074 1427806476, 274786836, 3671934588
Via: 1.1 varnish, 1.1 varnish, 1.1 varnish
Content-Length: 262
Accept-Ranges: bytes
Date: Mon, 01 Jul 2013 21:30:54 GMT
Age: 28
Connection: keep-alive
X-Cache: cp1063 hit (1), cp3004 miss (0), cp3003 frontend miss (0)
Access-Control-Allow-Origin: *
...404 Not Found\n The resource could not be found.\nRegexp failed to match URI: "http:/upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png"
The strange part is here:
Regexp failed to match URI: "http:/upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png"
-> the URL starts with http:/ (a single slash instead of http://)
In the code I connect to upload.wikimedia.org like this:
// connect to upload.wikimedia.org
ServerSocket.Connect(RemoteHost, 80);
byte[] SendBuffer = Request.ToArray();
// send the clients request to the server
ServerSocket.Send(SendBuffer);
I have no idea why it doesn't work. Any help is appreciated. My full code is located on Github: Proxy_C_Sharp
I just found out why.
According to the HTTP/1.1 specification (http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5) in Chapter 5.2.1:
"To allow for transition to absoluteURIs in all requests in future versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI form in requests, even though HTTP/1.1 clients will only generate them in requests to proxies."
I tried it out with a small tool. If I make a request like this:
GET /wikipedia/commons/6/63/Wikipedia-logo.png HTTP/1.1
Host: upload.wikimedia.org
It works. So the reason is that Wikipedia's server does not conform to the standard: it should accept absolute URIs. It works without the proxy because browsers only generate absolute URIs in requests to proxies; with no proxy configured, they send a relative one. A practical fix in the proxy, then, is to rewrite the request line into the relative form before forwarding it, as sketched below.
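The proxy in question is C#, but the rewrite is plain string surgery, so here is the idea sketched in JavaScript (the function name is illustrative):
// Turn "GET http://host/path HTTP/1.1" into "GET /path HTTP/1.1";
// the Host header already carries the authority, so origin servers
// that only match relative URIs (like the Varnish rules above) work.
function toOriginForm(requestLine) {
  var parts = requestLine.split(' '); // [method, uri, version]
  var match = parts[1].match(/^https?:\/\/[^\/]+(\/.*)?$/);
  if (match) parts[1] = match[1] || '/';
  return parts.join(' ');
}

console.log(toOriginForm(
  'GET http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png HTTP/1.1'));
// -> GET /wikipedia/commons/6/63/Wikipedia-logo.png HTTP/1.1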

Browser Cache Control, Dynamic Content

Problem: I can't seem to get Firefox to cache images sent from a dynamic server.
Setup: a static Apache server with a reverse proxy to a dynamic server (mod_perl2) at the backend.
Here is the request sent to the dynamic server, where the cookie is used to validate access to the image:
Request Headers
Host: <OBSCURED>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: <OBSCURED>
Cookie: pz_cred=4KCNr0RM15%2FJCOt%2BEa6%2BL62z%2Fxvbp2xNQHY5pJw5d6Q
Pragma: no-cache
Cache-Control: no-cache
The dynamic server streams the image back to the server, and provides the following response:
Response Headers
Date: Tue, 24 Nov 2009 04:28:07 GMT
Server: Apache/2.2.11 (Ubuntu) mod_apreq2-20051231/2.6.0 mod_perl/2.0.4 Perl/v5.10.0
Cache-Control: public, max-age=31536000
Content-Length: 25496
Content-Type: image/jpeg
Via: 1.1 127.0.1.1:8081
Keep-Alive: timeout=15, max=75
Connection: Keep-Alive
So far, so good (me thinks). However, on reload of the page, the image does not appear cached, and a request is again sent:
Request Headers
Host: <OBSCURED>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) Gecko/2009102815 Ubuntu/9.04 (jaunty) Firefox/3.0.15
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: <OBSCURED>
Cookie: pz_cred=4KCNr0RM15%2FJCOt%2BEa6%2BL62z%2Fxvbp2xNQHY5pJw5d6Q
Cache-Control: max-age=0
That request shouldn't happen, as the browser should have cached the image. As it is, a 200 response is received, the same as the first, and the image appears to be re-fetched (although the browser does appear to be using the cached images).
The problem appears to be hinted at by the Cache-Control: max-age=0 in the reload request header, above.
Does anyone know why this is happening? Perhaps it is the Via header in the response that is causing the problem?
The original request has
Cache-Control: no-cache
which tells all the intermediate HTTP caches (including Firefox's) that you don't want to use a cached response, you want to get the response from the origin web server itself.
The response says:
Cache-Control: public, max-age=31536000
which tells everyone that, as far as the origin server is concerned, the response may be cached. The server seems to be configured to allow the image to be cached. HTTP 1.1 (section 14.21) says:
Note: if a response includes a Cache-Control field with the max-age directive (see section 14.9.3), that directive overrides the Expires field.
Your second request says:
Cache-Control: max-age=0
which tells all the intermediate HTTP caches that you won't take any cached response older than 0 seconds.
One thing to watch out for: if you hit the Reload button in Firefox, you are asking to reload from the origin web server. To test the caching of the image, navigate away from the page and back, or open it up in a new tab. Not sure why you saw no-cache the first time and max-age=0 the second though.
BTW, I like the Firebug plug-in for Firefox. You can take a look at the request and response headers with it, and all sorts of other good stuff.
My previous answer was only partially correct.
The problem is the way Firefox 3 handles reload events. Apparently, it almost always requests content again from the origin server. Thus the Cache-Control: max-age=0 request header.
Firefox does use cached images to render a page on reload, but it still makes all the requests to update them "in the background". It then replaces them as they come in.
Therefore, the page renders fast, YSlow reports cached content. But the server is still getting nailed.
The resolution is to interrogate the incoming headers in the dynamic server script and determine whether an If-Modified-Since header is provided. If it is, and the content has not changed, an HTTP_NOT_MODIFIED (304) response is returned.
This is not optimal -- I'd rather Firefox not make the requests at all -- but it cuts the page load time in half and greatly reduces bandwidth. Given the way Firefox works on reload, this appears to be the best solution.
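The answerer's backend is mod_perl2, but the conditional-GET logic is the same anywhere; here it is sketched in Node.js (the path and port are illustrative):
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var path = 'img/125.jpg'; // illustrative image path
  // HTTP dates have one-second resolution, so drop milliseconds
  // before comparing against If-Modified-Since.
  var mtime = Math.floor(fs.statSync(path).mtime.getTime() / 1000);
  var since = req.headers['if-modified-since'];

  // The client's copy is still current: send 304 and no body.
  if (since && mtime <= Math.floor(new Date(since).getTime() / 1000)) {
    res.writeHead(304);
    return res.end();
  }

  res.writeHead(200, {
    'Content-Type': 'image/jpeg',
    'Last-Modified': new Date(mtime * 1000).toUTCString(),
    'Cache-Control': 'public, max-age=31536000'
  });
  fs.createReadStream(path).pipe(res);
}).listen(8081);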
Other Comments: Jim Ferran's point about navigating away from the page and returning has merit -- the cache is always used, and no requests are outgoing (+1 to Jim). Also, content that is dynamically added (e.g. via AJAX calls after the initial load) appears to use the cache as well.
Hope this helps someone besides me :)
Looks like I solved it:
Removed the proxy Via header
Added a Last-Modified header
Added a far-future Expires date
Firebug still shows 200 responses from the origin server, however, YSlow recognizes the images as cached. According to YSlow, total image download size when fresh is greater than 500K; with the cache primed, it shows 0K download size.
Here is the response header from the Origin server which does the trick:
Date: Tue, 24 Nov 2009 08:54:24 GMT
Server: Apache/2.2.11 (Ubuntu) mod_apreq2-20051231/2.6.0 mod_perl/2.0.4 Perl/v5.10.0
Last-Modified: Sun, 22 Nov 2009 07:28:25 GMT
Expires: Tue, 30 Nov 2010 19:00:25 GMT
Content-Length: 10883
Content-Type: image/jpeg
Keep-Alive: timeout=15, max=89
Connection: Keep-Alive
Because of the way I'm requesting the images, it really should not matter that these dates are static; my app knows the last-modified time before requesting the image and appends it to the request URL on the client side to create a unique URL for each image version, e.g. http://myserver.com/img/125.jpg?20091122 (the info comes from an AJAX JSON feed). I could, for example, make the last-modified date 01 Jan 2000 and the Expires date sometime in the year 2050.
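A minimal sketch of that client-side scheme (the feed shape and element id are assumptions):
// Given a feed entry like { id: 125, lastMod: '20091122' }, build a
// URL that changes whenever the image does, so a far-future Expires
// header can never pin a stale version in the cache.
function imageUrl(entry) {
  return 'http://myserver.com/img/' + entry.id + '.jpg?' + entry.lastMod;
}

document.getElementById('photo').src = imageUrl({ id: 125, lastMod: '20091122' });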
If YSlow is correct -- and performance testing implies it is -- then Firebug should really report these as local cache hits instead of 200 responses.
