Cloudant couchapp fails suddenly with CSP sandbox error - content-security-policy

I have a couchapp which has been hosted on Cloudant's free plan for years. A few days ago it started failing: the HTML, CSS and image files load, but none of the JavaScript does. The browser console error is:
Blocked script execution in 'https://b482ecaa-1ac2-4933-bec9-ecade207eea0-bluemix.cloudant.com/wxd/_design/app/index.html' because the document's frame is sandboxed and the 'allow-scripts' permission is not set.
The index.html response headers include Content-Security-Policy: sandbox.
I have a replica of this database on my local LAN which does not have this problem, and its response headers do not include this header.
I haven't changed the couchapp or any configuration at all, so Cloudant must have changed some configuration. I notice that Cloudant was updated to 2.96 on 10/14, which coincides with the timing, but I don't see anything in the release notes which mentions sandbox or Content-Security-Policy.
The Couchdb docs mention configuration variables related to CSP, but I can't find any way to change these configuration settings in the Cloudant dashboard, nor can I find any mention of it in the Cloudant docs.
Is this a configuration error on the part of Cloudant, and if so, are they likely to reverse it? If not, is there a way to change this configuration for my site or any other workaround?
UPDATE: I had an idea that perhaps I could override this by including a <meta http-equiv="Content-Security-Policy" ...> tag in the document, but according to the CSP documentation, the sandbox directive is not allowed in a meta tag.

Unfortunately, CouchApps will no longer run on Cloudant. As explained in this blog post, a new Content-Security-Policy: sandbox header has been added to all attachment fetches. This prevents JavaScript execution, and therefore JavaScript-based CouchApps will cease to function.
The reason for the change is to mitigate this CVE.
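To make the failure mode concrete: a bare sandbox directive with no allow-scripts token is exactly what blocks script execution. The helper below is purely illustrative (it is not part of any Cloudant or browser API); it mimics the check a browser performs on the header value:

```javascript
// Illustrative helper: does a CSP header value impose a script-blocking sandbox?
function cspBlocksScripts(cspHeader) {
  if (!cspHeader) return false;
  // CSP directives are separated by semicolons.
  const directives = cspHeader.split(';').map(d => d.trim().toLowerCase());
  for (const d of directives) {
    const tokens = d.split(/\s+/);
    if (tokens[0] === 'sandbox') {
      // A sandbox directive without the allow-scripts token blocks scripts.
      return !tokens.includes('allow-scripts');
    }
  }
  return false;
}

console.log(cspBlocksScripts('sandbox'));              // true - scripts blocked
console.log(cspBlocksScripts('sandbox allow-scripts')); // false
console.log(cspBlocksScripts("default-src 'self'"));    // false - no sandbox
```

This is why the meta-tag workaround mentioned in the question cannot help: the header-delivered sandbox applies regardless of anything the document itself declares.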

Related

Is it possible to update parts/directives in the "content-security-policy" header using DeclarativeNetRequest API?

I am in the process of migrating from Manifest V2 to V3, from Web Request API to Declarative Net Request API. Using Web Request, I modify the "content-security-policy" header by adding a domain into the list of various directives (default-src, frame-src, etc). I tried using the "append" operation in the rule action. Is it possible to target a directive? What if the directive does not exist? Does append just add the supplied string to the end? With Web Request, I was able to examine each directive and update each accordingly, before returning the new value. This allowed me to inject a script that is needed into each frame.
Instead, would it be possible to continue to use the Web Request API with V3? In my setup, I have my Chrome extension "Published - unlisted". I use the force-install option when deploying the extension to our internal users, and the only reason I have it unlisted rather than private is so that users who have the extension get updates whenever a new version is released. Would it be possible to have users updated without having the extension listed? Perhaps by hosting the extension on my own server?
Please advise on what can be done to keep the ability to update the response header, specifically the "content-security-policy" header, the way I have done before, and on whether I can continue to use the Web Request API going forward (in V3). On the Chrome developer website, there's a mention of continuing to use Web Request if force install is used, and only if it's "deployed to a given domain or to trusted testers", but I'm not sure what that actually means. What would I need to do to meet the criteria?
I tried using the append operation in the rule action via the Declarative Net Request API, but it's not working as expected. I don't see the security policy being updated when I inspect the response headers in dev tools. I also get errors stating that many scripts, images, etc. violate the security policy, even on websites that did not have one to begin with (my extension targets any website).
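For reference, a declarativeNetRequest rule using the append operation would look roughly like the sketch below (the rule id, value and resource types are hypothetical). One likely pitfall, as I read the docs: for response headers, append adds a second header line, and multiple Content-Security-Policy headers combine restrictively, so appending can only tighten a policy, never relax it. That may explain why pages with no prior CSP start reporting violations, and it is unlike the per-directive rewriting that was possible with the blocking Web Request API.

```json
{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "modifyHeaders",
    "responseHeaders": [
      {
        "header": "content-security-policy",
        "operation": "append",
        "value": "script-src https://my-extension-host.example.com"
      }
    ]
  },
  "condition": {
    "urlFilter": "*",
    "resourceTypes": ["main_frame", "sub_frame"]
  }
}
```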

Need to disable loading resources on login page of XPages application

We are in the midst of troubleshooting intermittent 403 errors in our application. One of the issues that came up was that we are loading resources on our login page. The resources intermittently produce 401 errors in our Integration environment. None of these are needed until after the user has logged into the application.
All of these resources are stored in the NSF, none use a CDN. These resources are loaded via a Theme on every page.
Naturally, I thought the solution was to disable the theme of the login XPage. I went about setting disableTheme="true" on the <xp:view> tag. For some reason this did not work and the resources are still loaded on the page.
Can anyone suggest how to make sure the resources are NOT loaded on this particular page?
It seems that resources contained within the WebContent/ path of an NSF are still protected and require login when that NSF's ACL is set to No Access for Anonymous. A few ways to get around this are:
1. host the resource files from the server's /domino/... path (html, js, lib, icons)
2. load those resources from a CDN
3. use an alternate theme (mixed results; see the comment thread on the OP's question)
4. compute the rendered property of each resource tag so that it renders on every page except the login page
As Per Henrik Lausten answered on another post, the implementation of solution #4 would effectively be:
<resource rendered="#{javascript:view.getPageName() != '/login.xsp'}">
    <content-type>application/x-javascript</content-type>
    <href>js/jquery-1.11.1.min.js</href>
</resource>
For reference: there's a 5th method that involves setting $PublicAccess to "1" on the resources in the WebContent folder to mark them as publicly available.
John Dalsgaard has a blog post about it with example code.

Domino, CORS and the OPTIONS request

I'm working on an AngularJS app that uses Domino as a backend. Since I want more customization options than Domino Access Services (DAS) gives me, my next choice was the REST Service from the Extension Library.
The app is running on a separate domain from Domino, so I need to add CORS headers to make that scenario work. With CORS, the browser (for some requests) first makes a preflight HTTP OPTIONS request to the server to check what methods are allowed (more on CORS here: http://www.html5rocks.com/en/tutorials/cors/).
The problem I now run into is that Domino throws a Method Not Allowed error (response code 405) on that OPTIONS request. I already added it to the list of allowed methods in my internet site document (although I'm not sure if the REST service will honor that at all). The request comes through fine with DAS.
Looking at the source code of the RestDocumentJsonService in the Extension Library it seems that the OPTIONS method isn't supported at all.
Any thoughts on how to make this work? Or for a workaround? I know that I can write my own servlet or install a proxy in front of Domino, but I don't want to go that route (yet ;-)
If you are trying to use authenticated CORS, you will need at least these four headers:
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: X-Requested-With, Origin, Accept, Accept-Version, Content-Type
Access-Control-Allow-Methods: OPTIONS, GET, PUT, POST, DELETE, PATCH
Access-Control-Allow-Origin: http://yourOtherDomain.com
Unfortunately, you can only add 3 headers through the Web Site documents.
You cannot add anything through a Phase Listener because the ExtLib Rest Services do not go through the XSP Phases
You can use a proxy such as nginx; in my case I used IBM HTTP Server (IHS):
http://xomino.com/2014/04/20/adding-custom-http-headers-to-domino-r9-using-ibm-http-server-ihs/
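For the nginx variant, a reverse-proxy sketch along these lines should work (host names and the origin are placeholders; treat this as an illustration rather than a drop-in config). It also answers the preflight locally, so Domino's 405 on OPTIONS never reaches the browser:

```nginx
# Hypothetical nginx front end for a Domino server, adding the CORS
# headers that the Web Site document cannot supply.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://domino-server:80;

        add_header Access-Control-Allow-Origin      "http://yourOtherDomain.com" always;
        add_header Access-Control-Allow-Credentials "true" always;
        add_header Access-Control-Allow-Methods     "OPTIONS, GET, PUT, POST, DELETE, PATCH" always;
        add_header Access-Control-Allow-Headers     "X-Requested-With, Origin, Accept, Content-Type" always;

        # Short-circuit the preflight so it never hits Domino.
        if ($request_method = OPTIONS) {
            return 204;
        }
    }
}
```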
Or you can just roll your own REST service and add whatever headers you want
Mark, just a quick comment. I am not sure if this would work for you.
But what I do in a current project is to put the Angular app in the WebContent folder of the NSF. This serves several purposes - one of them being ease of deployment with the right version of the backend code in the same NSF. I have set the database up for source control and edit the Angular part directly in the on-disk project of the NSF and just sync them when I need to run it. As a side effect this setup will also solve any CORS issues as client side code is launched from the same domain as my REST service is called from ;-)
/John

Emitting node.js views and scripts as snippets

I have built a node.js app for which I would like to provide "snippets" to be included in external web applications. That means I must create some JavaScript files that external apps can include and call, which load a node.js view and its scripts/CSS.
Does node.js provide a way to do this natively, or do I have to create the script that embeds the view and the related client libraries myself?
Enable cross-origin resource sharing:
Cross-Origin Resource Sharing (CORS) is a specification that enables truly open access across domain-boundaries. If you serve public content, please consider using CORS to open it up for universal JavaScript/browser access.
Must read: http://enable-cors.org/#how-expressJS
Important stuff:
Access-Control-Allow-Origin
Access-Control-Allow-Headers
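A minimal sketch of what that looks like as Express-style middleware (the app wiring is hypothetical; the function itself only uses the plain res.setHeader/writeHead/end interface, so it works with node's built-in http server too):

```javascript
// Minimal CORS middleware sketch. The wildcard origin and header list are
// illustrative; lock them down for anything non-public.
function allowCrossDomain(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  if (req.method === 'OPTIONS') {
    // Answer the preflight directly; no other route needs to see it.
    res.writeHead(204);
    res.end();
    return;
  }
  next();
}

// Hypothetical usage with Express:
//   app.use(allowCrossDomain);
```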
Sounds like components might be your answer:
https://github.com/component/component
http://tjholowaychuk.com/post/27984551477/components
I hope I understand your question: you want to display an HTML-like snippet on a different site.
One way of doing it is to provide an API, but it will probably be a JSON API, and the other site will have to display it on its own (somebody already noted CORS is needed for this). You could just serve JSON with HTML in it, though you need to make sure the other app doesn't escape it.
You could have your server serve an image (like Travis CI does), but then the other site will show it as an image (copy-pasting the text won't be possible).
You could use an iframe, serving HTML to the other site.
There's the possibility you meant something totally different, like reusing your server and client code; in that case I recommend http://browserify.org, or the already mentioned component.js.

Cross domain DOM/JS pain

I have what I thought was a simple(ish) problem. I'm writing a SCORM package for an external learning resource. It's basically an iframe in a HTML page that the clients install in their LMS (Learning Management System).
The external resource needs to be able to tell the LMS that the user has completed the content. Because the LMS and the resource are on different domains, there's obviously a JS security wall stopping me from communicating directly. So when the user reaches the end of the content, the external resource sets its URL to have an anchor, so the URL goes from http://url to http://url#complete.
Now I'm trying to get the location from the iframe and I'm failing miserably. I've tried iframe.location and iframe.window.location (.window is nothing too). I can't seem to get a handle on the right thing.
iframe.src shows me the original source URL, but it doesn't change when the iframe updates to the #complete version.
Any tips? Any alternatives? Although I control both pages, unless there's a javascript method to set cross-domain communication, I can't set the http header to allow it because I don't control the LMS server - it just pushes out my static page.
Edit: As an alternative, I'm considering storing the completed event in the session (a cookie would work too, I guess) at the resource end and then making another page that outputs that as a JSONP statement. I think it would be quite easy to do but it's a lot more fuss for something that should be simple. I literally need to flip one switch on the LMS code from the external site.
Use easyXDM; it should make this fairly easy.
Using it you can do cross-domain RPC with no server-side interaction. The readme at github is pretty good.
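easyXDM is a wrapper over the browser's cross-document messaging; in browsers that support it, the underlying postMessage API alone covers this case. A hedged browser-side sketch of that raw mechanism (not the easyXDM API; the origins are placeholders you would replace with your real domains):

```javascript
// In the external resource page (inside the iframe), when the user completes:
parent.postMessage('complete', 'https://lms.example.com');

// In the LMS wrapper page, listen for the signal:
window.addEventListener('message', function (event) {
  // Always verify the sender's origin before trusting the message.
  if (event.origin !== 'https://resource.example.com') return;
  if (event.data === 'complete') {
    // flip the SCORM completion switch here
  }
});
```

This is exactly the "flip one switch" channel the question asks for, without URL-fragment polling; easyXDM adds fallbacks for old browsers that lack postMessage.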
