In my XPages application I want to keep the connection alive when sending an XMLHttpRequest from one XPage to another. Therefore, I set the "Connection" header to "keep-alive".
On the client-side I have:
xhr=new XMLHttpRequest();
xhr.setRequestHeader("Connection","keep-alive");
and on the server-side (in the afterRenderResponse event of the responding XPage) I use:
response=facesContext.getExternalContext().getResponse();
response.setHeader("Connection","keep-alive");
When inspecting the request and the response (with Firebug), it turns out that the request headers contain "Connection = keep-alive" (as expected), but the response headers contain "Connection = close".
Does anybody know how to override this header?
SOLUTION: In the xsp.properties file, set xsp.compress.mode=gzip. This is equivalent to setting Compression = "GZip, set content length" under Xsp Properties / Page Generation.
EXPLANATION: My application used the server default for compression, which is gzip-nolength. When the content length is not set, the XPages response (XspHttpServletResponse) seems to always set the "Connection" header to "close". After setting the content length, the "Connection" header is no longer present and the connection is kept alive by default.
Related
I am dealing with a weird error that is reproducible only in production over an https connection.
First, I am trying to download a *.csv file; my backing bean code looks as follows:
ExternalContext externalContext = facesContext.getExternalContext();
try (OutputStream outputStream = externalContext.getResponseOutputStream()) {
    String exportFileName = exportHandler.getExportFileName();
    String contentType = externalContext.getMimeType(exportFileName);
    externalContext.responseReset();
    externalContext.setResponseCharacterEncoding(CHARSET);
    externalContext.setResponseContentType(contentType);
    externalContext.setResponseHeader("Content-disposition", "attachment; filename=\"" + exportFileName + "\"");
    int length = exportHandler.writeCsvTo(outputStream); // a BufferedWriter writes the lines and the bytes are summed up
    externalContext.setResponseContentLength(length);
    outputStream.flush();
}
facesContext.responseComplete();
Everything works fine over an http connection, or on localhost (even https on localhost works), but in production over an https connection, different web browsers give different error messages.
Chrome gives the following message: "The webpage at ... might be temporarily down or it may have moved permanently to a new web address. ERR_HTTP2_PROTOCOL_ERROR"
Under Firefox it starts to load and then nothing happens; it takes forever.
Under IE: "The connection to the website was reset. Error Code: INET_E_DOWNLOAD_FAILURE"
My take is that it has to do with the response content length, as I am not sure I still need to set it after writing to the OutputStream.
The response header looks like this:
Connection: keep-alive
Content-disposition: attachment; filename="test.csv"
Content-Type: text/csv; charset=ISO-8859-15
Date: Mon, 04 May 2020 10:16:41 GMT
Transfer-Encoding: chunked
I am not sure why my response ends up with a chunked Transfer-Encoding; as far as I know, that can only happen when the content length is not set. Does anybody have any idea? Am I missing some header constraints in my response, like setting Cache-Control?
The difference might actually be between HTTP/1.1 and HTTP/2, and not directly between https and http. This might be occurring because https allows negotiating an upgrade to HTTP/2 during the handshake, while over plain http no such negotiation takes place.
With that in mind, the header parsing implementations in the browser might be different. If I had to guess, this is about the content-disposition filename.
In HTTP/2, most browsers will fail to parse the Content-Disposition filename if it contains non-US-ASCII characters. HTTP/1.1 implementations are more lenient about that.
You can still have UTF-8 filenames with the Content-Disposition filename* attribute.
Currently the most supported Content-Disposition format is:
attachment; filename="<us-ascii filename>"; filename*=utf-8''<url-encoded-utf-8-filename>
e.g.
attachment; filename="EURO rates"; filename*=utf-8''%e2%82%ac%20rates
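As a sketch, the dual-filename header can be built with encodeURIComponent, which percent-encodes UTF-8 the way filename* expects; a stricter RFC 5987 encoder also escapes characters like ! and ', handled here with an extra replace. The helper name contentDisposition is illustrative, not a library function.

```javascript
// Build an RFC 6266-style Content-Disposition value carrying both an
// US-ASCII fallback filename and a UTF-8 filename* attribute.
function contentDisposition(asciiFallback, utf8Name) {
  const encoded = encodeURIComponent(utf8Name)
    // encodeURIComponent leaves ! ' ( ) * unescaped; RFC 5987 wants them escaped
    .replace(/['()!*]/g, c => '%' + c.charCodeAt(0).toString(16).toUpperCase());
  return `attachment; filename="${asciiFallback}"; filename*=utf-8''${encoded}`;
}

console.log(contentDisposition('EURO rates', '\u20ac rates'));
// → attachment; filename="EURO rates"; filename*=utf-8''%E2%82%AC%20rates
```

Percent-encoding is case-insensitive, so %E2%82%AC and %e2%82%ac are equivalent.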
Using a browser REST client to POST to the activity stream at e.g.
https://connectionsww.demos.ibm.com/connections/opensocial/basic/rest/activitystreams/#me/#all
...with the settings prescribed in IBM Connections OpenSocial API > POSTing new events
...results in the following response:
<error xmlns="http://www.ibm.com/xmlns/prod/sn">
<code>403</code>
<message>You are not authorized to perform the requested action.</message>
<trace></trace>
</error>
What am I missing?
This same approach works nicely on IBM Connections 4.0.
Which setting needs 'switching on'?
Try a URL like this... https://sbtdev.swg.usma.ibm.com:444/connections/opensocial/basic/rest/activitystreams/#me/#all
I added the Basic/Rest component, and it worked for me.
1 - Added URL https://sbtdev.swg.usma.ibm.com:444/connections/opensocial/basic/rest/activitystreams/#me/#all
2 - Changed Method to Post
3 - Added Content-Type: application/json
4 - Authentication -> Basic
5 - Logged in
6 - Posted
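The steps above can be sketched in code. The URL is the one from this answer, and the credentials and event payload are placeholders, not real values; buildActivityPost is a hypothetical helper that only constructs the request.

```javascript
// Assemble a POST request with Basic auth and a JSON body, mirroring the
// numbered steps: method (2), Content-Type (3), Basic auth (4/5), body (6).
function buildActivityPost(url, user, password, event) {
  return {
    url,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Basic auth is base64("user:password")
        Authorization: 'Basic ' + Buffer.from(`${user}:${password}`).toString('base64'),
      },
      body: JSON.stringify(event),
    },
  };
}

const { url, options } = buildActivityPost(
  'https://sbtdev.swg.usma.ibm.com:444/connections/opensocial/basic/rest/activitystreams/#me/#all',
  'alice', 'secret',                       // placeholder credentials
  { verb: 'post', title: 'Hello from the API' } // placeholder event payload
);
// fetch(url, options) would then send it (network call omitted here).
```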
Same thing here: 403 when I make an AJAX call to an IBM Connections 6.0 REST API url. Same error in Chrome, Firefox and IE11. When I open the same URL in a separate browser tab, everything works fine.
Comparing the http headers of both calls, and fiddling with Postman, the difference is the presence and value of the Origin attribute.
It seems that Connections allows calls from its own server, for example when Origin: connections.mycompany.com.
It also allows calls when Origin is not defined, which happens when the URL is opened in a separate browser tab.
There is a doc on IBM's support site that confirms this: http://www-01.ibm.com/support/docview.wss?uid=swg21999210. It also suggests a workaround that did the job for me: unsetting the Origin attribute in the IBM HTTP Server that sits in front of your Connections instance. Add the lines below to the httpd.conf file:
Header unset Origin
RequestHeader unset Origin
I'm writing an extension that requests XML content from a server and displays data in a popup/dialog window. I've added the website to my manifest.json permissions like so:
"permissions": [
"http://*/*"
],
Later I added the following code to my background page:
function loadData() {
var url = "http://www.foo.com/api/data.xml";
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
...
xhr.send();
The problem is that I get the cross-site security error "Origin chrome-extension://kafkefhcbdbpdlajophblggbkjloppll is not allowed by Access-Control-Allow-Origin."
The thing is, with "http://*/*" in the permissions I can request "http://www.foo.com/api", but I can't find any way to allow "http://www.foo.com/api/data.xml".
I've tried both "http://*/*" and "http://www.foo.com/api/data.xml" in the "permissions". What else should I be doing?
This should work (SOP doesn't apply to Chrome extensions), so there are three possibilities:
1. There is some mistake somewhere.
Just to make sure, add the <all_urls> permission and check that the extension really has it (e.g. execute chrome.runtime.getManifest() in the console when inspecting the background page).
2. The server itself is checking the Origin header and rejecting the request if its value is unexpected.
You can quickly check this by sending the request manually with some HTTP tester (for example Dev HTTP Client for Chrome; I'm one of its developers). If it shows the same error, the server really is checking the Origin header.
To fix this you will have to make the server accept your origin somehow, or you can use chrome.webRequest to set a valid Origin header on all requests sent to the target server (the standard XHR API doesn't allow modification of the Origin header).
3. A Chrome bug.
In that case you can only report the error and hope for the best.
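The chrome.webRequest workaround from the second possibility might look roughly like this (a Manifest V2 blocking listener, which needs the "webRequest" and "webRequestBlocking" permissions plus host permissions; the host www.foo.com is taken from the question). The rewrite logic is split into a plain function so it can be exercised outside Chrome.

```javascript
// Replace any existing Origin header with the given value.
function rewriteOrigin(requestHeaders, newOrigin) {
  const headers = requestHeaders.filter(h => h.name.toLowerCase() !== 'origin');
  headers.push({ name: 'Origin', value: newOrigin });
  return headers;
}

// Registration only runs inside an extension, where `chrome` exists.
if (typeof chrome !== 'undefined' && chrome.webRequest) {
  chrome.webRequest.onBeforeSendHeaders.addListener(
    details => ({ requestHeaders: rewriteOrigin(details.requestHeaders, 'http://www.foo.com') }),
    { urls: ['http://www.foo.com/*'] },
    ['blocking', 'requestHeaders']
  );
}
```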
I cannot for the life of me find any documentation regarding the possible properties of Varnish (version 3) objects.
We know (from googling; the Varnish documentation just mumbles and leaves you more frustrated) for example that the request object has the url property (req.url) and also req.http.X-Forwarded-For. But has anyone ever, in any way, found... say... a list?
Thanks!
/joakim
You can't really give a comprehensive list of things like req.http.X-Forwarded-For, because req.http.* are HTTP headers. The Cookie header of a request will be req.http.Cookie and the User-Agent header will be req.http.User-Agent. There are a lot of standard headers, but you can set any arbitrary header and it will show up in req.http.<Header-Name>. You can see the headers of the HTTP response in resp.http.*, and likewise the backend response in beresp.http.*.
All of the other properties are listed here: https://www.varnish-cache.org/docs/3.0/reference/vcl.html#variables
Using ServiceStack 3.9.2x.
Out of the box (and as seen on the metadata page), ServiceStack comes with built-in support for a bunch of content types: XML, JSON, JSV, etc. What is the best way to tell ServiceStack to limit the set of supported content types? For example, my service only knows how to speak JSON, and I don't want ServiceStack to honor requests that sport a "Content-Type: application/xml" or "Accept: application/xml" header. In that case I would like ServiceStack to respond with a 406 (Not Acceptable) response, ideally including in the body the set of supported content types (per the HTTP 1.1 spec).
Also, how does ServiceStack decide what the default content type is for requests that sport neither an Accept nor a Content-Type header (I think I am seeing it render HTML now)? Is there a way to tell ServiceStack to assume a specific content type in these cases?
See this answer to find out how to set the default content type in ServiceStack: ServiceStack default format
You can use a Request Filter to detect the requested content type with:
httpReq.ResponseContentType
In your global request filter you can choose to allow it (do nothing) or write directly to the response, e.g. a 406 with the list of supported content types, as you wish.
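The filter's decision logic can be sketched generically, in plain JavaScript rather than ServiceStack's C# API (checkContentType and the response shape are illustrative, not ServiceStack types):

```javascript
// Content types the service is willing to speak.
const SUPPORTED = ['application/json'];

// Given the content type a request resolved to, either let it through
// (return null, i.e. do nothing) or produce a 406 whose body lists the
// supported types, per HTTP/1.1.
function checkContentType(responseContentType) {
  if (SUPPORTED.includes(responseContentType)) {
    return null; // allow: let the pipeline continue
  }
  return {
    status: 406,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Not Acceptable', supported: SUPPORTED }),
  };
}
```

In ServiceStack the equivalent check would live in a global request filter inspecting httpReq.ResponseContentType and writing the 406 directly to the response.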
ServiceStack order of operations
The Implementation architecture diagram shows a visual cue of the order of operations that happens in ServiceStack. Where:
EndpointHostConfig.RawHttpHandlers are executed before anything else, i.e. returning any ASP.NET IHttpHandler by-passes ServiceStack completely.
The IAppHost.PreRequestFilters get executed before the Request DTO is deserialized
Request Filter Attributes with Priority < 0 get executed
Then any Global Request Filters get executed
Followed by Request Filter Attributes with Priority >= 0
Action Request Filters (New API only)
Then your Service is executed
Action Response Filters (New API only)
Followed by Response Filter Attributes with Priority < 0
Then Global Response Filters
Followed by Response Filter Attributes with Priority >= 0
Any time you close the response in one of your filters, i.e. call httpRes.Close(), processing of the response is short-circuited and no further processing is done on that request.
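The ordering and the short-circuit rule can be modelled with a toy pipeline (plain JavaScript, not ServiceStack internals; the stage names are illustrative):

```javascript
// Run stages in order; once a stage closes the response, nothing after it runs.
function runPipeline(stages, req) {
  const res = { closed: false, log: [] };
  for (const stage of stages) {
    if (res.closed) break; // httpRes.Close() analogue: stop all further processing
    res.log.push(stage.name);
    stage.run(req, res);
  }
  return res;
}

const stages = [
  // a global request filter that rejects unauthorized requests by closing the response
  { name: 'globalRequestFilter', run: (req, res) => { if (!req.authorized) res.closed = true; } },
  { name: 'service', run: () => {} },
  { name: 'globalResponseFilter', run: () => {} },
];

console.log(runPipeline(stages, { authorized: false }).log); // only the first filter ran
```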