I have a JavaScript file that sends a FormData variable to another site as follows:
xhr.open("post", "http://host/path/file.php", true);
xhr.send(data);
The data variable is correctly populated; I have verified that this is not the issue, as the payload on my Network tab shows the correct values and the request header has a Content-Length > 0:
Accept:*/*
Accept-Encoding:gzip,deflate
Accept-Language:en-GB,en-US;q=0.8,en;q=0.6
Connection:keep-alive
Content-Length:6021726
Content-Type:multipart/form-data; boundary=----WebKitFormBoundaryAj8A2cYqFIFtNwHI
Host:host
Origin:http://host
Referer:http://host/path
User-Agent:Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36
However, the response header shows Content-Length: 0:
Access-Control-Allow-Origin:*
Content-Length:0
Content-Type:text/html; charset=UTF-8
Date:Thu, 02 Oct 2014 19:34:49 GMT
Server:Microsoft-IIS/8.5
X-Powered-By:PHP/5.6.0
X-Powered-By:ASP.NET
I read there is an issue with IE and Windows Authentication that causes this, but I am using Chrome and Firefox. Both sites are IIS sites, and I have enabled both Anonymous and Windows Authentication on each. Any help would be greatly appreciated.
Well, it was pointless posting my problem here as no one helped; in case someone else has this issue, I will post what fixed mine. It seems that when you send an XMLHttpRequest of type multipart/form-data, string data can be found in the $_POST variable, but any file data is located in the $_FILES variable.
I was also confused by what the browser was telling me: it wasn't that data wasn't being sent; rather, the Content-Length of my response header stayed 0 until I actually printed the $_POST/$_FILES variables, at which point it showed a length > 0. I was stuck on that for a while, so I thought I would add it to my solution.
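As a minimal sketch of that split (the field names and PHP endpoint are illustrative assumptions): string parts of a multipart/form-data payload arrive in $_POST, while file parts arrive in $_FILES.

```javascript
// Sketch: a FormData payload with both a string part and a file part.
// On the PHP side, "title" would arrive in $_POST['title'] and
// "upload" in $_FILES['upload'] -- not both in $_POST.
const data = new FormData();
data.append("title", "My report");                               // string part
data.append("upload", new Blob(["file bytes"]), "report.bin");   // file part

// In the browser the payload is then sent exactly as in the question:
// const xhr = new XMLHttpRequest();
// xhr.open("post", "http://host/path/file.php", true);
// xhr.send(data);
```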
Related
In TrackJS, some user agents are parsed as normal browsers, e.g.:
Mozilla/5.0 (Linux; Android 7.0; SM-G930V Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.125 Mobile Safari/537.36 (compatible; Google-Read-Aloud; +https://support.google.com/webmasters/answer/1061943)
Chrome Mobile 59.0.3071
I tried to do this with ignore rules in the settings, but it doesn't work.
So I need to filter errors by a token in the user agent.
Is it possible to do this without JS?
More similar user agents: https://developers.google.com/search/docs/advanced/crawling/overview-google-crawlers
The TrackJS UI doesn't allow you to create Ignore Rules against the raw UserAgent, only the parsed browser and operating system. Instead, use the client-side ignore capability with the onError function.
Build your function to detect the tokens you want to exclude, and return false from the function if you don't want it to be sent.
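A sketch of such a function (the token list and the helper name are my own; the onError hook itself is the documented TrackJS mechanism for vetoing a send):

```javascript
// Hypothetical token list -- adjust to the crawlers you want to drop.
var IGNORED_UA_TOKENS = ["Google-Read-Aloud", "Googlebot", "AdsBot-Google"];

// Returns true if the error should be sent to TrackJS,
// false if the user agent contains an ignored token.
function shouldSend(userAgent) {
  return !IGNORED_UA_TOKENS.some(function (token) {
    return userAgent.indexOf(token) !== -1;
  });
}

// Wire it into the agent install (browser-side):
// TrackJS.install({
//   token: "your-token",
//   onError: function (payload) {
//     return shouldSend(navigator.userAgent);
//   }
// });
```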
We had to downgrade from ServiceStack v5.4 (free edition) back to v4.5.14 (paid edition). The only change needed to make the downgrade compile was one line in the service code:
v5.4 code:
[FallbackRoute("/{PathInfo*}", Matches="AcceptsHtml")]
v4.5.14 code:
[FallbackRoute("/{PathInfo*}")]
I have not yet figured out how to implement the 'Matches' portion in 4.5.14; however, the code still seems to run. When launched from VS2017, where the service runs as a command-line web service, an infinite redirect occurs only intermittently. In prod, where the app runs as a Windows service, the infinite redirect happens 100% of the time.
The result is that when I visit the URL:
https://server.domain.com:port
Which should simply redirect to:
https://server.domain.com:port/login
What happens is this:
https://server.domain.com:9797/login?redirect=https%3a%2f%2fserver.domain.com%3a9797%2flogin%3fredirect%3dhttps%253a%252f%252fserver.domain.com%253a9797%252flogin%253fredirect%253dhttps%25253a%25252f%25252fserver.domain.com%25253a9797%25252flogin%25253fredirect%25253dhttps%2525253a%2525252f%2525252fserver.domain.com%2525253a9797%2525252flogin%2525253fredirect%2525253dhttps%252525253a%252525252f%252525252fserver.domain.com%252525253a9797%252525252flogin%252525253fredirect%252525253dhttps%25252525253a%25252525252f%25252525252fserver.domain.com%25252525253a9797%25252525252flogin%25252525253fredirect%25252525253dhttps%2525252525253a%2525252525252f%2525252525252fserver.domain.com%2525252525253a9797%2525252525252flogin%2525252525253fredirect%2525252525253dhttps%252525252525253a%252525252525252f%252525252525252fserver.domain.com%252525252525253a9797%252525252525252flogin%252525252525253fredirect%252525252525253dhttps%25252525252525253a%25252525252525252f%25252525252525252fserver.domain.com%25252525252525253a9797%25252525252525252flogin%25252525252525253fredirect%25252525252525253dhttps%2525252525252525253a%2525252525252525252f%2525252525252525252fserver.domain.com%2525252525252525253a9797%2525252525252525252flogin%2525252525252525253fredirect%2525252525252525253dhttps%252525252525252525253a%252525252525252525252f%252525252525252525252fserver.domain.com%252525252525252525253a9797%252525252525252525252flogin%252525252525252525253fredirect%252525252525252525253dhttps%25252525252525252525253a%25252525252525252525252f%25252525252525252525252fserver.domain.com%25252525252525252525253a9797%25252525252525252525252flogin%25252525252525252525253fredirect%25252525252525252525253dhttps%2525252525252525252525253a%2525252525252525252525252f%2525252525252525252525252fserver.domain.com%2525252525252525252525253a9797%2525252525252525252525252flogin%2525252525252525252525253fredirect%2525252525252525252525253dhttps%252525252525252525252525253a%252525252525252525252525252f%252525252525252525
252525252fserver.petersc
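The nesting in that URL is each hop percent-encoding the previous hop's already-encoded URL into a fresh ?redirect= parameter. A quick sketch of the mechanism (domain from the question, loop count arbitrary):

```javascript
// Each redirect stuffs the previous URL into ?redirect= and
// percent-encodes it, so ":" becomes "%3A", then "%253A", then
// "%25253A", ... -- one extra "25" per hop.
let url = "https://server.domain.com:9797/login";
for (let i = 0; i < 3; i++) {
  url = "https://server.domain.com:9797/login?redirect=" + encodeURIComponent(url);
}
```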
Has anyone seen this before? Any suggestions for where to start debugging this would be appreciated.
More Info
So I tried removing the Authenticate attribute from my service to see if the loop was being caused by Authentication or something else. Turns out it's the authentication that's causing the loop. Once I commented out the attribute, everything worked as expected.
Update
This loop is definitely caused by the AuthenticateAttribute.
I commented out the line 'url = url.AddQueryParam(...' so that I would not get a huge query string of garbage, in the hope that it would fix something, but it looks like something else is not right. Below are the headers from the initial request.
GET https://myServer.myDomain.com:9797/ HTTP/1.1
Host: myServer.myDomain.com:9797
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cookie: ss-pid=qt9Lqb2YvWUu9RzLBlfr
Here are the response headers:
HTTP/1.1 302 Found
Transfer-Encoding: chunked
Location: https://myServer.myDomain.com:9797/login
Vary: Accept
Server: Microsoft-HTTPAPI/2.0
Set-Cookie: ss-pid=kczdbSouUzx6aURug3ZU;path=/;expires=Fri, 01 Apr 2039 21:24:01 GMT;HttpOnly
Set-Cookie: ss-id=nAQeqGptASLQ1fZj4xs7;path=/;HttpOnly
X-Powered-By: ServiceStack/4.514 NET45 Win32NT/.NET
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
Access-Control-Allow-Headers: Content-Type
Date: Mon, 01 Apr 2019 21:24:01 GMT
After the first request, there are about 60 redirects which all look like:
Request:
GET https://myServer.myDomain.com:9797/login HTTP/1.1
Host: myServer.myDomain.com:9797
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cookie: ss-pid=kczdbSouUzx6aURug3ZU; ss-id=nAQeqGptASLQ1fZj4xs7
Response:
HTTP/1.1 302 Found
Transfer-Encoding: chunked
Location: https://windows7vm1.petersco.com:9797/login
Vary: Accept
Server: Microsoft-HTTPAPI/2.0
X-Powered-By: ServiceStack/4.514 NET45 Win32NT/.NET
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
Access-Control-Allow-Headers: Content-Type
Date: Mon, 01 Apr 2019 21:24:01 GMT
I'm not seeing anything that indicates why this should loop. The only thing that changed was the version of ServiceStack; why would one version find the HTML page and the other not? Is there something special I need to add to v4.5.14 to get it to respond with index.html?
So I could not believe that the AuthenticateAttribute would have such a glaring problem; ServiceStack is far too mature and far too awesome for this to be a bug. Working from that assumption (it is usually safe to assume you are the problem, not the genius who found the one bug everyone else missed), I started to look at the routes, compared them to some old SPA samples available on GitHub, and noticed that none of them had a FallbackRoute defined.
This seemed odd to me, but since I don't know the history of how this feature came to be part of the v5.* template in the first place, I thought removing these lines might work. It did.
Removing this:
[FallbackRoute("/{PathInfo*}")]
public class FallbackForClientRoutes
{
public string PathInfo { get; set; }
}
And this:
public object Any(FallbackForClientRoutes request) =>
new PageResult(Request.GetPage("/"));
Made everything go back to normal: navigating to the base URL redirects to /login, and all the API methods are back to being authenticated. I have lost the ability to navigate directly to a URL like http://myServer.myDomain.com:port/ListCompanies..., but my guess is that this has something to do with routing as well (so more homework to do).
A security scan performed on my tomcat install is reporting this problem:
The fileDownloaded cookie is sent over a secure connection but does not have the "secure" attribute set. The "secure" attribute tells the browser to only transmit cookies over connections secured with SSL. This protects the values from being inadvertently sent over unencrypted HTTP connections.
I've gone through and set secure="true" in server.xml and useHttpOnly="true" in context.xml.
This fixed the issue on all the actual pages; they now all show the secure flag on the Set-Cookie header.
But this fileDownloaded cookie is the last lingering bit reporting a problem.
I've searched everywhere and can't seem to find any reference to anyone else having this problem.
I'm starting to wonder if this is actually a configuration issue with the iPlanet Web Server and not the Tomcat app server.
Request
GET ------ HTTP/1.1
Host: www
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0
Cookie: JSESSIONID=*****
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: https://********dashboard.page
Connection: keep-alive
Content-Length: 0
Response
HTTP/1.1 200 OK
Server: none
Date: Sun, 23 Aug 2015 05:08:05 GMT
X-frame-options: SAMEORIGIN
Location:
Proxy-agent: Oracle-iPlanet-Web-Server/7.0
Content-disposition: attachment; filename="table_export.xls"
Set-cookie: fileDownloaded=true;Version=1
Content-type: application/vnd.ms-excel
Via: 1.1 *******
Content-Length: 4608
Tomcat never sets a cookie called fileDownloaded: this is an application issue.
If you want your cookies to have the "secure" flag set, then you need to make sure that you set it yourself:
// Recreate the cookie the application already sends, but mark it "secure"
// so the browser only transmits it over encrypted connections:
Cookie cookie = new Cookie("fileDownloaded", "true");
cookie.setVersion(1);
cookie.setSecure(true);
response.addCookie(cookie);
You might want to check that the current connection is actually secure before setting that flag.
For our specific implementation we needed to use:
javax.ws.rs.core.NewCookie.NewCookie()
NewCookie(Cookie cookie, String comment, int maxAge, Date expiry, boolean secure, boolean httpOnly)
Content-Type/Accept/MIME HTTP headers issue?
JasperReports Server (5.2.0) (update 2014-08-20/21: 5.5.0 & 5.6.0 alike)
running on Tomcat 7
clients tried
Internet Explorer
5.2.0 tests (default below)
9.0.8112.16421 64bit (default below)
11.0.9600.17105 64bit
5.5.0 tests (update 2014-08-20)
8.0.7601.17514
9.0.8112.16421
10.0.9200.16384
Firefox 28.0
Chrome (34.0.1847.131 m)
If I navigate in the JasperReports Server web GUI to my previously uploaded Inhaltsresource (content resource), a *.xlsx Excel document, it works well in Firefox and Chrome, which offer to save or open the file, but it fails in Internet Explorer, which displays the file's binary content in the tab :-(
I did quite some research but could not find a definitive cause, although some observations may point toward it:
(more general observation:)
the HTTP request header (Accept string) sent by IE/the Jasper GUI seems to be wrong, incomplete, or IE-incompatible
(thus) the Jasper servlet's HTTP response header (Content-Type string) seems to be wrong, incomplete, or IE-incompatible
(when thinking about this a little deeper:)
shouldn't the JasperReports Server itself (or, to a certain degree, Tomcat as the container on delivery) try to determine the content type to be delivered?
either by letting the user set it manually or, better, by determining it via heuristics (file extension, content parsing, ...)
this way it could also be stored along with the file (I would only let the user override the result of the heuristically determined type)
since the filename or URL already indicates that it is a *.xlsx file, and the content starting with PK... strongly indicates that it really is a (ZIP-packed) Excel file
so I would see two basic ways this should work in general...
the request header (Jasper-delivered GUI page) should define the content type explicitly (maybe only if it can't be easily determined by the response functionality itself)
(generally maybe more appropriate:) the response header (Jasper/Tomcat server logic) should set the requested, correct, or estimated content type explicitly
looking at the response headers of IE or FF, one can clearly see that no Content-Type is set here, although the REST API call further down has it set to application/octet-stream;charset=UTF-8 (and it works there)
Here are details that I checked already:
ok: the HTTP response headers for FF and IE do not differ significantly (although the request headers are quite different; see below), which points to some issue with the magic of content-type detection (where FF and Chrome seem to do better in this case)
the HTTP Headers of IE and FF request/response cycles:
IE 9 (captured with onboard dev tools):
request header
Request GET http://...:8080/jasperserver/fileview/fileview/....xlsx? HTTP/1.1
Accept application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
Accept-Language de-DE
User-Agent Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)
UA-CPU AMD64
Accept-Encoding gzip, deflate
Host ...:8080
Proxy-Connection Keep-Alive
Cookie userTimezone=Europe/Berlin; JSESSIONID=0FEF6E9F46EB2202A041A0A6F37B249A; userLocale=de_DE; treefoldersTree=1%7Copen%3B4%7Copen%3B5%7Copen%3B8%7Copen%3B; lastFolderUri=/...
response header
Response HTTP/1.0 200 OK
Server Apache-Coyote/1.1
Cache-Control no-store
Expires Thu, 01 Jan 1970 01:00:00 CET
P3P CP="ALL"
Pragma
Content-Language de-DE
Content-Length 453242
Date Thu, 08 May 2014 10:54:46 GMT
X-Cache MISS from ..some-proxy-host..
X-Cache-Lookup MISS from ..some-proxy-host..:8080
Via 1.1 ..some-proxy-host..:8080 (squid/2.7.STABLE8)
Connection keep-alive
Proxy-Connection keep-alive
FF (captured with HttpFox addon)
request header
(Request line) GET /jasperserver/fileview/fileview/....xlsx? HTTP/1.1
Host viasaxinfo.list.smwa.sachsen.de:8080
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language de,en-US;q=0.7,en;q=0.3
Accept-Encoding gzip, deflate
Referer http://...:8080/jasperserver/flow.html?_flowId=searchFlow
Cookie userLocale=de; userTimezone=Europe/Berlin; JSESSIONID=E3989F65A4198047DA87FBB7BB73ABBA; treefoldersTree=1%7Copen%3B4%7Copen%3B5%7Copen%3B8%7Copen%3B; lastFolderUri=/...
Connection keep-alive
response header
(Status line) HTTP/1.0 200 OK
Server Apache-Coyote/1.1
Cache-Control no-store
Expires Thu, 01 Jan 1970 01:00:00 CET
P3P CP="ALL"
Content-Language de
Content-Length 453242
Date Thu, 08 May 2014 11:00:48 GMT
X-Cache MISS from ..some-proxy-host..
X-Cache-Lookup MISS from ..some-proxy-host..:8080
Via 1.1 ..some-proxy-host..:8080 (squid/2.7.STABLE8)
Connection keep-alive
Proxy-Connection keep-alive
ok: the compatibility view in IE does not help
checking potential HTTP response problems (which differ)
Pragma: should have the same meaning as Cache-Control: public
What does the HTTP header Pragma: Public mean?
Content-Language: shouldn't matter here I guess
checking potential HTTP request problems
order of request header rows shouldn't matter
Accept: problematic?
seems ok looking at the specs http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Accept-Language: shouldn't matter
Cookie: content shouldn't matter
Proxy-Connection: disabling/enabling proxy settings did not change something
ok: MIME type setup in tomcat7/conf/web.xml
<mime-mapping>
<extension>xlsx</extension>
<mime-type>application/vnd.openxmlformats-officedocument.spreadsheetml.sheet</mime-type>
</mime-mapping>
putting it as well under jasperserver/WEB-INF/web.xml does not help either
some more details about this can also be found here:
http://blogs.adobe.com/techcomm/2012/11/handling-xlsx-docx-and-pptx-baggage-files-when-publishing-to-robohelp-server.html
http://filext.com/faq/office_mime_types.php
using the Rest API (.../jasperserver/rest/resource/...) works in both FF and IE
IE 9:
with fileData=true (brings up a dialog whether to open or save the file where opening works as expected)
HTTP request header
Request GET http://...:8080/jasperserver/rest/resource/....xlsx?fileData=true HTTP/1.1
Accept text/html, application/xhtml+xml, */*
Accept-Language de-DE
User-Agent Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
UA-CPU AMD64
Accept-Encoding gzip, deflate
Host ...:8080
Proxy-Connection Keep-Alive
Cookie userTimezone=Europe/Berlin; userLocale=de_DE; JSESSIONID=1B91EC2172C438C51A551CB967A3148D; treefoldersTree=1%7Copen%3B4%7Copen%3B5%7Copen%3B7%7Copen%3B10%7Copen%3B; lastFolderUri=...; foldersPanelWidth=239
HTTP response header
Response HTTP/1.0 200 OK
Server Apache-Coyote/1.1
Cache-Control private
Expires Thu, 01 Jan 1970 01:00:00 CET
P3P CP="ALL"
Content-Disposition attachment; filename=....xlsx
Content-Type application/octet-stream;charset=UTF-8
Date Fri, 09 May 2014 12:44:05 GMT
X-Cache MISS from LIST-SRV-PROXY03
X-Cache-Lookup MISS from LIST-SRV-PROXY03:8080
Via 1.1 ...some-proxy-host...:8080 (squid/2.7.STABLE8)
Connection close
without fileData=true, it returns the expected resource metadata XML (displayed inline)
<resourceDescriptor name="....xlsx" wsType="contentResource" uriString="/....xlsx" isNew="false">
<label><![CDATA[....xlsx]]></label>
<creationDate>1399636098445</creationDate>
<resourceProperty name="PROP_RESOURCE_TYPE">
<value><![CDATA[com.jaspersoft.jasperserver.api.metadata.common.domain.ContentResource]]></value>
</resourceProperty>
<resourceProperty name="PROP_PARENT_FOLDER">
<value><![CDATA[/...]]></value>
</resourceProperty>
<resourceProperty name="PROP_VERSION">
<value><![CDATA[0]]></value>
</resourceProperty>
<resourceProperty name="PROP_SECURITY_PERMISSION_MASK">
<value><![CDATA[1]]></value>
</resourceProperty>
<resourceProperty name="CONTENT_TYPE">
<value><![CDATA[contentResource]]></value>
</resourceProperty>
<resourceProperty name="DATA_ATTACHMENT_ID">
<value><![CDATA[/....xlsx]]></value>
</resourceProperty>
</resourceDescriptor>
I spent quite some time on this, but neither googling (I wonder why nobody else seems to have this issue, although it looks very common to me) nor various debugging helped. Maybe I would have to play in detail with the related Jasper classes to debug further, but maybe somebody else has had this issue as well or knows a solution?
It seems a manual workaround is possible: http://community.jaspersoft.com/jasperreportsr-server/issues/3716#comment-808481
we implemented a servlet filter class to try to set the Content-Disposition header of the response in the cases where we knew that the MIME type was wrongly set. As we knew that the response was flushed after being processed by the web service endpoint, we set the header BEFORE it was processed, as Content-Disposition: attachment; filename='filename.extension'. This turned out to work, and we were able to download the file with an appropriate file extension.
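Such a filter has to be registered ahead of the JasperReports servlets; a sketch of the web.xml wiring (the filter name, class, and URL pattern here are illustrative assumptions, not taken from the linked comment):

```xml
<filter>
    <filter-name>contentDispositionFilter</filter-name>
    <filter-class>com.example.ContentDispositionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>contentDispositionFilter</filter-name>
    <url-pattern>/fileview/*</url-pattern>
</filter-mapping>
```

The filter class itself would call response.setHeader("Content-Disposition", "attachment; filename=\"...\"") before handing the request down the chain.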
but they also mention that it would work with v5.6.0, although it did not in our tests (see the comment above: Opening file content resource (Excel) of JasperReports Server / Tomcat with Internet Explorer displays binary data inline):
v5.6.0, and apparently on this release the MIME type of the response was correctly set, so we finally get to a proper solution for our problem.
I'm building an app in NodeJS that stores files in Amazon S3 using the Knox S3 client. Everything works well for uploading files, moving files around, etc.
Now I want to use the Query String Authentication mechanism to allow direct downloads of the files. To do this, I have some code on my NodeJS server call to the Knox library and create a signed url.
The code looks like this:
exports.getS3Policy = function(file) {
  // Expire the signed URL 60 minutes from now, rebuilt from UTC components.
  var date = moment().add("min", 60).toDate();
  var expires = new Date(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate(),
                         date.getUTCHours(), date.getUTCMinutes(), date.getUTCSeconds());
  return knoxClient.signedUrl(file, expires);
};
This code returns a proper URL with the authentication parameters. For example:
https://my-bucket.s3.amazonaws.com/some/folder/file.ext?Expires=1234567890&AWSAccessKeyId=ABCDEFGHIJKLMNO&Signature=someEncodedSignature
According to all of the documentation I've read, this is a proper URL, and I'm not getting any errors from Amazon with it. The expiration is correct (I can verify this by setting an expiration of 1 second and then getting an "expired" error). The file path is correct as well.
When I hit the URL in my browser, though, my browser (latest Chrome on OS X) cancels the download of the file, even though I'm getting a 200 OK response with the right file information.
Here is a copy of the request info from Chrome dev tools (sensitive bits replaced):
Request URL:https://my-bucket.s3.amazonaws.com/some/folder/file.ext?Expires=1234567890&AWSAccessKeyId=ABCDEFGHIJKLMNO&Signature=someEncodedSignature
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
DNT:1
Host:my-bucket.s3.amazonaws.com
Pragma:no-cache
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36
Query String Parameters
Expires:1234567890
AWSAccessKeyId:ABCDEFGHIJKLMNO
Signature:someEncodedSignature
Response Headers
Accept-Ranges:bytes
Content-Length:341390
Content-Type:application/octet-stream
Date:Tue, 10 Sep 2013 13:22:55 GMT
ETag:"fc4d24e752097f212e111f2736af7162"
Last-Modified:Tue, 10 Sep 2013 01:40:31 GMT
Server:AmazonS3
x-amz-id-2:some-id
x-amz-request-id:some-request-id
As you can see, the server responds with "200 OK". The Content-Length of 341390 is also the correct length of the file I'm attempting to download; this is the actual file size. I'm getting the content type "application/octet-stream" because that's how I told S3 to store the files... I just want the raw download, basically.
But after getting this response from S3, Chrome cancels the download. Here's a screencap from devtools, again:
Firefox and Safari both download the file as expected. Why is Chrome canceling the download? What am I doing wrong? Is it the content type? Or something else?
Of course, I found the answer as soon as I posted the question... it's a bug in Chrome:
https://code.google.com/p/chromium/issues/detail?id=104331
The fix will be available starting with Chrome/Chromium 30.x. Please
open a new issue if you are seeing similar issues with versions of
Chrome 30 or above.
The supported means of indicating that a resource must be downloaded
is to use the Content-Disposition header field
(https://www.rfc-editor.org/rfc/rfc6266).
Looks like I have to get S3 to set a Content-Disposition header in the response.
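One way to do that (a sketch, assuming knox's documented client.put(path, headers) upload form; the helper name and filename are my own) is to store the object with a Content-Disposition header up front, so every later signed-URL download is served as an attachment:

```javascript
// Build the headers for a knox put() so S3 stores -- and later replays --
// a Content-Disposition header on downloads. attachmentHeaders is a
// hypothetical helper, not part of knox's API.
function attachmentHeaders(filename, byteLength) {
  return {
    "Content-Length": byteLength,
    "Content-Type": "application/octet-stream",
    "Content-Disposition": 'attachment; filename="' + filename + '"'
  };
}

// Usage with knox (network call, shown for shape only):
// var req = knoxClient.put("/some/folder/file.ext",
//                          attachmentHeaders("file.ext", buffer.length));
// req.end(buffer);
```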