Building HTTP headers in Rocket 0.5-rc (migrating from 0.4) - rust

I am working on a project that was built with rust-nightly a year ago, and I am trying to migrate it to a recent stable rustc.
It used Rocket 0.4 and I am migrating it to 0.5-rc.2.
It has these imports:
use rocket::http::hyper::header::{ContentDisposition, DispositionType, DispositionParam, Charset};
I replaced that with
use rocket::http::hyper::header;
In 0.4 it was possible to build the filename header with a struct:
.header(ContentDisposition {
    disposition: DispositionType::Attachment,
    parameters: vec![DispositionParam::Filename(
        Charset::Iso_8859_1, // The character set for the bytes of the filename
        None, // The optional language tag
        format!("reviews_{}.csv", unix_time)
            .into_bytes() // the actual bytes of the filename
    )]
})
Is there a way to do this in 0.5 as well, or do I have to format/create the header manually, like this:
.header(header::CONTENT_DISPOSITION {
    disposition: "attachment",
    parameters: .... // create filename header manually

Something like this seems to work, though I am still wondering whether it is the intended and optimal solution:
.header(Header::new(header::CONTENT_DISPOSITION, format!("attachment; reviews_{}.csv", unix_time).into_bytes()))
As I only have ASCII characters, I left out the charset handling.
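For reference, a minimal sketch of that manual approach in 0.5 (this is an illustration, not an official recipe: the typed hyper header structs no longer appear to be re-exported, Header::new takes plain strings rather than bytes, and a well-formed value also needs the quoted filename= parameter; unix_time is assumed to be in scope as above):
use rocket::http::Header;

fn reviews_disposition(unix_time: u64) -> Header<'static> {
    // The whole header value is formatted by hand; the quoted filename=
    // parameter is what the browser actually uses to name the download.
    Header::new(
        "Content-Disposition",
        format!("attachment; filename=\"reviews_{}.csv\"", unix_time),
    )
}

// Usage on a response builder: .header(reviews_disposition(unix_time))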

Related

nodejs skipping single quote from json key in output

I am seeing a very weird problem: when a JSON object is used in Node.js, the single quotes are dropped from the revision key. I want to pass this JSON as input to the Node request module, and since the single quotes are missing from the 'revision' key it is not accepted as valid JSON input. Could someone help me figure out how to retain them? I have made multiple attempts but have not been able to get it right.
What did I try?
jsondata = {
    'splits': {
        'os-name': 'ubuntu',
        'platform-version': 'os',
        'traffic-percent': 100,
        'revision': 'master'
    }
}
console.log(jsondata)
Expected:
{ splits:
{ 'os-name': 'ubuntu',
'platform-version': 'os',
'traffic-percent': 100,
'revision': 'master'
}
}
But in the actual output the single quotes are missing from the revision key:
{ splits:
{ 'os-name': 'ubuntu',
'platform-version': 'os',
'traffic-percent': 100,
revision: 'master'
}
}
Run 2: Tried the code below; it also produces the same output.
data = JSON.stringify(jsondata)
result = JSON.parse(data)
console.log(result)
Run 3: Used another way to achieve it:
jsondata = {}
temp = {}
splits = []
temp['revision'] = 'master'
temp['os-name'] = 'ubuntu'
temp['platform-version'] = 'os'
temp['traffic-percent'] = 100
splits.push(temp)
jsondata['splits'] = splits
console.log(jsondata)
Run 4: Tried replacing single quotes with double quotes.
Run 5: Changed the order of the revision line.
This is what is supposed to happen. When console.log prints a JavaScript object, the quotes are kept only if the object key is not a valid JavaScript identifier. In your example, 'splits' and 'revision' don't have a dash in their names, so they are the only keys printed without quotes.
You shouldn't receive any error using this object - if you do, update this post mentioning the scenario and the error.
You should note that JSON and JavaScript are not the same thing.
JSON is a format where all keys and values are surrounded by double quotes ("key" and "value"). A JSON string is produced by JSON.stringify, and is required by JSON.parse.
A JavaScript object has very similar syntax to the JSON file format, but is more flexible - the values can be surrounded by double quotes or single quotes, and the keys can have no quotes at all as long as they are valid JavaScript identifiers. If the keys have spaces, dashes, or other non-valid characters, then they need to be surrounded by single quotes or double quotes.
If you need your string to be valid JSON, generate it with JSON.stringify. If it's OK for it to be just valid JavaScript, then it's already fine - it does not matter whether the quotes are there or not.
If, for some reason, you need some imaginary third option (perhaps you are interacting with an API where someone has written their own custom string parser, and they are demanding that all keys are surrounded by single quotes?) you will probably need to write your own little string generator.
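To make the difference described above concrete, here is a minimal sketch you can run with node (the object is trimmed down from the question):
const jsondata = {
    'splits': {
        'os-name': 'ubuntu',
        'revision': 'master'
    }
};

// console.log prints a JavaScript object: keys that are valid identifiers
// (like revision) are shown without quotes.
console.log(jsondata);
// { splits: { 'os-name': 'ubuntu', revision: 'master' } }

// JSON.stringify produces actual JSON: every key is double-quoted.
console.log(JSON.stringify(jsondata));
// {"splits":{"os-name":"ubuntu","revision":"master"}}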

Parsing formatted strings in Go

The Problem
I have a slice of string values where each value is formatted according to a template. In my particular case, I am trying to parse Markdown URLs as shown below:
- [What did I just commit?](#what-did-i-just-commit)
- [I wrote the wrong thing in a commit message](#i-wrote-the-wrong-thing-in-a-commit-message)
- [I committed with the wrong name and email configured](#i-committed-with-the-wrong-name-and-email-configured)
- [I want to remove a file from the previous commit](#i-want-to-remove-a-file-from-the-previous-commit)
- [I want to delete or remove my last commit](#i-want-to-delete-or-remove-my-last-commit)
- [Delete/remove arbitrary commit](#deleteremove-arbitrary-commit)
- [I tried to push my amended commit to a remote, but I got an error message](#i-tried-to-push-my-amended-commit-to-a-remote-but-i-got-an-error-message)
- [I accidentally did a hard reset, and I want my changes back](#i-accidentally-did-a-hard-reset-and-i-want-my-changes-back)
What do I want to do?
I am looking for ways to parse this into a value of type:
type Entity struct {
    Statement string
    URL       string
}
What have I tried?
As you can see, all the items follow the pattern: - [{{ .Statement }}]({{ .URL }}). I tried using the fmt.Sscanf function to scan each string as:
var statement, url string
fmt.Sscanf(s, "[%s](%s)", &statement, &url)
This results in:
statement = "I"
url = ""
The issue seems to be that the scanner only captures space-separated values. Still, I do not understand why the url variable is not getting populated at all under that rule.
How can I get the Markdown values as mentioned above?
EDIT: As suggested by Marc, I will add a couple of clarification points:
This is a general-purpose question on parsing strings based on a format. In my particular case, a Markdown parser might help me, but my intention is to learn how to handle such cases in general, where a library might not exist.
I have read the official documentation before posting here.
Note: The following solution only works for "simple", non-escaped input markdown links. If this suits your needs, go ahead and use it. For full markdown-compatibility you should use a proper markdown parser such as gopkg.in/russross/blackfriday.v2.
You could use regexp to get the link text and the URL out of a markdown link.
So the general input text is in the form of:
[some text](somelink)
A regular expression that models this:
\[([^\]]+)\]\(([^)]+)\)
Where:
\[ is the literal [
([^\]]+) is for the "some text", it's everything except the closing square brackets
\] is the literal ]
\( is the literal (
([^)]+) is for the "somelink", it's everything except the closing brackets
\) is the literal )
Example:
r := regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)

inputs := []string{
    "[Some text](#some/link)",
    "[What did I just commit?](#what-did-i-just-commit)",
    "invalid",
}

for _, input := range inputs {
    fmt.Println("Parsing:", input)
    allSubmatches := r.FindAllStringSubmatch(input, -1)
    if len(allSubmatches) == 0 {
        fmt.Println(" No match!")
    } else {
        parts := allSubmatches[0]
        fmt.Println(" Text:", parts[1])
        fmt.Println(" URL: ", parts[2])
    }
}
Output (try it on the Go Playground):
Parsing: [Some text](#some/link)
Text: Some text
URL: #some/link
Parsing: [What did I just commit?](#what-did-i-just-commit)
Text: What did I just commit?
URL: #what-did-i-just-commit
Parsing: invalid
No match!
You could also write a simple lexer in pure Go for this use case. There's a great talk by Rob Pike from years ago that goes into the design of the text/template lexer, which is applicable here. That implementation chains a series of state functions into an overall state machine and delivers the tokens over a channel (from a goroutine) for later processing; a rough sketch of the idea follows.
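A minimal sketch of that state-function pattern, specialised to this one link format (lexText and lexLink are illustrative names, and the channel/goroutine delivery from the talk is left out for brevity):
package main

import (
    "fmt"
    "strings"
)

type Entity struct {
    Statement string
    URL       string
}

type lexer struct {
    input    string
    pos      int
    entities []Entity
}

// A state function does some work and returns the next state (nil = done).
type stateFn func(*lexer) stateFn

func lexText(l *lexer) stateFn {
    if i := strings.Index(l.input[l.pos:], "["); i >= 0 {
        l.pos += i + 1 // skip past the opening bracket
        return lexLink
    }
    return nil // no more links
}

func lexLink(l *lexer) stateFn {
    rest := l.input[l.pos:]
    mid := strings.Index(rest, "](")
    if mid < 0 {
        return nil
    }
    end := strings.Index(rest[mid+2:], ")")
    if end < 0 {
        return nil
    }
    l.entities = append(l.entities, Entity{
        Statement: rest[:mid],
        URL:       rest[mid+2 : mid+2+end],
    })
    l.pos += mid + 2 + end + 1
    return lexText
}

func parse(input string) []Entity {
    l := &lexer{input: input}
    for state := lexText; state != nil; state = state(l) {
    }
    return l.entities
}

func main() {
    src := "- [What did I just commit?](#what-did-i-just-commit)\n" +
        "- [Delete/remove arbitrary commit](#deleteremove-arbitrary-commit)"
    for _, e := range parse(src) {
        fmt.Printf("%q -> %q\n", e.Statement, e.URL)
    }
}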

Any way to figure out what language a certain file is in?

If I have an arbitrary file sent to me, using Node.js, how can I figure out what language it's in? It could be a PHP file, HTML, HTML with inline JavaScript, JavaScript, C++ and so on, given that each of these languages is unique but shares some syntax with the others.
Are there any packages or concepts available to figure out what programming language a particular file is written in?
You'd need to get the file's extension. Are you receiving this file with its name, including the extension, or just the raw file? There is no way to tell unless you either get the file name with the extension, or scan the directory it's uploaded to, grab the file names via a directory listing, and loop through them all. Node has file system abilities, so both options work. Either way you need the file's name, with extension, saved in a variable or array. Depending on how you handle this you can build a map of file types keyed by extension; optionally you can try the node.js mime package.
Example:
var fileExtensions = { h: "C/C++ header", php: "PHP file", jar: "Java archive" };
You can either split the string that contains the file's name using split(), or use indexOf() with substring().
Split Example:
var fileName = "hey.h"; // C/C++/OBJ-C header
var fileParts = fileName.split(".");
// result would be...
// fileParts[0] = "hey";
// fileParts[1] = "h";
Now you can look the extension up to see what it is and return the description you set in the object literal. Alternatively you can use a 2D array with a for loop over the numeric index, checking the first element for the extension and returning the second element (index 1).
indexOf Example:
var fileName = "hey.h";
var delimiter = ".";
var extension = fileName.substring(fileName.indexOf(delimiter) + 1, fileName.length); // "h"
Now look the extension up in the object and return the matching description; a consolidated sketch follows.
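A small consolidated sketch of that lookup (the map contents and names are illustrative; this only detects by extension, as described above):
var fileExtensions = { h: "C/C++ header", php: "PHP file", jar: "Java archive", js: "JavaScript" };

function describeFile(fileName) {
    var dot = fileName.lastIndexOf(".");
    if (dot === -1) return "unknown (no extension)";
    var extension = fileName.substring(dot + 1).toLowerCase();
    return fileExtensions[extension] || "unknown extension: " + extension;
}

console.log(describeFile("hey.h"));     // C/C++ header
console.log(describeFile("index.php")); // PHP file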

Unicode problem with CStdioFile in VC++

I am trying to read in a URL such as http://google.com
The URL is opened fine, but as soon as I read the first line in the while(...) loop, instead of getting sensible characters representing HTML I get weird Chinese characters in sCurlLine, which is a CString. I think I am missing a Unicode encoding/decoding step.
The following is the simple code that reads a URL. The while loop reads the file line by line, and the text is then shown in a text box.
Thanks for the help
void CInetSessionDlg::OnBnClickedBurl()
{
    CStdioFile * fpUrlFile;
    CString sCurlLine;

    UpdateData(TRUE);
    LPCTSTR url = m_sURL;
    fpUrlFile = m_misSession.OpenURL(url);

    if (fpUrlFile)
    {
        while (fpUrlFile->ReadString(sCurlLine))
        {
            m_sResult += sCurlLine;
            UpdateData(FALSE);
        }
    }
}
Check that your build is configured with the correct project character-set setting.
The setting is at: Project Properties | General | Project Defaults | Character Set
Maybe you have the wrong one selected: "Not Set" vs. "Use Unicode Character Set".
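If the setting is already correct, the raw response bytes may still need an explicit conversion, since most pages are sent as UTF-8. A minimal sketch of that idea, assuming a Unicode build and a UTF-8 page (it reads raw bytes instead of ReadString and converts them with MultiByteToWideChar; an illustration, not a drop-in fix):
void CInetSessionDlg::OnBnClickedBurl()
{
    UpdateData(TRUE);

    CStdioFile* fpUrlFile = m_misSession.OpenURL(m_sURL);
    if (fpUrlFile)
    {
        // Read the raw response bytes instead of using ReadString().
        CStringA rawBytes;
        char buffer[4096];
        UINT read;
        while ((read = fpUrlFile->Read(buffer, sizeof(buffer))) > 0)
            rawBytes.Append(buffer, (int)read);

        // Convert the UTF-8 bytes to the wide CString of a Unicode build.
        int wideLen = MultiByteToWideChar(CP_UTF8, 0, rawBytes, rawBytes.GetLength(), NULL, 0);
        if (wideLen > 0)
        {
            CString text;
            MultiByteToWideChar(CP_UTF8, 0, rawBytes, rawBytes.GetLength(),
                                text.GetBuffer(wideLen), wideLen);
            text.ReleaseBuffer(wideLen);
            m_sResult += text;
        }
        UpdateData(FALSE);

        fpUrlFile->Close();
        delete fpUrlFile;
    }
}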

How to encode the filename parameter of Content-Disposition header in HTTP?

Web applications that want to force a resource to be downloaded rather than directly rendered in a Web browser issue a Content-Disposition header in the HTTP response of the form:
Content-Disposition: attachment; filename=FILENAME
The filename parameter can be used to suggest a name for the file into which the resource is downloaded by the browser. RFC 2183 (Content-Disposition), however, states in section 2.3 (The Filename Parameter) that the file name can only use US-ASCII characters:
Current [RFC 2045] grammar restricts parameter values (and hence Content-Disposition filenames) to US-ASCII. We recognize the great desirability of allowing arbitrary character sets in filenames, but it is beyond the scope of this document to define the necessary mechanisms.
There is empirical evidence, nevertheless, that most popular Web browsers today seem to permit non-US-ASCII characters yet (for the lack of a standard) disagree on the encoding scheme and character set specification of the file name. The question, then, is: what are the various schemes and encodings employed by the popular browsers if the file name “naïvefile” (without quotes and where the third letter is U+00EF) needs to be encoded into the Content-Disposition header?
For the purpose of this question, popular browsers being:
Google Chrome
Safari
Internet Explorer or Edge
Firefox
Opera
I know this is an old post but it is still very relevant. I have found that modern browsers support RFC 5987, which allows UTF-8 encoding, percent-encoded (URL-encoded). Then Naïve file.txt becomes:
Content-Disposition: attachment; filename*=UTF-8''Na%C3%AFve%20file.txt
Safari (5) does not support this. Instead you should use the Safari standard of writing the file name directly in your utf-8 encoded header:
Content-Disposition: attachment; filename=Naïve file.txt
IE8 and older don't support it either, and you need to use the IE convention of UTF-8 encoding, percent-encoded:
Content-Disposition: attachment; filename=Na%C3%AFve%20file.txt
In ASP.NET I use the following code:
string contentDisposition;
if (Request.Browser.Browser == "IE" && (Request.Browser.Version == "7.0" || Request.Browser.Version == "8.0"))
    contentDisposition = "attachment; filename=" + Uri.EscapeDataString(fileName);
else if (Request.Browser.Browser == "Safari")
    contentDisposition = "attachment; filename=" + fileName;
else
    contentDisposition = "attachment; filename*=UTF-8''" + Uri.EscapeDataString(fileName);
Response.AddHeader("Content-Disposition", contentDisposition);
I tested the above using IE7, IE8, IE9, Chrome 13, Opera 11, FF5, Safari 5.
Update November 2013:
Here is the code I currently use. I still have to support IE8, so I cannot get rid of the first part. It turns out that browsers on Android use the built-in Android download manager, which cannot reliably parse file names in the standard way.
string contentDisposition;
if (Request.Browser.Browser == "IE" && (Request.Browser.Version == "7.0" || Request.Browser.Version == "8.0"))
    contentDisposition = "attachment; filename=" + Uri.EscapeDataString(fileName);
else if (Request.UserAgent != null && Request.UserAgent.ToLowerInvariant().Contains("android")) // android built-in download manager (all browsers on android)
    contentDisposition = "attachment; filename=\"" + MakeAndroidSafeFileName(fileName) + "\"";
else
    contentDisposition = "attachment; filename=\"" + fileName + "\"; filename*=UTF-8''" + Uri.EscapeDataString(fileName);
Response.AddHeader("Content-Disposition", contentDisposition);
The above now tested in IE7-11, Chrome 32, Opera 12, FF25, Safari 6, using this filename for download: 你好abcABCæøåÆØÅäöüïëêîâéíáóúýñ½§!#¤%&()=`#£$€{[]}+´¨^~'-_,;.txt
On IE7 it works for some characters but not all. But who cares about IE7 nowadays?
This is the function I use to generate safe file names for Android. Note that I don't know exactly which characters are supported on Android, but I have tested that these work for sure:
private static readonly Dictionary<char, char> AndroidAllowedChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-+,#£$€!½§~'=()[]{}0123456789".ToDictionary(c => c);
private string MakeAndroidSafeFileName(string fileName)
{
    char[] newFileName = fileName.ToCharArray();
    for (int i = 0; i < newFileName.Length; i++)
    {
        if (!AndroidAllowedChars.ContainsKey(newFileName[i]))
            newFileName[i] = '_';
    }
    return new string(newFileName);
}
@TomZ: I tested in IE7 and IE8 and it turned out that I did not need to escape the apostrophe ('). Do you have an example where it fails?
@Dave Van den Eynde: Combining the two file names on one line as per RFC 6266 works, except for Android and IE7+8, and I have updated the code to reflect this. Thank you for the suggestion.
@Thilo: No idea about GoodReader or any other non-browser. You might have some luck using the Android approach.
@Alex Zhukovskiy: I don't know why, but as discussed on Connect it doesn't seem to work terribly well.
There is no interoperable way to encode non-ASCII names in Content-Disposition. Browser compatibility is a mess.
The theoretically correct syntax for use of UTF-8 in Content-Disposition is very weird: filename*=UTF-8''foo%c3%a4 (yes, that's an asterisk, and no quotes except an empty single quote in the middle)
This header is kinda-not-quite-standard (HTTP/1.1 spec acknowledges its existence, but doesn't require clients to support it).
There is a simple and very robust alternative: use a URL that contains the filename you want.
When the name after the last slash is the one you want, you don't need any extra headers!
This trick works:
/real_script.php/fake_filename.doc
And if your server supports URL rewriting (e.g. mod_rewrite in Apache) then you can fully hide the script part.
Characters in URLs should be in UTF-8, urlencoded byte-by-byte:
/mot%C3%B6rhead # motörhead
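A minimal sketch of what real_script.php could look like under this trick (the file path and header values here are illustrative assumptions, not part of the original answer):
<?php
// The browser requests /real_script.php/fake_filename.doc; everything after
// the script name is only there so the browser has a nice name to save as.
// $realPath is an assumption - point it at the file you actually serve.
$realPath = '/var/data/report.doc';

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($realPath));
// No filename parameter needed: the last URL segment is the suggested name.
header('Content-Disposition: attachment');
readfile($realPath);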
There is discussion of this, including links to browser testing and backwards compatibility, in the proposed RFC 5987, "Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters."
RFC 2183 indicates that such headers should be encoded according to RFC 2184, which was obsoleted by RFC 2231, covered by the draft RFC above.
RFC 6266 describes the “Use of the Content-Disposition Header Field in the Hypertext Transfer Protocol (HTTP)”. Quoting from that:
6. Internationalization Considerations
The “filename*” parameter (Section 4.3), using the encoding defined in [RFC5987], allows the server to transmit characters outside the ISO-8859-1 character set, and also to optionally specify the language in use.
And in their examples section:
This example is the same as the one above, but adding the "filename" parameter for compatibility with user agents not implementing RFC 5987:
Content-Disposition: attachment;
filename="EURO rates";
filename*=utf-8''%e2%82%ac%20rates
Note: Those user agents that do not support the RFC 5987 encoding ignore “filename*” when it occurs after “filename”.
In Appendix D there is also a long list of suggestions to increase interoperability. It also points at a site which compares implementations. Current all-pass tests suitable for common file names include:
attwithisofnplain: plain ISO-8859-1 file name with double quotes and without encoding. This requires a file name which is all ISO-8859-1 and does not contain percent signs, at least not in front of hex digits.
attfnboth: two parameters in the order described above. Should work for most file names on most browsers, although IE8 will use the “filename” parameter.
That RFC 5987 in turn references RFC 2231, which describes the actual format. 2231 is primarily for mail, and 5987 tells us what parts may be used for HTTP headers as well. Don't confuse this with MIME headers used inside a multipart/form-data HTTP body, which is governed by RFC 2388 (section 4.4 in particular) and the HTML 5 draft.
The following document, linked from the draft RFC mentioned by Jim in his answer, further addresses the question and is definitely worth a direct note here:
Test Cases for HTTP Content-Disposition header and RFC 2231/2047 Encoding
Put the file name in double quotes. Solved the problem for me. Like this:
Content-Disposition: attachment; filename="My Report.doc"
http://kb.mozillazine.org/Filenames_with_spaces_are_truncated_upon_download
I've tested multiple options. Browsers do not follow the specs and act differently; I believe double quotes are the best option.
I use the following code snippets for encoding (assuming fileName contains the file's name and extension, e.g. test.txt):
PHP:
if (strpos($_SERVER['HTTP_USER_AGENT'], "MSIE") > 0)
{
    header('Content-Disposition: attachment; filename="' . rawurlencode($fileName) . '"');
}
else
{
    header('Content-Disposition: attachment; filename*=UTF-8\'\'' . rawurlencode($fileName));
}
Java:
fileName = request.getHeader ( "user-agent" ).contains ( "MSIE" ) ? URLEncoder.encode ( fileName, "utf-8") : MimeUtility.encodeWord ( fileName );
response.setHeader ( "Content-disposition", "attachment; filename=\"" + fileName + "\"");
In ASP.NET MVC 2 I use something like this:
return File(
tempFile
, "application/octet-stream"
, HttpUtility.UrlPathEncode(fileName)
);
I guess if you don't use mvc(2) you could just encode the filename using
HttpUtility.UrlPathEncode(fileName)
In ASP.NET Web API, I URL-encode the filename:
public static class HttpRequestMessageExtensions
{
    public static HttpResponseMessage CreateFileResponse(this HttpRequestMessage request, byte[] data, string filename, string mediaType)
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);

        var stream = new MemoryStream(data);
        stream.Position = 0;
        response.Content = new StreamContent(stream);
        response.Content.Headers.ContentType =
            new MediaTypeHeaderValue(mediaType);

        // URL-Encode filename
        // Fixes behavior in IE, that filenames with non US-ASCII characters
        // stay correct (not "_utf-8_.......=_=").
        var encodedFilename = HttpUtility.UrlEncode(filename, Encoding.UTF8);
        response.Content.Headers.ContentDisposition =
            new ContentDispositionHeaderValue("attachment") { FileName = encodedFilename };

        return response;
    }
}
In PHP this did it for me (assuming the filename is UTF-8 encoded):
header('Content-Disposition: attachment;'
. 'filename="' . addslashes(utf8_decode($filename)) . '";'
. 'filename*=utf-8\'\'' . rawurlencode($filename));
Tested against IE8-11, Firefox and Chrome.
If the browser can interpret filename*=utf-8 it will use the UTF-8 version of the filename, otherwise it will use the decoded filename. If your filename contains characters that can't be represented in ISO-8859-1, you might want to consider using iconv instead.
Just an update, since I was trying all this stuff today in response to a customer issue.
With the exception of Safari configured for Japanese, all browsers our customer tested worked best with filename=text.pdf, where text is a customer value serialized by ASP.NET/IIS in UTF-8 without URL encoding. For some reason, Safari configured for English would accept and properly save a file with a UTF-8 Japanese name, but the same browser configured for Japanese would save the file with the UTF-8 characters uninterpreted. All other browsers tested seemed to work best/fine (regardless of language configuration) with the filename UTF-8 encoded without URL encoding.
I could not find a single browser implementing RFC 5987/8187 at all. I tested with the latest Chrome and Firefox builds plus IE 11 and Edge. I tried setting the header with just filename*=utf-8''texturlencoded.pdf, and setting it with both filename=text.pdf; filename*=utf-8''texturlencoded.pdf. Not one feature of RFC 5987/8187 appeared to be processed correctly in any of the above.
If you are using a Node.js backend, you can use the following code I found here:
var fileName = 'my file(2).txt';
var header = "Content-Disposition: attachment; filename*=UTF-8''"
+ encodeRFC5987ValueChars(fileName);
function encodeRFC5987ValueChars (str) {
return encodeURIComponent(str).
// Note that although RFC3986 reserves "!", RFC5987 does not,
// so we do not need to escape it
replace(/['()]/g, escape). // i.e., %27 %28 %29
replace(/\*/g, '%2A').
// The following are not required for percent-encoding per RFC5987,
// so we can allow for a little better readability over the wire: |`^
replace(/%(?:7C|60|5E)/g, unescape);
}
I tested the following code in all major browsers, including older Internet Explorer versions (via compatibility mode), and it works well everywhere:
$filename = $_GET['file']; //this string from $_GET is already decoded
if (strstr($_SERVER['HTTP_USER_AGENT'],"MSIE"))
$filename = rawurlencode($filename);
header('Content-Disposition: attachment; filename="'.$filename.'"');
I ended up with the following code in my "download.php" script (based on this blogpost and these test cases).
$il1_filename = utf8_decode($filename);
$to_underscore = "\"\\#*;:|<>/?";
$safe_filename = strtr($il1_filename, $to_underscore, str_repeat("_", strlen($to_underscore)));
header("Content-Disposition: attachment; filename=\"$safe_filename\""
.( $safe_filename === $filename ? "" : "; filename*=UTF-8''".rawurlencode($filename) ));
This uses the standard filename="..." form as long as only ISO Latin-1 and "safe" characters are used; if not, it adds the filename*=UTF-8'' URL-encoded form. According to this specific test case, it should work from MSIE9 up and on recent FF, Chrome and Safari; on lower MSIE versions, it should offer a filename containing the ISO-8859-1 version of the name, with underscores for characters not in that encoding.
Final note: the maximum size for each header field is 8190 bytes on Apache. UTF-8 can be up to four bytes per character; after rawurlencode that is tripled, so up to 12 bytes per character. Pretty inefficient, but it should still be theoretically possible to have more than 600 "smileys" (%F0%9F%98%81) in the filename.
From .NET 4.5 (and Core 1.0) you can use ContentDispositionHeaderValue to do the formatting for you.
var fileName = "Naïve file.txt";
var h = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
h.FileNameStar = fileName;
h.FileName = "fallback-ascii-name.txt";
Response.Headers.Add("Content-Disposition", h.ToString());
h.ToString() will result in:
attachment; filename*=utf-8''Na%C3%AFve%20file.txt; filename=fallback-ascii-name.txt
The PHP framework Symfony 4 has a $filenameFallback parameter in HeaderUtils::makeDisposition.
You can look into this function for details - it is similar to the answers above.
Usage example:
$filenameFallback = preg_replace('#^.*\.#', md5($filename) . '.', $filename);
$disposition = $response->headers->makeDisposition(ResponseHeaderBag::DISPOSITION_ATTACHMENT, $filename, $filenameFallback);
$response->headers->set('Content-Disposition', $disposition);
For those who need a JavaScript way of encoding the header, I found that this function works well:
function createContentDispositionHeader(filename: string) {
const encoded = encodeURIComponent(filename);
return `attachment; filename*=UTF-8''${encoded}; filename="${encoded}"`;
}
This is based on what Nextcloud seems to be doing when downloading a file. The filename appears first as UTF-8 encoded, and possibly for compatibility with some browsers, the filename also appears without the UTF-8 prefix.
Classic ASP Solution
Most modern browsers support passing the filename as UTF-8 now, but as was the case with a file upload solution I use that was based on FreeASPUpload.Net (the site no longer exists; the link points to archive.org), it wouldn't work, because the parsing of the binary relied on reading single-byte ASCII-encoded strings. That worked fine when you passed UTF-8 encoded data, until you got to characters ASCII doesn't support.
However, I was able to find a solution that lets the code read and parse the binary as UTF-8.
Public Function BytesToString(bytes) 'UTF-8..
    Dim bslen
    Dim i, k, N
    Dim b, count
    Dim str
    bslen = LenB(bytes)
    str = ""
    i = 0
    Do While i < bslen
        b = AscB(MidB(bytes, i + 1, 1))
        If (b And &HFC) = &HFC Then
            count = 6
            N = b And &H1
        ElseIf (b And &HF8) = &HF8 Then
            count = 5
            N = b And &H3
        ElseIf (b And &HF0) = &HF0 Then
            count = 4
            N = b And &H7
        ElseIf (b And &HE0) = &HE0 Then
            count = 3
            N = b And &HF
        ElseIf (b And &HC0) = &HC0 Then
            count = 2
            N = b And &H1F
        Else
            count = 1
            str = str & Chr(b)
        End If
        If i + count - 1 > bslen Then
            str = str & "?"
            Exit Do
        End If
        If count > 1 Then
            For k = 1 To count - 1
                b = AscB(MidB(bytes, i + k + 1, 1))
                N = N * &H40 + (b And &H3F)
            Next
            str = str & ChrW(N)
        End If
        i = i + count
    Loop
    BytesToString = str
End Function
Credit goes to Pure ASP File Upload: by implementing the BytesToString() function from include_aspuploader.asp in my own code, I was able to get UTF-8 filenames working.
Useful Links
Multipart/form-data and UTF-8 in a ASP Classic application
Unicode, UTF, ASCII, ANSI format differences
The method mimeHeaderEncode($string) from the library class Unicode does the job.
$file_name = Unicode::mimeHeaderEncode($file_name);
Example in Drupal/PHP:
https://github.com/drupal/core-utility/blob/8.8.x/Unicode.php
/**
 * Encodes MIME/HTTP headers that contain incorrectly encoded characters.
 *
 * For example, Unicode::mimeHeaderEncode('tést.txt') returns
 * "=?UTF-8?B?dMOpc3QudHh0?=".
 *
 * See http://www.rfc-editor.org/rfc/rfc2047.txt for more information.
 *
 * Notes:
 * - Only encode strings that contain non-ASCII characters.
 * - We progressively cut-off a chunk with self::truncateBytes(). This ensures
 *   each chunk starts and ends on a character boundary.
 * - Using \n as the chunk separator may cause problems on some systems and
 *   may have to be changed to \r\n or \r.
 *
 * @param string $string
 *   The header to encode.
 * @param bool $shorten
 *   If TRUE, only return the first chunk of a multi-chunk encoded string.
 *
 * @return string
 *   The mime-encoded header.
 */
public static function mimeHeaderEncode($string, $shorten = FALSE) {
  if (preg_match('/[^\x20-\x7E]/', $string)) {
    // floor((75 - strlen("=?UTF-8?B??=")) * 0.75);
    $chunk_size = 47;
    $len = strlen($string);
    $output = '';
    while ($len > 0) {
      $chunk = static::truncateBytes($string, $chunk_size);
      $output .= ' =?UTF-8?B?' . base64_encode($chunk) . "?=\n";
      if ($shorten) {
        break;
      }
      $c = strlen($chunk);
      $string = substr($string, $c);
      $len -= $c;
    }
    return trim($output);
  }
  return $string;
}
We had a similar problem in a web application, and ended up reading the filename from the HTML <input type="file"> and setting it, URL-encoded, in a new HTML <input type="hidden">. Of course we had to remove the path prefix like "C:\fakepath\" that is returned by some browsers.
Of course this does not directly answer the OP's question, but it may be a solution for others.
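A rough sketch of that client-side workaround (the element IDs and the fakepath handling are illustrative assumptions, not the original application's code):
var fileInput = document.getElementById("file");
var hiddenInput = document.getElementById("filename");

fileInput.addEventListener("change", function () {
    // Strip the "C:\fakepath\" prefix some browsers prepend, keep only the
    // file name, and store it URL-encoded so the server receives it intact.
    var name = fileInput.value.replace(/^.*[\\\/]/, "");
    hiddenInput.value = encodeURIComponent(name);
});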
I normally URL-encode (with %xx) the filenames, and it seems to work in all browsers. You might want to do some tests anyway.
