I am sending an email using sockets. I want to separate the plain text from the attachment, so I use MIME multipart/mixed with a boundary.
cSockSSL.send("MIME-Version: 1.0\r\n".encode())
cSockSSL.send("Content-Type: multipart/mixed; boundary=gg4g5gg\r\n".encode())
cSockSSL.send("--gg4g5gg".encode('ascii'))
cSockSSL.send("Content-Type: text/plain\r\n".encode())
cSockSSL.send("Some text".encode())
cSockSSL.send("--gg4g5gg".encode())
cSockSSL.send("Content-Type: text/plain\r\n".encode())
cSockSSL.send("Content-Disposition: attachment; filename = gg.txt\r\n".encode())
cSockSSL.send(txt_file)
cSockSSL.send("--gg4g5gg--".encode())
cSockSSL.send("\r\n.\r\n".encode())
In this case, I get an empty email with only the headers. If I delete the first boundary, I get this:
Some text--gg4g5ggContent-Type: text/plain
Content-Disposition: attachment; filename = gg.txt
Hey! I'm txt file!--gg4g5gg--
How do I correctly separate the parts and their content types?
Your email data is malformed, as you are missing several required line breaks.
Per RFC 2822, you need to separate the email's headers from the email's body with a blank line, i.e. \r\n\r\n rather than a single \r\n.
And per RFC 2045, you likewise need to separate each MIME part's headers from its body with \r\n\r\n instead of \r\n. You are also missing a \r\n in front of each MIME boundary that follows a text body, and after each MIME boundary.
So, essentially, you are sending an email that looks like this, all squished together:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=gg4g5gg
--gg4g5ggContent-Type: text/plain
Some text--gg4g5ggContent-Type: text/plain
Content-Disposition: attachment; filename = gg.txt
<txt_file>--gg4g5gg--
.
But it needs to look more like this instead:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=gg4g5gg
--gg4g5gg
Content-Type: text/plain
Some text
--gg4g5gg
Content-Type: text/plain
Content-Disposition: attachment; filename = gg.txt
<txt_file>
--gg4g5gg--
.
So, try this instead:
cSockSSL.send("DATA\r\n".encode())
# verify response is 354...
cSockSSL.send("MIME-Version: 1.0\r\n".encode())
cSockSSL.send("Content-Type: multipart/mixed; boundary=gg4g5gg\r\n".encode())
cSockSSL.send("\r\n".encode()) # add a line break
cSockSSL.send("--gg4g5gg\r\n".encode('ascii')) # add a line break
cSockSSL.send("Content-Type: text/plain\r\n".encode())
cSockSSL.send("\r\n".encode()) # add a line break
cSockSSL.send("Some text".encode())
cSockSSL.send("\r\n--gg4g5gg\r\n".encode()) # add line breaks
cSockSSL.send("Content-Type: text/plain\r\n".encode())
cSockSSL.send("Content-Disposition: attachment; filename=gg.txt\r\n".encode())
cSockSSL.send("\r\n".encode()) # add a line break
cSockSSL.send(txt_file)
cSockSSL.send("\r\n--gg4g5gg--\r\n".encode()) # add line breaks
cSockSSL.send(".\r\n".encode())
Something else to be aware of - because the DATA command terminator is .\r\n, if "Some text" or txt_file contain any lines that begin with the . character, you MUST escape each leading . as .., per RFC 2821 Section 4.5.2.
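If it helps, here is a minimal sketch of that dot-stuffing step in Python; the dot_stuff helper is just an illustration (not part of your code), assuming the part bodies are CRLF-delimited strings:
def dot_stuff(body):
    """Escape leading dots per RFC 2821/5321 section 4.5.2 (dot-stuffing).

    Inside SMTP DATA, any line that starts with '.' must be sent with an
    extra leading '.', otherwise the server may treat it as the end-of-data
    marker (or silently strip the dot on delivery).
    """
    return "\r\n".join(
        "." + line if line.startswith(".") else line
        for line in body.split("\r\n")
    )

# e.g. apply it to each body before sending:
# cSockSSL.send(dot_stuff("Some text\r\n.this line needs escaping").encode())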
I suggest you study the RFCs mentioned above more carefully.
Hi, I am new to ANTLR and have a problem that I have not been able to solve over the last few days:
I wanted to write a grammar that recognizes this text (in reality I want to parse something different, but for this question I have simplified it):
100abc
150100
200def
Here each row starts with 3 digits that identify the type of the line (header, content, trailer), followed by 3 characters that are the payload of the line.
I thought I could recognize this with this grammar:
grammar Types;
file : header content trailer;
A : [a-z|A-Z|0-9];
NL: '\n';
header : '100' A A A NL;
content: '150' A A A NL;
trailer: '200' A A A NL;
But this does not work. When the lexer reads the "100" in the second line ("150100"), it reads it as one token with 100 as the value and not as three tokens of type A. So the parser sees a "100" token where it expects an A token.
I am pretty sure that this happens because the lexer wants to match the longest phrase for one token, so it clusters together the '1', '0', '0'. I found no way to solve this. Putting the rule A above the parser rule that contains the string literal '100' did not work. And factoring the '100' out into its own lexer rule, as follows, did not work either.
grammar Types;
file : header content trailer;
A : [a-z|A-Z|0-9];
NL: '\n';
HUNDRED: '100';
header : HUNDRED A A A NL;
content: '150' A A A NL;
trailer: '200' A A A NL;
I also read some other posts like this:
antlr4 mixed fragments in tokens
Lexer, overlapping rule, but want the shorter match
But I do not think they solve my problem, or at least I don't see how they could help me.
One of your token definitions is incorrect: A : [a-z|A-Z|0-9];. Don't use vertical bars inside a [...] character set. A correct definition is A : [a-zA-Z0-9];. ANTLR 4.6 and later will warn about the duplicated | characters inside the set.
As I understand it, you have also mixed up the concepts of tokens and parser rules. Tokens are defined with an uppercase first letter, unlike parser rules, which are defined with a lowercase first letter. Your header, content and trailer are really tokens, not parser rules.
So, in my opinion, the final version of the correct grammar is:
grammar Types;
file : Header Content Trailer;
A : [a-zA-Z0-9];
NL: '\r' '\n'? | '\n' | EOF; // Or leave only one type of newline.
Header : '100' A A A NL;
Content: '150' A A A NL;
Trailer: '200' A A A NL;
Your input text will then be parsed as (file 100abc\n 150100\n 200def).
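If you want to see the parse result programmatically, here is a rough sketch of a test driver using ANTLR's Python 3 target; it assumes you generated TypesLexer/TypesParser with antlr4 -Dlanguage=Python3 Types.g4 and installed antlr4-python3-runtime, and the exact name of the generated entry-rule method may differ:
from antlr4 import InputStream, CommonTokenStream
from TypesLexer import TypesLexer      # generated by ANTLR
from TypesParser import TypesParser    # generated by ANTLR

text = "100abc\n150100\n200def\n"
lexer = TypesLexer(InputStream(text))
parser = TypesParser(CommonTokenStream(lexer))
tree = parser.file()  # entry rule; ANTLR may rename it (e.g. file_) if it clashes with a reserved word
print(tree.toStringTree(recog=parser))  # e.g. (file 100abc\n 150100\n 200def\n)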
I am trying to deal with a url string
http://localhost:8000/lastnames/location/city/215722?filter=beginswith:p&paging=(offset:2,limit:2)
How do I handle parsing out those sub-objects? The (offset:2,limit:2) part just gets parsed out as a string. These are accepted delimiters in the URL spec, so I thought something like url.parse (in Node) would handle this.
The "URL specification" (actually, "Uniform Resource Identifier (URI): Generic Syntax, RFC-3986" defines the syntax of the query component to be:
query = *( pchar / "/" / "?" )
pchar = unreserved / pct-encoded / sub-delims / ":" / "#"
In other words, sub-delims (which include parentheses) and colons really are just ordinary characters in a query.
If requested (by passing true as its second argument), url.parse will also split the query into key-value assignments using the sub-delims = and &, as per the application/x-www-form-urlencoded format. Other sub-delims are not involved in query string encoding.
Note that url.parse doesn't decode pct-encoded sequences such as %25; for that, you need decodeURIComponent. That should be done only after the components are fully broken down into their parts.
In short, if you want to parse (offset:2,limit:2) into some other structured object, you'll need to do that yourself, possibly by using regexes or -- if the format is complicated enough -- a parser generator like jison. In any event, you should leave the percent-decoding step until the very end; otherwise, percent-encoded sub-delims won't be parsed correctly.
For my route I set:
String encoding = "iso-8859-1";
JaxbDataFormat jaxb = new JaxbDataFormat( Data.class.getPackage().getName() );
if( encoding != null) {
jaxb.setEncoding( encoding );
}
from( "file://" + location + "?charset=" + encoding )
.routeId(this.getClass().getSimpleName()) // Give a nice name
. etc.
Then, when I provide the file in this ISO encoding, I get an exception stack:
[com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 start byte 0xfc (at char #3964, byte #127)]
java.io.IOException: javax.xml.bind.UnmarshalException
- with linked exception:
[com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 start byte 0xfc (at char #3964, byte #127)]
at org.apache.camel.converter.jaxb.JaxbDataFormat.unmarshal(JaxbDataFormat.java:153)
at org.apache.camel.processor.UnmarshalProcessor.process(UnmarshalProcessor.java:57)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:73)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:91)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
What could I be doing wrong?
According to the Camel doc, jaxb.setEncoding will not help, as this parameter is only used when marshalling XML documents but not when unmarshalling them.
In an ideal world, the encoding declaration in the prolog (the first magic line in an XML file) matches the actual encoding of the file:
<?xml version="1.0" encoding="ISO-8859-1"?>
This information is (or at least should be) used automatically by file-reading utilities such as JAXB.
0xfc is an ISO-8859-1 encoded ü. In your case, check the prolog's encoding declaration. If it doesn't say ISO-8859-1, it is wrong. Ask the producer of the file (I hope it wasn't you...) to set the declaration accordingly. Normally, this is done correctly by the XML marshalling framework.
If you cannot convince the producer of the file to set the correct declaration, then things get trickier. In this case, you must know or guess the encoding and set the Camel header accordingly in the route:
.setHeader(Exchange.CHARSET_NAME, "ISO-8859-1")
According to the source code of JaxbDataFormat (here), this encoding is only taken into account if the filterNonXmlChars property of the JaxbDataFormat instance is set to true:
jaxb.setFilterNonXmlChars(true);
Alternatively, you may also set the Exchange.FILTER_NON_XML_CHARS property to true.
Web applications that want to force a resource to be downloaded rather than directly rendered in a Web browser issue a Content-Disposition header in the HTTP response of the form:
Content-Disposition: attachment; filename=FILENAME
The filename parameter can be used to suggest a name for the file into which the resource is downloaded by the browser. RFC 2183 (Content-Disposition), however, states in section 2.3 (The Filename Parameter) that the file name can only use US-ASCII characters:
Current [RFC 2045] grammar restricts parameter values (and hence Content-Disposition filenames) to US-ASCII. We recognize the great desirability of allowing arbitrary character sets in filenames, but it is beyond the scope of this document to define the necessary mechanisms.
There is empirical evidence, nevertheless, that most popular Web browsers today seem to permit non-US-ASCII characters yet (for lack of a standard) disagree on the encoding scheme and character set specification of the file name. The question, then, is: what are the various schemes and encodings employed by the popular browsers if the file name “naïvefile” (without quotes and where the third letter is U+00EF) needs to be encoded into the Content-Disposition header?
For the purpose of this question, popular browsers being:
Google Chrome
Safari
Internet Explorer or Edge
Firefox
Opera
I know this is an old post, but it is still very relevant. I have found that modern browsers support RFC 5987, which allows UTF-8 encoding, percent-encoded (URL-encoded). Then Naïve file.txt becomes:
Content-Disposition: attachment; filename*=UTF-8''Na%C3%AFve%20file.txt
Safari (5) does not support this. Instead you should use the Safari standard of writing the file name directly in your UTF-8 encoded header:
Content-Disposition: attachment; filename=Naïve file.txt
IE8 and older don't support it either, and you need to use the IE standard of UTF-8 encoding, percent-encoded:
Content-Disposition: attachment; filename=Na%C3%AFve%20file.txt
In ASP.Net I use the following code:
string contentDisposition;
if (Request.Browser.Browser == "IE" && (Request.Browser.Version == "7.0" || Request.Browser.Version == "8.0"))
contentDisposition = "attachment; filename=" + Uri.EscapeDataString(fileName);
else if (Request.Browser.Browser == "Safari")
contentDisposition = "attachment; filename=" + fileName;
else
contentDisposition = "attachment; filename*=UTF-8''" + Uri.EscapeDataString(fileName);
Response.AddHeader("Content-Disposition", contentDisposition);
I tested the above using IE7, IE8, IE9, Chrome 13, Opera 11, FF5, Safari 5.
Update November 2013:
Here is the code I currently use. I still have to support IE8, so I cannot get rid of the first part. It turns out that browsers on Android use the built-in Android download manager, which cannot reliably parse file names in the standard way.
string contentDisposition;
if (Request.Browser.Browser == "IE" && (Request.Browser.Version == "7.0" || Request.Browser.Version == "8.0"))
contentDisposition = "attachment; filename=" + Uri.EscapeDataString(fileName);
else if (Request.UserAgent != null && Request.UserAgent.ToLowerInvariant().Contains("android")) // android built-in download manager (all browsers on android)
contentDisposition = "attachment; filename=\"" + MakeAndroidSafeFileName(fileName) + "\"";
else
contentDisposition = "attachment; filename=\"" + fileName + "\"; filename*=UTF-8''" + Uri.EscapeDataString(fileName);
Response.AddHeader("Content-Disposition", contentDisposition);
The above now tested in IE7-11, Chrome 32, Opera 12, FF25, Safari 6, using this filename for download: 你好abcABCæøåÆØÅäöüïëêîâéíáóúýñ½§!#¤%&()=`#£$€{[]}+´¨^~'-_,;.txt
On IE7 it works for some characters but not all. But who cares about IE7 nowadays?
This is the function I use to generate safe file names for Android. Note that I don't know which characters are supported on Android, but I have tested that these work for sure:
private static readonly Dictionary<char, char> AndroidAllowedChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-+,#£$€!½§~'=()[]{}0123456789".ToDictionary(c => c);
private string MakeAndroidSafeFileName(string fileName)
{
char[] newFileName = fileName.ToCharArray();
for (int i = 0; i < newFileName.Length; i++)
{
if (!AndroidAllowedChars.ContainsKey(newFileName[i]))
newFileName[i] = '_';
}
return new string(newFileName);
}
@TomZ: I tested in IE7 and IE8 and it turned out that I did not need to escape the apostrophe ('). Do you have an example where it fails?
@Dave Van den Eynde: Combining the two file names on one line, as per RFC 6266, works except for Android and IE7+8, and I have updated the code to reflect this. Thank you for the suggestion.
@Thilo: No idea about GoodReader or any other non-browser. You might have some luck using the Android approach.
@Alex Zhukovskiy: I don't know why, but as discussed on Connect it doesn't seem to work terribly well.
There is no interoperable way to encode non-ASCII names in Content-Disposition. Browser compatibility is a mess.
The theoretically correct syntax for use of UTF-8 in Content-Disposition is very weird: filename*=UTF-8''foo%c3%a4 (yes, that's an asterisk, and no quotes except an empty pair of single quotes in the middle).
This header is kinda-not-quite-standard (HTTP/1.1 spec acknowledges its existence, but doesn't require clients to support it).
There is a simple and very robust alternative: use a URL that contains the filename you want.
When the name after the last slash is the one you want, you don't need any extra headers!
This trick works:
/real_script.php/fake_filename.doc
And if your server supports URL rewriting (e.g. mod_rewrite in Apache) then you can fully hide the script part.
Characters in URLs should be in UTF-8, urlencoded byte-by-byte:
/mot%C3%B6rhead # motörhead
There is discussion of this, including links to browser testing and backwards compatibility, in the proposed RFC 5987, "Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters."
RFC 2183 indicates that such headers should be encoded according to RFC 2184, which was obsoleted by RFC 2231, covered by the draft RFC above.
RFC 6266 describes the “Use of the Content-Disposition Header Field in the Hypertext Transfer Protocol (HTTP)”. Quoting from that:
6. Internationalization Considerations
The “filename*” parameter (Section 4.3), using the encoding defined in [RFC5987], allows the server to transmit characters outside the ISO-8859-1 character set, and also to optionally specify the language in use.
And in their examples section:
This example is the same as the one above, but adding the "filename" parameter for compatibility with user agents not implementing RFC 5987:
Content-Disposition: attachment;
filename="EURO rates";
filename*=utf-8''%e2%82%ac%20rates
Note: Those user agents that do not support the RFC 5987 encoding ignore “filename*” when it occurs after “filename”.
In Appendix D there is also a long list of suggestions to increase interoperability. It also points at a site which compares implementations. Current all-pass tests suitable for common file names include:
attwithisofnplain: plain ISO-8859-1 file name with double quotes and without encoding. This requires a file name which is all ISO-8859-1 and does not contain percent signs, at least not in front of hex digits.
attfnboth: two parameters in the order described above. Should work for most file names on most browsers, although IE8 will use the “filename” parameter.
That RFC 5987 in turn references RFC 2231, which describes the actual format. 2231 is primarily for mail, and 5987 tells us what parts may be used for HTTP headers as well. Don't confuse this with MIME headers used inside a multipart/form-data HTTP body, which is governed by RFC 2388 (section 4.4 in particular) and the HTML 5 draft.
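To make the attfnboth suggestion above concrete, here is a small, framework-agnostic sketch in Python of building a header value with both parameters; the fallback logic (strip accents, drop remaining non-ASCII) is just an assumption you would adapt to your needs:
import unicodedata
from urllib.parse import quote

def content_disposition(filename):
    # Plain-ASCII fallback for old user agents: strip accents, drop the rest.
    fallback = (unicodedata.normalize("NFKD", filename)
                .encode("ascii", "ignore").decode("ascii")) or "download"
    fallback = fallback.replace('"', "").replace("\\", "")
    # RFC 5987 value: UTF-8, percent-encoded byte by byte.
    encoded = quote(filename, safe="")
    return 'attachment; filename="%s"; filename*=UTF-8\'\'%s' % (fallback, encoded)

print(content_disposition("Naïve file.txt"))
# attachment; filename="Naive file.txt"; filename*=UTF-8''Na%C3%AFve%20file.txt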
The following document, linked from the draft RFC mentioned by Jim in his answer, further addresses the question and is definitely worth a direct note here:
Test Cases for HTTP Content-Disposition header and RFC 2231/2047 Encoding
Put the file name in double quotes. That solved the problem for me. Like this:
Content-Disposition: attachment; filename="My Report.doc"
http://kb.mozillazine.org/Filenames_with_spaces_are_truncated_upon_download
I've tested multiple options. Browsers do not support the specs and act differently; I believe double quotes are the best option.
I use the following code snippets for encoding (assuming fileName contains the filename and extension of the file, e.g. test.txt):
PHP:
if ( strpos ( $_SERVER [ 'HTTP_USER_AGENT' ], "MSIE" ) > 0 )
{
header ( 'Content-Disposition: attachment; filename="' . rawurlencode ( $fileName ) . '"' );
}
else
{
header( 'Content-Disposition: attachment; filename*=UTF-8\'\'' . rawurlencode ( $fileName ) );
}
Java:
fileName = request.getHeader ( "user-agent" ).contains ( "MSIE" ) ? URLEncoder.encode ( fileName, "utf-8") : MimeUtility.encodeWord ( fileName );
response.setHeader ( "Content-disposition", "attachment; filename=\"" + fileName + "\"");
In ASP.NET MVC 2 I use something like this:
return File(
tempFile
, "application/octet-stream"
, HttpUtility.UrlPathEncode(fileName)
);
I guess if you don't use MVC (2) you could just encode the filename using
HttpUtility.UrlPathEncode(fileName)
In ASP.NET Web API, I url encode the filename:
public static class HttpRequestMessageExtensions
{
public static HttpResponseMessage CreateFileResponse(this HttpRequestMessage request, byte[] data, string filename, string mediaType)
{
HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
var stream = new MemoryStream(data);
stream.Position = 0;
response.Content = new StreamContent(stream);
response.Content.Headers.ContentType =
new MediaTypeHeaderValue(mediaType);
// URL-Encode filename
// Fixes behavior in IE, that filenames with non US-ASCII characters
// stay correct (not "_utf-8_.......=_=").
var encodedFilename = HttpUtility.UrlEncode(filename, Encoding.UTF8);
response.Content.Headers.ContentDisposition =
new ContentDispositionHeaderValue("attachment") { FileName = encodedFilename };
return response;
}
}
In PHP this did it for me (assuming the filename is UTF8 encoded):
header('Content-Disposition: attachment;'
. 'filename="' . addslashes(utf8_decode($filename)) . '";'
. 'filename*=utf-8\'\'' . rawurlencode($filename));
Tested against IE8-11, Firefox and Chrome.
If the browser can interpret filename*=utf-8, it will use the UTF-8 version of the filename; otherwise it will use the decoded filename. If your filename contains characters that can't be represented in ISO-8859-1, you might want to consider using iconv instead.
Just an update, since I was trying all this today in response to a customer issue.
With the exception of Safari configured for Japanese, all browsers our customer tested worked best with filename=text.pdf, where text is a customer value serialized by ASP.Net/IIS in UTF-8 without URL encoding. For some reason, Safari configured for English would accept and properly save a file with a UTF-8 Japanese name, but the same browser configured for Japanese would save the file with the UTF-8 characters uninterpreted. All other browsers tested seemed to work best/fine (regardless of language configuration) with the filename UTF-8 encoded without URL encoding.
I could not find a single browser implementing RFC 5987/8187 at all. I tested with the latest Chrome and Firefox builds plus IE 11 and Edge. I tried setting the header with just filename*=utf-8''texturlencoded.pdf, and with both filename=text.pdf; filename*=utf-8''texturlencoded.pdf. Not one feature of RFC 5987/8187 appeared to be processed correctly in any of the above.
If you are using a Node.js backend, you can use the following code, which I found here:
var fileName = 'my file(2).txt';
var header = "Content-Disposition: attachment; filename*=UTF-8''"
+ encodeRFC5987ValueChars(fileName);
function encodeRFC5987ValueChars (str) {
return encodeURIComponent(str).
// Note that although RFC3986 reserves "!", RFC5987 does not,
// so we do not need to escape it
replace(/['()]/g, escape). // i.e., %27 %28 %29
replace(/\*/g, '%2A').
// The following are not required for percent-encoding per RFC5987,
// so we can allow for a little better readability over the wire: |`^
replace(/%(?:7C|60|5E)/g, unescape);
}
I tested the following code in all major browsers, including older versions of Internet Explorer (via compatibility mode), and it works well everywhere:
$filename = $_GET['file']; //this string from $_GET is already decoded
if (strstr($_SERVER['HTTP_USER_AGENT'],"MSIE"))
$filename = rawurlencode($filename);
header('Content-Disposition: attachment; filename="'.$filename.'"');
I ended up with the following code in my "download.php" script (based on this blogpost and these test cases).
$il1_filename = utf8_decode($filename);
$to_underscore = "\"\\#*;:|<>/?";
$safe_filename = strtr($il1_filename, $to_underscore, str_repeat("_", strlen($to_underscore)));
header("Content-Disposition: attachment; filename=\"$safe_filename\""
.( $safe_filename === $filename ? "" : "; filename*=UTF-8''".rawurlencode($filename) ));
This uses the standard filename="..." way as long as only ISO-Latin-1 and "safe" characters are used; if not, it adds the filename*=UTF-8'' URL-encoded way. According to this specific test case, it should work from MSIE 9 up, and on recent FF, Chrome and Safari; on lower MSIE versions, it should offer a filename containing the ISO-8859-1 version of the name, with underscores for characters not in that encoding.
Final note: the maximum size of each header field is 8190 bytes on Apache. UTF-8 can take up to four bytes per character; after rawurlencode, each byte becomes three characters, so up to 12 bytes per original character. Pretty inefficient, but it should still be theoretically possible to have more than 600 "smileys" like %F0%9F%98%81 in the filename.
From .NET 4.5 (and Core 1.0) you can use ContentDispositionHeaderValue to do the formatting for you.
var fileName = "Naïve file.txt";
var h = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
h.FileNameStar = fileName;
h.FileName = "fallback-ascii-name.txt";
Response.Headers.Add("Content-Disposition", h.ToString());
h.ToString() will result in:
attachment; filename*=utf-8''Na%C3%AFve%20file.txt; filename=fallback-ascii-name.txt
The PHP framework Symfony 4 has $filenameFallback in HeaderUtils::makeDisposition.
You can look into this function for details; it is similar to the answers above.
Usage example:
$filenameFallback = preg_replace('#^.*\.#', md5($filename) . '.', $filename);
$disposition = $response->headers->makeDisposition(ResponseHeaderBag::DISPOSITION_ATTACHMENT, $filename, $filenameFallback);
$response->headers->set('Content-Disposition', $disposition);
For those who need a JavaScript way of encoding the header, I found that this function works well:
function createContentDispositionHeader(filename:string) {
const encoded = encodeURIComponent(filename);
return `attachment; filename*=UTF-8''${encoded}; filename="${encoded}"`;
}
This is based on what Nextcloud seems to be doing when downloading a file. The filename appears first as UTF-8 encoded, and possibly for compatibility with some browsers, the filename also appears without the UTF-8 prefix.
Classic ASP Solution
Most modern browsers support passing the filename as UTF-8 now, but as was the case with the file upload solution I use, which was based on FreeASPUpload.Net (the site no longer exists; the link points to archive.org), it wouldn't work, because the parsing of the binary relied on reading single-byte ASCII encoded strings. That worked fine when you passed UTF-8 encoded data, until you got to characters ASCII doesn't support.
However I was able to find a solution to get the code to read and parse the binary as UTF-8.
' Decodes a raw byte string as UTF-8 and returns a VBScript (UTF-16) string.
Public Function BytesToString(bytes) 'UTF-8..
Dim bslen
Dim i, k , N
Dim b , count
Dim str
bslen = LenB(bytes)
str=""
i = 0
Do While i < bslen
b = AscB(MidB(bytes,i+1,1))
If (b And &HFC) = &HFC Then
count = 6
N = b And &H1
ElseIf (b And &HF8) = &HF8 Then
count = 5
N = b And &H3
ElseIf (b And &HF0) = &HF0 Then
count = 4
N = b And &H7
ElseIf (b And &HE0) = &HE0 Then
count = 3
N = b And &HF
ElseIf (b And &HC0) = &HC0 Then
count = 2
N = b And &H1F
Else
count = 1
str = str & Chr(b)
End If
If i + count - 1 > bslen Then
str = str&"?"
Exit Do
End If
If count>1 then
For k = 1 To count - 1
b = AscB(MidB(bytes,i+k+1,1))
N = N * &H40 + (b And &H3F)
Next
str = str & ChrW(N)
End If
i = i + count
Loop
BytesToString = str
End Function
Credit goes to Pure ASP File Upload; by implementing the BytesToString() function from include_aspuploader.asp in my own code, I was able to get UTF-8 filenames working.
Useful Links
Multipart/form-data and UTF-8 in a ASP Classic application
Unicode, UTF, ASCII, ANSI format differences
The method mimeHeaderEncode($string) from the library class Unicode does the job.
$file_name= Unicode::mimeHeaderEncode($file_name);
Example in Drupal/PHP:
https://github.com/drupal/core-utility/blob/8.8.x/Unicode.php
/**
* Encodes MIME/HTTP headers that contain incorrectly encoded characters.
*
* For example, Unicode::mimeHeaderEncode('tést.txt') returns
* "=?UTF-8?B?dMOpc3QudHh0?=".
*
* See http://www.rfc-editor.org/rfc/rfc2047.txt for more information.
*
* Notes:
* - Only encode strings that contain non-ASCII characters.
* - We progressively cut-off a chunk with self::truncateBytes(). This ensures
* each chunk starts and ends on a character boundary.
* - Using \n as the chunk separator may cause problems on some systems and
* may have to be changed to \r\n or \r.
*
* @param string $string
* The header to encode.
* @param bool $shorten
* If TRUE, only return the first chunk of a multi-chunk encoded string.
*
* @return string
* The mime-encoded header.
*/
public static function mimeHeaderEncode($string, $shorten = FALSE) {
if (preg_match('/[^\x20-\x7E]/', $string)) {
// floor((75 - strlen("=?UTF-8?B??=")) * 0.75);
$chunk_size = 47;
$len = strlen($string);
$output = '';
while ($len > 0) {
$chunk = static::truncateBytes($string, $chunk_size);
$output .= ' =?UTF-8?B?' . base64_encode($chunk) . "?=\n";
if ($shorten) {
break;
}
$c = strlen($chunk);
$string = substr($string, $c);
$len -= $c;
}
return trim($output);
}
return $string;
}
We had a similar problem in a web application, and ended up reading the filename from the HTML <input type="file"> and setting it, URL-encoded, in a new HTML <input type="hidden">. Of course, we had to remove the path, like "C:\fakepath\", that is returned by some browsers.
Of course this does not directly answer the OP's question, but it may be a solution for others.
I normally URL-encode (with %xx) the filenames, and it seems to work in all browsers. You might want to do some tests anyway.