Consider the following Node code:
const bufA = Buffer.from('tést');
Inspecting bufA shows:
<Buffer 74 c3 a9 73 74>
Why were the four characters in the input translated to five bytes?
When you call Buffer.from(string), a couple of things happen:
The encoding defaults to UTF-8.
The JS string, which is internally stored as a sequence of UTF-16 code units, is encoded into UTF-8.
In UTF-8, é is a multi-byte character, like most accented characters from Latin scripts. Here's more information on how the character is encoded in different systems: https://www.fileformat.info/info/unicode/char/e9/index.htm
As you can see, the UTF-8 representation of this character is 0xC3 0xA9, which corresponds to the second and third bytes c3 a9 in your Buffer.
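You can reproduce the same count in any language with a UTF-8 encoder; for example, this small Python sketch (used here purely for illustration) shows the four characters becoming five bytes:
s = 'tést'             # four characters
b = s.encode('utf-8')  # five bytes, because é takes two
print(len(s), len(b))  # 4 5
print(b.hex(' '))      # 74 c3 a9 73 74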
This also means that, when decoding Buffers (e.g. when concatenating data coming in from a Stream), some characters may fall on a buffer boundary, and it may be impossible to decode the string until you have the remainder of the character (0xC3 on its own would be invalid). This is why code examples you find on the Web that do:
let result = '';
stream.on('data', function(buf) {
// BUG! Does not account for multi-byte characters.
result += buf.toString();
});
are almost always wrong, unless the Stream is already set up to handle the encoding itself (in which case you'll receive strings, not Buffers, when reading).
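To make the boundary problem concrete, here is a small sketch (in Python rather than Node, purely for illustration) that splits the UTF-8 bytes of 'Café' in the middle of the é; decoding each chunk on its own fails, while an incremental decoder buffers the dangling byte until the rest of the character arrives:
import codecs

chunks = [b'Caf\xc3', b'\xa9']   # 'Café' split in the middle of the é

# Naive per-chunk decoding fails on the lone 0xC3 byte.
try:
    ''.join(chunk.decode('utf-8') for chunk in chunks)
except UnicodeDecodeError as err:
    print('naive decode failed:', err)

# An incremental decoder holds on to the partial character between chunks.
decoder = codecs.getincrementaldecoder('utf-8')()
text = ''.join(decoder.decode(chunk) for chunk in chunks)
text += decoder.decode(b'', final=True)   # flush; raises if bytes are left over
print(text)                               # Café
In Node, the equivalent fix is the string_decoder module or calling stream.setEncoding('utf8'), which is the "set up for handling encoding itself" case mentioned above.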
Related
In this RFC, https://www.rfc-editor.org/rfc/rfc7616#page-19, there's an example on page 19 of text encoded in UTF-8:
J U+00E4 s U+00F8 n D o e
4A C3A4 73 C3B8 6E 20 44 6F 65
How do I represent it in a Rust String?
I tried https://mothereff.in/utf-8 and doing J\00E4s\00F8nDoe but it didn't work.
"Jäsøn Doe" should work fine. Rust source files are always UTF-8 encoded and a string literal may contain any Unicode scalar value (that is, any code point except surrogates, which must not be encoded in UTF-8).
If your editor does not support UTF-8 encoding, but supports ASCII, you can use Unicode code point escapes, which are documented in the Rust reference:
A 24-bit code point escape starts with U+0075 (u) and is followed by up to six hex digits surrounded by braces U+007B ({) and U+007D (}). It denotes the Unicode code point equal to the provided hex value.
suggesting the correct syntax should be "J\u{E4}s\u{F8}n Doe".
You can also refer to Rust By Example, since not everything is covered in the Rust book
(https://doc.rust-lang.org/stable/rust-by-example/std/str.html#literals-and-escapes)
You can use the syntax \u{your_unicode}
let unicode_str = String::from("J\u{00E4}s\u{00F8}nDoe");
println!("{}", unicode_str);
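If you want to double-check that the literal produces exactly the byte sequence from the RFC, a quick check in another language works too; here is a small Python sketch (not Rust, just for verification):
s = 'J\u00e4s\u00f8n Doe'
assert s.encode('utf-8') == bytes.fromhex('4A C3A4 73 C3B8 6E 20 44 6F 65')
print(s.encode('utf-8').hex())   # 4ac3a473c3b86e20446f65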
I'm working on a Flutter project and I'm currently getting an error with some of the strings I try to decode using the base64.decode() method. I've created a short Dart program which reproduces the problem I'm facing with a specific string:
import 'dart:convert';
void main() {
final message = 'RU5UUkVHQUdSQVRJU1==';
print(utf8.decode(base64.decode(message)));
}
I'm getting the following error message:
Uncaught Error: FormatException: Invalid encoding before padding (at character 19)
RU5UUkVHQUdSQVRJU1==
I've tried decoding the same string with JavaScript and it works fine. Would be glad if someone could explain why am I getting this error, and possibly show me a solution. Thanks.
Base64 encoding breaks binary data into 24-bit groups of three bytes and represents each group as four printable ASCII characters. It does that in essentially two steps.
The first step is to break the binary data down into 6-bit blocks. Base64 uses only 6 bits (corresponding to 2^6 = 64 characters) to ensure the encoded data is printable and human-readable. None of the special characters available in ASCII are used.
The 64 characters (hence the name Base64) are the 10 digits, 26 lowercase letters, 26 uppercase letters, plus the plus sign (+) and the forward slash (/). There is also a 65th character known as a pad, the equals sign (=). This character is appended when the last group of input bytes contains fewer than three bytes, so that the final block of characters is filled out.
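To see how the padding rules play out, here is a small Python sketch (Python is used here purely for illustration; the question itself is about Dart) showing how leftover input bytes turn into = characters, with the unused bits in the final symbol set to zero:
import base64

# One, two and three input bytes produce two, one and zero pad characters.
for data in (b'A', b'AB', b'ABC'):
    print(data, base64.b64encode(data))
# b'A'   -> b'QQ=='
# b'AB'  -> b'QUI='
# b'ABC' -> b'QUJD'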
So RU5UUkVHQUdSQVRJU1== doesn't follow this encoding pattern: the character just before the == padding ('1') encodes leftover bits that a conforming encoder would have set to zero.
Use the Underscore Character "_" as the Padding Character and Decode With the Pad Bytes Deleted
For some reason dart:convert's base64.decode chokes on this string when it is padded with =, giving the "Invalid encoding before padding" error. This happens even if you use the library's own padding method base64.normalize, which pads the string with the correct padding character =.
= is indeed the correct padding character for base64 encoding. It is used to fill out base64 strings when fewer than 24 bits are available in the input group. See RFC 4648, Section 4.
However, RFC 4648 Section 5 defines a URL-safe base64 alphabet in which the underscore _ is a data character (it replaces /). Padding the string with _ instead of = therefore gets past base64.decode, because the decoder treats those trailing characters as data.
Using _ as the padding character will cause base64.decode to decode without error.
In order to further decode the generated list of bytes to UTF-8, you will need to delete the padding bytes or you will get an "Invalid UTF-8 byte" error.
See the code below.
import 'dart:convert';
void main() {
//String message = 'RU5UUkVHQUdSQVRJU1=='; //as of dart 2.18.2 this will generate an "invalid encoding before padding" error
//String message = base64.normalize('RU5UUkVHQUdSQVRJU1'); // will also generate same error
String message = 'RU5UUkVHQUdSQVRJU1';
print("Encoded String: $message");
print("Decoded String: ${decodeB64ToUtf8(message)}");
}
decodeB64ToUtf8(String message) {
message =
padBase64(message); // pad with underline => ('RU5UUkVHQUdSQVRJU1__')
List<int> dec = base64.decode(message);
//remove padding bytes
dec = dec.sublist(0, dec.length - RegExp(r'_').allMatches(message).length);
return utf8.decode(dec);
}
String padBase64(String rawBase64) {
return (rawBase64.length % 4 > 0)
? rawBase64 += List.filled(4 - (rawBase64.length % 4), "_").join("")
: rawBase64;
}
The string RU5UUkVHQUdSQVRJU1== is not a compliant base 64 encoding according to RFC 4648, which in section 3.5, "Canonical Encoding," states:
The padding step in base 64 and base 32 encoding can, if improperly
implemented, lead to non-significant alterations of the encoded data.
For example, if the input is only one octet for a base 64 encoding,
then all six bits of the first symbol are used, but only the first
two bits of the next symbol are used. These pad bits MUST be set to
zero by conforming encoders, which is described in the descriptions
on padding below. If this property do not hold, there is no
canonical representation of base-encoded data, and multiple base-
encoded strings can be decoded to the same binary data. If this
property (and others discussed in this document) holds, a canonical
encoding is guaranteed.
In some environments, the alteration is critical and therefore
decoders MAY chose to reject an encoding if the pad bits have not
been set to zero. The specification referring to this may mandate a
specific behaviour.
(Emphasis added.)
Here we will manually go through the base 64 decoding process.
Taking your encoded string RU5UUkVHQUdSQVRJU1== and performing the mapping from the base 64 character set (given in "Table 1: The Base 64 Alphabet" of the aforementioned RFC), we have:
R U 5 U U k V H Q U d S Q V R J U 1 = =
010001 010100 111001 010100 010100 100100 010101 000111 010000 010100 011101 010010 010000 010101 010001 001001 010100 110101 ______ ______
(using underscores to represent the padding characters).
Now, grouping these by 8 instead of 6, we get
01000101 01001110 01010100 01010010 01000101 01000111 01000001 01000111 01010010 01000001 01010100 01001001 01010011 0101____ ________
E N T R E G A G R A T I S P
The important part is at the end, where there are some non-zero bits followed by padding. The Dart implementation is correctly determining that the padding provided doesn't make sense, because the last four bits contributed by the character before the padding do not decode to zeros.
As a result, the decoding of RU5UUkVHQUdSQVRJU1== is ambiguous. Is it ENTREGAGRATIS or ENTREGAGRATISP? It's precisely this reason why the RFC states, "These pad bits MUST be set to zero by conforming encoders."
In fact, because of this, I'd argue that an implementation that decodes RU5UUkVHQUdSQVRJU1== to ENTREGAGRATIS without complaint is problematic, because it's silently discarding non-zero bits.
The RFC-compliant encoding of ENTREGAGRATIS is RU5UUkVHQUdSQVRJUw==.
The RFC-compliant encoding of ENTREGAGRATISP is RU5UUkVHQUdSQVRJU1A=.
This further highlights the ambiguity of your input RU5UUkVHQUdSQVRJU1==, which matches neither.
I suggest you check your encoder to determine why it's providing you with non-compliant encodings, and make sure you're not losing information as a result.
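For reference, the two compliant encodings quoted above are easy to reproduce with any RFC 4648 encoder; a quick Python check (used here only for illustration):
import base64

print(base64.b64encode(b'ENTREGAGRATIS'))    # b'RU5UUkVHQUdSQVRJUw=='
print(base64.b64encode(b'ENTREGAGRATISP'))   # b'RU5UUkVHQUdSQVRJU1A='
# Neither matches RU5UUkVHQUdSQVRJU1==, whose final data character '1'
# leaves the four pad bits as 0101 instead of 0000.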
I'm using Python 3 and I've got a string which is displayed as bytes:
strategyName=\xe7\x99\xbe\xe5\xba\xa6
I need to change it into readable Chinese characters by decoding it:
orig=b'strategyName=\xe7\x99\xbe\xe5\xba\xa6'
result=orig.decode('UTF-8')
print(result)
which prints the following, and it is what I want:
strategyName=百度
But if I save it in another string, it works differently:
str0='strategyName=\xe7\x99\xbe\xe5\xba\xa6'
result_byte=str0.encode('UTF-8')
result_str=result_byte.decode('UTF-8')
print(result_str)
strategyName=ç¾åº¦é£é©çç¥
Please help me understand why this is happening, and how I can fix it. Thanks a lot.
Your problem is using a str literal when you're trying to store the UTF-8 encoded bytes of your string. You should just use a bytes literal, but if the str form is necessary, the correct approach is to encode it as latin-1 (which maps every ordinal below 256 one-to-one to the matching byte value) to get the bytes containing the UTF-8 encoded data, then decode those bytes as UTF-8:
str0 = 'strategyName=\xe7\x99\xbe\xe5\xba\xa6'
result_byte = str0.encode('latin-1') # Only changed line
result_str = result_byte.decode('UTF-8')
print(result_str)
Of course, the other approach could be to just type the Unicode escapes you wanted in the first place instead of byte-level escapes that correspond to a UTF-8 encoding:
result_str = 'strategyName=\u767e\u5ea6'
No rigmarole needed.
In the following:
my $string = "Can you \x{FB01}nd my r\x{E9}sum\x{E9}?\n";
The \x{FB01} and \x{E9} are code points. And code points are encoded via an encoding scheme to a series of octets.
So the character ﬁ, which has the code point \x{FB01}, is part of $string. But how does this work? Are all the characters in this sentence (including the ASCII ones) encoded via UTF-8?
If yes why do I get the following behavior?
my $str = "Some arbitrary string\n";
if(Encode::is_utf8($str)) {
print "YES str IS UTF8!\n";
}
else {
print "NO str IT IS NOT UTF8\n";
}
This prints "NO str IT IS NOT UTF8\n"
Additionally Encode::is_utf8($string) returns true.
In what way are $string and $str different and one is considered UTF-8 and the other not?
And in any case what is the encoding of $str? ASCII? Is this the default for Perl?
In C, a string is a collection of octets, but Perl has two string storage formats:
String of 8-bit values.
String of 72-bit values. (In practice, limited to 32-bit or 64-bit.)
As such, you don't need to encode code points to store them in a string.
use feature qw( say );

my $s = "\x{2660}\x{2661}";
say length $s; # 2
say sprintf '%X', ord substr($s, 0, 1); # 2660
say sprintf '%X', ord substr($s, 1, 1); # 2661
(Internally, an extension of UTF-8 called "utf8" is used to store the strings of 72-bit chars. That's not something you should ever have to know except to realize the performance implications, but there are bugs that expose this fact.)
Encode's is_utf8 reports which type of string a scalar contains. It's a function that serves absolutely no purpose except to debug the bugs I previously mentioned.
An 8-bit string can store the value of "abc" (or the string in the OP's $str), so Perl used the more efficient 8-bit (UTF8=0) string format.
An 8-bit string can't store the value of "\x{2660}\x{2661}" (or the string in the OP's $string), so Perl used the 72-bit (UTF8=1) string format.
Zero is zero whether it's stored in a floating point number, a signed integer or an unsigned integer. Similarly, the storage format of strings conveys no information about the value of the string.
You can store code points in an 8-bit string (if they're small enough) just as easily as a 72-bit string.
You can store bytes in a 72-bit string just as easily as an 8-bit string.
In fact, Perl will switch between the two formats at will. For example, if you concatenate $string with $str, you'll get a string in the 72-bit format.
You can alter the storage format of a string with the builtins utf8::downgrade and utf8::upgrade, should you ever need to work around a bug.
utf8::downgrade($s); # Switch to strings of 8-bit values (UTF8=0).
utf8::upgrade($s); # Switch to strings of 72-bit values (UTF8=1).
You can see the effect using Devel::Peek.
>perl -MDevel::Peek -e"$s=chr(0x80); utf8::downgrade($s); Dump($s);"
SV = PV(0x7b8a74) at 0x4a84c4
REFCNT = 1
FLAGS = (POK,pPOK)
PV = 0x7bab9c "\200"\0
CUR = 1
LEN = 12
>perl -MDevel::Peek -e"$s=chr(0x80); utf8::upgrade($s); Dump($s);"
SV = PV(0x558a6c) at 0x1cc843c
REFCNT = 1
FLAGS = (POK,pPOK,UTF8)
PV = 0x55ab94 "\302\200"\0 [UTF8 "\x{80}"]
CUR = 2
LEN = 12
The \x{FB01} and \x{E9} are code points.
Not quite: the numeric values inside the braces are code points. The whole \x expression is just a notation for a character. There are several notations for characters, most of them starting with a backslash, but the common one is the simple string literal. You might as well write:
use utf8;
my $string = "Can you ﬁnd my résumé?\n";
# ↑ ↑ ↑
And code points are encoded via an encoding scheme to a series of octets.
True, but so far your string is a string of characters, not a buffer of octets.
But how does this work?
Strings consist of characters. That's just Perl's model. You as a programmer are supposed to deal with it at this level.
Of course, the computer can't, and the internal data structure must have some form of internal encoding. Far too much confusion ensues because "Perl can't keep a secret": the details leak out occasionally.
Are all the characters in this sentence (including the ASCII ones) encoded via UTF-8?
No, the internal encoding is lax UTF8 (no dash). It does not have some of the restrictions that UTF-8 (a.k.a. UTF-8-strict) has.
UTF-8 goes up to 0x10_ffff, UTF8 goes up to 0xffff_ffff_ffff_ffff on my 64-bit system. Codepoints greater than 0xffff_ffff will emit a non-portability warning, though.
In UTF-8 certain codepoints are non-characters or illegal characters. In UTF8, anything goes.
Encode::is_utf8
… is an internals function, and is clearly marked as such. You as a programmer are not supposed to peek. But since you want to peek, no one can stop you. Devel::Peek::Dump is a better tool for getting at the internals.
Read http://p3rl.org/UNI for an introduction to the topic of encoding in Perl.
is_utf8 is a badly-named function that doesn't mean what you think it means and doesn't have anything to do with that. The answer to your question is that $string doesn't have an encoding, because it isn't encoded. When you call Encode::encode with some encoding, the result will be a string that is encoded and has a known encoding.
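The same point carries over to other languages: a string is a sequence of characters, and it only acquires an encoding when you encode it. As a loose parallel (Python here, purely for illustration):
text = 'Can you \ufb01nd my r\u00e9sum\u00e9?'   # characters (code points), no encoding yet
data = text.encode('utf-8')                      # bytes, now with a known encoding
print(type(text).__name__, type(data).__name__)  # str bytes
print(data[:12])                                 # b'Can you \xef\xac\x81n'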
I'm attempting to convert a string of characters from ASCII to EBCDIC using an IBM code page. The conversion is correct except for the lowercase 'a', which is converted to an unprintable character.
Here is a piece of a groovy script running in Windows 7 that illustrates the problem.
groovy:000> letters='abcdABCD'
===> abcdABCD
groovy:000> String.format("%04x", new BigInteger(1, letters.getBytes())
===> 6162636441424344
groovy:000> lettersx=new String(letters.getBytes('IBM500'))
===> ?éâä┴┬├─
groovy:000> String.format("%04x", new BigInteger(1, lettersx.getBytes()))
===> 3f828384c1c2c3c4
After converting to EBCDIC, all the characters in the string are valid except the first one, a lowercase 'a'. Try as I might, I can't find any information on this problem. I've tried a number of IBM code pages with the same results (IBM01140, IBM1047, etc.).
The problem is in this expression:
new String(letters.getBytes('IBM500'))
letters.getBytes('IBM500') creates a byte array containing (in hexadecimal):
81 82 83 84 C1 C2 C3 C4
but then you're immediately converting that back to a Unicode String using your platform default encoding:
new String( <byte-array> );
If you want the ordinal values of the characters in your String to be equal to the byte values, you must specify an encoding that does that, for example ISO-8859-1:
new String(letters.getBytes('IBM500'), "ISO-8859-1")
The encoding you're using does not map byte 0x81 to any character, so the round trip replaces it with ? (0x3F). You're most likely using Windows-1252.
Strings contain characters, not bytes. Java will always apply an encoding conversion when going from one to the other.
EDIT: responding to #mister270's comment:
Here's a program in Java to demonstrate:
public class Ebcdic
{
public static void main(String[] args) throws Exception
{
String letters = "abcdABCD";
byte[] ebcdic = letters.getBytes("IBM500");
System.out.print("Ebcdic bytes:");
for (byte b: ebcdic)
{
System.out.format(" %02X", b & 0xFF);
}
System.out.println();
String lettersEbcdic = new String(ebcdic, "ISO-8859-1");
System.out.print("Ebcdic bytes stored in chars:");
for (char c: lettersEbcdic.toCharArray())
{
System.out.format(" %04X", (int) c);
}
System.out.println();
System.out.println("Ebcdic bytes in chars printed in using my default platform encoding: " + lettersEbcdic);
}
}
Output is:
Ebcdic bytes: 81 82 83 84 C1 C2 C3 C4
Ebcdic bytes stored in chars: 0081 0082 0083 0084 00C1 00C2 00C3 00C4
Ebcdic bytes in chars printed in using my default platform encoding: ????��ǎ
What this shows is that
the Ebcdic conversion into the byte-array is occurring correctly using "IBM500"
the "identity" conversion of bytes to chars using "ISO-8859-1" is occurring correctly
My system doesn't have a mapping from Unicode characters U+0081 etc. to my default platform character encoding, so they are displayed as ?
Java (and therefore Groovy) stores characters internally as Unicode, UTF-16 to be precise. If you want to encode them as EBCDIC, then they stop being characters and should no longer be held in Strings. EBCDIC is an 8-bit encoding, so each character can be stored in a byte. If you need to interface with a system that expects a particular encoding (in your case, EBCDIC), then that system really should accept bytes, not Strings, otherwise you end up with exactly this sort of confusion.
If you must use Strings to hold EBCDIC bytes, then you must use the ISO-8859-1 encoding whenever you use an InputStream or OutputStream (including System.out) to ensure that your EBCDIC codes are not "translated" from bytes to characters.
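As a side note, the same round trip can be sketched outside the JVM; for instance Python ships a cp500 (IBM500) codec, and the ISO-8859-1 "identity" trick looks like this (an illustrative parallel, not part of the original Groovy code):
letters = 'abcdABCD'

ebcdic = letters.encode('cp500')                 # EBCDIC bytes: 81 82 83 84 c1 c2 c3 c4
print(ebcdic.hex())

# "Identity" conversion: each byte becomes the character with the same ordinal.
as_chars = ebcdic.decode('latin-1')
print([hex(ord(c)) for c in as_chars])           # ['0x81', '0x82', '0x83', '0x84', '0xc1', '0xc2', '0xc3', '0xc4']

# Round-trip back to readable text.
print(as_chars.encode('latin-1').decode('cp500'))   # abcdABCD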