Node.js buffer with é character

I've created a simple buffer and put my string "Héllo" into it.
Here's my code:
var str = "Héllo";
var buff = new Buffer(str.length);
buff.write(str);
console.log(buff.toString("utf8"));
However, it returns "Héll" and not "Héllo". Why?
How can I fix that?

UTF-8 characters can have different lengths, from 1 byte to 4 bytes; see this answer: https://stackoverflow.com/a/9533324/4486609
So it is not OK to assume that every character takes 1 byte, as your use of str.length does.
As for getting the right byte length, see this: https://stackoverflow.com/a/9864762/4486609
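A quick Node.js illustration of those varying widths, as a sketch using Buffer.byteLength() (which counts bytes, unlike String.prototype.length, which counts characters):
// UTF-8 width varies per character:
console.log(Buffer.byteLength('a', 'utf8'));  // 1 byte (ASCII)
console.log(Buffer.byteLength('é', 'utf8'));  // 2 bytes
console.log(Buffer.byteLength('€', 'utf8'));  // 3 bytes
console.log(Buffer.byteLength('𝄞', 'utf8')); // 4 bytes (U+1D11E, musical G clef)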

.length reports the number of characters, not the number of bytes, but new Buffer() expects a byte count. The 'é' requires two bytes in UTF-8, so the last character falls off the end of the buffer and is truncated.
If you don't need to support anything older than Node.js 4.x.x, you can use Buffer.from():
let buffer = Buffer.from('Héllo');
console.log(buffer.toString()); // 'Héllo'
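If you do need to pre-allocate a buffer yourself, size it in bytes rather than characters. A minimal sketch (Buffer.alloc() and Buffer.byteLength() assume Node.js 4.5+):
var str = 'Héllo';
// Allocate by byte count (6), not character count (5)
var buf = Buffer.alloc(Buffer.byteLength(str, 'utf8'));
buf.write(str, 'utf8');
console.log(buf.toString('utf8')); // 'Héllo'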

Related

How to convert padBytes function to ColdFusion

I have the following code in Node and I am trying to convert it to ColdFusion:
// a correct implementation of PKCS7. The rijndael js has a PKCS7 padding already implemented
// however, it incorrectly pads expecting the phrase to be multiples of 32 bytes when it should pad based on multiples
// 16 bytes. Also, every javascript implementation of PKCS7 assumes utf-8 char encoding. C# however is unicode or utf-16.
// This means that chars need to be treated in our code as 2 byte chars and not 1 byte chars.
function padBytes(string){
    const strArray = [...new Buffer(string, 'ucs2')];
    const need = 16 - ((strArray.length) % 16);
    for(let i = 0; i < need; i++) {
        strArray.push(need);
    }
    return Buffer.from(strArray);
}
I'm trying to understand exactly what this function is doing so I can convert it. As I understand it, it's converting the string to UTF-16 (UCS2) and then adding padding. However, I don't understand why the need variable takes the value it does, nor how exactly to achieve that in CF.
I also don't understand why it pushes the same value into the array over and over again. For starters, in my example script the string is 2018-06-14T15:44:10Z testaccount, and the string array length is 64. I'm not sure how to achieve even that in CF.
I've tried character encoding, converting to binary, and converting to UTF-16, and I just don't understand the JS function well enough to replicate it in ColdFusion. I feel I'm missing something with the encoding.
EDIT:
The selected answer solves this problem, but since I was ultimately trying to use the input data for encryption, the easier approach was to skip this function entirely and do the following:
<cfset stringToEncrypt = charsetDecode(input,"utf-16le") />
<cfset variables.result = EncryptBinary(stringToEncrypt, theKey, theAlgorithm, theIV) />
Update:
We followed up in chat, and it turns out the value is ultimately used with encrypt(). Since encrypt() already handles padding automatically, there is no need for the custom padBytes() function. However, it did require switching to the less commonly used encryptBinary() function to maintain the UTF-16 encoding; the regular encrypt() function only handles UTF-8, which produces totally different results.
Trycf.com Example:
// Result with sample key/iv: P22lWwtD8pDrNdQGRb2T/w==
result = encrypt("abc", theKey, theAlgorithm, theEncoding, theIV);

// Result with sample key/iv: LJCROj8trkXVq1Q8SQNrbA==
input = charsetDecode("abc", "utf-16le");
result = binaryEncode(encryptBinary(input, theKey, theAlgorithm, theIV), "base64");
it's converting the string to UTF-16 (UCS2) and then adding padding to each character.
... I feel I'm missing something with the encoding.
Yes, the first part decodes the string as UTF-16 (or UCS2; the two are slightly different). As for what you're missing, you're not the only one. I couldn't get it to work either until I found this comment, which explained that "UTF-16" prepends a BOM. To omit the BOM, use either "UTF-16BE" or "UTF-16LE", depending on the endianness needed.
why it's only pushing the same value into the array over and over again.
Because that's the definition of PKCS7 padding. Instead of padding with something like nulls or zeroes, it calculates how many bytes of padding are needed, then uses that number as the padding value. For example, say a string needs an extra three bytes of padding: PKCS7 appends the value 3 three times, i.e. "string" + "3" + "3" + "3".
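A minimal Node.js sketch of that rule (pkcs7Pad() here is a hypothetical helper, not part of the question's code):
// PKCS7: the number of pad bytes needed is also the value of each pad byte
function pkcs7Pad(bytes, blockSize) {
    var need = blockSize - (bytes.length % blockSize); // always 1..blockSize
    return Buffer.concat([bytes, Buffer.alloc(need, need)]);
}

// 'string' is 6 bytes, so 10 pad bytes of value 10 are appended
console.log(pkcs7Pad(Buffer.from('string', 'utf8'), 16).length); // 16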
The rest of the code is similar in CF. Unfortunately, the array returned by charsetDecode() isn't mutable, so you must build a separate array to hold the padding, then combine the two.
Note, this example combines the arrays using CF2016-specific syntax, but it could also be done with a simple loop instead.
Function:
function padBytes(string text){
    var combined = [];
    var padding = [];
    // decode as utf-16
    var decoded = charsetDecode(arguments.text, "utf-16le");
    // how many padding bytes are needed?
    var need = 16 - (arrayLen(decoded) % 16);
    // fill array with the padding bytes
    for(var i = 0; i < need; i++) {
        padding.append(need);
    }
    // concatenate the two arrays
    // CF2016+ specific syntax. For earlier versions, use a loop
    combined = combined.append(decoded, true);
    combined = combined.append(padding, true);
    return combined;
}
Usage:
result = padBytes("2018-06-14T15:44:10Z testaccount");
writeDump(binaryEncode( javacast("byte[]", result), "base64"));

Buffer constructor not treating encoding correctly for buffer length

I am trying to construct a UTF-16LE string from a JavaScript string as a new buffer object.
It appears that new Buffer('xxxxxxxxxx', 'utf16le') will actually have a length of half what is expected, such that we only see 5 x's in the console logs.
var test = new Buffer('xxxxxxxxxx', 'utf16le');
for (var i = 0; i < test.length; i++) {
    console.log(i + ':' + String.fromCharCode(test[i]));
}
Node version is v0.8.6
It is really unclear what you want to accomplish here. Your statement can mean (at least) two things:
How to convert a JS string into a UTF-16LE byte array
How to convert a byte array containing a UTF-16LE string into a JS string
What your code sample actually does is encode the characters of a JS string as UTF-16LE bytes, store them in a buffer, and then print each individual byte back as if it were a character. Until you state what you want to accomplish, you have no chance of getting a coherent answer.
new Buffer('FF', 'hex') will yield a buffer of length 1 with all bits of the octet set (0xFF), which is likely the opposite of what you think it does.
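To illustrate the two directions, here's a short sketch (using the modern Buffer.from() API in place of the deprecated constructor) showing how the encoding argument determines the byte length:
var text = 'xxxxxxxxxx'; // 10 characters

// Encoding a JS string *to* bytes:
console.log(Buffer.from(text, 'utf8').length);    // 10 (1 byte per 'x')
console.log(Buffer.from(text, 'utf16le').length); // 20 (2 bytes per 'x')

// Decoding those bytes *back* to a JS string:
console.log(Buffer.from(text, 'utf16le').toString('utf16le')); // 'xxxxxxxxxx'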

Why does ToBase64String change a 16 byte string to 24 bytes

I have the following code. When I check the value of variable i, it is 16 bytes, but when the output is converted to Base64 it is 24 bytes.
byte[] bytOut = ms.GetBuffer();
int i = 0;
for (i = 0; i < bytOut.Length; i++)
    if (bytOut[i] == 0)
        break;

// convert into Base64 so that the result can be used in xml
return System.Convert.ToBase64String(bytOut, 0, i);
Is this expected? I am trying to cut down storage and this is one of my problems.
Base64 expresses the input string of 8-bit bytes using 64 human-readable characters (64 characters = 6 bits of information per character).
The key to the answer of your question is that the encoding works in 24-bit chunks, so every 24 bits or fraction thereof results in 4 characters of output.
16 bytes * 8 bits = 128 bits of information
128 bits / 24 bits per chunk = 5.333 chunks
So the final output will be 6 chunks, or 24 characters.
The fractional chunk is handled with equals signs, which represent the trailing "null bits". In your case, the output will always end in '=='.
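You can verify the arithmetic directly with a quick Node.js sketch:
// 16 input bytes -> ceil(16 / 3) = 6 groups of 4 characters = 24 characters
var encoded = Buffer.alloc(16).toString('base64');
console.log(encoded.length); // 24
console.log(encoded);        // 'AAAAAAAAAAAAAAAAAAAAAA==' (ends in '==')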
Yes, you'd expect to see some expansion. You're representing your data in a base with only 64 characters; all the unprintable byte values still need a way to be encoded, so you end up with slight expansion of the data.
Here's a link that explains how much: Base64: What is the worst possible increase in space usage?
Edit: Based on your comment above, if you need to reduce size, you should compress the data before you encrypt. That gets you the maximum benefit from compression; compressing encrypted binary does not work.
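As a sketch of that ordering in Node.js (the key, IV, and algorithm here are placeholders for illustration only):
var zlib = require('zlib');
var crypto = require('crypto');

var key = crypto.randomBytes(32); // placeholder key
var iv = crypto.randomBytes(16);  // placeholder IV

// Compress first, then encrypt: ciphertext looks random and won't compress
var plaintext = Buffer.from('some highly repetitive payload '.repeat(10));
var compressed = zlib.deflateSync(plaintext);
var cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
var encrypted = Buffer.concat([cipher.update(compressed), cipher.final()]);

// compressed (and thus encrypted) is far smaller than the 310-byte plaintext
console.log(plaintext.length, compressed.length, encrypted.length);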
This is because a Base64 string can contain only 64 distinct characters (it is meant to be displayable), whereas a byte can take any of 256 values, so each byte carries more information.
Base64 is a great way to represent binary data in a string using only standard, printable characters. It is not, however, a good way to represent string data because it takes more characters than the original string.

Remove trailing "=" when base64 encoding

I am noticing that whenever I base64 encode a string, a "=" is appended at the end. Can I remove this character and then reliably decode it later by adding it back, or is this dangerous? In other words, is the "=" always appended, or only in certain cases?
I want my encoded string to be as short as possible, that's why I want to know if I can always remove the "=" character and just add it back before decoding.
The = is padding.
Wikipedia says:
An additional pad character is allocated which may be used to force the encoded output into an integer multiple of 4 characters (or equivalently when the unencoded binary text is not a multiple of 3 bytes); these padding characters must then be discarded when decoding but still allow the calculation of the effective length of the unencoded text, when its input binary length would not be a multiple of 3 bytes (the last non-pad character is normally encoded so that the last 6-bit block it represents will be zero-padded on its least significant bits; at most two pad characters may occur at the end of the encoded stream).
If you control the other end, you could remove it when in transport, then re-insert it (by checking the string length) before decoding.
Note that the data will not be valid Base64 in transport.
Also, another user pointed out (relevant to PHP users):
Note that in PHP base64_decode will accept strings without padding, hence if you remove it to process it later in PHP it's not necessary to add it back. – Mahn Oct 16 '14 at 16:33
So if your destination is PHP, you can safely strip the padding and decode without fancy calculations.
I wrote part of Apache's commons-codec-1.4.jar Base64 decoder, and in that logic we are fine without padding characters. End-of-file and End-of-stream are just as good indicators that the Base64 message is finished as any number of '=' characters!
The URL-Safe variant we introduced in commons-codec-1.4 omits the padding characters on purpose to keep things smaller!
http://commons.apache.org/codec/apidocs/src-html/org/apache/commons/codec/binary/Base64.html#line.478
I guess the safer answer is "it depends on your decoder implementation", but logically it is not hard to write a decoder that doesn't need padding.
In JavaScript you could do something like this:
// if this is your Base64 encoded string
var str = 'VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA==';

// make URL friendly:
str = str.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// reverse to original encoding
if (str.length % 4 != 0) {
    str += ('===').slice(0, 4 - (str.length % 4));
}
str = str.replace(/-/g, '+').replace(/_/g, '/');
See also this Fiddle: http://jsfiddle.net/7bjaT/66/
= is added for padding. The length of a Base64 string should be a multiple of 4, so 1 or 2 = are added as necessary.
Read: no, you shouldn't remove it.
On Android I am using this:
Global
String CHARSET_NAME = "UTF-8";
Encode
String base64 = new String(
        Base64.encode(byteArray, Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP),
        CHARSET_NAME);
return base64.trim();
Decode
byte[] bytes = Base64.decode(base64String,
        Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP);
This is equivalent to the following in Java:
Encode
private static String base64UrlEncode(byte[] input) {
    Base64 encoder = new Base64(true);
    byte[] encodedBytes = encoder.encode(input);
    return StringUtils.newStringUtf8(encodedBytes).trim();
}
Decode
private static byte[] base64UrlDecode(String input) {
    byte[] originalValue = StringUtils.getBytesUtf8(input);
    Base64 decoder = new Base64(true);
    return decoder.decode(originalValue);
}
I have never had problems with trailing "=", and I am using Bouncy Castle as well.
If you're encoding bytes (at a fixed bit length), then the padding is redundant. This is the case for most people.
Base64 consumes 6 bits of input at a time and produces an 8-bit output character that uses only six bits' worth of combinations.
If your input is 1 byte (8 bits), you'll have an output of 12 bits, the smallest multiple of 6 that 8 fits into, with 4 bits of slack. If your input is 2 bytes, you have to output 18 bits, with 2 bits of slack. Measured against multiples of 8, a multiple of 6 can leave a remainder of 0, 2, or 4 bits.
The padding says to ignore those extra four (==) or two (=) bits; it is there to tell the decoder about your padding.
The padding isn't really needed when you're encoding bytes: a decoder can simply ignore leftover bits that total less than 8. In that case, you're best off removing it.
The padding might be of some use for streaming, or for arbitrary-length bit sequences that are a multiple of two. It might also be used when people want to send only the last 4 bits while more bits remain, if those remaining bits are all zero. Some people might want to use it to detect incomplete sequences, though it's hardly reliable for that. I've never seen this optimisation in practice. People rarely have these situations; most people use Base64 for discrete byte sequences.
If you see answers suggesting you leave the padding on, that's not good advice if you're simply encoding bytes: it enables a feature for a set of circumstances you don't have. The only reason to keep it in that case is to add tolerance for decoders that don't work without the padding. If you control both ends, that's a non-concern.
If you're using PHP, the following function will revert the stripped string to its original format with proper padding:
<?php
$str = 'base64 encoded string without equal signs stripped';
$str = str_pad($str, strlen($str) + (4 - ((strlen($str) % 4) ?: 4)), '=');
echo $str, "\n";
Using Python, you can remove Base64 padding and add it back like this:
from math import ceil
stripped = original.rstrip('=')
original = stripped.ljust(ceil(len(stripped) / 4) * 4, '=')
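For completeness, the same strip-and-restore round trip as a Node.js sketch:
var encoded = Buffer.from('any carnal pleasure').toString('base64'); // ends in '=='
var stripped = encoded.replace(/=+$/, '');

// Restore: pad the length back up to the next multiple of 4
var restored = stripped + '='.repeat((4 - (stripped.length % 4)) % 4);
console.log(restored === encoded); // true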
Yes, there are valid use cases where padding is omitted from a Base 64 encoding.
The JSON Web Signature (JWS) standard (RFC 7515) requires Base64-encoded data to omit padding. It expects:
Base64 encoding [...] with all trailing '=' characters omitted (as permitted by Section 3.2) and without the inclusion of any line breaks, whitespace, or other additional characters. Note that the base64url encoding of the empty octet sequence is the empty string. (See Appendix C for notes on implementing base64url encoding without padding.)
The same applies to the JSON Web Token (JWT) standard (RFC 7519).
In addition, Julius Musseau's answer has indicated that Apache's Base 64 decoder doesn't require padding to be present in Base 64 encoded data.
I do something like this with Java 8+:
private static String getBase64StringWithoutPadding(String data) {
    if (data == null) {
        return "";
    }
    Base64.Encoder encoder = Base64.getEncoder().withoutPadding();
    return encoder.encodeToString(data.getBytes());
}
This method gets an encoder which leaves out padding. As mentioned in other answers, the padding can be added back after calculation if you need to decode the string later.
On Android, you may have trouble if you want to use the android.util.Base64 class, since it can only be exercised in instrumentation tests that run in an Android environment, not in plain unit tests. On the other hand, if you use java.util.Base64, the compiler warns you that your minimum SDK may be too low (below 26) to use it.
So I suggest Android developers use
implementation "commons-codec:commons-codec:1.13"
Encoding object
fun encodeObjectToBase64(objectToEncode: Any): String {
    val objectJson = Gson().toJson(objectToEncode).toString()
    return encodeStringToBase64(objectJson.toByteArray(Charsets.UTF_8))
}

fun encodeStringToBase64(byteArray: ByteArray): String {
    return Base64.encodeBase64URLSafeString(byteArray).toString() // encode with no padding
}
Decoding to Object
fun <T> decodeBase64Object(encodedMessage: String, encodeToClass: Class<T>): T {
    val decodedBytes = Base64.decodeBase64(encodedMessage)
    val messageString = String(decodedBytes, StandardCharsets.UTF_8)
    return Gson().fromJson(messageString, encodeToClass)
}
Of course, you may omit the Gson parsing and pass your string, transformed to a ByteArray, straight into the method.

MS Visual C++ 2008 char buffer longer than defined

I've got a char* buffer to hold a file that I read in binary mode. I know the length of the file is 70 bytes, and this is the value being used to allocate a buffer of the correct size. The problem is, there are 17 or 18 extra characters in the array, so some random characters are being appended to the end. Could this be a Unicode issue?
ulFLen stores the size of the file in bytes and has the correct value (70 for the file I'm testing on).
//Set up a buffer to store the file
pcfBuffer = new char[ulFLen];

//Reading the file
cout << "Inputting File...";
fStream.seekg(0, ios::beg);
fStream.read(pcfBuffer, ulFLen);
if (!fStream.good()) { cout << "FAILED" << endl; } else { cout << "SUCCESS" << endl; }
As it is a char array, you probably forgot a terminating NUL character.
The right way in this case would be:
//Set up a buffer to store the file and a terminating NUL character
pcfBuffer = new char[ulFLen + 1];

//Reading the file
cout << "Inputting File...";
fStream.seekg(0, ios::beg);
fStream.read(pcfBuffer, ulFLen);
if (!fStream.good()) { cout << "FAILED" << endl; } else { cout << "SUCCESS" << endl; }

// Add NUL character
pcfBuffer[ulFLen] = 0;
But note that you only need a terminating NUL character for routines that depend on it, like string routines or printf with %s. Routines that work from the known length (70 characters) will work without a NUL character, too.
Add the following snippet after the data has been read; it will add the terminating zero that is needed.
By the way, the allocation should also be pcfBuffer = new char[ulFLen + 1];
size_t read_count = fStream.gcount();
if (read_count <= ulFLen)
    pcfBuffer[read_count] = 0;
This will work no matter how much data was actually read (in your case gcount() should always return 70, so you could instead just write pcfBuffer[70] = 0;).
