Is Base64 Encoded BSON smaller than BSON?
Piskvor's right, base64-encoded-anything is longer than raw. You base64-encode something to get it down a channel with a limited character repertoire, not as a means of reducing size.
Perhaps the question should be: Is Base64-encoded BSON smaller than JSON?
If so, then JSON-vs-BSON is very much dependent on the content. For example, arbitrary floating-point numbers like 1.2345678901234567 are more efficiently stored in 8 binary bytes in BSON than as the JSON string of digits. But more common numbers like, say, 1, are much more efficiently stored as strings in JSON.
For string values, BSON loses 4 bytes for a length word, but gets some back for every " and \ JSON has to escape, plus more in strings with control characters where JSON has to use a hex sequence. (Some JSON encoders also \u-escape every non-ASCII character to ensure safe transmission regardless of character set.)
IMO: BSON does not have a big compactness advantage over JSON in general. Its strength lies more in simplicity of decoding in a low-level language, plus datatypes JavaScript doesn't have. It can have marginal advantages for binary strings and a few other cases; it's certainly worth checking for a particular workload. But it's telling that the examples in the BSON specification itself are considerably smaller in JSON.
As for base64-encoded BSON: the same, except 33% worse.
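If in doubt, measure for your own documents. Here is a minimal sketch in Python, assuming the bson module that ships with PyMongo is available; the sample document is made up, substitute one from your workload:

# Compare the serialized size of the same document in JSON and BSON.
import json
import bson  # ships with PyMongo

doc = {"pi": 1.2345678901234567, "n": 1, "s": 'hello "world"'}

json_size = len(json.dumps(doc, separators=(",", ":")).encode("utf-8"))
bson_size = len(bson.encode(doc))
print(json_size, bson_size)

Which one wins depends entirely on the mix of long floats, small integers, and strings in the document.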
No: with base64, 3 bytes of plaintext become 4 bytes of encoded text, therefore the result will always be larger, no matter what the data payload is. See also: http://en.wikipedia.org/wiki/Base64
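A minimal sketch of that 3-in/4-out ratio in Python (standard library only):

# Every 3 input bytes become 4 output characters, plus padding up to
# a multiple of 4, so the encoded form is always larger than the raw bytes.
import base64
import os

for size in (1, 3, 10, 300):
    raw = os.urandom(size)
    print(size, "->", len(base64.b64encode(raw)))  # 1->4, 3->4, 10->16, 300->400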
I just wrote this as my solution to shorten BSON (a 24-char ObjectId hex string).
Please check it; it may help you:
var bsonShortify = {
    // split the hex string (e.g. a 24-char ObjectId) in half; each
    // 12-hex-digit half (48 bits) stays below 2^53, so plain Numbers are safe
    encode: function(bson) {
        return this._hex2urlBase(bson.substr(0, bson.length / 2)) +
               this._hex2urlBase(bson.substr(bson.length / 2, bson.length / 2));
    },
    // note: assumes both base-62 halves came out the same length
    decode: function(token) {
        return this._urlBase2hex(token.substr(0, token.length / 2)) +
               this._urlBase2hex(token.substr(token.length / 2, token.length / 2));
    },
    _base: 62,
    _baseChars: "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz",
    _urlBase2hex: function(token) {
        var s = 0, n, l = (n = token.split("")).length, i = 0;
        while (l--) s += this._baseChars.indexOf(n[i++]) * Math.pow(this._base, l);
        // pad back to 12 hex digits (half of a 24-char id) so that
        // leading zeros survive the round trip
        var hex = s.toString(16);
        while (hex.length < 12) hex = "0" + hex;
        return hex;
    },
    _hex2urlBase: function(bson) {
        var s = "", n = parseInt(bson, 16);
        while (n) {
            s = this._baseChars[n % this._base] + s;
            n = Math.floor(n / this._base);
        }
        return s;
    }
};
TEST
//we have bson
var bson = '4f907f7e53a58f4313000028';
//let's encode it
var urlstring = bsonShortify.encode(bson); // = OqAYQdCHijCDMbRg
//let's decode urlstring
var decoded_bson = bsonShortify.decode(urlstring); // = 4f907f7e53a58f4313000028
console.log('bson',bson);
console.log('urlstring',urlstring);
console.log('decoded_bson',decoded_bson);
I have the following code in node and I am trying to convert to ColdFusion:
// a correct implementation of PKCS7. The rijndael js has PKCS7 padding already implemented;
// however, it incorrectly pads expecting the phrase to be a multiple of 32 bytes when it should pad
// based on multiples of 16 bytes. Also, every JavaScript implementation of PKCS7 assumes UTF-8 character
// encoding. C#, however, is Unicode (UTF-16). This means that chars need to be treated in our code as 2-byte chars, not 1-byte chars.
function padBytes(string) {
    const strArray = [...new Buffer(string, 'ucs2')];
    const need = 16 - (strArray.length % 16);
    for (let i = 0; i < need; i++) {
        strArray.push(need);
    }
    return Buffer.from(strArray);
}
I'm trying to understand exactly what this function is doing so I can convert it. As I understand it, it's converting the string to UTF-16 (UCS2) and then adding padding to each character. However, I don't understand why the need variable has the value it does, nor how exactly to achieve that in CF.
I also don't understand why it pushes the same value into the array over and over again. For starters, in my example script the string is 2018-06-14T15:44:10Z testaccount. The string array length is 64. I'm not sure how to achieve even that in CF.
I've tried character encoding, converting to binary, and converting to UTF-16, but I just don't understand the JS function well enough to replicate it in ColdFusion. I feel I'm missing something with the encoding.
EDIT:
The selected answer solves this problem, but because I was eventually trying to use the input data for encryption, the easier method was to not use this function at all but do the following:
<cfset stringToEncrypt = charsetDecode(input,"utf-16le") />
<cfset variables.result = EncryptBinary(stringToEncrypt, theKey, theAlgorithm, theIV) />
Update:
We followed up in chat, and it turns out the value is ultimately used with encrypt(). Since encrypt() already handles padding automatically, there's no need for the custom padBytes() function. However, it did require switching to the less commonly used encryptBinary() function to maintain the UTF-16 encoding. The regular encrypt() function only handles UTF-8, which produces totally different results.
Trycf.com Example:
// Result with sample key/iv: P22lWwtD8pDrNdQGRb2T/w==
result = encrypt("abc", theKey, theAlgorithm, theEncoding, theIV);
// Result with sample key/iv: LJCROj8trkXVq1Q8SQNrbA==
input = charsetDecode("abc", "utf-16le");
result = binaryEncode(encryptBinary(input, theKey, theAlgorithm, theIV), "base64");
it's converting the string to utf-16 (ucs2) and then adding padding to each character. ... I feel I'm missing something with the encoding.
Yes, the first part seems to be decoding the string as UTF-16 (or UCS2, which are slightly different). As for what you're missing, you're not the only one. I couldn't get it to work either until I found this comment, which explained that "UTF-16" prepends a BOM. To omit the BOM, use either "UTF-16BE" or "UTF-16LE", depending on the endianness needed.
why it's only pushing the same value into the array over and over again.
Because that's the definition of PKCS7 padding. Instead of padding with something like nulls or zeroes, it calculates how many bytes of padding are needed, then uses that number as the padding value. For example, say a string needs an extra three bytes of padding: PKCS7 appends the byte value 3 three times, i.e. "string" + "3" + "3" + "3".
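As a minimal sketch of that rule (plain Python, separate from the ColdFusion answer below):

# PKCS7: compute how many bytes are missing to reach the block size,
# then append that count, repeated that many times. An already-aligned
# input gets a full block of padding.
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    need = block_size - (len(data) % block_size)
    return data + bytes([need]) * need

print(pkcs7_pad(b"hello", 8))  # b'hello\x03\x03\x03'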
The rest of the code is similar in CF. Unfortunately, the results of charsetDecode() aren't mutable. You must build a separate array to hold the padding, then combine the two.
Note: this example combines the arrays using CF2016-specific syntax, but it could also be done with a simple loop instead.
Function:
function padBytes(string text){
    var combined = [];
    var padding = [];
    // decode as utf-16
    var decoded = charsetDecode(arguments.text, "utf-16le");
    // how many padding bytes are needed?
    var need = 16 - (arrayLen(decoded) % 16);
    // fill array with any padding bytes
    for (var i = 0; i < need; i++) {
        padding.append(need);
    }
    // concatenate the two arrays
    // CF2016+ specific syntax. For earlier versions, use a loop
    combined = combined.append(decoded, true);
    combined = combined.append(padding, true);
    return combined;
}
Usage:
result = padBytes("2018-06-14T15:44:10Z testaccount");
writeDump(binaryEncode( javacast("byte[]", result), "base64"));
I am trying to construct a UTF-16LE string from a JavaScript string as a new Buffer object.
It appears that a new Buffer('xxxxxxxxxx', 'utf16le') will actually have a length of half what is expected; for example, we only see 5 x's in the console logs.
var test = new Buffer('xxxxxxxxxx', 'utf16le');
for (var i = 0; i < test.length; i++) {
    console.log(i + ':' + String.fromCharCode(test[i]));
}
Node version is v0.8.6
It is really unclear what you want to accomplish here. Your statement can mean (at least) two things:
How to convert a JS string into a UTF-16-LE byte array
How to convert a byte array containing a UTF-16-LE string into a JS string
What your code sample is actually doing is decoding a byte array, represented in a string as UTF-16-LE, to a UTF-8 string and storing that as a buffer. Until you actually state what you want to accomplish, you have zero chance of getting a coherent answer.
new Buffer('FF', 'hex') will yield a buffer of length 1 with all bits of the octet set, which is likely the opposite of what you think it does.
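For comparison, here is what both directions look like in Python; a sketch of the concepts, not of the Node API:

# Direction 1: string -> UTF-16-LE bytes (2 bytes per BMP character).
buf = "xxxxxxxxxx".encode("utf-16-le")
print(len(buf))                  # 20, not 10: each 'x' is b'x\x00'

# Direction 2: UTF-16-LE bytes -> string.
print(buf.decode("utf-16-le"))   # 'xxxxxxxxxx'

# And the hex point: 'FF' parsed as hex is one byte with all bits set,
# not the two characters 'F' and 'F'.
print(bytes.fromhex("FF"))       # b'\xff', length 1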
I am trying to read data from csv and put that in drop down. This CSV is written in Hindi font (shusha.ttf).
While reading each line I am getting junk values.
string sFileName = "C://MyFile.csv";
Assembly assem = Assembly.GetCallingAssembly();
FileStream[] fss = assem.GetFiles();
if (!File.Exists(sFileName))
{
    MessageBox.Show("Items File Not Present");
    return false;
}
StreamReader sr = new StreamReader(sFileName);
string sItem = null;
bool isFirstLine = true;
do
{
    sItem = sr.ReadLine();
    if (sItem != null)
    {
        string[] arrItems = sItem.Split(',');
        if (!isFirstLine)
        {
            listItems.Add(arrItems[0]);
        }
        isFirstLine = false;
    }
} while (sItem != null);
return true;
You're not providing an encoding parameter to the StreamReader, so it is assuming a default encoding, which is not the encoding the file was written with.
Not all text files or CSV files are the same. Encoding systems choose how to convert 'characters' (glyphs, word pictures, letters, whatever) to bytes to store in a computer.
There are many different encoding systems: ASCII, EBCDIC, UTF-8, UTF-16, UTF-32, etc.
You need to figure out which encoding the file was written with and pass that as the Encoding parameter to the StreamReader class.
I would have figured the file was written with UTF-8, since it's a pretty universal standard for non-English text; however, StreamReader's default is to use UTF-8 when you don't provide a value, and since that's producing junk here, the file is probably not UTF-8. It's possible it's UTF-16, or perhaps even some other completely different encoding.
For the curious who want some background on Unicode: Unicode is a standard that assigns simple numbers to glyphs, ranging from ASCII to snowmen to Mandarin, etc. Unicode just gives each glyph a number, known as a code point. Unicode however is NOT an encoding; it doesn't say how to actually represent those code points as bytes.
UTF-8 is a Unicode encoding that can cover the entirety of the Unicode space, as are UTF-16 and UTF-32. UTF-8 writes 1 byte out for code points below a certain value, 2 bytes for code points below a certain higher value, and so on, and uses signaling bits in each byte to help indicate whether a code point was written using one, two, three, etc. bytes.
Internally, for instance, C# represents strings using UTF-16, which is why if you look at the raw memory for strings containing only ASCII text, you'll see lots of 0 values: ASCII doesn't need the other 8 bits, so the values end up being all 0.
Here's a link from Wikipedia that explains how UTF-8 packs bits from the code point value, with signalling bits, into bytes to store in memory: https://en.wikipedia.org/wiki/UTF-8
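To see why the declared encoding matters, here is a small Python sketch (the Hindi sample word is arbitrary); reading UTF-16 bytes with the wrong decoder produces exactly the kind of junk described above:

# Round-trip Hindi text through UTF-16, then decode it with the wrong codec.
text = "नमस्ते"
data = text.encode("utf-16")     # includes a BOM

print(data.decode("utf-16"))     # correct: the original text
print(data.decode("latin-1"))    # junk: every byte maps, but to nonsense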
I'm trying to base64 encode a huge input file and end up with a text output file, and I'm trying to find out whether it's possible to encode the input file bit by bit, or whether I need to encode the entire thing at once.
This will be done on the AS/400 (iSeries), if that makes any difference. I'm using my own base64 encoding routine (written in RPG) which works excellently, and, were it not a case of size limitations, would be fine.
It's not possible bit by bit, but 3 bytes at a time, or multiples of 3 bytes at a time, will do.
In other words, if you split your input file into "chunks" whose sizes are multiples of 3 bytes, you can encode the chunks separately and piece the resulting B64-encoded pieces together (in the corresponding order, of course). Note that the last chunk needn't be exactly a multiple of 3 bytes in size; depending on the modulo-3 value of its size, its corresponding B64 value will have a few padding characters (typically the equals sign), but that's OK, as this will be the only piece that has (and needs) such padding.
In the decoding direction, it is the same idea, except that you need to split the B64-encoded data into multiples of 4 bytes. Decode them in parallel or individually as desired, and re-piece the original data by appending the decoded parts together (again in the same order).
Example:
"File" contents =
"Never argue with the data." (Jimmy Neutron).
Straight encoding = Ik5ldmVyIGFyZ3VlIHdpdGggdGhlIGRhdGEuIiAoSmltbXkgTmV1dHJvbik=
Now, in chunks:
"Never argue --> Ik5ldmVyIGFyZ3Vl
with the --> IHdpdGggdGhl
data." (Jimmy Neutron) --> IGRhdGEuIiAoSmltbXkgTmV1dHJvbik=
As you can see, pieced together in that order, the three encoded chunks amount to the same output as the encoding produced for the whole file.
Decoding is done similarly, with arbitrary chunk sizes provided they are multiples of 4 bytes. There is absolutely no need for any correspondence between the sizes used for encoding and for decoding (although standardizing on a single size for each direction, say 300 and 400, may make things more uniform and easier to manage).
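A compact Python sketch of both directions (encode in multiples of 3 bytes, decode in multiples of 4 characters):

# Chunked encoding/decoding reproduces the result of processing
# the whole payload at once, as long as the chunk sizes are right.
import base64

payload = b'"Never argue with the data." (Jimmy Neutron)'

whole = base64.b64encode(payload)
chunked = b"".join(base64.b64encode(payload[i:i + 12])     # 12 = multiple of 3
                   for i in range(0, len(payload), 12))
assert chunked == whole

decoded = b"".join(base64.b64decode(whole[i:i + 16])       # 16 = multiple of 4
                   for i in range(0, len(whole), 16))
assert decoded == payload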
It is a trivial effort to split any given bytestream into chunks.
You can base64 any chunk of bytes without problem.
The problem you are faced with is that unless you place specific requirements on your chunks (multiples of 3 bytes), the sequence of base64-encoded chunks will be different from the actual output you want.
In C#, this is one (sloppy) way you could do it lazily. The execution is actually deferred until string.Concat is called, so you can do anything you want with the chunked strings. (If you plug this into LINQPad you will see the output)
void Main()
{
    var data = "lorum ipsum etc lol this is an example!!";
    var bytes = Encoding.ASCII.GetBytes(data);
    var testFinal = Convert.ToBase64String(bytes);

    var chunkedBytes = bytes.Chunk(3);
    var base64chunks = chunkedBytes.Select(i => Convert.ToBase64String(i.ToArray()));
    var final = string.Concat(base64chunks);

    testFinal.Dump(); // output
    final.Dump();     // output
}

public static class Extensions
{
    public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int chunkSize)
    {
        while (list.Take(1).Count() > 0)
        {
            yield return list.Take(chunkSize);
            list = list.Skip(chunkSize);
        }
    }
}
Output
bG9ydW0gaXBzdW0gZXRjIGxvbCB0aGlzIGlzIGFuIGV4YW1wbGUhIQ==
bG9ydW0gaXBzdW0gZXRjIGxvbCB0aGlzIGlzIGFuIGV4YW1wbGUhIQ==
Hmmm, if you wrote the base64 conversion yourself you should have noticed the obvious thing: each sequence of 3 octets is represented by 4 characters in base64.
So you can split the base64 data at every multiple of four characters, and it will be possible to convert these chunks back to their original bits.
I don't know how character files and byte files are handled on an AS/400, but if it has both concepts, this should be very easy.
are text files limited in the length of each line?
are text files line-oriented, or are they just character streams?
how many bits does one byte have?
are byte files padded at the end, so that one can only create files that span whole disk sectors?
If you can answer all these questions, what exact difficulties do you have left?
I am noticing that whenever I base64 encode a string, a "=" is appended at the end. Can I remove this character and then reliably decode it later by adding it back, or is this dangerous? In other words, is the "=" always appended, or only in certain cases?
I want my encoded string to be as short as possible, that's why I want to know if I can always remove the "=" character and just add it back before decoding.
The = is padding.
Wikipedia says:
An additional pad character is allocated which may be used to force the encoded output into an integer multiple of 4 characters (or equivalently when the unencoded binary text is not a multiple of 3 bytes); these padding characters must then be discarded when decoding but still allow the calculation of the effective length of the unencoded text, when its input binary length would not be a multiple of 3 bytes (the last non-pad character is normally encoded so that the last 6-bit block it represents will be zero-padded on its least significant bits; at most two pad characters may occur at the end of the encoded stream).
If you control the other end, you could remove it when in transport, then re-insert it (by checking the string length) before decoding.
Note that the data will not be valid Base64 in transport.
Also, another user pointed out (relevant to PHP users):
Note that in PHP base64_decode will accept strings without padding, hence if you remove it to process it later in PHP it's not necessary to add it back. – Mahn Oct 16 '14 at 16:33
So if your destination is PHP, you can safely strip the padding and decode without fancy calculations.
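If you do need to restore the padding yourself, the length check amounts to one line; a Python sketch:

# Pad the stripped string back out to a multiple of 4 characters.
import base64

stripped = "VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA"
restored = stripped + "=" * (-len(stripped) % 4)
print(base64.b64decode(restored))  # b'This is an awesome script'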
I wrote part of Apache's commons-codec-1.4.jar Base64 decoder, and in that logic we are fine without padding characters. End-of-file and End-of-stream are just as good indicators that the Base64 message is finished as any number of '=' characters!
The URL-Safe variant we introduced in commons-codec-1.4 omits the padding characters on purpose to keep things smaller!
http://commons.apache.org/codec/apidocs/src-html/org/apache/commons/codec/binary/Base64.html#line.478
I guess a safer answer is, "depends on your decoder implementation," but logically it is not hard to write a decoder that doesn't need padding.
In JavaScript you could do something like this:
// if this is your Base64 encoded string
var str = 'VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA==';
// make URL friendly:
str = str.replace(/\+/g, '-').replace(/\//g, '_').replace(/\=+$/, '');
// reverse to original encoding
if (str.length % 4 != 0){
str += ('===').slice(0, 4 - (str.length % 4));
}
str = str.replace(/-/g, '+').replace(/_/g, '/');
See also this Fiddle: http://jsfiddle.net/7bjaT/66/
= is added for padding. The length of a base64 string should be multiple of 4, so 1 or 2 = are added as necessary.
Read: No, you shouldn't remove it.
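You can see the 0/1/2 pattern directly; a quick Python check:

# Padding depends only on the input length mod 3:
# 0 bytes over -> no '=', 1 byte over -> '==', 2 bytes over -> '='.
import base64

for raw in (b"abc", b"abcd", b"abcde"):
    print(raw, base64.b64encode(raw))
# b'abc'   b'YWJj'
# b'abcd'  b'YWJjZA=='
# b'abcde' b'YWJjZGU='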
On Android I am using this:
Global
String CHARSET_NAME = "UTF-8";
Encode
String base64 = new String(
        Base64.encode(byteArray, Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP),
        CHARSET_NAME);
return base64.trim();
Decode
byte[] bytes = Base64.decode(base64String,
        Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP);
which is equivalent to this in Java:
Encode
private static String base64UrlEncode(byte[] input)
{
    Base64 encoder = new Base64(true);
    byte[] encodedBytes = encoder.encode(input);
    return StringUtils.newStringUtf8(encodedBytes).trim();
}
Decode
private static byte[] base64UrlDecode(String input) {
    byte[] originalValue = StringUtils.getBytesUtf8(input);
    Base64 decoder = new Base64(true);
    return decoder.decode(originalValue);
}
I have never had problems with trailing "=", and I am using BouncyCastle as well.
If you're encoding bytes (at fixed bit length), then the padding is redundant. This is the case for most people.
Base64 consumes 6 bits of input at a time and produces one 8-bit character that uses only 6 bits' worth of combinations.
If your input is 1 byte (8 bits), the output is 12 bits: the smallest multiple of 6 that 8 fits into, with 4 bits left over. If your input is 2 bytes, you have to output 18 bits, with 2 bits left over. Comparing multiples of 6 against multiples of 8, the remainder can be 0, 2, or 4 bits.
The padding says to ignore those extra four (==) or two (=) bits; it tells the decoder that the trailing bits are filler, not data.
The padding isn't really needed when you're encoding bytes. A base64 decoder can simply ignore leftover bits that total less than 8 bits. In this case, you're best off removing it.
The padding might be of some use for streaming and for arbitrary-length bit sequences, as long as they're a multiple of two. It might also be used for cases where people want to send only the last 4 bits when more bits are remaining, if the remaining bits are all zero. Some people might want to use it to detect incomplete sequences, though it's hardly reliable for that. I've never seen this optimisation in practice. People rarely have these situations; most people use base64 for discrete byte sequences.
If you see answers suggesting you leave it on, that's not good advice if you're simply encoding bytes; it enables a feature for a set of circumstances you don't have. The only reason to have it on in that case might be to add tolerance for decoders that don't work without the padding. If you control both ends, that's a non-concern.
If you're using PHP the following function will revert the stripped string to its original format with proper padding:
<?php
$str = 'base64 encoded string without equal signs stripped';
$str = str_pad($str, strlen($str) + (4 - ((strlen($str) % 4) ?: 4)), '=');
echo $str, "\n";
Using Python you can remove base64 padding and add it back like this:
from math import ceil
stripped = original.rstrip('=')
original = stripped.ljust(ceil(len(stripped) / 4) * 4, '=')
Yes, there are valid use cases where padding is omitted from a Base 64 encoding.
The JSON Web Signature (JWS) standard (RFC 7515) requires Base 64 encoded data to omit
padding. It expects:
Base64 encoding [...] with all trailing '=' characters omitted (as permitted by Section 3.2) and without the inclusion of any line breaks, whitespace, or other additional characters. Note that the base64url encoding of the empty octet sequence is the empty string. (See Appendix C for notes on implementing base64url encoding without padding.)
The same applies to the JSON Web Token (JWT) standard (RFC 7519).
In addition, Julius Musseau's answer has indicated that Apache's Base 64 decoder doesn't require padding to be present in Base 64 encoded data.
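A sketch of the JWS-style convention in Python (strip the padding on encode, restore it on decode):

# base64url without padding, as JWS/JWT use it.
import base64

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

header = b64url_encode(b'{"alg":"none"}')  # no '=' in the output
assert b64url_decode(header) == b'{"alg":"none"}'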
I do something like this with Java 8+:
private static String getBase64StringWithoutPadding(String data) {
    if (data == null) {
        return "";
    }
    Base64.Encoder encoder = Base64.getEncoder().withoutPadding();
    // note: getBytes() uses the platform default charset; pass an explicit
    // charset (e.g. StandardCharsets.UTF_8) for predictable results
    return encoder.encodeToString(data.getBytes());
}
This method gets an encoder which leaves out padding.
As mentioned in other answers already, padding can be added back afterwards if you need to decode the string.
For Android, you may have trouble if you want to use the android.util.Base64 class, since it doesn't let you run plain unit tests, only instrumentation tests, because it requires the Android environment.
On the other hand, if you use java.util.Base64, the compiler warns you that your minimum SDK may be too low (below API 26) to use it.
So I suggest Android developers use
implementation "commons-codec:commons-codec:1.13"
Encoding object
fun encodeObjectToBase64(objectToEncode: Any): String {
    val objectJson = Gson().toJson(objectToEncode)
    return encodeStringToBase64(objectJson.toByteArray(Charsets.UTF_8))
}

fun encodeStringToBase64(byteArray: ByteArray): String {
    return Base64.encodeBase64URLSafeString(byteArray) // encode with no padding
}
Decoding to Object
fun <T> decodeBase64Object(encodedMessage: String, encodeToClass: Class<T>): T {
    val decodedBytes = Base64.decodeBase64(encodedMessage)
    val messageString = String(decodedBytes, StandardCharsets.UTF_8)
    return Gson().fromJson(messageString, encodeToClass)
}
Of course, you may omit the Gson parsing and pass your string, converted to a ByteArray, straight into the method.