Reading stream line by line without knowing its encoding - node.js

I have a situation where I need to process some data from a stream line by line. The problem is that the encoding of the data is not known in advance; it might be UTF-8 or any legacy single-byte encoding (e.g. Latin1, ISO-8859-5, etc). It will not be UTF16 or exotics like EBCDIC, so I can reasonably expect \n to be unambiguous, so in theory I can split it into lines. At some point, when I encounter an empty line, I will need to feed the rest of the stream somewhere else (without splitting it into lines, but still without any reencoding); think in terms of HTTP-style headers followed by an opaque body.
Here is what I got:
function processStream(stream) {
    var buffer = '';
    function splitLines(data) {
        buffer += data;
        var lf = buffer.indexOf('\n');
        while (lf >= 0) {
            // note: substr(0, lf - 1) drops the byte just before '\n' (the '\r' of a CRLF pair)
            var line = buffer.substr(0, lf - 1);
            buffer = buffer.substr(lf + 1);
            this.emit('line', line);
            lf = buffer.indexOf('\n');
        }
    }
    function processHeader(line) {
        if (line.length) {
            // do something with the line
        } else {
            // end of headers, stop splitting lines and start processing the body
            this
                .removeListener('data', splitLines)
                .removeAllListeners('line')
                .on('data', processBody);
            if (buffer.length) {
                // process leftover buffer as part of the body
                processBody(buffer);
                buffer = '';
            }
        }
    }
    function processBody(data) {
        // do something with the body chunks
    }
    stream.setEncoding('binary');
    stream
        .on('data', splitLines)
        .on('line', processHeader);
}
It does the job, but the problem is that the binary encoding is deprecated and will probably disappear in the future, leaving me without that option. All other Buffer encodings will either mangle the data or fail to decode it altogether if (most likely, when) it does not match the encoding. Working with Uint8Array instead will mean slow and inconvenient Javascript loops over the data just to find a newline.
Any suggestions on how to split a stream into lines on the fly, while remaining encoding-agnostic without using binary encoding?

Disclaimer: I'm not a Javascript developer.
At some point, when I encounter an empty line, I will need to feed the rest of the stream somewhere else (without splitting it into lines, but still without any reencoding)
Right. In that case, it sounds like you really don't want to think about the data as text at all. Treat it as you would any binary data, and split it on byte 0x0A. (Note that if it came from Windows to start with, you might want to also remove any trailing 0x0D value.)
I know it's text really, but without any encoding information, it's dangerous to impose any sort of interpretation on the data.
So you should keep two pieces of state:
A list of byte arrays
The current buffer
When you receive data, you logically want to create a new array with the current buffer prepending the new data. (For efficiency you may not want to actually create such an array, but I would do so to start with, until you've got it working.) Look for any 0x0A bytes, and split the array accordingly (create a new byte array as a "slice" of the existing array, and add the slice to the list). The new "current buffer" will be whatever data you have left after the final 0x0A.
If you see two 0x0A values in a row, then you'd go into your second mode of just copying the data.
This is all assuming that the Javascript / Node combination allows you to manipulate binary data as binary data, but I'd be shocked if it didn't. The important point is not to interpret it as text at any point.
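In Node.js terms, here is a minimal sketch of that approach, assuming the stream is left in its default (Buffer) mode, i.e. setEncoding() is never called, and with processHeader and processBody passed in as stand-ins for the real handlers:

    // Split on the raw 0x0A byte, never decoding the data.
    function processStream(stream, processHeader, processBody) {
        let buffer = Buffer.alloc(0);
        let inBody = false;

        stream.on('data', (chunk) => {
            if (inBody) return processBody(chunk);

            buffer = Buffer.concat([buffer, chunk]);
            let lf;
            while (!inBody && (lf = buffer.indexOf(0x0a)) >= 0) {
                let line = buffer.slice(0, lf);
                // drop a trailing 0x0D in case the input uses CRLF line endings
                if (line.length && line[line.length - 1] === 0x0d) line = line.slice(0, -1);
                buffer = buffer.slice(lf + 1);
                if (line.length) {
                    processHeader(line);   // still a Buffer; decode it later, once the charset is known
                } else {
                    inBody = true;         // empty line: everything that follows is the body
                    if (buffer.length) processBody(buffer);
                    buffer = Buffer.alloc(0);
                }
            }
        });
    }

Buffer#indexOf with a numeric argument searches for that byte value, so nothing is ever interpreted as text; the header lines stay Buffers until the caller decides how to decode them.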

Related

How to convert padbytes function to coldfusion

I have the following code in node and I am trying to convert to ColdFusion:
// a correct implementation of PKCS7. The rijndael js has a PKCS7 padding already implemented
// however, it incorrectly pads expecting the phrase to be multiples of 32 bytes when it should pad based on multiples
// 16 bytes. Also, every javascript implementation of PKCS7 assumes utf-8 char encoding. C# however is unicode or utf-16.
// This means that chars need to be treated in our code as 2 byte chars and not 1 byte chars.
function padBytes(string){
    const strArray = [...new Buffer(string, 'ucs2')];
    const need = 16 - ((strArray.length) % 16);
    for(let i = 0; i < need; i++) {
        strArray.push(need);
    }
    return Buffer.from(strArray);
}
I'm trying to understand exactly what this function is doing to convert it. As I think I understand it, it's converting the string to UTF-16 (UCS2) and then adding padding to each character. However, I don't understand why the need variable is the value it is, nor how exactly to achieve that in CF.
I also don't understand why it's only pushing the same value into the array over and over again. For starters, in my example script the string is 2018-06-14T15:44:10Z testaccount. The string array length is 64. I'm not sure how to achieve even that in CF.
I've tried character encoding, converting to binary, and converting to UTF-16, and I just don't understand the JS function well enough to replicate it in ColdFusion. I feel I'm missing something with the encoding.
EDIT:
The selected answer solves this problem, but because I was eventually trying to use the input data for encryption, the easier method was to not use this function at all but do the following:
<cfset stringToEncrypt = charsetDecode(input,"utf-16le") />
<cfset variables.result = EncryptBinary(stringToEncrypt, theKey, theAlgorithm, theIV) />
Update:
We followed up in chat and turns out the value is ultimately used with encrypt(). Since encrypt() already handles padding (automatically), no need for the custom padBytes() function. However, it did require switching to the less commonly used encryptBinary() function to maintain the UTF-16 encoding. The regular encrypt() function only handles UTF-8, which produces totally different results.
Trycf.com Example:
// Result with sample key/iv: P22lWwtD8pDrNdQGRb2T/w==
result = encrypt("abc", theKey, theAlgorithm, theEncoding, theIV);

// Result with sample key/iv: LJCROj8trkXVq1Q8SQNrbA==
input = charsetDecode("abc", "utf-16le");
result = binaryEncode(encryptBinary(input, theKey, theAlgorithm, theIV), "base64");
it's converting the string to utf-16 (ucs2) and then adding padding to each character.
... I feel I'm missing something with the encoding.
Yes, the first part seems to be decoding the string as UTF-16 (or UCS2 which are slightly different). As to what you're missing, you're not the only one. I couldn't get it to work either until I found this comment which explained "UTF-16" prepends a BOM. To omit the BOM, use either "UTF-16BE" or "UTF-16LE" depending on the endianess needed.
why it's only pushing the same value into the array over and over again.
Because that's the definition of PKCS7 padding. Instead of padding with something like nulls or zeroes, it calculates how many bytes of padding are needed, then uses that number as the padding value. For example, say a string needs an extra three bytes of padding: PKCS7 appends the value 3, three times: "string" + "3" + "3" + "3".
The rest of the code is similar in CF. Unfortunately, the results of charsetDecode() aren't mutable. You must build a separate array to hold the padding, then combine the two.
Note, this example combines the arrays using CF2016-specific syntax, but it could also be done with a simple loop instead.
Function:
function padBytes(string text){
    var combined = [];
    var padding = [];
    // decode as utf-16
    var decoded = charsetDecode(arguments.text,"utf-16le");
    // how many padding bytes are needed?
    var need = 16 - (arrayLen(decoded) % 16);
    // fill array with the padding bytes
    for(var i = 0; i < need; i++) {
        padding.append(need);
    }
    // concatenate the two arrays
    // CF2016+ specific syntax. For earlier versions, use a loop
    combined = combined.append(decoded, true);
    combined = combined.append(padding, true);
    return combined;
}
Usage:
result = padBytes("2018-06-14T15:44:10Z testaccount");
writeDump(binaryEncode( javacast("byte[]", result), "base64"));

Converting binary from string in node.js - which encoding to use?

I have a node.js code which receives a [command + data] via UDP (dgram) and creates an S3 bucket file in AWS.
Problem is, since the command + data is separated with a '|' character, I have to convert to string to apply split() and separate the command and the data. Then I pass this data to S3Bucket.putObject, which creates a file with that data. The data is binary, so it might contain non-ASCII characters such as 0xE5 (dec 229).
But when I check the created file, it contains a sequence like 0xEF 0xBF 0xBD replacing the original, so I guess this is the consequence of converting the data from binary (buffer in dgram) to string.
I tried to convert the data back to a buffer using Buffer.from('binary') and other encodings, but so far I haven't gotten it right...
How can I solve this problem? What would be the right way to accomplish this?
You don't have to convert to string:
let idx = buffer.indexOf('|');
let command = buffer.slice(0, idx);
let data = buffer.slice(idx + 1);
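As a rough usage sketch (the socket setup and the S3 hand-off below are made up for illustration, not taken from the question), Buffer#indexOf searches byte-wise, so the payload never passes through a string conversion:

    const dgram = require('dgram');
    const socket = dgram.createSocket('udp4');

    socket.on('message', (msg) => {                   // msg arrives as a Buffer
        const idx = msg.indexOf('|');                 // byte-wise search for the separator
        const command = msg.slice(0, idx).toString(); // the command part is plain text
        const data = msg.slice(idx + 1);              // leave the payload as a Buffer
        // hand `data` (still a Buffer) to s3.putObject(...) without re-encoding it
    });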

Buffer constructor not treating encoding correctly for buffer length

I am trying to construct a UTF-16-LE string from a JavaScript string as a new Buffer object.
It appears that creating new Buffer('xxxxxxxxxx', 'utf16le') will actually give a length of 1/2 what is expected; for example, we will only see 5 x's in the console logs.
var test = new Buffer('xxxxxxxxxx','utf16le');
for (var i = 0; i < test.length; i++) {
    console.log(i + ':' + String.fromCharCode(test[i]));
}
Node version is v0.8.6
It is really unclear what you want to accomplish here. Your statement can mean (at least) two things:
How to convert a JS string into a UTF-16-LE byte array
How to convert a byte array containing a UTF-16-LE string into a JS string
What you are doing in your code sample is decoding a byte array in a string represented as UTF-16-LE to a UTF-8 string and storing that as a buffer. Until you actually state what you want to accomplish, you have no chance of getting a coherent answer.
new Buffer('FF', 'hex') will yield a buffer of length 1 with all bits of the octet set. Which is likely the opposite of what you think it does.
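Just to spell out the two directions with the current Buffer API (Buffer.from rather than the constructor, which is deprecated in modern Node), a small sketch:

    // 1) JS string -> UTF-16-LE bytes
    const buf = Buffer.from('xxxxxxxxxx', 'utf16le');
    console.log(buf.length);               // 20: two bytes per character ('x', 0x00, 'x', 0x00, ...)

    // 2) UTF-16-LE bytes -> JS string
    console.log(buf.toString('utf16le'));  // 'xxxxxxxxxx'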

Is it possible to base64-encode a file in chunks?

I'm trying to base64-encode a huge input file and end up with a text output file, and I'm trying to find out whether it's possible to encode the input file bit by bit, or whether I need to encode the entire thing at once.
This will be done on the AS/400 (iSeries), if that makes any difference. I'm using my own base64 encoding routine (written in RPG) which works excellently and, were it not for size limitations, would be fine.
It's not possible bit by bit, but 3 bytes at a time (or any multiple of 3 bytes at a time) will do.
In other words, if you split your input file into chunks whose sizes are multiples of 3 bytes, you can encode the chunks separately and piece the resulting B64-encoded pieces together (in the corresponding order, of course). Note that the last chunk need not be exactly a multiple of 3 bytes in size; depending on the value of its size modulo 3, its corresponding B64 value will end with a few padding characters (typically the equal sign), but that's OK, as this will be the only piece that has (and needs) such padding.
In the decoding direction, it is the same idea, except that you need to split the B64-encoded data into chunks whose sizes are multiples of 4 bytes. Decode them in parallel / individually as desired and reassemble the original data by appending the decoded parts together (again in the same order).
Example:
"File" contents =
"Never argue with the data." (Jimmy Neutron).
Straight encoding = Ik5ldmVyIGFyZ3VlIHdpdGggdGhlIGRhdGEuIiAoSmltbXkgTmV1dHJvbik=
Now, in chunks:
"Never argue --> Ik5ldmVyIGFyZ3Vl
with the --> IHdpdGggdGhl
data." (Jimmy Neutron) --> IGRhdGEuIiAoSmltbXkgTmV1dHJvbik=
As you can see, pieced together in that order, the 3 encoded chunks amount to the same output as the encoding produced for the whole file.
Decoding is done similarly, with arbitrary chunk sizes provided they are multiples of 4 bytes. There is absolutely no need for any correspondence between the sizes used for encoding and decoding, although standardizing on one single size for each direction (say 300 and 400) may make things more uniform and easier to manage.
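To make the rule concrete, here is a small Node.js sketch (the function name is made up for illustration) that encodes a Buffer in fixed-size chunks; as long as the chunk size is a multiple of 3 bytes, concatenating the per-chunk encodings matches encoding everything at once:

    function base64InChunks(buf, chunkSize = 3 * 1024) {  // chunkSize must be a multiple of 3
        const pieces = [];
        for (let off = 0; off < buf.length; off += chunkSize) {
            pieces.push(buf.slice(off, off + chunkSize).toString('base64'));
        }
        return pieces.join('');
    }

    const data = Buffer.from('"Never argue with the data." (Jimmy Neutron)');
    console.log(base64InChunks(data, 12) === data.toString('base64')); // true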
It is a trivial effort to split any given bytestream into chunks.
You can base64 any chunk of bytes without problem.
The problem you are faced with is that unless you place specific requirements on your chunks (multiples of 3 bytes), the sequence of base64-encoded chunks will be different than the actual output you want.
In C#, this is one (sloppy) way you could do it lazily. The execution is actually deferred until string.Concat is called, so you can do anything you want with the chunked strings. (If you plug this into LINQPad you will see the output)
void Main()
{
    var data = "lorum ipsum etc lol this is an example!!";
    var bytes = Encoding.ASCII.GetBytes(data);
    var testFinal = Convert.ToBase64String(bytes);

    var chunkedBytes = bytes.Chunk(3);
    var base64chunks = chunkedBytes.Select(i => Convert.ToBase64String(i.ToArray()));
    var final = string.Concat(base64chunks);

    testFinal.Dump(); // output
    final.Dump();     // output
}

public static class Extensions
{
    public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int chunkSize)
    {
        while (list.Take(1).Count() > 0)
        {
            yield return list.Take(chunkSize);
            list = list.Skip(chunkSize);
        }
    }
}
Output
bG9ydW0gaXBzdW0gZXRjIGxvbCB0aGlzIGlzIGFuIGV4YW1wbGUhIQ==
bG9ydW0gaXBzdW0gZXRjIGxvbCB0aGlzIGlzIGFuIGV4YW1wbGUhIQ==
Hmmm, if you wrote the base64 conversion yourself you should have noticed the obvious thing: each sequence of 3 octets is represented by 4 characters in base64.
So you can split the base64 data at every multiple of four characters, and it will be possible to convert these chunks back to their original bits.
I don't know how character files and byte files are handled on an AS/400, but if it has both concepts, this should be very easy.
are text files limited in the length of each line?
are text files line-oriented, or are they just character streams?
how many bits does one byte have?
are byte files padded at the end, so that one can only create files that span whole disk sectors?
If you can answer all these questions, what exact difficulties do you have left?
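For the decoding side of that observation, a small Node.js sketch (again with a made-up function name) that splits the encoded text at any multiple of 4 characters, decodes the pieces independently, and appends the resulting bytes in order:

    function base64DecodeInChunks(b64, chunkChars = 4 * 1024) {  // chunkChars must be a multiple of 4
        const parts = [];
        for (let off = 0; off < b64.length; off += chunkChars) {
            parts.push(Buffer.from(b64.slice(off, off + chunkChars), 'base64'));
        }
        return Buffer.concat(parts);
    }

    const encoded = 'Ik5ldmVyIGFyZ3VlIHdpdGggdGhlIGRhdGEuIiAoSmltbXkgTmV1dHJvbik=';
    console.log(base64DecodeInChunks(encoded, 8).toString());
    // "Never argue with the data." (Jimmy Neutron)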

MS Visual c++ 2008 char buffer longer than defined

I've got a char* buffer to hold a file that I read in binary mode. I know the length of the file is 70 bytes, and this is the value being used to produce a buffer of the correct size. The problem is, there are 17 or 18 extra spaces in the array, so some random characters are being added to the end. Could this be a Unicode issue?
ulFLen stores the size of the file in bytes and has the correct value (70 for the file i'm testing on)
//Set up a buffer to store the file
pcfBuffer = new char[ulFLen];
//Reading the file
cout<<"Inputting File...";
fStream.seekg(0,ios::beg);
fStream.read(pcfBuffer,ulFLen);
if(!fStream.good()){cout<<"FAILED"<<endl;}else{cout<<"SUCCESS"<<endl;}
As it is a char array, you probably forgot a terminating NUL character.
The right way in this case would be:
//Set up a buffer to store the file and a terminating NUL character
pcfBuffer = new char[ulFLen+1];
//Reading the file
cout<<"Inputting File...";
fStream.seekg(0,ios::beg);
fStream.read(pcfBuffer,ulFLen);
if(!fStream.good()){cout<<"FAILED"<<endl;}else{cout<<"SUCCESS"<<endl;}
// Add NUL character
pcfBuffer[ulFLen] = 0;
But note that you only need a terminating NUL character for routines that depend on it, like string routines or when using printf using %s. If you use routines that use the fact that you know the length (70 characters), it will work without a NUL character, too.
Add the following snippet after the data has been read; it will add the terminating zero which is needed.
And btw, it should be pcfBuffer = new char[ulFLen+1];
size_t read_count = fStream.gcount();
if (read_count <= ulFLen)
    pcfBuffer[read_count] = 0;
This will work no matter how much data has been read (in your case gcount() should always return 70, so you could do the following instead: pcfBuffer[70] = 0;)
