Using NodeJS v5.6 I created a file called read-stream.js:
const
  fs = require('fs'),
  stream = fs.createReadStream(process.argv[2]);

stream.on('data', function(chunk) {
  process.stdout.write(chunk);
});

stream.on('error', function(err) {
  process.stderr.write("ERROR: " + err.message + "\n");
});
and a data file in plain text called target.txt:
hello world
this is the second line
If I do node read-stream.js target.txt the contents of target.txt are printed normally on my console and all is well.
However, if I replace process.stdout.write(chunk); with console.log(chunk); then the result I get is this:
<Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64 0a 74 68 69 73 20 69 73 20 74 68 65 20 73 65 63 6f 6e 64 20 6c 69 6e 65 0a>
I recently found out that by doing console.log(chunk.toString()); the contents of my file are once again printed normally.
As per this question, console.log is supposed to use process.stdout.write with the addition of a \n character. But what exactly is happening with encoding/decoding here?
Thanks in advance.
process.stdout is a stream, and its write() function accepts only strings and buffers. chunk is a Buffer object, so process.stdout.write sends its raw bytes straight to your terminal, which decodes them as text. console.log, on the other hand, builds a string representation of the Buffer object before outputting it, hence the <Buffer at the beginning to indicate the object's type, followed by the bytes of the buffer.
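A minimal illustration of the difference (the hex shown in the comment is what Node prints for this particular buffer):

const buf = Buffer.from('hello\n');
process.stdout.write(buf); // raw bytes go to the terminal: prints "hello"
console.log(buf); // inspected form: <Buffer 68 65 6c 6c 6f 0a>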
On a side note, process.stdout being a stream, you can pipe to it directly instead of reading every chunk:
stream.pipe(process.stdout);
I believe I found out what's happening:
The implementation of console.log in NodeJS is this:
Console.prototype.log = function() {
  this._stdout.write(util.format.apply(this, arguments) + '\n');
};
However, the util.format function of lib/util.js in NodeJS runs the inspect method on any input object, which in turn "returns a string representation of object, which is useful for debugging."
Thus, what's happening here is that due to this util.format conversion, any time we pass an object to console.log, that object is first turned into its string representation and then passed to process.stdout.write as a string, which is finally written to the terminal.
So when we use process.stdout.write directly with a Buffer, util.format is skipped completely and each byte is written straight to the terminal, since process.stdout.write is designed to handle buffers natively.
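You can reproduce that intermediate step by calling util.inspect yourself:

const util = require('util');
const buf = Buffer.from('hi');
console.log(util.inspect(buf)); // <Buffer 68 69>, the same text console.log(buf) prints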
Related
An MQTT client sends a binary message to a certain topic. My Node.js client subscribes to the topic and receives the binary data. In our architecture the payload is an Int16Array, but I cannot cast it successfully to a JavaScript array.
// uint16 array [255, 256, 257, 258] sent as message payload, whose contents are <ff 00 00 01 01 01 02 01>
When I do this:
mqttClient.on("message", (topic, payload) => {
  console.log(payload.buffer);
})
The output looks like:
ArrayBuffer {
  [Uint8Contents]: <30 11 00 07 74 65 73 74 6f 7a 69 ff 00 00 01 01 01 02 01>,
  byteLength: 19
}
which can't be cast to an Int16Array because of its odd length.
It also contains more bytes than the original message.
It seems the original bytes sit at the end of the payload, at some offset.
A Node.js Buffer carries the offset and byte-length information of its view into the underlying ArrayBuffer. By using them, the cast succeeds:
let arrayBuffer = payload.buffer.slice(payload.byteOffset, payload.byteOffset + payload.byteLength);
let int16Array = new Int16Array(arrayBuffer);
let array = Array.from(int16Array); // convert the typed array, not the ArrayBuffer
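Put together in the message handler, it looks like this (a sketch, assuming the example payload from the question and a little-endian machine, since Int16Array uses platform byte order):

mqttClient.on("message", (topic, payload) => {
  // slice() copies just the message bytes into a fresh, zero-offset ArrayBuffer,
  // which also satisfies the 2-byte alignment Int16Array requires; here the
  // byteOffset is 11, and a direct Int16Array view at an odd offset would throw.
  let arrayBuffer = payload.buffer.slice(payload.byteOffset, payload.byteOffset + payload.byteLength);
  let array = Array.from(new Int16Array(arrayBuffer));
  console.log(array); // [ 255, 256, 257, 258 ]
});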
I am working on Change Streams, introduced in MongoDB version 3.6. Change Streams have a feature where I can specify that streaming should start from a particular change in history. On resuming a change stream, the documentation for the native Node.js driver says (documentation here):
Specifies the logical starting point for the new change stream. This should be the _id field from a previously returned change stream document.
When I print it in console, this is what I am getting
{ _id:
   { _data:
      Binary {
        _bsontype: 'Binary',
        sub_type: 0,
        position: 49,
        buffer: <Buffer 82 5a 61 a5 4f 00 00 00 01 46 64 5f 69 64 00 64 5a 61 a5 4f 08 c2 95 31 d0 48 a8 2e 00 5a 10 04 7c c9 60 de de 18 48 94 87 3f 37 63 08 da bb 78 04> } },
  ...
}
My problem is I do not know how to store the _id of this format in a database or a file. Is it possible to convert this binary object to string so I can use it later to resume my change stream from that particular _id. Example code would be greatly appreciated.
Convert BSON Binary to buffer and back
const Binary = require('mongodb').Binary;
const fs = require('fs');
Save _data from _id:
var wstream = fs.createWriteStream('/tmp/test');
wstream.write(lastChange._id._data.read(0));
wstream.end();
Then rebuild resumeToken:
fs.readFile('/tmp/test', function(err, data) {
  const resumeToken = { _data: new Binary(data) };
});
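If you would rather store the token as a string (as the question asks) than as a file, base64 works as well; a sketch, assuming _data.read(0) returns the raw Buffer as in the snippet above:

const tokenString = lastChange._id._data.read(0).toString('base64');
// ...later, rebuild the resume token from the stored string:
const resumeToken = { _data: new Binary(Buffer.from(tokenString, 'base64')) };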
I have the following function in node.js inside an http.request():
res.on('data', function (chunk) {
  var sr = "response: " + chunk;
  console.log(chunk);
});
I get this in console
<Buffer 3c 3f 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 31 2e 30 22 20 65 6e 63 6f
64 69 6e 67 3d 22 75 74 66 2d 38 22 20 3f 3e 3c 72 65 73 75 6c 74 3e 3c 73 75 63
...>
But when I use this:
res.on('data', function (chunk) {
  var sr = "response: " + chunk;
  console.log(sr);
});
I get a proper xml response like this:
response: .....xml response.....
I don't understand why I need to append the chunk to a string to get the proper response. And what is the output produced by the first snippet?
chunk is a Buffer, which is Node's way of storing binary data.
Because it's binary data, you need to convert it to a string before you can properly show it (otherwise, console.log will show its object representation). One method is to append it to another string (your second example does that), another method is to call toString() on it:
console.log(chunk.toString());
However, this has the potential of failing when chunk contains incomplete characters (a UTF-8 character can consist of multiple bytes, but there is no guarantee that chunk isn't cut off right in the middle of such a byte sequence).
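Node's built-in string_decoder module handles that case: it buffers a trailing incomplete multi-byte sequence until the next chunk arrives. A minimal sketch:

var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');
res.on('data', function (chunk) {
  process.stdout.write(decoder.write(chunk)); // emits only complete characters
});
res.on('end', function () {
  process.stdout.write(decoder.end()); // flushes any buffered partial character
});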
chunk is just a Buffer holding the data in binary form. You can also ask for utf8 as the character encoding, which makes the stream emit strings instead; you do this when creating the read stream:
var myReadStream = fs.createReadStream( __dirname + '/readme.txt', 'utf8');
myReadStream.on('data', function(chunk){
  console.log('new chunk received');
  console.log(chunk);
});
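Equivalently, you can set the encoding on an existing stream instead of passing it at creation:

myReadStream.setEncoding('utf8'); // 'data' events now emit strings, not Buffers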
Can someone please explain to me how the zlib library works in Nodejs?
I'm fairly new to Nodejs, and I'm not yet sure how to use buffers and streams.
My simple scenario is a string variable, and I want to either zip or unzip (deflate or inflate, gzip or gunzip, etc.) the string to another string.
I.e. (how I would expect it to work)
var zlib = require('zlib');
var str = "this is a test string to be zipped";
var zip = zlib.Deflate(str); // zip = [object Object]
var packed = zip.toString([encoding?]); // packed = "packedstringdata"
var unzipped = zlib.Inflate(packed); // unzipped = [object Object]
var newstr = unzipped.toString([again - encoding?]); // newstr = "this is a test string to be zipped";
Thanks for the help :)
For anybody stumbling on this in 2016 (and also wondering how to serialize compressed data to a string rather than a file or a buffer) - it looks like zlib (since node 0.11) now provides synchronous versions of its functions that do not require callbacks:
var zlib = require('zlib');
var input = "Hellow world";
var deflated = zlib.deflateSync(input).toString('base64');
var inflated = zlib.inflateSync(new Buffer(deflated, 'base64')).toString();
console.log(inflated);
Syntax has changed to simply:
var inflated = zlib.inflateSync(Buffer.from(deflated, 'base64')).toString()
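The gzip flavor works the same way, if that framing is what you need (a sketch using the synchronous gzipSync/gunzipSync built-ins):

var zipped = zlib.gzipSync('this is a test string to be zipped').toString('base64');
var unzipped = zlib.gunzipSync(Buffer.from(zipped, 'base64')).toString();
console.log(unzipped); // this is a test string to be zipped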
Update: Didn't realize there was a new built-in 'zlib' module in node 0.5. My answer below is for the 3rd party node-zlib module. Will update answer for the built-in version momentarily.
Update 2: Looks like there may be an issue with the built-in 'zlib'. The sample code in the docs doesn't work for me. The resulting file isn't gunzip'able (fails with "unexpected end of file" for me). Also, the API of that module isn't particularly well-suited for what you're trying to do. It's more for working with streams rather than buffers, whereas the node-zlib module has a simpler API that's easier to use for Buffers.
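For completeness, the stream-oriented built-in API is typically used by piping; a sketch with hypothetical file names, gzipping one file into another:

var zlib = require('zlib');
var fs = require('fs');
fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('input.txt.gz'));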
An example of deflating and inflating, using 3rd party node-zlib module:
// Load zlib and create a buffer to compress
var zlib = require('zlib');
var input = new Buffer('lorem ipsum dolor sit amet', 'utf8')
// What's 'input'?
//input
//<Buffer 6c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74>
// Compress it
zlib.deflate(input)
//<SlowBuffer 78 9c cb c9 2f 4a cd 55 c8 2c 28 2e cd 55 48 c9 cf c9 2f 52 28 ce 2c 51 48 cc 4d 2d 01 00 87 15 09 e5>
// Compress it and convert to utf8 string, just for the heck of it
zlib.deflate(input).toString('utf8')
//'x???/J?U?,(.?UH???/R(?,QH?M-\u0001\u0000?\u0015\t?'
// Compress, then uncompress (get back what we started with)
zlib.inflate(zlib.deflate(input))
//<SlowBuffer 6c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74>
// Again, and convert back to our initial string
zlib.inflate(zlib.deflate(input)).toString('utf8')
//'lorem ipsum dolor sit amet'
broofa's answer is great, and that's exactly how I'd like things to work. For me node insisted on callbacks. This ended up looking like:
var zlib = require('zlib');
var input = new Buffer('lorem ipsum dolor sit amet', 'utf8')
zlib.deflate(input, function(err, buf) {
console.log("in the deflate callback:", buf);
zlib.inflate(buf, function(err, buf) {
console.log("in the inflate callback:", buf);
console.log("to string:", buf.toString("utf8") );
});
});
which gives:
in the deflate callback: <Buffer 78 9c cb c9 2f 4a cd 55 c8 2c 28 2e cd 55 48 c9 cf c9 2f 52 28 ce 2c 51 48 cc 4d 2d 01 00 87 15 09 e5>
in the inflate callback: <Buffer 6c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74>
to string: lorem ipsum dolor sit amet
Here is a non-callback version of the code:
var zlib = require('zlib');
var input = Buffer.from('John Dauphine', 'utf8');
var deflated = zlib.deflateSync(input);
console.log("Deflated:", deflated.toString("utf-8"));
var inflated = zlib.inflateSync(deflated);
console.log("Inflated:", inflated.toString("utf-8"));
I have a java-based server that allows client applications to connect via other programming languages (java, unity, obj-c). I would like to know how to add javascript to this list using node.js and socket.io. The server listens on a set port and accepts simple json data plus an int for the length in bytes, and it responds in the same format. The format of the "packet" is like so:
first four bytes are the length
00 00 00 1c
remaining bytes are the data
7b 22 69 64 22 3a 31 2c 22 6e 61 6d 65 22 3a 22 73 6f 6d 65 77 69 64 67 65 74 22 7d
The data is sent over TCP and is encoded in little endian. The originating object in Java is modeled like so:
public class Widget {
    private int id;
    private String name;

    public int getId() { return id; }
    public String getName() { return name; }
}
The JSON would be:
{"id":1,"name":"somewidget"}
You will need a TCP socket. Connect it to the service and listen for the data event. When the data event fires, look at the buffer's length. If it is <= 4 bytes, you probably should discard it*. Otherwise, read the first 4 bytes as a little-endian unsigned integer using readUInt32LE(0). Then use this number as the length of the remaining buffer. If the buffer is shorter than the given length, only "read" the remaining length, else "read" the given length. Use the toString method for this. You will get a string that can be parsed using the JSON.parse method, which will return the JavaScript object matching the JSON.
You can build your packets basically the same way, by instantiating a buffer and writing all the data to it.
Also see the Buffers and net documentation for details.
* I do not know exactly when node fires its data events, but your data might be received fragmented (that is, split up into multiple data events). It could happen, and due to the streaming nature of the TCP protocol it most likely will happen if the JSON string is long enough not to fit into a single segment. In that case, you probably should not simply discard the buffer, but try to reassemble the fragmented data.
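A minimal sketch of that read loop, assuming the little-endian 4-byte length prefix described in the question (the port is hypothetical) and reassembling fragmented data events:

var net = require('net');

var pending = new Buffer(0);
var soc = net.connect(8088);

soc.on('data', function (chunk) {
  pending = Buffer.concat([pending, chunk]);
  // Extract every complete frame currently buffered
  while (pending.length >= 4) {
    var length = pending.readUInt32LE(0); // little-endian length prefix
    if (pending.length < 4 + length) break; // frame not complete yet
    var json = pending.slice(4, 4 + length).toString('utf8');
    pending = pending.slice(4 + length);
    console.log(JSON.parse(json)); // e.g. { id: 1, name: 'somewidget' }
  }
});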
So if you just want to send a request to the server and get a response, you could do this:
var net = require('net');
var soc = net.Socket();
soc.connect(8088);
soc.on('connect', function(){
  var data, request, header;
  data = {request : true};
  data = JSON.stringify(data);
  request = new Buffer(Buffer.byteLength(data));
  request.write(data);
  header = new Buffer(4);
  header.writeUInt32LE(request.length, 0); // the 4 length bytes, little-endian
  // send request
  soc.end(Buffer.concat([header, request]));
});
soc.on('data', function(buffer){
  // Crop length bytes
  var data = JSON.parse(buffer.slice(4).toString('utf-8'));
  soc.destroy();
  console.log(data);
});