I have a PNG object which I created using node-png, and according to the docs it's a "readable and writable Stream".
I would like to convert the PNG object to base64 and send it to the client via socket.io, where I'll place the string in an image src.
I've tried many things, but it seems it's not trivial to convert a stream into a string.
Note that the data is created inside Node and not read from the file system.
How can I achieve this?
Here is what I did for future readers (this helped too):
png.pack();
var chunks = [];
png.on('data', function (chunk) {
chunks.push(chunk);
console.log('chunk:', chunk.length);
});
png.on('end', function () {
var result = Buffer.concat(chunks);
console.log('final result:', result.length);
io.sockets.emit('image', result.toString('base64'));
});
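For completeness, the receiving side could look roughly like this. It is only a sketch; it assumes a socket.io 1.x-style client is loaded on the page and that an img element with id "preview" exists, neither of which is shown in the question:
// Client-side sketch (assumes the socket.io client script is loaded
// and that the page contains <img id="preview">)
var socket = io();

socket.on('image', function (base64) {
    // build a data URI from the base64 payload and use it as the image source
    document.getElementById('preview').src = 'data:image/png;base64,' + base64;
});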
You do not want to convert the stream itself into a string, but rather each readable chunk:
stream.on('readable', function() {
    var chunk = stream.read();
    if (chunk !== null) {
        var string = chunk.toString('base64');
        // send through websocket
    }
});
You can also do this for the complete data running through the stream (note that concatenating per-chunk base64 strings only yields valid base64 for the whole payload if each chunk's byte length is a multiple of 3; otherwise collect the raw chunks and encode once at the end, as in the code above):
var data = '';
stream.on('readable', function() {
    var chunk = stream.read();
    if (chunk !== null) {
        data += chunk.toString('base64');
    }
});
stream.on('end', function() {
    console.log(data);
});
Which approach you choose depends on whether the client needs the complete PNG to be available or whether single chunks are okay.
However, if you are interested in how this can look in a practical example (with the image being uploaded via HTML5 drag & drop), you can check out messenger.
Related
I have a question about Buffer. Here is my code:
var Grid = require('gridfs-stream');
var mongodb = require('mongodb');
var gfs = Grid(db, mongodb);
var deferred = Q.defer();
var image_buf = new Buffer('buffer');
var readableStream = gfs.createReadStream(name);
readableStream.on('data',function(chunk){
console.log(chunk);
image_buf = Buffer.concat([image_buf, chunk]);
console.log(image_buf)//differ from the chunk above
});
readableStream.on('end',function(){
db.close();
deferred.resolve(image_buf);
})
return deferred.promise;
What I'm doing is reading an image from MongoDB through gridfs-stream. I want to collect all the chunks from the stream into another variable so that I can reuse them to draw the image in another API; that is why I use image_buf and Buffer. However, I get a completely different buffer string: as you can see in the code above, I logged both the chunk and the resulting image_buf, and they are totally different. Can anyone tell me the reason for this, and how I can correctly collect all the chunks? Thanks a lot!
UPDATE: OK, I figured it out. I'll append my code below for anyone who is struggling with the same problem:
readableStream.on('data', function(chunk){
    console.log("writing!!!");
    if (!image_buf) {
        image_buf = chunk;
    } else {
        image_buf = Buffer.concat([image_buf, chunk]);
    }
});
The update provided by the question poster does not work, so I am going to provide an answer of my own. Instead of using new Buffer('buffer'), it is better to use a simple array, push the chunks into it, and call Buffer.concat(bufferArray) at the end to get the stream's buffer, like this:
var readableStream = gfs.createReadStream(name);
var bufferArray = [];
readableStream.on('data',function(chunk){
bufferArray.push(chunk);
});
readableStream.on('end',function(){
var buffer = Buffer.concat(bufferArray);
deferred.resolve(buffer);
})
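For reference, here is a hedged sketch of how the resulting promise might be consumed; the readImage wrapper name is hypothetical and simply wraps the snippet above (it assumes Q and gfs are in scope, as in the question):
// Hypothetical wrapper around the snippet above; the name readImage is illustrative
function readImage(name) {
    var deferred = Q.defer();
    var readableStream = gfs.createReadStream(name);
    var bufferArray = [];
    readableStream.on('data', function(chunk){
        bufferArray.push(chunk);
    });
    readableStream.on('end', function(){
        // resolve with a single Buffer holding the whole image
        deferred.resolve(Buffer.concat(bufferArray));
    });
    readableStream.on('error', function(err){
        deferred.reject(err);
    });
    return deferred.promise;
}

// Usage: the resolved value is one Buffer containing the whole image
readImage(name).then(function(imageBuffer){
    console.log('image size:', imageBuffer.length);
});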
var http = require('http');
var map = require('through2-map');
uc = map(function(ch) {
return ch.toString().toUpperCase();
});
server = http.createServer(function(request, response) {
request.on('data',function(chunk){
if (request.method == 'POST') {
//change the data from request to uppercase letters and
//pipe to response.
}
});
});
server.listen(8000);
I have two questions about the code above. First, I read the documentation for request; it says that request is an instance of IncomingMessage, which implements the Readable Stream interface. However, I couldn't find the .on method in the Stream documentation, so I don't know what the chunk argument in the callback passed to request.on represents. Second, I want to manipulate the data from the request and pipe it to the response. Should I pipe from chunk or from request? Thank you for your consideration!
Is chunk a stream?
Nope. The stream is the flow through which the chunks of the whole data are sent.
A simple example: if you read a 1 GB file, a stream will read it in chunks of, say, 10 KB; each chunk goes through your stream from beginning to end, in the right order.
I use a file as an example, but a socket, a request, or any other stream is based on the same idea.
Also, whenever someone sends a request to this server, would that entire thing be a chunk?
In the particular case of HTTP, only the request body is a stream, e.g. the posted files/data (the same goes for the response body). Headers are treated as objects applied to the request before the body is written to the socket.
A small example to help you, with some concrete code:
var through2 = require('through2');
var Readable = require('stream').Readable;
var s1 = through2(function transform(chunk, enc, cb){
    console.log("s1 chunk %s", chunk.toString());
    cb(null, chunk.toString() + chunk.toString());
});
var s2 = through2(function transform(chunk, enc, cb){
    console.log("s2 chunk %s", chunk.toString());
    cb(null, chunk);
});
s2.on('data', function (data) {
console.log("s2 data %s", data.toString())
})
s1.on('end', function (data) {
console.log("s1 end")
})
s2.on('end', function (data) {
console.log("s2 end")
})
var rs = new Readable;
rs.push('beep '); // this is a chunk
rs.push('boop'); // this is a chunk
rs.push(null); // this is a signal to end the stream
rs.on('end', function (data) {
console.log("rs end")
})
console.log(
    ".pipe always returns the destination stream: %s", rs.pipe(s1) === s1
);
s1.pipe(s2)
I would suggest you read more:
https://github.com/substack/stream-handbook
http://maxogden.com/node-streams.html
https://github.com/maxogden/mississippi
All streams are instances of EventEmitter (docs); that is where the .on method comes from.
Regarding the second question, you MUST pipe from the Stream object (request in this case). The "data" event emits data as a Buffer or a String (the "chunk" argument in the event listener), not a stream.
Manipulating streams is usually done by implementing a Transform stream (docs). There are many npm packages that make this process simpler (like through2-map), but in reality they just produce Transform streams under the hood.
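For comparison, here is a minimal hand-rolled Transform stream built on the core stream module. It is only a sketch of the same uppercase idea, not taken from the answer below, and it uses the older util.inherits style so it also runs on pre-4.x Node:
var Transform = require('stream').Transform;
var util = require('util');

// A Transform stream that uppercases whatever passes through it
function Upper(options) {
    Transform.call(this, options);
}
util.inherits(Upper, Transform);

Upper.prototype._transform = function(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
};

// Usage: pipe stdin through the transform to stdout
process.stdin.pipe(new Upper()).pipe(process.stdout);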
Consider the following:
var http = require('http');
var map = require('through2-map');
// Transform Stream to uppercase
var uc = map(function(ch) {
return ch.toString().toUpperCase();
});
var server = http.createServer(function(request, response) {
// Pipe from the request to our transform stream
request
.pipe(uc)
// pipe from transform stream to response
.pipe(response);
});
server.listen(8000);
You can test it with curl:
$ curl -X POST -d 'foo=bar' http://localhost:8000
# prints FOO=BAR
I have a Node.js application that calls a server which returns a fragment of an image. I am trying to save each segment in a buffer so that I can later merge them together into the full image.
var req = http.request(options, function(res) {
var segment = parseInt(res.headers["segment"]);
res.setEncoding('binary');
var data = "";
res.on('data', function(chunk) {
data += chunk;
});
res.on('end', function() {
images[segment] = (new Buffer(data, 'binary'));
});
});
After I get all the segments:
var totalImage = "";
for (image in images) {
totalImage += image.toString('binary');
}
fs.writeFile('image.png', totalImage, function(e) {
console.log('done');
});
I want to use node-pngjs, but I am not sure how to stream the response binary into a PNG so that I can save the pixel buffer, instead of the raw binary, for later consumption.
I attempted to do the following:
res.pipe(new PNG()).on('parse', function(err, data) {
buffers[segment] = data;
});
but this led to an 'Invalid file signature' error during parsing.
There may well be more than one issue here, but this code is not correct:
for (image in images) {
totalImage += image.toString('binary');
}
In that code, image is the index into the array (not the contents of the array cell), so all you're doing is adding up the indexes 0+1+2+3+4 and so on.
Arrays should never be iterated with the for/in construct, because that iterates all enumerable properties of the object, not just the array elements. And you weren't fetching the actual contents of the array cell, which would have been images[image]. Instead, you could use:
for (var i = 0; i < images.length; i++) {
totalImage += images[i].toString('binary');
}
P.S. I suspect you may have encoding issues as well. You take the data from the HTTP request, put it all into an array of Buffer objects, then convert those to "binary" strings and add them together into a string variable. Since this is binary data, it really seems like you should be collecting it only in Buffers, not strings.
After looking into Buffers in Node.js a little more: you can combine your array of Buffers like this:
var totalImage = Buffer.concat(images);
This keeps everything in binary Buffer format and avoids any encoding issues. Found this here and here.
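Putting both points together, here is a hedged sketch of the request handler collecting chunks as Buffers from the start, so no 'binary' string conversions are needed at all (it assumes the same options, images array, and segment header as in the question):
var req = http.request(options, function(res) {
    var segment = parseInt(res.headers["segment"], 10);
    var chunks = [];
    res.on('data', function(chunk) {
        chunks.push(chunk); // each chunk is already a Buffer
    });
    res.on('end', function() {
        images[segment] = Buffer.concat(chunks); // one Buffer per segment
    });
});
req.end();
and then, once every segment has arrived:
var totalImage = Buffer.concat(images);
fs.writeFile('image.png', totalImage, function(e) {
    console.log('done');
});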
I am trying to work with the new streams API in Node.js, but I'm having trouble when specifying a fixed read buffer size.
var http = require('http');
var req = http.get('http://143.226.75.100/waug_mp3_128k', function (res) {
res.on('readable', function () {
var receiveBuffer = res.read(1024);
console.log(receiveBuffer.length);
});
});
This code will receive a few buffers and then exit. However, if I add this line after the console.log() line:
res.read(0);
... all is well again. My program continues to stream as predicted.
Why is this happening? How can I fix it?
It's explained here.
As far as I understand it, by reading only 1024 bytes on each 'readable' event, Node is left to assume that you're not interested in the rest of the data sitting in the stream's internal buffer, and discards it. Issuing the read(0) (in the same event-loop iteration) 'resets' this behaviour. I'm not sure why the process exits after reading a couple of 1024-byte buffers, though; I can recreate it, but I don't understand it yet :)
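If the fixed-size reads are really needed, one common pattern (a sketch, not part of the original answer) is to keep calling read() in a loop until it returns null, so the internal buffer is drained on every 'readable' event:
res.on('readable', function () {
    var receiveBuffer;
    // keep pulling 1024-byte slices until read() returns null,
    // i.e. until less than 1024 bytes (or nothing) remains buffered
    while ((receiveBuffer = res.read(1024)) !== null) {
        console.log(receiveBuffer.length);
    }
});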
If you don't have a specific reason to use the 1024-byte reads, just read the entire buffer for each event:
var receiveBuffer = res.read();
Or, instead of using non-flowing (paused) mode, use flowing mode via the data/end events:
var http = require('http');
var req = http.get('http://143.226.75.100/waug_mp3_128k', function (res) {
var chunks = [];
res.on('data', function(chunk) {
chunks.push(chunk);
console.log('chunk:', chunk.length);
});
res.on('end', function() {
var result = Buffer.concat(chunks);
console.log('final result:', result.length);
});
});
I've read about the Node.js event loop, but here is what I don't understand well:
I've created a simple HTTP server that logs the whole request POST data to a file.
I used Apache ab to flood it, posting a 700 KB file with each request.
I imagined that each request would write its chunks interleaved with the others, one per tick of the event loop, but I found that the full POST data is written in one piece per request, and I can't understand why.
I'm using something like this:
stream = require('fs').createWriteStream('path/to/log.file', {flags: 'a'})
log = function(data){
return stream.write(data)
}
require('http').createServer(function(req, res)
{
// or req.pipe(stream)
req.on('data', function(chunk){
log(chunk.toString() + "\r\n")
})
req.on('end', function(){
res.end("ok")
})
}).listen(8000)
sorry for my bad English :)
I edited the code to output the chunk size and also to use an easily identifiable word, 'SQUIRREL', to search for in the log file. I also sent the image file using curl instead of Apache ab, mainly because I do not have the latter set up.
If you look at the output of the HTTP server in the terminal where it is running, you will see the chunk size for each chunk as it is processed, which in my case was 65536 (as you alluded to in your comments). In a text search of the log file, I see the word SQUIRREL once for each chunk that was processed.
There is nothing wrong with your code, hopefully making these changes will allow you to see what you are expecting to see.
stream = require('fs').createWriteStream('./logfile.txt', {flags: 'a'});
log = function(data){
return stream.write(data);
};
require('http').createServer(function(req, res)
{
// or req.pipe(stream)
req.on('data', function(chunk){
log(chunk.toString() + "SQUIRREL ");
console.log(chunk.length);
})
req.on('end', function(){
res.end("ok");
})
}).listen(8000)
curl --data-binary "#IMAG0152.jpg" http://localhost:8000
Thanks to everyone who tried to help; I found the main issue.
stream.write was returning false. I added the following code, using the 'drain' event together with .pause() and .resume(), and the problem was solved:
var ok = stream.write(chunk);
if (!ok) {
    // write() has already buffered this chunk, so don't write it again;
    // just stop reading until the write stream has drained
    req.pause();
    stream.once('drain', function(){
        req.resume();
    });
}
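For what it's worth, the req.pipe(stream) variant already hinted at in the earlier code comment handles this backpressure dance automatically. A minimal sketch of the same server using pipe (assuming the same append-mode log stream) could look like this:
var stream = require('fs').createWriteStream('path/to/log.file', {flags: 'a'});

require('http').createServer(function(req, res) {
    // pipe() pauses the request whenever stream.write() returns false
    // and resumes it on 'drain', so no manual bookkeeping is needed;
    // end: false keeps the shared log stream open across requests
    req.pipe(stream, { end: false });
    req.on('end', function(){
        res.end("ok");
    });
}).listen(8000);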