I am using the canvas npm package to create images, and trying to read the PNG stream into a variable seems to corrupt the data.
This is how I save it:
let fullStream;
let stream = canvas.createPNGStream(null, { quality: 1 })
stream.on('data', (d) => {
d = d.toString()
if (!fullStream) fullStream = d
else fullStream = fullStream + d
})
The question is: what am I doing wrong that corrupts the data, how do I fix it, and how do I save the image in a variable?
The content of fullStream looks fine at a glance, but it is not a valid PNG.
createPNGStream returns a readable stream; you just need to pipe its output to a writable stream instead of converting the chunks to strings (PNG data is binary, and toString() mangles it):
const fs = require('fs')
const { createCanvas, loadImage } = require('canvas')
// create canvas
const canvas = createCanvas(200, 200)
const ctx = canvas.getContext('2d')
ctx.font = '30px Impact'
ctx.rotate(0.1)
ctx.fillText('Awesome!', 50, 100)
const text = ctx.measureText('Awesome!')
ctx.strokeStyle = 'rgba(0,0,0,0.5)'
ctx.beginPath()
ctx.lineTo(50, 102)
ctx.lineTo(50 + text.width, 102)
ctx.stroke()
//save to png
const out = fs.createWriteStream(__dirname + '/test.png')
const stream = canvas.createPNGStream()
stream.pipe(out)
out.on('finish', () => console.log('The PNG file was created.'))
Related
I'm trying to process images from the webcam using PoseNet on the server side, but I'm not sure how to pass the image data to estimateSinglePose.
Below is the simplified version of the code;
CLIENT
const imageData = context.getImageData(0, 0, 320, 180);
const buffer = imageData.data.buffer;
socket.emit("signal", buffer); //Pass it to the server through websocket
BACKEND
socket.on("signal", (data)=> {
const buffer = new Uint8Array(data);
const image = ts.tensor(data).reshape([180, 320, -1]);
// this where I'm stuck, I don't know how to pass the image to the estimateSinglePose
})
EDIT 1
Passing it to the estimateSinglePose resulted in an error.
Error: Invalid TF_Status: 3
Message: Incompatible shapes: [193,257,4] vs. [3]
at NodeJSKernelBackend.executeSingleOutput (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:209:43)
at Object.kernelFunc (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-node/dist/kernels/Add.js:28:24)
at kernelFunc (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3139:32)
at /Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3200:27
at Engine.scopedRun (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3012:23)
at Engine.runKernelFunc (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3196:14)
at Engine.runKernel (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3068:21)
at add_ (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:8969:19)
at Object.add__op [as add] (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3986:29)
at ResNet.preprocessInput (/Users/xxx/app/server/node_modules/@tensorflow-models/posenet/dist/resnet.js:41:19)
estimateSinglePose takes an HTMLImageElement, HTMLVideoElement, or HTMLCanvasElement as its parameter. Server side with Node.js, you can use the canvas package to get the same behavior as the browser canvas. Note that the raw bytes from the socket have to be wrapped in an ImageData object before putImageData can draw them:
const posenet = require('@tensorflow-models/posenet');
const {ImageData, createCanvas} = require('canvas');
const canvas = createCanvas(320, 180); // must match the client's getImageData(0, 0, 320, 180)
const ctx = canvas.getContext('2d');
const net = await posenet.load();
socket.on("signal", async (data)=> {
// rebuild an ImageData from the raw RGBA bytes before drawing it
const imageData = new ImageData(new Uint8ClampedArray(data), 320, 180);
ctx.putImageData(imageData, 0, 0)
const pose = await net.estimateSinglePose(canvas, {
flipHorizontal: false
});
// you can now use pose
})
I am trying to follow a tutorial and just want to load an image in TensorFlow.js.
import * as tf from '@tensorflow/tfjs-node';
import fs from 'fs';
(async () => {
const desk = fs.readFileSync(__dirname + '/' + 'desk.png');
const buf = Buffer.from(desk);
const imageArray = new Uint8Array(buf);
const pngDecodedTensor = tf.node.decodePng(imageArray);
})();
When I run the above code, I see this error:
The shape of dict['image_tensor'] provided in model.execute(dict) must be [-1,-1,-1,3], but was [1,4032,3024,4]
The image is 3024x4032 and 10.4MB
Thanks for your help
The issue is related to the tensor shape when making the prediction.
The model expects a tensor with 3 channels, whereas the tensor passed as an argument has 4 channels (RGBA).
The tensor can be sliced to keep only 3 of its 4 channels:
const pngDecodedTensor = tf.node.decodePng(imageArray).slice([0, 0, 0], [-1, -1, 3])
Alternatively, tf.node.decodePng(imageArray, 3) drops the alpha channel at decode time.
You may want to try the fromPixels function like this:
const { Image } = require('canvas')
// From a buffer:
fs.readFile('images/squid.png', (err, squid) => {
if (err) throw err
const img = new Image()
img.onload = () => ctx.drawImage(img, 0, 0)
img.onerror = err => { throw err }
img.src = squid
})
// From a local file path:
const img = new Image()
img.onload = () => ctx.drawImage(img, 0, 0)
img.onerror = err => { throw err }
img.src = 'images/squid.png'
// From a remote URL:
img.src = 'http://picsum.photos/200/300'
// ... as above
var imgAsTensor = tf.fromPixels(img);
// ... now use it as you wish.
You can learn more about this function in the TensorFlow.js documentation (in recent releases it lives at tf.browser.fromPixels).
I am trying to write on an image in Hindi, using the node-canvas library. There is some problem with my output. Can someone help me?
const { createCanvas, loadImage, registerFont} = require('canvas')
const canvas = createCanvas(400, 400)
const ctx = canvas.getContext('2d')
var str= "यह. मिसिसिपी है";
console.log(str);
loadImage('missisippi.jpg').then((image) => {
console.log(image);
ctx.drawImage(image, 0 , 0, 400, 400);
ctx.fillText(str,100,40);
var body = canvas.toDataURL(),
base64Data = body.replace(/^data:image\/png;base64,/,""),
binaryData = new Buffer(base64Data, 'base64').toString('binary');
require("fs").writeFile("out.png", binaryData, "binary", function(err) {
console.log(err); // writes out file without error, but it's not a valid image
})
// console.log('<img src="' + canvas.toDataURL() + '" />')
})
This is the output image. You can see that मिसिसिपी is rendered incorrectly (in case you are familiar with Hindi).
I have also tried the very same thing with npm gm, and I faced the same issue there. Can someone help me out?
How can I write text on an image with a custom font?
The following is working fine for me:
var fs = require('fs')
var path = require('path')
var Canvas = require('canvas')
function fontFile (name) {
return path.join(__dirname, '/fonts/', name)
}
Canvas.registerFont(fontFile('ARBLI___0.ttf'), {family: 'ARBLI___0'})
var canvas = Canvas.createCanvas(7100, 3500)
var ctx = canvas.getContext('2d')
var Image = Canvas.Image;
var img = new Image();
img.src = 'input/DOIT_Art_Size.jpg';
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
ctx.fillStyle = 'white';
ctx.textAlign = 'center';
ctx.font = '150pt ARBLI___0'
ctx.fillText('यह मिसिसिपी है',3900, 1700)
canvas.createPNGStream().pipe(fs.createWriteStream(path.join(__dirname, 'output/font.png')))
canvas.createJPEGStream().pipe(fs.createWriteStream(path.join(__dirname, 'output/font.jpg')))
My Node version is 6.11.2.
The canvas module is 2.0.0-alpha.16.
My input image dimensions are 7100×3500 pixels.
I'm trying to create an image on the server with Google Cloud Functions for Firebase and node-canvas, and store it in Firebase Storage.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const gcs = require('@google-cloud/storage')();
const path = require('path');
const Canvas = require('canvas-prebuilt');
const env = require('dotenv').config();
try {admin.initializeApp(functions.config().firebase);} catch(e) {}
//Trigger on creation of a new post
exports.default = functions.database.ref('/posts/{postId}').onCreate(event => {
//Get the postID
const postId = event.params.postId;
console.log('We have a new post ' + postId);
//Get data from the postid
return admin.database().ref('/posts/' + postId).once('value').then(function(snapshot) {
const text = snapshot.val().text;
const canvas = new Canvas(1024, 512);
const ctx = canvas.getContext('2d');
//Draw Background
ctx.fillStyle = '#000';
ctx.fillRect(0, 0, 1024 , 512);
//Draw Text
ctx.font = 'bold 66px Arial';
ctx.textAlign = 'center';
ctx.fillStyle = '#fff';
ctx.fillText(text, 120, 256, 784);
// Use the postID to name the file Ex : -L1rVJUAtSbc_FampT0D.png
var filename = postId + '.png';
// Create the file metadata
var metadata = {
contentType: 'image/png'
};
const bucket = gcs.bucket('images');
const filePath = 'images/' + filename;
return canvas.toDataURL('image/png', function(err, png){
//Save on Firebase Storage
return bucket.upload(png, {
destination: filePath,
metadata: metadata
}).then(() => {
console.log('Image uploaded to Storage at ', filePath);
});
});
});
});
But, when I try to save it with toDataURL I get this error :
ENAMETOOLONG: name too long, stat 'data:image/png;base64,iVBORw0 ...'
And when I try with toBuffer I get this one :
TypeError: Path must be a string. Received
at assertPath (path.js:7:11)
at Object.basename (path.js:1362:5)
at Bucket.upload (/user_code/node_modules/@google-cloud/storage/src/bucket.js:2259:43)
at /user_code/node_modules/@google-cloud/storage/node_modules/@google-cloud/common/src/util.js:777:22
at Bucket.wrapper [as upload] (/user_code/node_modules/@google-cloud/storage/node_modules/@google-cloud/common/src/util.js:761:12)
at /user_code/sendTweet.js:107:21
I also tried toBlob, but that function doesn't exist server side in node-canvas.
Does anyone know how I should save the image server side before transferring it to Firebase Storage?
Thanks!
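A possible approach, sketched under assumptions: render the canvas to a Buffer with canvas.toBuffer() and write it with File#save() from @google-cloud/storage, since bucket.upload() expects a local file path (which is why both the data URL and the raw buffer fail there). The uploadPngBuffer helper below is hypothetical:

```javascript
// Hypothetical helper: upload an in-memory PNG buffer instead of a file path.
// bucket.upload() stats a local path, so a data URL or Buffer won't work there;
// bucket.file(path).save(buffer, options) writes the bytes directly.
function uploadPngBuffer(bucket, filePath, pngBuffer) {
  return bucket.file(filePath).save(pngBuffer, {
    metadata: { contentType: 'image/png' },
  });
}

// In the Cloud Function above, this would be called roughly as:
//   const pngBuffer = canvas.toBuffer();
//   return uploadPngBuffer(gcs.bucket('images'), 'images/' + filename, pngBuffer);
```

This keeps everything in memory, which also sidesteps the read-only filesystem restrictions of the Cloud Functions environment.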
I am trying to convert multiple SVG strings to PNGs so I can render them onto a PDF using pdfmake in Node.js. This works fine for one SVG, but when I add multiple SVG strings, they all get overwritten by the last one. With this example code, it renders two images of png2 (svg2).
const promises = [svg1,svg2].map(str => {
const stream = new Readable();
stream.push(str);
stream.push(null);
return svgPromise(stream);
});
const result = await Promise.all(promises);
const png1 = result[0].content;
const png2 = result[1].content;
function svgPromise(stream) {
return new Promise((resolve, reject) => {
const svg = new Rsvg();
stream.pipe(svg);
svg.on("finish", function() {
const buffer = svg.render({
format: "png",
width: width * 2,
height: height * 2
}).data;
const png = datauri.format(".png", buffer);
resolve(png);
});
});
}
I'm not sure if this is related to the streams or to my promise logic. Any ideas?
Dependencies:
"librsvg": "0.7.0"
"pdfmake": "0.1.35"
"datauri": "1.0.5"
It pays to list all the modules used. Assuming the data URI comes from datauri, the problem is that format() stores its result on the instance and returns that same instance, so both promises end up resolving with one shared object that holds the last value written. Initialize a new instance for every call:
svg.on("finish", function() {
const datauri = new Datauri();
const buffer = svg.render({
format: "png",
width: 16,
height: 16
}).data;
const png = datauri.format(".png", buffer);
resolve(png);
});
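The underlying pitfall, for anyone hitting the same symptom: when one mutable instance is shared, every resolved promise points at the same object, which holds only the last result. A stdlib-only sketch of the same shared-state bug (Formatter is a stand-in for a shared Datauri instance; the real library's internals may differ):

```javascript
// Stand-in for a shared Datauri-like object: format() mutates `this` and returns it.
class Formatter {
  format(ext, buffer) {
    this.content = 'data:' + ext + ';base64,' + buffer.toString('base64');
    return this; // the same object every time, not a fresh value
  }
}

const shared = new Formatter();
const a = shared.format('.png', Buffer.from('one'));
const b = shared.format('.png', Buffer.from('two'));
console.log(a === b);   // true: both "results" are one object
console.log(a.content); // both show the encoding of 'two'
```

Creating a fresh instance inside each callback, as in the snippet above, gives every promise its own object and makes the results independent.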