There is apparently no easy way to stream images on a Raspberry Pi. While there are many hacks available, on my Raspberry Pi Zero they have trouble keeping a decent framerate.
I suspect one of the main problems is that the top Google result, and most of the others, writes and reads the SD card for each image. So far I've managed to read an image from the terminal without touching the SD:
// `exec` is assumed to be a promisified child_process.exec that resolves to [stdout, stderr]
const out = await exec(`fswebcam -r 640x480 -`);
const img = out[0];
console.log(img);
This gives me this on the terminal:
����JFIF``��>CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), default quality
��
$.' ",#(7),01444'9=82<.342��C
And many more. Previously I was doing something similar with buffers:
const file = fs.readFileSync(temp);
console.log(file.toString('base64'));
ctx.socket.emit('frame', { image: true, buffer: file.toString('base64') });
Where file is a Buffer and file.toString('base64') is a string in the form of:
/9j/4AAQSkZJRgABAQEAYABgAAD//gA8Q1JFQVRPUjogZ2QtanBlZyB2MS4wICh1c2luZyBJSkcgSlBFRyB2ODApLCBxdWFsaXR5ID0gMTAwCv ...
And this worked (but through the SD card). So my question is: what is the format of the first output in the terminal? And how can I convert it to a Buffer or a string like the latter?
I ended up just piping it through base64 in the shell to convert it:
fswebcam -r 640x480 - | base64
So now my whole snippet is:
// Take a picture in an async way and return it as a base64 encoded string
// Props: https://scottlinux.com/2012/09/01/encode-or-decode-base64-from-the-command-line/
// (`exec` is the same promisified child_process.exec as above)
module.exports = async ({ resolution = '640x480', rotate = 0 } = {}) => {
  const query = `fswebcam -r ${resolution} --rotate ${rotate} - | base64`;
  const out = await exec(query, { maxBuffer: 1024 * 1024 });
  return out[0];
};
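As an aside, the mangled terminal output is simply the raw JPEG bytes being decoded as UTF-8. If you'd rather skip the base64 pipe, here is a minimal sketch (assuming a promisified execFile and fswebcam on the PATH) that captures stdout directly as a Buffer:
const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileP = promisify(execFile);

// Capture fswebcam's stdout as a raw Buffer instead of a UTF-8 string
module.exports = async ({ resolution = '640x480', rotate = 0 } = {}) => {
  const { stdout } = await execFileP(
    'fswebcam',
    ['-r', resolution, '--rotate', String(rotate), '-'],
    { encoding: 'buffer', maxBuffer: 10 * 1024 * 1024 }
  );
  return stdout.toString('base64'); // or return the Buffer itself
};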
Related
I'm trying to compress an image with pngquant. Here is the code:
const cp = require('child_process');
const fs = require('fs');

let output = '';
const quant = cp.spawn('pngquant', ['256', '--speed', '10'], {
  stdio: [null, null, 'ignore'],
});
quant.stdout.on('data', data => output += data); // each Buffer chunk gets appended to a string
quant.on('close', () => {
  fs.writeFileSync('image.png', output);
  fs.writeFileSync('image_original.png', image);
  process.exit(0);
});
quant.stdin.write(image);
image is a Buffer with pure PNG data.
The code works; however, for some reason it generates an incorrect PNG. Not only that, but its size is larger than the original's.
When I execute this from the terminal, I get an excellent output file:
pngquant 256 --speed 10 < image_original.png > image.png
I have no idea what's going on; the data in the output file looks pretty PNG-ish.
EDIT: I have managed to make it work:
let output = [];
quant.stdout.on('data', data => output.push(data)); // keep the raw Buffer chunks
quant.stdin.write(image);
quant.on('close', () => {
  const image = Buffer.concat(output); // join chunks without any string conversion
  fs.writeFileSync('image.png', image);
});
I assume this is related to how strings are represented in Node.js. I'd be happy to get an explanation.
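The explanation: output += data converts each Buffer chunk to a UTF-8 string. Compressed PNG data is full of byte sequences that are not valid UTF-8, and each of them is replaced with the three-byte replacement character U+FFFD, which both corrupts the image and inflates its size. A tiny demonstration:
// Round-tripping binary bytes through a UTF-8 string is lossy
const bytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // PNG magic; 0x89 is not valid UTF-8
const viaString = Buffer.from(bytes.toString());     // 0x89 becomes U+FFFD (3 bytes)
console.log(bytes.length, viaString.length);         // prints: 4 6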
I have tried to convert a speech WAV file to text using Node.js, but it fails with an error like this:
Error:
data: '{\n  "error": "This 8000hz audio input requires a narrow band model."\n}',
Code:
let directory = `File Directory`;
let dirbuf = Buffer.from(directory);
let files = fs.readdirSync(directory);
// Create the stream.
// Pipe in the audio.
files.forEach(wav_files => {
  // how can I convert that wav file into 8000hz and use that same wav file for speech-to-text conversion?
  fs.createReadStream(wav_files).pipe(recognizeStream);
  recognizeStream.on('data', function(event) { onEvent('Data:', event, wav_files); });
});
I am not sure whether you've already explored the wavefile package or not, but I created a cheat like this:
const fs = require('fs');
const WaveFile = require('wavefile').WaveFile;
let wav = new WaveFile(fs.readFileSync("source.wav"));
// do it like this
wav.toSampleRate(8000);
// or like the following, with an interpolation method of your choice
// wav.toSampleRate(44100, {method: "cubic"});
// write new file
fs.writeFileSync("target-file.wav", wav.toBuffer());
For a complete running example, clone node-cheat wav-8000hz, run npm i wavefile, then node wav.js.
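To tie this back to the loop in the question, here is a hedged sketch (assuming recognizeStream and directory are set up as in the original code) that resamples each file to 8000hz before piping it to the recognizer:
const fs = require('fs');
const path = require('path');
const { WaveFile } = require('wavefile');

fs.readdirSync(directory).forEach(name => {
  // resample to 8000hz in memory, write a temporary copy, then pipe that
  const wav = new WaveFile(fs.readFileSync(path.join(directory, name)));
  wav.toSampleRate(8000);
  const resampled = path.join(directory, `8k-${name}`);
  fs.writeFileSync(resampled, wav.toBuffer());
  fs.createReadStream(resampled).pipe(recognizeStream);
});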
I need a way to use Node.js to convert a photo from HEIC format to either jpg or png. I have searched and cannot seem to find anything that works.
npm i heic-convert
const fs = require('fs');
const { promisify } = require('util');
const convert = require('heic-convert');

async function heicToJpg(file, output) {
  console.log(file, output);
  const inputBuffer = await promisify(fs.readFile)(file);
  const outputBuffer = await convert({ // convert returns a Promise, so await it
    buffer: inputBuffer, // the HEIC file buffer
    format: 'PNG',       // output format ('JPEG' also works)
  });
  return promisify(fs.writeFile)(output, outputBuffer);
}
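For completeness, a usage sketch (the file names are placeholders):
heicToJpg('photo.heic', 'photo.png')
  .then(() => console.log('converted'))
  .catch(console.error);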
With heic-convert as Bruno suggested, it works fine.
Here is a node utility that allows you to serially convert HEIC files present in a folder: convert-heic-files
In my case, simply changing the file name was enough to view the HEIC as a jpg:
const fileName = photo.fileName.split(".")[0] + ".jpg";
I am writing a small Node.js program that will be able to play WAV sound files on a chosen audio device.
The sound starts well, but it stops before the end of the file.
Here is my code:
const fs = require("fs");
const wav = require("wav");
const portAudio = require("naudiodon");
const ao = new portAudio.AudioIO({
  outOptions: {
    channelCount: 2,
    sampleFormat: portAudio.SampleFormat24Bit,
    sampleRate: 44100,
  }
});
const name = "myfile.wav";
const file = fs.createReadStream(`./sounds/${name}`);
const reader = new wav.Reader();
reader.on("format", () => {
reader.pipe(ao);
ao.start();
});
file.pipe(reader);
process.on("SIGINT", ao.quit);
When I modify the highWaterMark option of fs.createReadStream, it slightly changes where the sound cuts off, but it never plays to the end.
I always get a portAudio status - output underflow log error.
Thanks for any help!
I have been experiencing a similar error, and my solution was to manually write to the AudioIO stream instead of using the pipe commands.
So instead of
reader.on("format", () => {
reader.pipe(ao);
ao.start();
});
You would use
ao.start();
reader.on("data", chunk => ao.write(chunk));
Output underflow is generally not an issue, but to avoid it I initialised a new instance of PortAudio before playing each file; that is only applicable if you don't mind a slight latency.
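Putting the pieces together, here is a sketch that builds the AudioIO from the file's actual format instead of hardcoding it (assuming the WAV bit depth lines up with naudiodon's numeric sample-format constants, which holds for 8/16/24/32-bit files):
const fs = require("fs");
const wav = require("wav");
const portAudio = require("naudiodon");

const reader = new wav.Reader();
reader.on("format", format => {
  // configure the output from the file's real format
  const ao = new portAudio.AudioIO({
    outOptions: {
      channelCount: format.channels,
      sampleFormat: format.bitDepth, // e.g. 16 === portAudio.SampleFormat16Bit
      sampleRate: format.sampleRate,
    }
  });
  ao.start();
  reader.on("data", chunk => ao.write(chunk));
  reader.on("end", () => ao.quit());
});
fs.createReadStream("./sounds/myfile.wav").pipe(reader);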
I'm using Node.js, and through the socket.io library I receive chunks of data that are actually JPEG images. These images are frames of a realtime video captured from a remote webcam; I'm forced to stream the video as JPEG frames. I'm looking for a way to convert these JPEG images on the fly into a video file (MPEG-4 or MJPEG). Does Node have a library that can do this? I already took a look at the node-fluent-ffmpeg library, but the only examples given were about converting JPEG files to a video, not converting a stream of JPEG images on the fly. Alternatively, does ffmpeg for Windows support a stream of JPEG images as input?
FFMPEG supports streams as inputs, as stated in the fluent-ffmpeg docs:
You can add any number of inputs to an Ffmpeg command. An input can
be [...] a readable stream
So for instance it supports using
ffmpeg().input(fs.createReadStream('/path/to/input3.avi'));
which creates a Readable stream from the file at '/path/to/input3.avi'.
I don't know anything about FFMPEG, but you could pull the messages coming from socket.io (each message may already be a Buffer) and wrap them in your own implementation of a Readable stream.
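For instance, a hedged sketch using a PassThrough stream as that wrapper (socket, the 'frame' event name, the frame rate, and the output path are all assumptions):
const { PassThrough } = require('stream');
const ffmpeg = require('fluent-ffmpeg');

const frames = new PassThrough();

// feed each incoming jpeg frame into the stream, then close it when done
socket.on('frame', chunk => frames.write(chunk));
socket.on('end', () => frames.end());

ffmpeg()
  .input(frames)
  .inputFormat('image2pipe') // tell ffmpeg the input is a pipe of jpeg images
  .inputFPS(10)
  .videoCodec('libx264')
  .save('out.mp4');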
I think you should look at videofy
var exec = require("child_process").exec;
var escape = require("shell-escape");
var debug = require("debug")("videofy");
var mkdirp = require("mkdirp");
var uid = require("uid2");

/**
 * Expose videofy.
 */

module.exports = videofy;

/**
 * Convert `input` file to `output` video with the given `opts`:
 *
 *  - `rate` frame rate [10]
 *  - `codec` the video codec, default is libx264
 *
 * @param {String} input
 * @param {String} output
 * @return
 * @api public
 */

function videofy(input, output, opts, fn) {
  if (!input) throw new Error('input filename required');
  if (!output) throw new Error('output filename required');
  var FORMAT = '-%05d';

  // options
  if ('function' == typeof opts) {
    fn = opts;
    opts = {};
  } else {
    opts = opts || {};
  }
  opts.rate = opts.rate || 10;
  opts.codec = opts.codec || 'libx264';

  // tmpfile(s)
  var id = uid(10);
  var dir = 'tmp/' + id;
  var tmp = dir + '/tmp' + FORMAT + '.jpg';

  function gc(err) {
    debug('remove %s', dir);
    exec('rm -fr ' + dir);
    fn(err);
  }

  debug('mkdirp -p %s', dir);
  mkdirp(dir, function(error) {
    if (error) return fn(error);

    // convert gif to tmp jpg
    var cmd = ['convert', input, tmp];
    cmd = escape(cmd);
    debug('exec %s', cmd);
    exec(cmd, function(err) {
      if (err) return gc(err);

      // convert jpg collection to video
      var cmd = ['ffmpeg'];
      cmd.push('-f', 'image2');
      cmd.push('-r', String(opts.rate));
      cmd.push('-i', tmp);
      cmd.push('-c:v', String(opts.codec));
      cmd.push(output);
      cmd = escape(cmd);
      debug("exec %s", cmd);
      exec(cmd, gc);
    });
  });
}
Using require("child_process") you can call ffmpeg, or there are probably npm modules to help with this. ffmpeg will let you first take a list of jpegs and convert them to a video; second, you can add a list of (or just one) jpegs to the beginning or end of videos.
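A minimal sketch of that first step (the paths and frame rate are placeholders):
const { execFile } = require('child_process');

// turn numbered jpegs into an h.264 video
execFile('ffmpeg', [
  '-f', 'image2',
  '-r', '10',
  '-i', 'frames/frame-%05d.jpg',
  '-c:v', 'libx264',
  'out.mp4',
], err => {
  if (err) throw err;
  console.log('video written to out.mp4');
});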