I can't seek or view the length of the file recorded with RecordRTC - MediaRecorder

I can only seek backward/forward and view the length of the mp3 file recorded with RecordRTC once it has finished playing. I also can't play the file in VLC at all; it only plays in the Chrome browser.
Here is my code:
var el = [];
// Collect the remote streams from each <video> element
$.each($('#remoteVideos video'), function(index, val) {
    el[index] = val.srcObject;
});
// Add the local stream as well
el.push(stream);

mediaRecorder = RecordRTC(el, {
    type: 'audio',
    mimeType: 'audio/webm',
    sampleRate: 96000,
    bufferSize: 4096
});
mediaRecorder.startRecording();
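Two things are likely at play here: with mimeType: 'audio/webm' the recorder produces a WebM/Opus file, not an MP3, so VLC will balk at a file saved with an .mp3 extension; and blobs produced by MediaRecorder carry no duration metadata, so the browser only learns the length after one full playthrough. A sketch of one common workaround, assuming your RecordRTC build ships the getSeekableBlob helper (it depends on EBML.js; treat its availability as an assumption):

mediaRecorder.stopRecording(function() {
    var blob = mediaRecorder.getBlob();
    // Rewrites the WebM cues/duration so players can seek immediately
    getSeekableBlob(blob, function(seekableBlob) {
        // audioElement is a hypothetical reference to your <audio> element;
        // save or serve the result as .webm, not .mp3
        audioElement.src = URL.createObjectURL(seekableBlob);
    });
});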

Related

0xc00d36c4 while saving file to WAV format - Node.js Amazon Polly text to speech

I am using Amazon Polly for text to speech.
Here is the code:
static async _ttsUsingPolly(text, gender, destPath, speed) {
    let params = {
        'Text': 'Hi, my name is #anaptfox.',
        'OutputFormat': 'pcm',
        'VoiceId': 'Kimberly',
        'SampleRate': '8000'
    };
    const data = await this.Polly.synthesizeSpeech(params).promise();
    if (data.AudioStream instanceof Buffer) {
        console.log("buffer", data);
        // Initiate the source
        const bufferStream = new Stream.PassThrough();
        // Convert AudioStream into a readable stream
        bufferStream.end(data.AudioStream);
        // Pipe into the destination file
        bufferStream.pipe(fs.createWriteStream(destPath));
    }
}
This saves the file with a .wav extension; destPath is public\audio\abc\1212_1660649369899.wav.
But when I play this file, it says:
This file isn’t playable. That might be because the file type is unsupported, the file extension is incorrect, or the file is corrupt.
0xc00d36c4
What is the issue (if someone can explain)? And how can I fix it?
Update 1
Actually this generates a raw PCM file, so I tried wav-converter:
var pcmData = fs.readFileSync(path.join(__dirname, './audios/audio_wav', fileName));
var wavData = wavConverter.encodeWav(pcmData, {
    numChannels: 1,
    sampleRate: 8000,
    byteRate: 16
});
fs.writeFileSync(path.resolve(__dirname, './audios/audio_wav', './16k.wav'), wavData);
The generated PCM file is almost 67 KB, but the converted WAV file is only 1 KB.
If I change 'pcm' to 'mp3' in the Polly params, it works. Any help?
I fixed it by passing the stream directly to wav-converter:
._convertPcmToWav(data.AudioStream, fileName);
and:
static _convertPcmToWav(stream, fileName) {
    const wavData = wavConverter.encodeWav(stream, {
        numChannels: 1,
        sampleRate: 8000,
        byteRate: 2
    });
    const wavFileName = path.parse(fileName).name;
    fs.writeFileSync(path.resolve(__dirname, './audios/audio_wav', `./${wavFileName}.wav`), wavData);
}
Now the file is generated correctly and is playable.
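For context on what was going wrong: Polly's 'pcm' output is raw signed 16-bit little-endian samples with no header, so writing it straight to a .wav path produces a file no player can interpret; the converter's job is to prepend the RIFF/WAV header. If ffmpeg is available, an equivalent conversion (assuming the 8000 Hz mono output requested above) would be:

ffmpeg -f s16le -ar 8000 -ac 1 -i input.pcm output.wav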

ffmpeg: detect when a remote MP3 audio file changes

I'm downloading, via ffmpeg, an MP3 hosted on some data source:
// `cp` is Node's child_process module: const cp = require('child_process');
function (fpath, opath, sampleRate) {
    var self = this;
    // defaults
    sampleRate = sampleRate || '44100';
    var loglevel = self.logger.isDebug() ? 'debug' : 'warning';
    return new Promise((resolve, reject) => {
        const args = [
            '-y',
            '-loglevel', loglevel,
            '-i', fpath,
            '-ar', sampleRate,
            '-acodec', 'pcm_s16le',
            opath
        ];
        const opts = {
            cwd: self._options.tempDir
        };
        cp.spawn('ffmpeg', args, opts)
            .on('message', msg => self.logger.info(msg))
            .on('error', reject)
            .on('close', resolve);
    });
}
The data source slightly changes this audio file from time to time. Is there a way for ffmpeg, for a given URL, to check whether that audio file has changed without downloading the whole file?
I've tried using curl -I as a basic check, but I'm not sure it's the right approach for MP3 audio files that will exist at that URL either way.
ffprobe -print_format json URL;
With -show_entries you can pick only the desired fields, whatever you think is appropriate for deciding whether there has been a change.
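If the server exposes HTTP validators, you can skip ffmpeg entirely for the change check. A minimal sketch, assuming the host returns ETag or Last-Modified headers (the helper name is ours):

const https = require('https');

// Hypothetical helper: resolve with the validator headers for a URL
function fetchValidators(url) {
    return new Promise((resolve, reject) => {
        const req = https.request(url, { method: 'HEAD' }, res => {
            res.resume(); // HEAD responses carry no body; free the socket
            resolve({
                etag: res.headers['etag'],
                lastModified: res.headers['last-modified']
            });
        });
        req.on('error', reject);
        req.end();
    });
}

Store the values from the previous download and re-download only when they differ. When the server gives no validators, ffprobe works as suggested above; for example, -show_entries format=duration,size narrows the JSON to fields that are cheap to compare.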

Speech to Text: Piping microphone stream to Watson STT with NodeJS

I am currently trying to send a microphone stream to the Watson STT service, but for some reason the Watson service is not receiving the stream (I'm guessing), so I get the error "Error: No speech detected for 30s".
Note that I have streamed a .wav file to Watson, and I have also tested piping micInputStream to my local files, so I know both are at least set up correctly. I am fairly new to Node.js/JavaScript, so I'm hoping the error might be obvious.
const fs = require('fs');
const mic = require('mic');
var SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

var speechToText = new SpeechToTextV1({
    iam_apikey: '{key_here}',
    url: 'https://stream.watsonplatform.net/speech-to-text/api'
});

var params = {
    content_type: 'audio/l16; rate=44100; channels=2',
    interim_results: true
};

const micParams = {
    rate: 44100,
    channels: 2,
    debug: false,
    exitOnSilence: 6
};

const micInstance = mic(micParams);
const micInputStream = micInstance.getAudioStream();
micInstance.start();
console.log('Watson is listening, you may speak now.');

// Create the stream.
var recognizeStream = speechToText.recognizeUsingWebSocket(params);

// Pipe in the audio.
var textStream = micInputStream.pipe(recognizeStream).setEncoding('utf8');
textStream.on('data', user_speech_text => console.log('Watson hears:', user_speech_text));
textStream.on('error', e => console.log(`error: ${e}`));
textStream.on('close', e => console.log(`close: ${e}`));
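One way to narrow down where this breaks (a debugging sketch, not part of the original code): listen on the mic stream directly and confirm audio chunks are flowing before they reach the WebSocket:

micInputStream.on('data', function(chunk) {
    // If this never fires, the problem is the mic capture, not Watson
    console.log('mic chunk: ' + chunk.length + ' bytes');
});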
Conclusion: In the end, I am not entirely sure what was wrong with the code. I'm guessing it had something to do with the mic package. I ended up scrapping that package and using node-audiorecorder instead for my audio stream: https://www.npmjs.com/package/node-audiorecorder
Note: this module requires you to install SoX, and it must be available in your $PATH. http://sox.sourceforge.net/
Updated code: for anyone wondering what my final code looks like, here you go. Also a big shoutout to NikolayShmyrev for trying to help me with my code!
Sorry for the heavy comments, but on new projects I like to make sure I know what every line is doing.
// Import modules.
var AudioRecorder = require('node-audiorecorder');
var fs = require('fs');
var SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

/******************************************************************************
 * Configuring STT
 ******************************************************************************/
var speechToText = new SpeechToTextV1({
    iam_apikey: '{your watson key here}',
    url: 'https://stream.watsonplatform.net/speech-to-text/api'
});

var recognizeStream = speechToText.recognizeUsingWebSocket({
    content_type: 'audio/wav',
    interim_results: true
});

/******************************************************************************
 * Configuring the Recording
 ******************************************************************************/
// Options is an optional parameter for the constructor call.
// If an option is not given, the default value seen below is used.
const options = {
    program: 'rec',      // Which program to use: `arecord`, `rec`, or `sox`.
    device: null,        // Recording device to use.
    bits: 16,            // Sample size. (only for `rec` and `sox`)
    channels: 2,         // Channel count.
    encoding: 'signed-integer', // Encoding type. (only for `rec` and `sox`)
    rate: 48000,         // Sample rate.
    type: 'wav',         // Format type.
    // The following options are only available with `rec` or `sox`.
    silence: 6,          // Seconds of silence before recording stops.
    keepSilence: true    // Keep the silence in the recording.
};
const logger = console;

/******************************************************************************
 * Create Streams
 ******************************************************************************/
// Create an instance.
let audioRecorder = new AudioRecorder(options, logger);

// Create a timeout so recording stops after 10 seconds (feel free to remove this).
setTimeout(function() {
    audioRecorder.stop();
}, 10000);

// This stream saves the file locally as well (strongly encouraged for testing).
const fileStream = fs.createWriteStream("test.wav", { encoding: 'binary' });

// Start streaming to Watson STT. Remove .pipe(process.stdout) if you don't
// want the transcription printed to the console.
audioRecorder.start().stream().pipe(recognizeStream).pipe(process.stdout);

// Create another stream to save locally.
audioRecorder.stream().pipe(fileStream);

// Finally, pipe the transcription to a transcription file.
recognizeStream.pipe(fs.createWriteStream('./transcription.txt'));

How to convert a Uint8Array video to frames in Node.js

I want to be able to extract JPEGs from a Uint8Array containing the data for an MPEG or AVI video.
The ffmpeg module has the function fnExtractFrameToJPG, but it only accepts a filename pointing to the video file. I want to extract the frames directly from the Uint8Array.
One way to do it is to write the Uint8Array to a temp file and then run ffmpeg against that file to extract the frames:
const tmp = require("tmp");
const ffmpeg_ = require("ffmpeg");
function convert_images(video_bytes_array){
var tmpobj = tmp.fileSync({ postfix: '.avi' })
fs.writeFileSync(tmpobj.name, video_bytes_array);
try {
var process = new ffmpeg(tmpobj.name);
console.log(tmpobj.name)
process.then(function (video) {
// Callback mode
video.fnExtractFrameToJPG('./', { // make sure you defined the directory where you want to save the images
frame_rate : 1,
number : 10,
file_name : 'my_frame_%t_%s'
}, function (error, files) {
if (!error)
tmpobj.removeCallback();
});
});
} catch (e) {
console.log(e.code);
console.log(e.msg);
}
}
Another possibility is to use OpenCV after you save the Uint8Array to a temp file. Yet another solution is to use streams with fluent-ffmpeg, which would not require temp files at all.
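A minimal sketch of that stream-based route, assuming the fluent-ffmpeg package and an ffmpeg binary on the PATH (the fps filter and output pattern are illustrative choices):

const ffmpeg = require('fluent-ffmpeg');
const { PassThrough } = require('stream');

function extract_frames(video_bytes_array, out_dir) {
    // Wrap the raw bytes in a readable stream so ffmpeg can consume them from stdin
    const input = new PassThrough();
    input.end(Buffer.from(video_bytes_array));

    ffmpeg(input)
        .outputOptions(['-vf', 'fps=1'])       // one frame per second
        .output(`${out_dir}/frame_%03d.jpg`)   // numbered JPEG files
        .on('end', () => console.log('frames extracted'))
        .on('error', err => console.error(err.message))
        .run();
}

One caveat: some containers (AVI in particular) do not always demux cleanly from a pipe because the index lives at the end of the file, which is why the temp-file approach above remains the safer fallback.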

Meteor.JS CollectionFS Video to Image Thumbnails (Graphics Magick)

I am working on a Meteor app where I am using CollectionFS to upload files.
I am able to upload and generate thumbnails for images.
But my issue is: how should I create thumbnails for videos?
I can see that it is possible via the command line: https://superuser.com/questions/599348/can-imagemagick-make-thumbnails-from-video
But how can I apply this to my Meteor code?
Here is what I am doing:
VideoFileCollection = new FS.Collection("VideoFileCollection", {
    stores: [
        new FS.Store.FileSystem("videos", { path: "/uploads/videos" }),
        new FS.Store.FileSystem("videosthumbs", {
            path: "/uploads/videosthumbs",
            beforeWrite: function(fileObj) {
                // We return an object, which will change the
                // filename extension and type for this store only.
                return {
                    extension: 'png',
                    type: 'image/png'
                };
            },
            transformWrite: function(fileObj, readStream, writeStream) {
                gm(readStream, fileObj.name()).stream('PNG').pipe(writeStream);
            }
        })
    ]
});
What is happening here is that the video gets uploaded to the "videos" folder, and one 0-byte PNG is created under "videosthumbs"; the thumbnail is not generated.
I have also read at https://github.com/aheckmann/gm#custom-arguments that we can use gm().command() - a custom command such as identify or convert.
Can anybody advise me on what can be done to handle this situation?
Thanks and regards
I checked the link that you added, and here is a rough solution that might help you:
ffmpeg -ss 600 -i input.mp4 -vframes 1 -s 420x270 -filter:v 'yadif' output.png
Here is a function that I have made:

var im = require('imagemagick');

var args = [
    "ffmpeg", "-ss", "600", "-i", "input.mp4", "-vframes", "1", "-s", "420x270", "-filter:v", "yadif", "output.png"
];

// Function to convert. Note: im.convert() shells out to ImageMagick's
// `convert` binary, so spawning ffmpeg directly (see below) is more direct.
im.convert(args, function(err) {
    if (err) throw err;
});
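Since im.convert() invokes ImageMagick's convert binary rather than ffmpeg, a more direct route is to spawn ffmpeg yourself. A minimal sketch, assuming an ffmpeg binary on the PATH (the seek offset and thumbnail size are illustrative):

const { spawn } = require('child_process');

// Grab a single frame from a video file and save it as a PNG thumbnail
function makeThumbnail(inputPath, outputPath, done) {
    const ffmpeg = spawn('ffmpeg', [
        '-ss', '600',          // seek to 600 seconds
        '-i', inputPath,
        '-vframes', '1',       // extract exactly one frame
        '-s', '420x270',       // thumbnail size
        '-filter:v', 'yadif',  // deinterlace
        '-y',                  // overwrite existing output
        outputPath
    ]);
    ffmpeg.on('error', done);
    ffmpeg.on('close', code => done(code === 0 ? null : new Error('ffmpeg exited with ' + code)));
}

In a CollectionFS transformWrite, the same idea applies with 'pipe:0' as the input and 'pipe:1' as the output, piping readStream into ffmpeg.stdin and ffmpeg.stdout into writeStream.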
