Convert Kinesis Video Stream GetMedia result into audio file - node.js

Currently, I'm trying to convert the response of the Kinesis Video Stream GetMedia API into an audio file, but have had no success with this. According to the AWS documentation on parsing the result of a GetMedia request, it's recommended to use the Kinesis Video Stream Parser Library (a Java library). But I'd like to use a js/ts implementation. Is it possible to convert this stream to an audio file using just js/ts?
Thank you for your help.
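One approach that stays in js-land is to pipe the GetMedia payload, which arrives as MKV fragments, into ffmpeg and let it extract the audio track. This is an untested sketch, assuming the AWS SDK v3 clients and an ffmpeg binary on the host; the stream name, region, and output file name are placeholders:
const { KinesisVideoClient, GetDataEndpointCommand } = require('@aws-sdk/client-kinesis-video');
const { KinesisVideoMediaClient, GetMediaCommand } = require('@aws-sdk/client-kinesis-video-media');
const { spawn } = require('child_process');

async function saveAudio() {
  // GetMedia must be called against the stream's data endpoint
  const kv = new KinesisVideoClient({ region: 'us-east-1' });
  const { DataEndpoint } = await kv.send(new GetDataEndpointCommand({
    StreamName: 'my-stream', // placeholder
    APIName: 'GET_MEDIA'
  }));
  const media = new KinesisVideoMediaClient({ region: 'us-east-1', endpoint: DataEndpoint });
  const { Payload } = await media.send(new GetMediaCommand({
    StreamName: 'my-stream',
    StartSelector: { StartSelectorType: 'NOW' }
  }));
  // the payload is an MKV byte stream; let ffmpeg demux it and drop the video track
  const ffmpeg = spawn('ffmpeg', ['-i', 'pipe:0', '-vn', 'output.wav']);
  Payload.pipe(ffmpeg.stdin);
}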

Related

How to write a CSV file to S3 using Express in Node.js

Can anyone help with writing a CSV file and then uploading it to S3 using Node.js?
I was trying
fs.writeFileSync(path, data)
but that does not work for me.
Please guide me; a demo would help a lot.
Thanks
You don't write the file to disk directly; first you need to add the AWS S3 SDK module to your project and use it. You can find a good example here.
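For instance, a minimal sketch with the aws-sdk module, where the bucket, key, and CSV content are placeholders:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// build the CSV in memory instead of writing it to disk first
const csv = 'id,name\n1,Alice\n2,Bob';

s3.upload({
  Bucket: 'my-bucket', // placeholder
  Key: 'export.csv',
  Body: csv,
  ContentType: 'text/csv'
}, (err, data) => {
  if (err) return console.error(err);
  console.log('Uploaded to', data.Location);
});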

How can I create a MediaStream track from a continuous stream of images in node.js? (for usage with WebRTC)

I want to stream a robot camera to a web media element. I have access to the camera in node.js, which is providing a live stream of images (continually producing a new frame at ~20fps).
In this same situation in the browser, one could write the image to a canvas and capture the stream.
Is there some way to construct a MediaStreamTrack object that can be directly added to the RTCPeerConnection, without having to use browser-only captureStream or getUserMedia APIs?
I've tried the npm module canvas, which is supposed to port canvas to node -- then maybe I could captureStream the canvas after writing the image to it. But that didn't work.
I'm using the wrtc node WebRTC module with the simple-peer wrapper.
Check out the video-compositing example here.
https://github.com/node-webrtc/node-webrtc-examples
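The relevant piece is the nonstandard RTCVideoSource API that wrtc ships; a minimal sketch (assuming each camera frame can be converted to I420) looks like:
const { RTCVideoSource } = require('wrtc').nonstandard;

const source = new RTCVideoSource();
const track = source.createTrack(); // a real MediaStreamTrack, usable with RTCPeerConnection

// push each incoming camera frame; data must be I420, i.e. width * height * 1.5 bytes
function onCameraFrame(width, height, i420Data) {
  source.onFrame({ width, height, data: i420Data });
}
The same nonstandard namespace also exposes an rgbaToI420 helper for converting canvas-style RGBA pixels.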

Node.js: transform image data back to an actual image

My server receives a file from an HTTP request and uploads this file to IBM Cloud Object Storage.
Moreover, the server allows recovering this file. Recovery is triggered by a GET HTTP request that should return said file.
It works fine for "basic" data formats, such as text files. However, I encounter problems with more complex types, such as images, and the "reformatting".
Image is uploaded to the datastore. The element stored is the buffer itself:
req.files[0].buffer
When getting the image back from the datastore, how can I transform it back to a readable format for my computer?
The data retrieved is, on the server, a string.
If you are using ExpressJS you can do this:
const data = req.files[0].buffer;
res.contentType('image/jpeg'); // don't know what the actual type is; adjust as needed
res.send(data);
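A follow-up on the type: if uploads come in through multer, req.files[0].mimetype carries the original type, so one option is to persist it next to the buffer and replay it on the way out. This is a sketch with a hypothetical store.save/store.load datastore wrapper:
// at upload time: keep the MIME type next to the data (store.* is hypothetical)
const file = req.files[0];
await store.save(key, file.buffer, { contentType: file.mimetype });

// at download time
app.get('/files/:key', async (req, res) => {
  const { buffer, contentType } = await store.load(req.params.key); // hypothetical
  res.contentType(contentType);
  res.send(buffer);
});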

Slice audio files using AWS Lambda and S3 in a serverless architecture

We are using an AWS serverless architecture for our contact center. We are storing audio recordings in an S3 bucket and using Lambda functions to process them.
Our requirement is to remove sensitive details, such as payment information, from the audio recordings.
So we need to fetch a recording from the S3 bucket, slice out the sensitive payment section using its start time and duration, and then join the remaining clips into one.
How can we achieve this using AWS Lambda (Node.js/Python) and S3?
Thanks,
Ganesh
I did not try this myself yet, but I'd use the lambda-audio package, which bundles SoX, a Swiss Army knife for sound files, and then run the trim command as described here.
Here is some code to get you started:
lambdaAudio.sox('./input.mp3 /tmp/output.wav trim 0 10')
  .then(response => {
    // Do something when the first 10 seconds of the file have been extracted
  })
  .catch(errorResponse => {
    console.log('Error from the sox command:', errorResponse);
  });
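Extending that idea, the full redaction pass could look roughly like this (untested; the sensitive window, here 30s-45s, and the /tmp file names are placeholders, and the recording is assumed to have already been downloaded from S3 into /tmp). SoX concatenates when given several inputs, which handles the join step:
const lambdaAudio = require('lambda-audio');

// keep the audio before and after the sensitive window, then join the two clips
lambdaAudio.sox('/tmp/recording.wav /tmp/before.wav trim 0 30')
  .then(() => lambdaAudio.sox('/tmp/recording.wav /tmp/after.wav trim 45'))
  .then(() => lambdaAudio.sox('/tmp/before.wav /tmp/after.wav /tmp/redacted.wav'))
  .then(() => {
    // upload /tmp/redacted.wav back to S3 with the AWS SDK here
  })
  .catch(err => console.error('sox command failed:', err));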

IBM Watson Speech to Text Audio Conversion on Node.js Web Application

The gist of the issue is that IBM Watson Speech to Text only allows for FLAC, WAV, and OGG file formats to be uploaded and used with the API.
My solution to that would be that if the user uploads an mp3, BEFORE sending the file to Watson, a data conversion would take place. Essentially, the user uploads an mp3, then using ffmpeg or sox the audio would be converted to an OGG, after which the audio would then be uploaded to Watson.
What I am unsure about is: what exactly do I have to modify in the Node.js Watson code to allow the audio conversion to happen? Linked below is the Watson repo I am working through. I am sure that the file that will have to be changed is fileupload.js, which I have linked, but where the changes go is what I am uncertain about.
I have looked through both SO and developerWorks (the IBM equivalent of SO) for answers to this issue, but I have not found any, which is why I am posting here. I would be happy to clarify my question if necessary.
Watson Speech to Text Repo
The Speech to Text sample application you are trying to use doesn't convert MP3 files to OGG. The src folder (with fileupload.js in it) is just JavaScript that will be used on the client side (thanks to Browserify).
The application basically connects the browser to the API using CORS, so the audio goes from the browser to the Watson API.
If you want to convert the audio using ffmpeg or sox, you will need to install the dependencies using a custom buildpack, since those modules have binary dependencies (C++ code in them).
James Thomas has a buildpack with sox on it: https://github.com/jthomas/nodejs-buildpack.
You need to update your manifest.yml to be something like:
memory: 256M
buildpack: https://github.com/jthomas/nodejs-buildpack.git
command: npm start
Node:
var sox = require('sox');
var job = sox.transcode('audio.mp3', 'audio.ogg', {
  sampleRate: 16000,
  format: 'ogg',
  channelCount: 2,
  bitRate: 192 * 1024,
  compressionQuality: -1
});
job.on('error', function (err) {
  console.error('Transcoding failed:', err);
});
job.on('end', function () {
  // audio.ogg is now ready to be sent to Watson
});
job.start(); // the job does nothing until started
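Once the transcode job fires its end event, the converted file can go to Watson. Here is a sketch using the era-appropriate watson-developer-cloud module; the credentials and file name are placeholders:
var fs = require('fs');
var watson = require('watson-developer-cloud');

var speechToText = watson.speech_to_text({
  username: 'USERNAME', // placeholder credentials
  password: 'PASSWORD',
  version: 'v1'
});

speechToText.recognize({
  audio: fs.createReadStream('audio.ogg'),
  content_type: 'audio/ogg'
}, function (err, transcript) {
  if (err) return console.error(err);
  console.log(JSON.stringify(transcript, null, 2));
});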
