How to parse a video stream coming from a Node.js server - node.js

I'm creating a video streaming service; the backend code looks like this:
const { Readable } = require('stream');

// Wrap the requested byte range in a readable stream and pipe it to the response
const stream = new Readable();
stream.push(movie.data.slice(start, end + 1));
stream.push(null); // signal end of stream
stream.pipe(res);
When I run this in Postman, it automatically parses the response and shows me a video. But when I call the endpoint from my own code, the response comes back as random characters:
ftypmp42isommp42 ... moov ... mvhd ... trak ... tkhd ... mdia ... mdhd ... hdlr vide ISO Media file produced by Google Inc. ... minf ... dinf ... dref ... stbl ... stsd avc1 ... avcC ... stts ... stsc ... stco ... stsz ...
[rest of the raw binary payload omitted; only the MP4 box names are legible]
The problem is that I don't know what this is, how to parse it, or how to display it as a video. Need help.
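Those characters are not an error: they are the raw bytes of an MP4 file (note the ftyp, moov, mvhd and trak box names near the start) being rendered as text. The response should never be read as a string. Either point a <video> element straight at the endpoint, or fetch the bytes as a Blob; a minimal client-side sketch, assuming the stream is served at a hypothetical /movie route:
// Simplest: let the browser fetch, buffer and decode the video itself
const video = document.querySelector('video');
video.src = '/movie'; // hypothetical endpoint path; adjust to your route

// Alternatively, fetch the bytes explicitly as a Blob, never as text
fetch('/movie')
  .then((res) => res.blob())
  .then((blob) => {
    video.src = URL.createObjectURL(blob);
  });
On the server side, sending a Content-Type: video/mp4 header (plus Accept-Ranges and Content-Range headers with a 206 status when serving partial content) tells clients how to interpret the bytes.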

Related

How do I create an expressjs endpoint that uses azure tts to send audio to a web app?

I am trying to figure out how to expose an Express route (i.e. GET /api/word/:some_word) which uses the Azure TTS SDK (microsoft-cognitiveservices-speech-sdk) to generate an audio version of some_word (in any format playable by a browser), and res.send()'s the resulting audio, so that a front-end JavaScript web app can consume the API in order to play the audio pronunciation of the word.
I have the Azure SDK 'working': it creates an ArrayBuffer inside my Express code. However, I do not know how to send the data in this ArrayBuffer to the front end. I have been following the instructions here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=import%2Cwindowsinstall&pivots=programming-language-javascript#get-result-as-an-in-memory-stream
Another way to phrase my question would be: 'In Express, I have an ArrayBuffer whose contents are an .mp3/.ogg/.wav file. How do I send that file via Express? Do I need to convert it into some other data type (like a Base64-encoded string? A Buffer?) Do I need to set some particular response headers?'
I finally figured it out seconds after asking this question 😂
I am pretty new to this area, so any pointers on how this could be improved would be appreciated.
import * as sdk from 'microsoft-cognitiveservices-speech-sdk';
const { SpeechSynthesisOutputFormat } = sdk;

app.get('/api/tts/word/:word', async (req, res) => {
  const word = req.params.word;
  const subscriptionKey = azureKey;
  const serviceRegion = 'australiaeast';
  const speechConfig = sdk.SpeechConfig.fromSubscription(
    subscriptionKey as string,
    serviceRegion
  );
  // Ask the service for Ogg/Opus output, which browsers can play natively
  speechConfig.speechSynthesisOutputFormat =
    SpeechSynthesisOutputFormat.Ogg24Khz16BitMonoOpus;
  const synthesizer = new sdk.SpeechSynthesizer(speechConfig);
  synthesizer.speakSsmlAsync(
    `
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
        xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
      <voice name="zh-CN-XiaoxiaoNeural">
        ${word}
      </voice>
    </speak>
    `,
    (resp) => {
      // resp.audioData is an ArrayBuffer; wrap it in a Node Buffer to send it
      const audio = resp.audioData;
      synthesizer.close();
      const buffer = Buffer.from(audio);
      res.set('Content-Type', 'audio/ogg; codecs=opus; rate=24000');
      res.send(buffer);
    }
  );
});
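The front end can then request the route directly; a minimal consumption sketch (the word here is just an example):
// The browser decodes the returned Ogg/Opus audio natively
const audio = new Audio(`/api/tts/word/${encodeURIComponent('你好')}`);
audio.play();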

Discord.js sending base64 encoded image to channel

I've been at this for days now and I'm stuck on this final part.
As the title suggests, my creation receives a base64-encoded image. It then loads it into a buffer and attempts to send it to a channel like so:
const sfbuffer = Buffer.from(base64_img, "base64");
const finalattach = new Discord.MessageAttachment(sfbuffer);
message.channel.send(finalattach);
It does send the buffer, however it always results in this.
Example of the base64 data that gets loaded:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAQAAAAEACAIAAADTED8xAAEA and so on...
I've tried sending it with file options (it didn't even send), and as an embed using both .setImage and .attachFiles, and both produce the same results. Please, I'm banging my head over this. All the Google links are purple and I don't know what else to do D:
Image buffers should NOT contain the data:image/png;base64, part, only the non-human-readable data. Split the base64 string at the comma, load it into an attachment as a buffer, and send the attachment, e.g.:
const sfbuff = Buffer.from(base64_img.split(",")[1], "base64");
const sfattach = new Discord.MessageAttachment(sfbuff, "output.png");
message.channel.send(sfattach);
Even though I figured it out, I couldn't have done it without Aviv lo.
Thank you all :)
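A note for newer discord.js versions: in v14, MessageAttachment was renamed to AttachmentBuilder and files are passed via the message options. A minimal sketch of the same fix:
const { AttachmentBuilder } = require('discord.js');

// Same idea: strip the data:image/png;base64, prefix before decoding
const buffer = Buffer.from(base64_img.split(',')[1], 'base64');
const attachment = new AttachmentBuilder(buffer, { name: 'output.png' });
message.channel.send({ files: [attachment] });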
Try something like this (C#, e.g. in a Bot Framework bot):
replyMessage.Attachments.Add(new Attachment()
{
    ContentUrl = $"data:image/jpeg;base64,{Convert.ToBase64String(bdata)}"
});
This post is answering similar questions.

Convert a file stream to a file

I have an API that responds with audio/video files in the form of a stream.
For example, a typical response looks like this:
data:audio/mpeg;base64,GkXfo59ChoEBQveBA...
I use axios to call this API and get the raw stream data successfully. How do I convert this data into a usable file and make it downloadable from the front end?
P.S. Using React for the front end.
You can create an anchor element, add a download attribute, and use the data URI as the href. For example:
<a download='hello-world.txt' href="data:text/plain;base64,SGVsbG8gd29ybGQ=">Download Data</a>
This trick also works:
const downloadData = (filename, dataURI) => {
  // Create a temporary anchor pointing at the data URI and click it
  const a = document.createElement('a')
  a.setAttribute('href', dataURI)
  a.setAttribute('download', filename)
  a.click()
  a.remove()
}
downloadData("hello-world", "data:text/plain;base64,SGVsbG8gd29ybGQ=")

Recording audio data from Flutter (16000 Hz PCM data), capturing the audio to send it to the backend (Node.js)

I am trying to record audio through the mic_stream package on pub.dev (since I want 16000 Hz PCM data), capture the data, and send it to a Node.js server. I am not making any changes to the mic_stream example in the provided link, but I want to send the audio data in 16000 Hz PCM format to the Node.js server. I have tried using http.MultipartFile as shown below:
var url = Uri.parse("http://localhost:3000/upload");
var request = new http.MultipartRequest("POST", url);
// stream.length returns a Future<int> (which caused the "length is not
// of int type" error), and reading it consumes the stream, so collect
// the recorded bytes first and send those instead
// (MediaType comes from package:http_parser)
List<int> bytes = await http.ByteStream(stream).toBytes();
request.files.add(http.MultipartFile.fromBytes(
  'music',
  bytes,
  filename: "filename1234.pcm",
  contentType: MediaType('audio', 'x-wav'),
));
request.send().then((response) {
  print("test");
  if (response.statusCode == 200) {
    print("Uploaded! ${response}");
  } else {
    print("Failure");
  }
});
I want to know if this works, or if there are any other packages I can use to record the data and send it to Node.js in 16000 Hz PCM format.
Any help is highly appreciated.
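On the Node.js side, a minimal sketch of the /upload endpoint, assuming Express with the multer middleware (both are assumptions; any multipart parser works) and the 'music' field name used above:
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' }); // stores the raw PCM upload on disk

app.post('/upload', upload.single('music'), (req, res) => {
  // req.file.path points at the uploaded 16000 Hz PCM data
  console.log(`Received ${req.file.size} bytes at ${req.file.path}`);
  res.sendStatus(200);
});

app.listen(3000);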

HTML5 WebM streaming using chunks from FFMPEG via Socket.IO

I'm trying to make use of websockets to livestream chunks from a WebM stream. The following is some example code on the server side that I have pieced together:
const ffmpeg = require('fluent-ffmpeg')

// Capture the webcam, encode to VP8/Vorbis WebM, and emit each encoded chunk
const command = ffmpeg()
  .input('/dev/video0')
  .fps(24)
  .audioCodec('libvorbis')
  .videoCodec('libvpx')
  .outputFormat('webm')
const ffstream = command.pipe()
ffstream.on('data', chunk => {
  io.sockets.emit('Webcam', chunk)
})
I have the server code structured in this manner so ffstream.on('data', ...) can also write to a file. I am able to open the file and view the video locally, but have difficulty using the chunks to render in a <video> tag in the DOM.
const ms = new MediaSource()
const video = document.querySelector('#video')
video.src = window.URL.createObjectURL(ms)
ms.addEventListener('sourceopen', function () {
  const sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
  // read socket
  // ...sourceBuffer.appendBuffer(data)
})
I have something like the above on my client side. I receive the exact same chunks from my server, but sourceBuffer.appendBuffer(data) throws the following error:
Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.
Question: How can I display these chunks in an HTML5 video tag?
Note: From my reading, I believe this has to do with getting keyframes, but I'm not able to determine how to recognize them.
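One likely cause of that error, sketched below rather than confirmed: appendBuffer() must not be called while the SourceBuffer is still processing a previous append, and a failed append detaches the buffer from its MediaSource, which produces exactly this message. Queue the socket chunks and append only when the buffer is idle; this assumes socket is the connected Socket.IO client and sourceBuffer is the one created in 'sourceopen':
const queue = []

const appendNext = () => {
  if (queue.length > 0 && !sourceBuffer.updating) {
    sourceBuffer.appendBuffer(queue.shift())
  }
}

socket.on('Webcam', chunk => {
  queue.push(chunk)
  appendNext()
})

// Each append fires 'updateend' when it completes; drain the queue then
sourceBuffer.addEventListener('updateend', appendNext)
Separately, a client that connects mid-stream never receives the WebM initialization segment that ffmpeg emits at the very start of its output, so the server needs to cache that first chunk and send it to each new socket before the live chunks; that is also where the keyframe concern comes in.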
