I have an API that responds with audio/video files in the form of a stream.
For example, a typical response looks like this:
data:audio/mpeg;base64,GkXfo59ChoEBQveBA...
I use axios to call this API and receive the raw stream data successfully. How do I convert this data into a usable file and make it downloadable from the front end?
P.S. Using React for the front end.
You can create an anchor element, add a download attribute, and use the data URI as its href. For example:
<a download='hello-world.txt' href="data:text/plain;base64,SGVsbG8gd29ybGQ=">Download Data</a>
This trick also works:
const downloadData = (filename, dataURI) => {
  // Create a temporary anchor pointing at the data URI and click it
  const a = document.createElement('a')
  a.setAttribute('href', dataURI)
  a.setAttribute('download', filename)
  a.click()
  a.remove()
}
downloadData("hello-world", "data:text/plain;base64,SGVsbG8gd29ybGQ=")
Related
I'm creating a video streaming service; the backend code looks like this:
// Wrap the requested byte range in a Readable stream and pipe it to the response
const stream = new Readable();
stream.push(movie.data.slice(start, end + 1));
stream.push(null); // signal end of stream
stream.pipe(res);
When I run this in Postman, it automatically parses the response and shows me a video. But when I call it from my own code, the response looks like random characters:
ftypmp42 isom mp42 ... moov mvhd ... trak tkhd ... mdia mdhd ... hdlr vide ISO Media file produced by Google Inc. ... minf dinf dref ... stbl stsd avc1 ... avcC ... stts stsc stco stsz
(... followed by a long run of unreadable binary data)
Now the problem is that I don't know what this is, how to parse it, or how to display it as a video... Need help.
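Those "random characters" are simply the raw bytes of an MP4 file printed as text: ftyp, moov, trak, mdhd and so on are MP4 box names. The browser, not your JavaScript, should consume those bytes. One way is to point a <video> tag at the route and return proper partial-content responses. Below is a minimal sketch, assuming an Express app and that movie.data is a Buffer holding the whole file (both taken from the code above); the /movie path is illustrative:
// Server: a Range-aware handler so the browser can stream the file itself
app.get('/movie', (req, res) => {
  const total = movie.data.length
  const range = req.headers.range // e.g. "bytes=0-"
  if (!range) {
    res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': total })
    return res.end(movie.data)
  }
  const [startStr, endStr] = range.replace('bytes=', '').split('-')
  const start = Number(startStr)
  const end = endStr ? Number(endStr) : total - 1
  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${total}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4',
  })
  res.end(movie.data.slice(start, end + 1))
})
On the client, <video src="/movie" controls></video> is enough; the browser issues the Range requests itself.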
I am trying to figure out how to expose an Express route (i.e. GET api/word/:some_word) which uses the Azure TTS SDK (microsoft-cognitiveservices-speech-sdk) to generate an audio version of some_word (in any format playable by a browser) and res.send()s the resulting audio, so that a front-end JavaScript web app could consume the API in order to play the audio pronunciation of the word.
I have the Azure SDK 'working': it creates an 'ArrayBuffer' inside my Express code. However, I do not know how to send the data in this ArrayBuffer to the front end. I have been following the instructions here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=import%2Cwindowsinstall&pivots=programming-language-javascript#get-result-as-an-in-memory-stream
Another way to phrase my question would be: 'In Express, I have an ArrayBuffer whose contents are an .mp3/.ogg/.wav file. How do I send that file via Express? Do I need to convert it into some other data type (like a Base64-encoded string? A buffer?) Do I need to set some particular response headers?'
I finally figured it out seconds after asking this question 😂
I am pretty new to this area, so any pointers on how this could be improved would be appreciated.
app.get('/api/tts/word/:word', async (req, res) => {
  const word = req.params.word;
  const subscriptionKey = azureKey;
  const serviceRegion = 'australiaeast';
  const speechConfig = sdk.SpeechConfig.fromSubscription(
    subscriptionKey as string,
    serviceRegion
  );
  // SpeechSynthesisOutputFormat is exported by microsoft-cognitiveservices-speech-sdk
  speechConfig.speechSynthesisOutputFormat =
    SpeechSynthesisOutputFormat.Ogg24Khz16BitMonoOpus;
  const synthesizer = new sdk.SpeechSynthesizer(speechConfig);
  synthesizer.speakSsmlAsync(
    `
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
        xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
      <voice name="zh-CN-XiaoxiaoNeural">
        ${word}
      </voice>
    </speak>
    `,
    (resp) => {
      // resp.audioData is an ArrayBuffer holding the synthesized audio
      const audio = resp.audioData;
      synthesizer.close();
      // Wrap it in a Node Buffer so Express can send it as binary
      const buffer = Buffer.from(audio);
      res.set('Content-Type', 'audio/ogg; codecs=opus; rate=24000');
      res.send(buffer);
    },
    (err) => {
      // On synthesis failure, close the synthesizer and report the error
      synthesizer.close();
      res.status(500).send(String(err));
    }
  );
});
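To consume this from the front end, something like the following sketch should work (the /api/tts/word route comes from the code above; the rest is illustrative):
// Fetch the synthesized audio as a Blob and play it in the browser.
async function playWord(word) {
  const resp = await fetch(`/api/tts/word/${encodeURIComponent(word)}`)
  const blob = await resp.blob() // the audio/ogg bytes sent by res.send(buffer)
  const url = URL.createObjectURL(blob)
  const audio = new Audio(url)
  audio.onended = () => URL.revokeObjectURL(url) // free the object URL when done
  await audio.play()
}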
I'm trying to make use of websockets to livestream chunks from a WebM stream. The following is some example code on the server side that I have pieced together:
// assumes fluent-ffmpeg wrapping a local ffmpeg install
const ffmpeg = require('fluent-ffmpeg')

const command = ffmpeg()
  .input('/dev/video0')
  .fps(24)
  .audioCodec('libvorbis')
  .videoCodec('libvpx')
  .outputFormat('webm')

// pipe() with no argument returns a PassThrough stream of the encoded output
const ffstream = command.pipe()
ffstream.on('data', chunk => {
  io.sockets.emit('Webcam', chunk)
})
I have the server code structured in this manner so ffstream.on('data', ...) can also write to a file. I am able to open the file and view the video locally, but have difficulty using the chunks to render in a <video> tag in the DOM.
const ms = new MediaSource()
const video = document.querySelector('#video')
video.src = window.URL.createObjectURL(ms)
ms.addEventListener('sourceopen', function () {
const sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
// read socket
// ...sourceBuffer.appendBuffer(data)
})
I have something such as the above on my client side. I am able to receive the exact same chunks from my server but the sourceBuffer.appendBuffer(data) is throwing me the following error:
Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.
Question: How can I display these chunks in an HTML5 video tag?
Note: From my reading, I believe this has to do with getting key-frames. I'm not able to determine how to recognize these though.
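One common cause of that error is calling appendBuffer while a previous append is still in flight (sourceBuffer.updating is true) or before 'sourceopen' has fired; in both cases SourceBuffer operations fail. A queued-append loop avoids that. A minimal sketch, assuming a socket.io client is loaded on the page (the 'Webcam' event name comes from the server code above):
const socket = io() // socket.io client, assumed loaded on the page
const video = document.querySelector('#video')
const ms = new MediaSource()
video.src = URL.createObjectURL(ms)

const queue = [] // chunks that arrived while the SourceBuffer was busy
let sourceBuffer = null

ms.addEventListener('sourceopen', () => {
  sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
  // Drain the queue each time the previous append finishes
  sourceBuffer.addEventListener('updateend', appendNext)
  appendNext()
})

socket.on('Webcam', chunk => {
  queue.push(chunk)
  appendNext()
})

function appendNext() {
  if (!sourceBuffer || sourceBuffer.updating || queue.length === 0) return
  sourceBuffer.appendBuffer(queue.shift())
}
Also note that the very first data handed to the SourceBuffer must contain the WebM initialization segment (EBML header and track info), so a client that connects mid-stream needs those headers replayed before the live clusters; that is also where keyframes matter, since playback can only resume at a cluster that starts with one.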
I have some simple upload code in Node.js.
var http = require('http')
var fs = require('fs')

var server = http.createServer(function (req, res) {
  if (req.url == '/upload') {
    var a = fs.createWriteStream('a.jpg', { defaultEncoding: 'binary' })
    req.on('data', function (chunk) {
      a.write(chunk)
    })
    req.on('end', function () {
      a.end()
      res.end('okay')
    })
  } else {
    fs.createReadStream('./index.html').pipe(res)
    // just shows a <form>
  }
})
server.listen(5000)
When I upload an image, I don't get back the exact same file.
The saved files are always broken.
When I do this with formidable, I get a fine file.
So I studied formidable, but I cannot understand how it catches the data and saves it.
I could see that formidable uses a parser to compute something about each chunk of the request, but I did not get it all.
(It is definitely my brain issue :( ).
Anyway, what is the difference between my code and formidable?
What am I missing?
Is it wrong to just append all the chunks from the HTTP request and save them with
fs.createWriteStream or fs.writeFile?
What concepts am I missing?
First, req is a Readable stream. You can simply do:
req.pipe(fs.createWriteStream('a.jpg'))
for the upload part. This copies all the byte data from the request stream to the file.
This will work when you send raw file data as the request body:
curl --data-binary @"/home/user/Desktop/a.jpg" http://localhost:5000/upload
This sends the request body as exact image binary data, which gets streamed straight into a file on the server.
But, there is another request format called multipart/form-data. This is what web browsers use with <form> to upload files.
curl --form "image=@/home/user1/Desktop/a.jpg" http://localhost:5000/upload
Here the request body contains multiple "parts", one for each file attachment or form field, separated by special "boundary" characters:
--------------------------e3f25f5319cd6624
Content-Disposition: form-data; name="image"; filename="a.jpg"
Content-Type: application/octet-stream

...raw binary file data...
--------------------------e3f25f5319cd6624--
Hence you need considerably more complicated code to extract the file data from each part. npm modules like busboy and formidable do exactly that.
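For illustration, here is a minimal sketch of the same upload server using busboy (assuming busboy v1.x; the field handling is illustrative):
const http = require('http')
const fs = require('fs')
const path = require('path')
const busboy = require('busboy') // npm dependency that parses multipart bodies

http.createServer((req, res) => {
  if (req.url === '/upload' && req.method === 'POST') {
    const bb = busboy({ headers: req.headers }) // reads the boundary from Content-Type
    bb.on('file', (name, file, info) => {
      // `file` is a stream of just the file part, with boundaries stripped
      // path.basename avoids writing outside the current directory
      file.pipe(fs.createWriteStream(path.basename(info.filename)))
    })
    bb.on('close', () => res.end('okay'))
    req.pipe(bb)
  }
}).listen(5000)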
I am using pdfkit in Node.js to create PDFs. Right now, to get the data out of the PDFDocument, I first write it to a file using 'fs' and then read it back.
I want to use the data directly from the PDFDocument object and send it as a response. How can I do that?
Each PDFDocument is a readable stream. You can simply pipe it to the response like this:
var http = require('http')
var PDFDocument = require('pdfkit')

http.createServer(function (request, response) {
  // Tell the browser it is receiving a PDF
  response.setHeader('Content-Type', 'application/pdf')
  var doc = new PDFDocument()
  doc.text('wassup')
  // The document is a readable stream, so pipe it straight into the response
  doc.pipe(response)
  doc.end()
}).listen(1999)
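If you want the browser to download the PDF instead of rendering it inline, you can additionally set a Content-Disposition header, for example response.setHeader('Content-Disposition', 'attachment; filename="out.pdf"'); the filename here is just an example.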