this.state.videoBlob is a Blob object. I used URL.createObjectURL to generate a blob URL and passed it to fs.createReadStream, like this:
fs.createReadStream(URL.createObjectURL(this.state.videoBlob))
The blob URL looks like this:
blobURL: blob:http://localhost:3000/dabe5cdd-00cc-408a-9f3d-b0ba5f2b10b3
But I got an error saying:
TypeError: fs.createReadStream is not a function
The problem doesn't occur if I pass an ordinary online video URL. So how can I read a blob with fs.createReadStream? Thanks!
When I look at the code behind fs.createReadStream(), it calls new ReadStream() and passes the path/URL to that, and it appears that the only type of URL supported there is a file URL. The docs are silent on that topic, which is why I went and looked at the code. So it does not appear to me that fs.createReadStream() supports that type of pseudo-URL.
Since you just want a readable stream and you have the actual URL of the remote resource, I would suggest you use http.get() or request() or something similar, as those will contact the remote host and give you back a readable stream. Since your objective was to get a readable stream, this is one way to achieve that.
const http = require('http');

http.get('http://localhost:3000/dabe5cdd-00cc-408a-9f3d-b0ba5f2b10b3', (res) => {
  // res is a readable stream here
}).on('error', (err) => {
  // error on the request here
});
FYI, you may find this answer on Blob URLs to be useful. I don't see any evidence that fs.createReadStream() supports blob URLs. In the browser, blob URLs are created only by the internals of the browser and are only useful within that specific web-page context (they refer indirectly to some internal storage). They can't be passed outside the web page or even preserved from one web page to the next. If you wanted your server to have access to the actual data behind a blob URL created in the browser, you'd have to upload the actual data to your server. Your server can't access a blob URL created in the browser.
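For illustration, a minimal sketch of uploading the blob's actual bytes from the browser instead of its URL; the /upload endpoint and the 'video' field name are hypothetical placeholders:

// Browser side: send the Blob's bytes, not the blob: URL.
const formData = new FormData();
formData.append('video', this.state.videoBlob, 'video.webm');

fetch('/upload', { method: 'POST', body: formData })
  .then((res) => res.json())
  .then((data) => console.log('uploaded', data))
  .catch((err) => console.error(err));

On the server, a multipart parser such as multer or multiparty can then read the file data out of the request.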
Related
I want to access an image from a remote URL and pass it into another API request as the request body.
I am using the got library's stream API to stream the data from the external URL.
const url = "https://media0.giphy.com/media/4SS0kfzRqfBf2/giphy.gif";
got.stream(url).pipe(createWriteStream('image.gif'));
const response = await axios.post('post_url', fs.createReadStream('image.gif'));
The download operation works as expected, but I don't want to write the data to the local file system. Instead, I would like to pass the response from the got stream API to the other API as the request body.
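A sketch of one way to do that, assuming the target API accepts a streamed request body (in Node, axios does accept a readable stream as the data argument); 'post_url' is the placeholder from above:

const got = require('got');
const axios = require('axios');

const url = 'https://media0.giphy.com/media/4SS0kfzRqfBf2/giphy.gif';

// got.stream() returns a readable stream, so it can be handed to
// axios directly as the request body; nothing is written to disk.
async function forward() {
  const response = await axios.post('post_url', got.stream(url), {
    headers: { 'Content-Type': 'image/gif' },
  });
  return response.data;
}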
I have a video stored in Amazon S3, and I'm serving it to the client with a Node.js stream:
return request(content.url).pipe(res)
But this does not work in Safari. Safari is unable to play the streamed data, though the same code works in Chrome and Firefox.
I did some research and found out that Chrome requests the content with a single open-ended byte range:
[0-]
while Safari requests the same content in multiple byte ranges:
[0-10][11-20][21-30]
Now if the content were stored on my server, I could break the file into chunks with
fs.createReadStream(path).pipe(res)
to serve Safari the content range it requests, as mentioned in this blog: https://medium.com/better-programming/video-stream-with-node-js-and-html5-320b3191a6b6
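That local-file pattern looks roughly like this (a sketch based on the blog's approach; it assumes an mp4 file and uses synchronous stat for brevity):

const fs = require('fs');

function serveLocalVideo(req, res, path) {
  const { size } = fs.statSync(path);
  const range = req.headers.range; // e.g. "bytes=0-" or "bytes=11-20"
  if (!range) {
    res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
    return fs.createReadStream(path).pipe(res);
  }
  const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;
  res.writeHead(206, { // 206 Partial Content
    'Content-Range': 'bytes ' + start + '-' + end + '/' + size,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4',
  });
  fs.createReadStream(path, { start, end }).pipe(res); // start/end are inclusive
}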
How can I do the same with a remote URL stored in S3?
FYI, it's not feasible to download the content temporarily to the server and delete it after serving, as the website is expected to receive good traffic.
How can I do the same with a remote URL stored in S3?
Don't.
Let S3 serve the data. Sign a URL to temporarily allow access to the client. Then, you don't have to serve or proxy anything and you save a lot of bandwidth. An example from the documentation:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {Bucket: 'bucket', Key: 'key'};
var url = s3.getSignedUrl('getObject', params);
console.log('The URL is', url);
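If you need to control how long the link stays valid, getSignedUrl also accepts an Expires value in seconds, e.g. s3.getSignedUrl('getObject', {Bucket: 'bucket', Key: 'key', Expires: 60}).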
...as the website is expected to receive good traffic.
You'll probably also want to use a CDN to further reduce your costs and enhance the performance. If you're already using AWS, CloudFront is a good choice.
I'm trying to run this code
module.exports = async (req, res, next) => {
  res.set('Content-Type', 'text/javascript');
  const response = {};
  res.status(200).render('/default.js', { response });
  await fn(response);
};
fn is a function that calls an API service that will output something to the client, but it's dependent on the default.js file being loaded first. How can I do something like
res.render('/default.js', { response }).then(async () => {
  await fn(response);
});
I tried it, but it doesn't seem to like the then().
Also, fn doesn't return data to the client; it calls an API service that communicates over the web sockets opened by the code from default.js once it has been rendered.
Do I have to make an ajax request for the fn call instead of calling it internally?
Any ideas?
Once you call res.render(), you can send no more data to the client: the HTTP response has been sent and the HTTP connection is done. So it does you no good to try to add something more to the response after you call res.render().
It sounds like you're trying to put some data INTO the script file that you send to the browser. Your choices for that are to either:

1. Get the data you need with let data = await fn() before you call res.render(), and then pass that data to res.render() so your template engine can insert it into the script file before you send it to the browser. You will need to change the script file template so it has the appropriate directives for inserting the data, and you will have to be careful to format the data as Javascript data structures. (A sketch of this option follows the list.)

2. Have a script in the page make an ajax call to get the desired data, and then do your task in client-side Javascript after the page is already up and running.
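A minimal sketch of the first option, assuming fn() can be changed to return the data before the script file is rendered:

module.exports = async (req, res, next) => {
  try {
    res.set('Content-Type', 'text/javascript');
    const response = await fn(); // get the data first
    // then render once; the template inserts the data into the script
    res.status(200).render('/default.js', { response });
  } catch (err) {
    next(err);
  }
};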
It looks like it might be helpful for you to understand the exact sequence of things between browser and server.
1. The browser is displaying some web page.
2. The user clicks on a link to a new web page.
3. The browser requests the new web page from the server for a particular URL.
4. The server delivers the HTML page for that URL.
5. The browser parses that HTML page and discovers other resources required to render the page (script files, CSS files, images, fonts, etc.).
6. The browser requests each of those other resources from the server.
7. The server gets a request for each separate resource and returns each one to the browser.
8. The browser incorporates those resources into the HTML page it previously downloaded and parsed.
9. Any client-side scripts retrieved for the page are then run.

So, the code you show appears to be a route for one of the script files (steps 6 and 7 above). This is where it fits into the overall scheme of loading a page. Once you've returned the script file to the client with res.render(), it has been sent and that request is done. The browser isn't connected to your server anymore for that resource, so you can't send anything else on that same request.
How can I get the uploaded file's content using multiparty in node.js? I don't need a temp file; I need to redirect the whole stream to Google Cloud Storage in order to save the file's content, but I can't find a way to get this content through events.
Found the answer. We need to use the part stream and subscribe to the standard stream events, data and end, in order to receive the file's data.
part.on("data", chunk => {
writeStream.write(chunk);
});
part.on("end", chunk => {
writeStream.end(chunk);
});
writeStream is another stream where you want to put your data; in my case that was a Google Cloud Storage file PUT request via a signed URL.
part is the part object emitted by the form's part event.
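Putting it together, a minimal sketch of the whole handler (hedged: getCloudStorageWriteStream() is a hypothetical helper standing in for however you open the signed-URL PUT stream):

const multiparty = require('multiparty');

function handleUpload(req, res) {
  const form = new multiparty.Form();
  form.on('part', (part) => {
    // part is a readable stream for one file field in the form
    const writeStream = getCloudStorageWriteStream(part.filename); // hypothetical helper
    part.on('data', (chunk) => writeStream.write(chunk));
    part.on('end', () => writeStream.end());
    part.on('error', (err) => res.status(500).end(err.message));
  });
  form.on('close', () => res.status(200).end('uploaded'));
  form.parse(req); // streams the parts; no temp files are written
}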
I'm working on a project using Google Cloud Storage to allow users to upload media files into a predefined bucket using Node.js. I've been testing with small .jpg files. I also used gsutil to set bucket permissions to public.
At first, all files generated links that downloaded the file. Upon investigating the docs, I learned that I could explicitly set the Content-Type of each file after upload using the gsutil CLI. When I used this procedure to set the type to 'image/jpeg', the link behavior changed to displaying the image in the browser, but only if the link had not been clicked before updating the metadata with gsutil. I thought this might be due to browser caching, but the behavior was duplicated in an incognito browser.
Using gsutil to set the MIME type would be impractical at any rate, so I modified the code in my Node server's POST handler to set the metadata at upload time, using an npm module called mime. Here is the code:
app.post('/api/assets', multer.single('qqfile'), function (req, res, next) {
  console.log(req.file);
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }
  // Create a new blob in the bucket and upload the file data.
  var blob = bucket.file(req.file.originalname);
  var blobStream = blob.createWriteStream();
  var metadata = {
    contentType: mime.lookup(req.file.originalname)
  };
  blobStream.on('error', function (err) {
    return next(err);
  });
  blobStream.on('finish', function () {
    blob.setMetadata(metadata, function (err, response) {
      if (err) {
        return next(err);
      }
      console.log(response);
      // The public URL can be used to directly access the file via HTTP.
      var publicUrl = format(
        'https://storage.googleapis.com/%s/%s',
        bucket.name, blob.name);
      res.status(200).send({
        'success': true,
        'publicUrl': publicUrl,
        'mediaLink': response.mediaLink
      });
    });
  });
  blobStream.end(req.file.buffer);
});
This seems to work, from the standpoint that it does set the Content-Type on upload, and that is correctly reflected in the response object as well as in the Cloud Storage console. The issue is that some of the links returned as publicUrl cause a file download, while others cause the browser to load the image. Ideally I would like both options available, but I can't see any difference between the stored files or their metadata.
What am I missing here?
Google Cloud Storage makes no assumptions about the content-type of uploaded objects. If you don't specify, GCS will simply assign a type of "application/octet-stream".
The command-line tool gsutil, however, is smarter, and will attach the right Content-Type to files being uploaded in most cases, JPEGs included.
Now, there are two reasons why your browser is likely to download images rather than display them. First, if the Content-Type is set to "application/octet-stream", most browsers will download the results as a file rather than display them. This was likely happening in your case.
The second reason is if the server responds with a 'Content-Disposition: attachment' header. This doesn't generally happen when you fetch GCS objects from the host "storage.googleapis.com" as you are doing above, but it can if you, for instance, explicitly specified a contentDisposition for the object that you've uploaded.
For this reason I suspect that some of your objects don't have an "image/jpeg" content type. You could go through and set them all with gsutil like so: gsutil -m setmeta -h 'Content-Type:image/jpeg' gs://myBucketName/**
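If you'd rather check from Node which objects are missing the right type, here is a small sketch using the same client library ('some-image.jpg' is a placeholder name):

// Inspect an object's stored metadata to see what the browser will be served.
bucket.file('some-image.jpg').getMetadata().then(function (data) {
  var metadata = data[0];
  console.log(metadata.contentType, metadata.contentDisposition);
});

You can also pass the content type when the write stream is created, e.g. blob.createWriteStream({ metadata: { contentType: mime.lookup(req.file.originalname) } }), which avoids the separate setMetadata call after the upload finishes.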