I followed the docs for the Vimeo Node.js API to upload a file. It's quite simple, and I have it working by running it directly in Node, except that it requires me to pass the full path of the file I want to upload. Code is here:
function uploadFile() {
  let file = '/Users/full/path/to/file/bulls.mp4';
  let video_id; // the eventual end URI of the uploaded video
  lib.streamingUpload(file, function (error, body, status_code, headers) {
    if (error) {
      throw error;
    }
    lib.request(headers.location, function (error, body, status_code, headers) {
      console.log(body);
      video_id = body.uri;
      // after it's done uploading, and the result is returned, update info
      updateVideoInfo(video_id);
    });
  }, function (upload_size, file_size) {
    console.log("You have uploaded " +
      Math.round((upload_size / file_size) * 100) + "% of the video");
  });
}
Now I want to integrate this into a form generated in my React app, except that the result of evt.target.files[0] is not a full path; the result is this:
File {name: "bulls.mp4", lastModified: 1492637558000, lastModifiedDate: Wed Apr 19 2017 14:32:38 GMT-0700 (PDT), webkitRelativePath: "", size: 1359013595…}
Just for the sake of it, I piped that into my already-working upload function, and it didn't work for the reasons specified. Am I missing something? If not, I just want to clarify what I actually have to do. So now I'm looking at the official Vimeo guide and wanted to make sure that's the right road to go down. See: https://developer.vimeo.com/api/upload/videos
So if I'm reading it right, you do several requests to achieve the same goal?
1) Do a GET to https://api.vimeo.com/me to find out the remaining upload data they have.
2) Do a POST to https://api.vimeo.com/me/videos to get an upload ticket. Use type: streaming if I want a resumable upload such as those provided by the vimeo streamingUpload() function
3) Do a PUT to https://1234.cloud.vimeo.com/upload?ticket_id=abcdef124567890
4) Do a PUT to https://1234.cloud.vimeo.com/upload?ticket_id=abcdef124567890 but without file data and the header Content-Range: bytes */* anytime I want to check the bytes uploaded.
Does that sound right? Or can you simply use a form and I've gotten it wrong somewhere? Let me know. Thanks.
There's some example code in this project that might be worth checking out: https://github.com/websemantics/vimeo-upload.
Your description is mostly correct for the streaming system, but I want to clarify the last two points.
3) In this step, you should make a PUT request to that URL with a Content-Length header describing the full size of the file (as described here: https://developer.vimeo.com/api/upload/videos#upload-your-video)
4) In this step, the reason you check the bytes uploaded is in case the upload completed, or your connection died during the PUT request. We save as many bytes as possible, and we respond to the request in step 4 with how many bytes we received. This lets you resume step 3 where you left off instead of at the very beginning.
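For what it's worth, here's a rough sketch of that check request using Node's https module. The host and ticket URL are just the placeholder values from the question, and the assumption that the received byte count comes back in a Range header should be verified against the upload docs.
// Hypothetical sketch of step 4: ask the upload server how many bytes it already has.
// The host/path below are the placeholder values from the question, not real ones.
var https = require('https');

var check = https.request({
  method: 'PUT',
  host: '1234.cloud.vimeo.com',
  path: '/upload?ticket_id=abcdef124567890',
  headers: {
    'Content-Length': 0,
    'Content-Range': 'bytes */*'
  }
}, function (res) {
  // Assumption: the server reports the received range in a header, e.g. "Range: bytes=0-1000",
  // so you can resume the PUT from step 3 at the next byte.
  console.log('status:', res.statusCode);
  console.log('received so far:', res.headers.range);
});

check.end();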
For stability we highly recommend the resumable uploader, but if you are looking for simplicity we do offer a simple POST uploader that uses an HTML form. The docs for that are here: https://developer.vimeo.com/api/upload/videos#simple-upload
I'm trying to save a remote image file into a database, but I'm having some issues with it since I've never done it before.
I need to download the image and pass it along (with node-request), together with a few other properties, to another Node API that saves it into a MySQL database (using Sequelize). I've managed to get some data to save, but when I download it manually and try to open it, it's not really usable and no image shows up.
I've tried a few things: getting the image with node-request, converting it to a base64 string (I read about that somewhere) and passing it along in a JSON payload, but that didn't work. I tried sending it as multipart, but that didn't work either. I haven't worked with streams/buffers/multipart much before, and never in Node. I've looked into node-request pipes, but I couldn't really figure out how to apply them in this context.
Here's what I currently have (it's part of an ES6 class, so there are no 'function' keywords; also, request is promisified):
function getImageData(imageUrl) {
  return request({
    url: imageUrl,
    encoding: null,
    json: false
  });
}

function createEntry(entry) {
  return getImageData(entry.image)
    .then((imageData) => {
      entry.image_src = imageData.toString('base64');
      var requestObject = {
        url: 'http://localhost:3000/api/entry',
        method: 'post',
        json: false,
        formData: entry
      };
      return request(requestObject);
    });
}
I'm almost 100% certain the problem is in this part because the api just takes what it gets and gives it to sequelize to put into the table, but I could be wrong. Image field is set as longblob.
I'm sure it's something simple once I figure it out, but so far I'm stumped.
This is not a direct answer to your question, but it is rarely necessary to actually store an image in the database. What is usually done is to store the image in object storage like S3, behind a CDN like CloudFront, or even just in the file system of a static file server, and then store only the file name or some ID of the image in the actual database.
If there is any chance that you are going to serve those images to some clients then serving them from the database instead of a CDN or file system will be very inefficient. If you're not going to serve those images then there is still very little reason to actually put them in the database. It's not like you're going to query the database for specific contents of the image or sort the results on the particular serialization of an image format that you use.
The simplest thing you can do is save the images with a unique filename (either a random string, UUID or a key from your database) and keep the ID or filename in the database with other data that you need. If you need to serve it efficiently then consider using S3 or some CDN for that.
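As a rough sketch of that approach, reusing the getImageData helper from the question (the uploads directory, the image_filename field, and the hard-coded .jpg extension are all illustrative assumptions):
// Hypothetical sketch: write the downloaded image to disk under a unique name
// and send only that name to the API instead of the image bytes.
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

function createEntry(entry) {
  return getImageData(entry.image) // resolves to a Buffer because of encoding: null
    .then((imageData) => {
      // unique filename; a UUID or a database key would work just as well
      const filename = crypto.randomBytes(16).toString('hex') + '.jpg';
      const filePath = path.join(__dirname, 'uploads', filename); // assumes uploads/ exists

      fs.writeFileSync(filePath, imageData);

      // store only the reference in the database row, not the image bytes
      entry.image_filename = filename;

      return request({
        url: 'http://localhost:3000/api/entry',
        method: 'post',
        json: true,
        body: entry
      });
    });
}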
I'm having a really tough time getting my files to upload to box using Node.js.
Every single time I attempt to, I get the following error:
Error: cannot POST /api/2.0/files/content (400)
Here is the relevant code. I've already double checked that this.options.auth contains the required tokens, etc. The parent_id folder is the root folder, so '0'. The filepath is a stream, which is totally fine.
request.post('https://upload.box.com/api/2.0/files/content')
  .set('Authorization', this.options.auth)
  .field('parent_id', folder)
  .attach('filename', filepath)
  .end(function (res) {
    if (res.error) {
      return callback('Error: ' + res.error.message);
    }
    callback(null, res.body);
  });
Any ideas?
HTTP status code 400 is used for a bad request. One thing to check is that the parameters you are supplying are all valid and that you haven't forgotten any required parameters. Looking at the Box API getting-started doc, it appears that what you are calling parent_id should be just parent. If it still doesn't work, check for other similar issues too, of course.
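If that's the issue, the fix would be a one-field change to the snippet above. This is only a hedged sketch: the field names Box expects may have changed since this was written, so double-check the current upload docs.
// Same superagent call as in the question, with the form field renamed
// from parent_id to parent, as suggested by the getting-started doc.
request.post('https://upload.box.com/api/2.0/files/content')
  .set('Authorization', this.options.auth)
  .field('parent', folder) // was 'parent_id'
  .attach('filename', filepath)
  .end(function (res) {
    if (res.error) {
      return callback('Error: ' + res.error.message);
    }
    callback(null, res.body);
  });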
I'm attempting to return generated files to the front end through Express' res.download function. I'm using Chrome, but whenever I call the API that executes the following code, all that comes back is the same response I'd get from Express' res.sendFile() function.
I know that res.download uses res.sendFile, but I would like the download function to actually save to the file system instead of just returning the file in the body of the response.
This is my code.
exports.download = function (req, res) {
  var filePath; // some file that I want to download
  res.download(filePath, 'response.txt', function (err) {
    if (err) {
      throw err;
    }
  });
};
I know that the above code at least partly works because I'm getting back, in the response, the contents of the file. However, I want it to be saved onto the file system.
Am I misunderstanding what the download function is supposed to do? Do I just need to take the response data and write it to the file system manually?
res.download adds headers that suggest to the browser that the file should be downloaded rather than opened. However, there's no way to force the browser to do this; typically it's ultimately the user's choice whether to download a particular file.
If you're triggering this request with AJAX, well, that's not going to cause it to be downloaded, because your JavaScript is requesting that it get the data.
Do I just need to take the response data and write it to the file system manually?
You don't have file system access in browser-side JavaScript. I'm not sure how you intend to do this.
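If the goal is simply to have the browser show its save dialog, one option is to navigate to the endpoint instead of fetching it with AJAX. A minimal sketch (the /api/download route and the downloadBtn element are made-up names):
// Client side: trigger a normal navigation to the route that calls res.download.
// Because the response carries a Content-Disposition: attachment header,
// the browser saves the file instead of handing the bytes to your JavaScript.
document.getElementById('downloadBtn').addEventListener('click', function () {
  window.location.href = '/api/download'; // hypothetical route served by exports.download
});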
I have a file upload system in sails.js app. I want to process the uploads before saving them in the server. My form on the client side allows multiple file uploads. Now on the server side how do I know how many files were sent?
For example I can find the total bytes to be expected from the upload using the following:
req._fileparser.form.bytesExpected
However, I couldn't find something similar that helps me find the total number of files sent to the server.
Also, regarding req._fileparser.form.bytesExpected above: is there a better way to get the total combined size of the files sent through the upload form by the client?
In the github repository for Skipper there is a file: index.js
Line 92 from the above file, which appears to deal with multipart file uploads, contains the following:
var hasUpstreams = req._fileparser && req._fileparser.upstreams.length;
You should check the length of upstreams in your code, and see if that contains the number of files you sent.
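Something along these lines might work, though keep in mind that _fileparser and upstreams are internal Skipper properties and could change between versions (the 'files' field name and the action name are made up):
// Sails controller action: count the file upstreams Skipper has seen so far.
uploadAction: function (req, res) {
  var upstreamCount = (req._fileparser && req._fileparser.upstreams.length) || 0;
  sails.log.info('Number of file upstreams in this upload:', upstreamCount);

  req.file('files').upload(function (err, uploadedFiles) {
    if (err) return res.serverError(err);
    return res.json({ upstreams: upstreamCount, received: uploadedFiles.length });
  });
}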
Another option: send a parameter in your request from the client with the number of files uploaded.
See the skipper ReadMe section about Text Parameters.
Skipper allows you to access the other non-file metadata parameters (e.g "photoCaption" or "commentId") in the conventional way. That includes url/JSON-encoded HTTP body parameters (req.body), querystring parameters (req.query), or "route" parameters (req.params); in other words, all the standard stuff sent in standard AJAX uploads or HTML form submissions. And helper methods like req.param() and req.allParams() work too.
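So, for example, the client could include a fileCount text field alongside the file inputs, and the action could read it with req.param() (the field and action names here are made up):
// Sails action: read a hypothetical fileCount parameter sent by the client,
// e.g. from a hidden input or FormData.append('fileCount', n).
upload: function (req, res) {
  var expectedCount = parseInt(req.param('fileCount'), 10) || 0;
  sails.log.info('Client says it sent', expectedCount, 'files');
  // ...then proceed with req.file(...).upload() as usual and compare counts
}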
I've just found a previous question/answer on stackoverflow.
You might try using var upload = req.file('file')._files[0].stream to access and validate, as shown in the above answer.
I am trying to write an app that will allow my users to upload files to my Google Cloud Storage account. In order to prevent overwrites and to do some custom handling and logging on my side, I'm using a Node.js server as a middleman for the upload. So the process is:
User uploads file to Node.js Server
Node.js server parses file, checks file type, stores some data in DB
Node.js server uploads file to GCS
Node.js server responds to the user's request with a pass/fail remark
I'm getting a little lost on step 3, of exactly how to send that file to GCS. This question gives some helpful insight, as well as a nice example, but I'm still confused.
I understand that I can open a ReadStream for the temporary upload file and pipe that to the http.request() object. What I'm confused about is how do I signify in my POST request that the piped data is the file variable. According to the GCS API Docs, there needs to be a file variable, and it needs to be the last one.
So, how do I specify a POST variable name for the piped data?
Bonus points if you can tell me how to pipe it directly from my user's upload, rather than storing it in a temporary file.
I believe that if you want to do POST, you have to use a Content-Type: multipart/form-data;boundary=myboundary header. And then, in the body, write() something like this for each string field (linebreaks should be \r\n):
--myboundary
Content-Disposition: form-data; name="field_name"
field_value
And then for the file itself, write() something like this to the body:
--myboundary
Content-Disposition: form-data; name="file"; filename="urlencoded_filename.jpg"
Content-Type: image/jpeg
Content-Transfer-Encoding: binary
binary_file_data
The binary_file_data is where you use pipe():
var fileStream = fs.createReadStream("path/to/my/file.jpg");
fileStream.pipe(requestToGoogle, {end: false});
fileStream.on('end', function () {
  requestToGoogle.end("--myboundary--\r\n\r\n");
});
The {end: false} prevents pipe() from automatically closing the request because you need to write one more boundary after you're finished sending the file. Note the extra -- on the end of the boundary.
The big gotcha is that Google may require a content-length header (very likely). If that is the case, then you cannot stream a POST from your user into a POST to Google, because you won't reliably know what the content-length is until you've received the entire file.
The content-length header's value should be a single number for the entire body. The simple way to do this is to call Buffer.byteLength(body) on the entire body, but that gets ugly quickly if you have large files, and it also kills the streaming. An alternative would be to calculate it like so:
var fs = require('fs');

var body_before_file = "..."; // string fields + boundary and metadata for the file
var body_after_file = "--myboundary--\r\n\r\n";

fs.stat(local_path_to_file, function (err, file_info) {
  var content_length = Buffer.byteLength(body_before_file) +
    file_info.size +
    Buffer.byteLength(body_after_file);
  // create the request to google, write content-length and other headers,
  // write() the body_before_file part,
  // and then pipe the file and end the request like we did above
});
But that still kills your ability to stream from the user to Google: the file has to be downloaded to the local disk to determine its length.
Alternate option
...now, after going through all of that, PUT might be your friend here. According to https://developers.google.com/storage/docs/reference-methods#putobject you can use a transfer-encoding: chunked header so you don't need to find the file's length. And I believe that the entire body of the request is just the file, so you can use pipe() and just let it end the request when it's done. If you're using https://github.com/felixge/node-formidable to handle uploads, then you can do something like this:
incomingForm.onPart = function (part) {
  if (part.filename) {
    var req = ... // create a PUT request to google and set the headers
    part.pipe(req);
  } else {
    // let formidable handle all non-file parts
    incomingForm.handlePart(part);
  }
};
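For a rough idea of what that elided PUT request might look like with Node's https module: the bucket name, object path, and access token below are placeholders, and the exact URL and auth scheme should be checked against the GCS docs linked above.
// Hypothetical sketch of the PUT request that the file part gets piped into.
// BUCKET and ACCESS_TOKEN are made-up placeholders.
var https = require('https');

function createGcsPutRequest(filename) {
  var gcsReq = https.request({
    method: 'PUT',
    host: 'storage.googleapis.com',
    path: '/BUCKET/' + encodeURIComponent(filename),
    headers: {
      'Authorization': 'Bearer ' + ACCESS_TOKEN,
      'Transfer-Encoding': 'chunked' // so no Content-Length is needed
    }
  }, function (res) {
    console.log('GCS responded with status', res.statusCode);
  });
  return gcsReq;
}

// inside onPart, for a file part:
//   var req = createGcsPutRequest(part.filename);
//   part.pipe(req); // pipe() calls req.end() when the part finishes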