I'm using Node to respond to clients with two files. For now, I'm using one endpoint per file, because I can't figure out how to pass more than one in a single response.
Here's the function that responds with the file:
exports.chartBySHA1 = function (req, res, next, id) {
  var dir = './curvas/' + id + '/curva.txt'; // id = 1e4cf04ad583e483c27b40750e6d1e0302aff058
  fs.readFile(dir, function read(err, data) {
    if (err) {
      // "Não foi possível buscar a curva." = "Could not fetch the curve."
      return res.status(400).send("Não foi possível buscar a curva.");
    }
    res.status(200).send(data); // send the file content directly; no implicit global needed
  });
};
Besides that, I need to change the default name of the downloaded file. When I hit that endpoint, the suggested file name is 1e4cf04ad583e483c27b40750e6d1e0302aff058, but I'm actually sending the content of 'curva.txt'.
Does anyone have any tips?
Q: How do I pass the contents of more than one file back to a user without having to create individual endpoints?
A: There are a few ways you can do this.
If the content of each file is not huge, then the easiest way out is to read in all of the contents and then transmit them back as a JavaScript key-value object, e.g.:
let data = {
file1: "This is some text from file 1",
file2: "Text for second file"
}
res.send(data); // res.send() serializes the object and ends the response, so no separate res.end() is needed
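For instance, a fuller sketch of that first approach built around the asker's setup (the second file name, meta.txt, is purely an assumption) might be:

const fs = require('fs').promises;

exports.chartBySHA1 = async function (req, res, next, id) {
  var dir = './curvas/' + id + '/';
  try {
    // read both files in parallel and answer with one JSON object
    var [curva, meta] = await Promise.all([
      fs.readFile(dir + 'curva.txt', 'utf8'),
      fs.readFile(dir + 'meta.txt', 'utf8') // hypothetical second file
    ]);
    res.status(200).send({ 'curva.txt': curva, 'meta.txt': meta });
  } catch (err) {
    res.status(400).send("Não foi possível buscar a curva.");
  }
};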
If the content is particularly large then you can stream the data across to the client; while doing so you could add some metadata or hints to tell the client what it is going to receive next and where each file ends.
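For illustration only, here is a crude sketch of that streaming idea; the newline-delimited JSON header written before each file is my own made-up framing, not a standard, so the client would have to be written to understand it:

const fs = require('fs');

function streamFiles(res, paths) {
  const next = function (i) {
    if (i >= paths.length) return res.end(); // no more files: end of the response
    const stat = fs.statSync(paths[i]);
    res.write(JSON.stringify({ file: paths[i], size: stat.size }) + '\n'); // metadata hint for the client
    const stream = fs.createReadStream(paths[i]);
    stream.pipe(res, { end: false }); // keep the response open for the next file
    stream.on('end', function () { next(i + 1); });
  };
  next(0);
}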
There are probably libraries that can already do the latter for you, so I would suggest you shop around on GitHub before designing/writing your own.
The former method is the easiest.
We work with very large files, so we divide them into chunks of 20 MB and then call the upload function to upload them to blob storage. I am calling upload() in Node.js, where I found that I am missing something during upload: only 20 MB gets uploaded each time, and I suspect Node.js is overwriting the content rather than appending the stream.
Can somebody help me fix it?
const chunkSize = Number(request.headers["x-content-length"]);
const userrole = request.headers["x-userrole"];
const pathname = request.headers["x-pathname"];

var form = new multiparty.Form();
form.parse(request, function (err, fields, files) {
  if (files && files["payload"] && files["payload"].length > 0) {
    var fileContent = fs.readFileSync(files["payload"][0].path);
    // log.error('fields', fields['Content-Type'])
    fs.unlink(files["payload"][0].path, function (err) {
      if (err) {
        log.error("Error in unlink payload:" + err);
      }
    });
    var size = fileContent.length;
    if (size !== chunkSize) {
      sendBadRequest(response, "Chunk uploading was not completed");
      return;
    }
    // converting chunk [buffers] to a readable stream
    const stream = Readable.from(fileContent);
    var options = {
      contentSettings: {
        contentType: fields['Content-Type']
      }
    };
    blobService.createBlockBlobFromStream(containerName, pathname, stream, size, options, error => {
    });
  }
});
Headers:
X-id: 6023f6f53601233c080b1369
X-Chunk-Id: 38
X-Content-Id: 43bfdbf4ddd1d7b5cd787dc212be8691d8dd147017a2344cb0851978b2d983c075c26c6082fd27a5147742b030856b0d71549e4d208d1c3c9d94f957e7ed1c92
X-pathname: 6023f6ae3601233c080b1365/spe10_lgr311_2021-02-10_09-08-37/output/800mb.inc
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryqodrlQNFytNS9wAc
X-Content-Name: 800mb.inc
The issue is that you're using createBlockBlobFromStream which will overwrite the contents of a blob. This method is used to create a blob in a single request (i.e. complete blob data is passed as input). From the documentation here:
Uploads a block blob from a stream. If the blob already exists on the
service, it will be overwritten. To avoid overwriting and instead
throw an error if the blob exists, please pass in an accessConditions
parameter in the options object.
In your case, you're uploading chunks of the data. What you would need to do is use the createBlockFromStream method for each chunk that you're uploading.
Once all chunks are uploaded, you would need to call the commitBlocks method to create the blob.
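As a hedged sketch of that change against the code in the question (the x-block-id header is an assumption; send the block id from the client however you prefer):

// replace the createBlockBlobFromStream call with a per-chunk block upload
const blockId = request.headers["x-block-id"]; // assumption: the client sends a block id with every chunk
blobService.createBlockFromStream(blockId, containerName, pathname, stream, size, error => {
  if (error) {
    sendBadRequest(response, "Failed to stage block " + blockId);
    return;
  }
  // the chunk is now staged; the blob itself only appears once commitBlocks is called
});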
UPDATE
How can we generate the blockId?
A block id is simply a string. The key thing to remember is that when you're calling createBlockFromStream, the block id of every block you send must have the same length. You can't use 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11... for example; you will have to use something like 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11... so that they're all of the same length. You can use a GUID for that purpose. Also, the maximum length of the block id is 50 characters.
Should it be unique for all?
Yes. For a blob, the block ids must be unique otherwise it will overwrite the content of a previous block uploaded with the same id.
Can you show example code so that I can try to implement it in a similar way?
Please take a look here.
Basically the idea is very simple: on your client side, you're chunking the file and sending each chunk separately. Apart from sending that data, you will also send a block id string, and you will need to keep that block id on your client side. You will repeat the same process for all the chunks of your file.
Once all chunks are uploaded successfully, you will make one more request to your server and send the list of all the block ids. Your server at that time will call commitBlocks to create the blob.
Please note that the order of block ids in your last request is important. Azure Storage Service will use this information to stitch the blob together.
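To make that concrete, here is a hedged sketch; the zero-padded, base64-encoded block ids and the LatestBlocks key reflect common azure-storage usage, so verify them against your SDK version:

// client side: build equal-length block ids while chunking (Buffer here; use btoa in a browser)
const blockIds = [];
for (let i = 0; i < totalChunks; i++) { // totalChunks is whatever your chunking loop computes
  blockIds.push(Buffer.from('block-' + String(i).padStart(6, '0')).toString('base64'));
}

// server side, final request: stitch the staged blocks together in order
blobService.commitBlocks(containerName, pathname, { LatestBlocks: blockIds }, error => {
  if (error) {
    log.error("commitBlocks failed: " + error);
    return;
  }
  // the blob now exists, assembled in the order given by blockIds
});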
I have been googling around but cannot find a clear answer to this.
I am making a chrome extension which records tabs. The idea is to stream getUserMedia to the backend using Websockets (specifically Socket.io) where the backend writes to a file until a specific condition (a value in the backend) is set.
The problem is that I do not know how I would call the backend with a specific ID, and also how I would correctly write to the file without corrupting it.
You are sending the output from MediaRecorder, via a websocket, to your back end.
Presumably you do this from within MediaRecorder's ondataavailable handler.
It's quite easy to stuff that data into a websocket:
function ondataavailable(event) {
  event.data.arrayBuffer().then(buf => { socket.emit('media', buf) })
}
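For context, a minimal client-side sketch of how that handler might be wired up (the backend URL, the mimeType and the 1000 ms timeslice are assumptions):

const socket = io('https://your-backend.example'); // assumed Socket.io client connection
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' }); // stream from getUserMedia (or chrome.tabCapture in an extension)
recorder.ondataavailable = ondataavailable; // the handler above
recorder.start(1000); // emit a Blob roughly every second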
In the backend you must concatenate all the data you receive, in the order of reception, to a media file. This media file likely has a video/webm MIME type; if you give it the .webm filename extension most media players will handle it correctly. Notice that the media file will be useless if it doesn't contain all the Blobs in order. The first few Blobs contain metadata needed to make sense of the stream. This is easy to do by appending each received data item to the file.
Server-side you can use the socket.id attribute to make up a file name; that gives you a unique file for each socket connection. Something like this sleazy poorly-performing non-debugged not-production-ready code will do that.
io.on("connection", (socket) => {
if (!socket.filename)
socket.filename = path.join(__dirname, 'media', socket.id + '.webm))
socket.on('filename', (name) => {
socket.filename = path.join(__dirname, 'media', name + '.webm))
})
socket.on('media', (buf) => {
fs.appendFile(buf, filename)
})
})
On the client side you could set the filename with this.
socket.emit('filename', 'myFavoriteScreencast')
I have created a zip file on the server side, and I would like to pass the file to the client side so that I can download it with the saveAs() function after putting it into a new Blob(). How can I do that?
const blob = new Blob([res.file], { type: 'application/zip' });
saveAs(blob, res.filename);
I wrote code like that, but I can't produce the right kind of buffer for the zip on the server.
How should I send the zip file so that the client side receives the right input type for the Blob constructor?
Once you get your zip ready, you can serve the file using the download() method to achieve that.
The snippet below will help you:
res.download('/report-12345.pdf', 'report.pdf', function (err) {
  if (err) {
    // Handle error, but keep in mind the response may be partially-sent
    // so check res.headersSent
  } else {
    // decrement a download credit, etc.
  }
})
You can read more details here
http://expressjs.com/en/5x/api.html#res.download
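On the client side, one hedged way to consume that response with the Blob/saveAs code you already have (the route name and file name are assumptions):

fetch('/your-zip-route')              // hypothetical endpoint served with res.download()
  .then(res => res.blob())            // keep the body as binary; no manual conversion needed
  .then(blob => saveAs(blob, 'report.zip'));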
Hope that will help you :)
I have an export function that read the entire database and create a .xls file with all the records. Then the file is sent to the client.
Of course, exporting the full database takes a lot of time, and the request will soon end in a timeout error.
What is the best solution to handle this case?
I heard something about making a queue with Redis, for example, but this would require two requests: one to start the job that generates the file and a second to download the generated file.
Is this possible with a single request from the client?
Excel Export:
Use Streams. Following is a rough idea of what might be done:
Use the exceljs module, because it has a streaming API aimed at this exact problem:
var Excel = require('exceljs')
Since we are trying to initiate a download, write appropriate headers to the response:
res.status(200);
res.setHeader('Content-disposition', 'attachment; filename=db_dump.xls');
res.setHeader('Content-type', 'application/vnd.ms-excel');
Create a workbook backed by the streaming Excel writer. The stream given to the writer is the server response:
var options = {
  stream: res, // write to server response
  useStyles: false,
  useSharedStrings: false
};
var workbook = new Excel.stream.xlsx.WorkbookWriter(options);
Now the output streaming flow is all set up. For the input streaming, prefer a DB driver that exposes query results/a cursor as a stream.
Define an async function that dumps one table to one worksheet:
var tableToSheet = function (name, done) {
  var str = dbDriver.query('SELECT * FROM ' + name).stream();
  var sheet = workbook.addWorksheet(name);
  str.on('data', function (d) {
    sheet.addRow(d).commit(); // format object if required
  });
  str.on('end', function () {
    sheet.commit();
    done();
  });
  str.on('error', function (err) {
    done(err);
  });
}
Now, let's export some DB tables using the async module's mapSeries:
async.mapSeries(['cars', 'planes', 'trucks'], tableToSheet, function (err) {
  if (err) {
    // log error
  }
  workbook.commit(); // finalize the streaming workbook; this also ends the response stream
})
CSV Export:
For CSV export of a single table/collection, the fast-csv module can be used:
// response headers as usual
res.status(200);
res.setHeader('Content-disposition', 'attachment; filename=mytable_dump.csv');
res.setHeader('Content-type', 'text/csv');
// create csv stream
var csv = require('fast-csv');
var csvStr = csv.createWriteStream({headers: true});
// open database stream
var dbStr = dbDriver.query('SELECT * from mytable').stream();
// connect the streams
dbStr.pipe(csvStr).pipe(res);
You are now streaming data from DB to HTTP response, converting it into xls/csv format on the fly. No need to buffer or store the entire data in memory or in a file.
You do not have to send the whole file at once; you can send it in chunks (line by line, for example). Just use res.write(chunk), and res.end() at the finish to mark it as completed.
You can either send the file information as a stream, sending each individual chunk as it gets created via res.write(chunk), or, if sending the file chunk by chunk is not an option and you have to wait for the entire file before sending any information, you can keep the connection open by disabling the response's inactivity timeout (or setting it to a value you think will be high enough to allow the file to be created). Then set up a function that creates the .xls file and either:
1) Accepts a callback that receives the data output as an argument once ready, sends that data, and then closes the connection, or;
2) Returns a promise that resolves with the data output once it's ready, allowing you to send the resolved value and close the connection just like with the callback version.
It would look something like this:
function xlsRouteHandler(req, res) {
  res.setTimeout(0) // 0 disables the inactivity timeout for this response

  //callback version
  createXLSFile(...fileCreationArguments, function (finishedFile) {
    res.end(finishedFile)
  })

  //promise version
  createXLSFile(...fileCreationArguments)
    .then(finishedFile => res.end(finishedFile))
}
If you still find yourself concerned about timing out, you can always set an interval timer to dispatch an occasional res.write() message to prevent a timeout on the server connection and then cancel that interval once the final file content is ready to be sent.
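A tiny sketch of that keep-alive trick; note that whatever you write becomes part of the response body, so this only makes sense when the client can tolerate the extra bytes (for example leading whitespace in a CSV, not a binary attachment):

const keepAlive = setInterval(function () { res.write(' '); }, 15000); // heartbeat every 15 s

createXLSFile(...fileCreationArguments).then(function (finishedFile) {
  clearInterval(keepAlive); // stop the heartbeat before sending the real payload
  res.end(finishedFile);
});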
Refer to this link, which uses Jedis (a Redis Java client).
The key to this is the RPOPLPUSH command:
https://blog.logentries.com/2016/05/queuing-tasks-with-redis/
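If you would rather stay in Node than Java, a rough sketch of the same reliable-queue pattern with ioredis (queue names and the job payload are made up) could be:

const Redis = require('ioredis');
const redis = new Redis();

// producer: the HTTP request only enqueues an export job and returns immediately
function enqueueExport(params) {
  return redis.lpush('export:jobs', JSON.stringify(params));
}

// worker: atomically move a job to a processing list, do the export, then remove it
async function processNextJob() {
  const job = await redis.rpoplpush('export:jobs', 'export:processing');
  if (!job) return;
  // ... generate the file, store it somewhere the client can fetch it later ...
  await redis.lrem('export:processing', 1, job);
}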
So I see this code in the docs:
Template.myForm.events({
  'change .myFileInput': function (event, template) {
    FS.Utility.eachFile(event, function (file) {
      Images.insert(file, function (err, fileObj) {
        // Inserted new doc with ID fileObj._id, and kicked off the data upload using HTTP
      });
    });
  }
});
But I don't want the file to upload immediately when I change 'myFileInput'; I want to store that value (from the input) and insert it later with a button. Is there some way to do this?
Also, is there a way to insert into an FS.Collection without a file, just metadata?
Sorry for the bad English, I hope you can help me.
Achieving what you want requires a trivial change of the event, i.e. switching from change .myFileInput to submit .myForm. In the submit event, you can get the file by selecting the file input, and then store it as an FS.File manually. Something like:
'submit .myForm': function (event, template) {
  event.preventDefault();
  var file = template.find('#input').files[0];
  file = new FS.File(file);
  // set metadata
  file.metadata = { 'caption': 'wow' };
  Images.insert(file, function (error, fileObj) {
    if (!error) {
      // do something with fileObj._id
    }
  });
}
If you're using autoform with CollectionFS, you can put that code inside the onSubmit hook. The loop you provided in your question works also.
As for your second question, I don't think FS.Files can be created without a size, so my guess is no, you can't just store metadata without attaching it to a file. Anyways, it seems to me kind of counterintuitive to store just metadata when the metadata is supposed to describe the associated image. You would be better off using a separate collection for that.
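If you go the separate-collection route, a minimal sketch (collection and field names are made up) would be:

// plain Meteor collection that holds only metadata, no file attached
ImageMeta = new Mongo.Collection('imageMeta');

// insert metadata on its own
ImageMeta.insert({ caption: 'wow', createdAt: new Date() });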
Hope that helped :)