Google Cloud Storage "invalid upload request" error. Bad request - node.js

I have been uploading files to Google Cloud Storage from my Node.js server for quite a long time now, but sometimes the upload fails. The error message returned is something like:
{
reason:"badRequest",
code: 400,
message: "Invalid upload request"
}
It happens randomly: roughly once every 25-30 days it starts failing for a while and then resolves itself.
It's kind of weird, and searching for it didn't turn up any solution or reason.
The upload request is sent for two files in parallel, both containing exactly the same data.
One was uploaded successfully and the other failed.
Code used:
const file = bucket.file(`data/${id}/${version}/abc.json`);
const dataBuffer = Buffer.from(JSON.stringify(dataToUpload));
file.save(dataBuffer, storageConfig)
.then(() => callback(null, true))
.catch(err => callback(err, null));
where storageConfig is
{
"contentType": "application/json",
"cacheControl": "public, max-age=600, s-maxage=3, no-transform"
}
and the second file name which is stored is
const file = bucket.file(`data/${id}/latest/abc.json`);
I am not able to find any reason for it and am unable to handle it.
It crashed my related systems, as they require that second file.

Setting resumable: false in the upload options solved the same error for me. For example: bucket.upload(pathToUpload, { destination: bucketPath, resumable: false })
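For reference, here is a rough sketch of the same workaround applied to the file.save() call from the question, assuming file.save() passes the resumable option through the same way createWriteStream() does (a sketch, not a verified fix):

// Same upload as in the question, with resumable: false added to the options object.
const storageConfig = {
  contentType: 'application/json',
  cacheControl: 'public, max-age=600, s-maxage=3, no-transform',
  resumable: false // send the small JSON payload in a single request instead of a resumable session
};

const file = bucket.file(`data/${id}/${version}/abc.json`);
const dataBuffer = Buffer.from(JSON.stringify(dataToUpload));

file.save(dataBuffer, storageConfig)
  .then(() => callback(null, true))
  .catch(err => callback(err, null));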

Related

busboy wait for field data before accepting file upload

Is something like this even possible, or are there better ways to do this? Is what I'm doing even a good idea, or is it a bad approach?
What I want to do is upload a file to my Node.js server. Along with the file I want to send some metadata. The metadata will determine whether the file can be saved and the upload accepted, or whether it should be rejected with a 403 response.
I am using busboy and I am sending FormData from my client side.
The example below is very much simplified:
Here is a snippet of the client-side code. I am appending the file as well as the metadata to the form:
const formData = new FormData();
formData.append('name', JSON.stringify({name: "John Doe"}));
formData.append('file', this.selectedFile, this.selectedFile.name);
Here is the nodejs side:
exports.Upload = async (req, res) => {
  try {
    let acceptUpload = false;
    const bb = busboy({ headers: req.headers });

    bb.on('field', (fieldname, val) => {
      // Verify data here before accepting the file upload
      const data = JSON.parse(val);
      if (data.name === 'John Doe') {
        acceptUpload = true;
      } else {
        acceptUpload = false;
      }
    });

    bb.on('file', (fieldname, file, filename, encoding, mimetype) => {
      if (acceptUpload) {
        const saveTo = '/upload/file.txt';
        file.pipe(fs.createWriteStream(saveTo));
      } else {
        const response = {
          message: 'Not Authorized'
        };
        res.status(403).json(response);
      }
    });

    bb.on('finish', () => {
      const response = {
        message: 'Upload Successful'
      };
      res.status(200).json(response);
    });

    req.pipe(bb);
  } catch (error) {
    console.log(error);
    const response = {
      message: error.message
    };
    res.status(500).json(response);
  }
};
So basically, is it even possible to have the 'file' event handler wait for the 'field' event handler to finish validating? How could one verify some metadata before accepting a file upload?
How can I do validation of all the data in the FormData object before accepting the file upload? Is this even possible, or are there other ways of uploading files with this kind of behaviour? I am even considering adding the data to the request headers, but this does not seem like the ideal solution.
Update
As I suspected, nothing waits. Whichever way I try, the upload first has to complete, and only afterwards is it rejected with a 403.
Another Update
I've tried the same thing with multer and got similar results. Even when I can do the validation, the file is still completely uploaded from the client side; only once the upload is complete is the request rejected. The file, however, never gets stored, even though it is uploaded in its entirety.
With busboy, nothing is written to the server if you do not execute the statement file.pipe(fs.createWriteStream(saveTo));
You can prevent more data from even being uploaded to the server by executing the statement req.destroy() in the .on("field", ...) or the .on("file", ...) event handler, even after you have already evaluated some of the fields. Note however, that req.destroy() destroys not only the current HTTP request but the entire TCP connection, which might otherwise have been reused for subsequent HTTP requests. (This applies to HTTP/1.1, in HTTP/2 the relationship between connections and requests is different.)
At any rate, it has no effect on the current HTTP request if everything has already been uploaded. Therefore, whether this saves any network traffic depends on the size of the file. And if the decision whether to req.destroy() involves an asynchronous operation, such as a database lookup, then it may also come too late.
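For illustration, a minimal sketch of that early abort, hooked into the 'field' handler from the question (this is an assumed placement, not code from the original post):

// Abort the upload as soon as a field fails validation.
// Because the TCP connection is destroyed, no 403 response reaches the client;
// curl just reports "Empty reply from server" or "Connection was reset".
bb.on('field', (fieldname, val) => {
  let data = {};
  try { data = JSON.parse(val); } catch (e) { /* ignore malformed JSON */ }
  if (data.name !== 'John Doe') {
    req.destroy(); // tears down the whole connection, stopping the upload mid-stream
  }
});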
Compare
> curl -v -F name=XXX -F file=@<small file> http://your.server
* We are completely uploaded and fine
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
with
> curl -v -F name=XXX -F file=@<large file> http://your.server
> Expect: 100-continue
< HTTP/1.1 100 Continue
* Send failure: Connection was reset
* Closing connection 0
curl: (55) Send failure: Connection was reset
Note that the client sets the Expect header before uploading a large file. You can use that fact in connection with a special request header name in order to block the upload completely:
http.createServer(app)
  .on("checkContinue", function (req, res) {
    if (req.headers["name"] === "John Doe") {
      res.writeContinue(); // sends HTTP/1.1 100 Continue
      app(req, res);
    } else {
      res.statusCode = 403;
      res.end("Not authorized");
    }
  })
  .listen(...);
But for small files, which are uploaded without the Expect request header, you still need to check the name header in the app itself.
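For completeness, a minimal sketch of that fallback check inside an Express app (the route path and middleware placement are hypothetical, only meant to illustrate the idea):

// Reject small uploads (sent without Expect: 100-continue) by checking the same
// custom "name" request header before the body is ever parsed.
app.post('/upload', (req, res, next) => {
  if (req.headers['name'] !== 'John Doe') {
    return res.status(403).json({ message: 'Not Authorized' });
  }
  next(); // hand over to the busboy-based upload handler
});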

AWS lambda function issue with FormData file upload

I have Node.js code which uploads files to an S3 bucket.
I have used the Koa web framework, and the following are the dependencies:
"@types/koa": "^2.0.48",
"@types/koa-router": "^7.0.40",
"koa": "^2.7.0",
"koa-body": "^4.1.0",
"koa-router": "^7.4.0",
Following is my sample router code:
import Router from "koa-router";

const router = new Router({ prefix: '/' });
router.post('file/upload', upload);

async function upload(ctx: any, next: any) {
  const files = ctx.request.files;
  if (files && files.file) {
    const extension = path.extname(files.file.name);
    const type = files.file.type;
    const size = files.file.size;
    console.log("file Size--------->:: " + size);
    sendToS3();
  }
}

function sendToS3() {
  const params = {
    Bucket: bName,
    Key: kName,
    Body: imageBody,
    ACL: 'public-read',
    ContentType: fileType
  };
  s3.upload(params, function (error: any, data: any) {
    if (error) {
      console.log("error", error);
      return;
    }
    console.log('s3Response', data);
    return;
  });
}
The request body is sent as FormData.
Now when I run this code locally and hit the request, the file gets uploaded to my S3 bucket and can be viewed.
In the console the logged file size is the correct, actual size of the file.
But when I deploy this code as a Lambda function and hit the request, I see in the CloudWatch logs that the file size has suddenly increased.
The file still gets uploaded to S3, but the issue is that when I open the file it shows an error.
I further checked whether this behaviour persisted on a standalone instance on AWS, but it did not. So the problem occurs only when the code is deployed as a serverless Lambda function.
I tried with Postman as well as my own front-end app, but the issue remains.
I don't know whether I have overlooked some configuration when setting up the Lambda function that handles such scenarios.
I have never encountered an issue like this before and would really like to know whether anyone else has. I am also not able to debug and find out why the file size is increasing. I can only assume that when the file reaches the service, some kind of encoding/padding is being applied to it.
I was finally able to fix this issue. I had to add a "Binary Media Type" in AWS API Gateway.
The following steps helped:
Go to the AWS API Gateway console -> your API -> "Settings" -> "Binary Media Types" section.
Add the following media type:
multipart/form-data
Save changes.
Deploy the API.
More info: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
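If you prefer to script this instead of clicking through the console, something like the following should work with the AWS SDK for JavaScript v2 (the restApiId value is a placeholder; treat this as a sketch of the same setting, not a verified recipe):

// Add multipart/form-data to the API's binary media types.
// The '/' inside the media type is escaped as '~1' per JSON Patch path syntax.
// The API still needs to be redeployed afterwards for the change to take effect.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

apigateway.updateRestApi({
  restApiId: 'your-rest-api-id', // placeholder
  patchOperations: [
    { op: 'add', path: '/binaryMediaTypes/multipart~1form-data' }
  ]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Binary media type added', data);
});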

Internal server error on Azure when writing a file from a buffer to the filesystem

Context
I am working on a proof of concept for an accounting bot. Part of the solution is the processing of receipts: the user takes a picture of a receipt, the bot asks some questions about it and stores it in the accounting solution.
Approach
I am using the Bot Framework Node.js example 15.handling attachments, which loads the attachment into an arraybuffer and stores it on the local filesystem, ready to be picked up and sent to the accounting software's API.
async function handleReceipts(attachments) {
  const attachment = attachments[0];
  const url = attachment.contentUrl;
  const localFileName = path.join(__dirname, attachment.name);
  try {
    const response = await axios.get(url, { responseType: 'arraybuffer' });
    if (response.headers['content-type'] === 'application/json') {
      response.data = JSON.parse(response.data, (key, value) => {
        return value && value.type === 'Buffer' ? Buffer.from(value.data) : value;
      });
    }
    fs.writeFile(localFileName, response.data, (fsError) => {
      if (fsError) {
        throw fsError;
      }
    });
  } catch (error) {
    console.error(error);
    return undefined;
  }
  return (`success`);
}
Running locally it all works like a charm (also thanks to mdrichardson - MSFT). Deployed on Azure, I get:
There was an error sending this message to your bot: HTTP status code InternalServerError
I narrowed the problem down to the second part of the code, the part that writes to the local filesystem (fs.writeFile). Small files and big files result in the same error on Azure. fs.writeFile seems unable to find the file.
What is happening according to the streaming logs:
The attachment uploaded by the user is saved on Azure:
{ contentType: 'image/png',contentUrl:
'https://webchat.botframework.com/attachments//0000004/0/25753007.png?t=< a very long string>',name: 'fromClient::25753007.png' }
localFileName (the destination of the attachment) resolves to:
localFileName: D:\home\site\wwwroot\dialogs\fromClient::25753007.png
Axios loads the attachment into an arraybuffer. Its response:
response.headers.content-type: image/png
This is interesting because locally it is 'application/octet-stream'
fs throws an error:
fsError: Error: ENOENT: no such file or directory, open 'D:\home\site\wwwroot\dialogs\fromClient::25753007.png'
Some assistance would be really appreciated.
Removing the fromClient:: prefix from attachment.name solved it. As @Sandeep mentioned in the comments, the special characters were probably the issue. I am not sure what the prefix's purpose is. I will mention it in the Bot Framework samples GitHub repository.
[Update] The team will fix this. It was caused by the Direct Line service.
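In case it helps someone else, a small sketch of the workaround applied to the code above (one way to drop the prefix; the commented regex variant is an alternative I am adding, not something from the original answer):

// 'fromClient::25753007.png' -> '25753007.png'
const safeName = attachment.name.split('::').pop();
// or, more defensively, replace anything that is not a valid Windows filename character:
// const safeName = attachment.name.replace(/[<>:"\/\\|?*]/g, '_');
const localFileName = path.join(__dirname, safeName);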

Streaming a zip download from cloud functions

I have a Firebase Cloud Function that uses Express to stream a zip file of images to the client. When I test the Cloud Function locally it works fine. When I deploy to Firebase I get this error:
Error: Can't set headers after they are sent.
What could be causing this error? Memory limit?
export const zipFiles = async (name, params, response) => {
  const zip = archiver('zip', { zlib: { level: 9 } });
  const [files] = await storage.bucket(bucketName).getFiles({ prefix: `${params.agent}/${params.id}/deliverables` });
  if (files.length) {
    response.attachment(`${name}.zip`);
    response.setHeader('Content-Type', 'application/zip');
    response.setHeader('Access-Control-Allow-Origin', '*');
    zip.pipe(output);
    response.on('close', function () {
      return output.send('OK').end(); // <-- this is the line that fails
    });
    files.forEach((file, i) => {
      const reader = storage.bucket(bucketName).file(file.name).createReadStream();
      zip.append(reader, { name: `${name}-${i + 1}.jpg` });
    });
    zip.finalize();
  } else {
    output.status(404).send('Not Found');
  }
};
What Frank said in the comments is true. You need to decide all your headers, including the HTTP response status, before you start sending any of the content body.
If you intend to express that you're sending a successful response, simply say output.status(200) in the same way that you did for your 404 error. Do that up front. When you're piping a response, you don't need to do anything to close the response at the end. When the pipe is done, the response will automatically be flushed and finalized. You're only supposed to call end() when you want to bail out early without sending a response at all.
Bear in mind that Cloud Functions only supports a maximum payload of 10MB (read more about limits), so if you're trying to zip up more than that total, it won't work. In fact, there are no "streaming" or chunked responses at all. The entire payload is built in memory and transferred out as a unit.
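Applied to the function in the question, a corrected version might look roughly like this (a sketch based on the advice above, assuming output was meant to be response):

// Decide status and headers up front, then pipe the archive into the response
// and let the pipe flush and finalize it; no extra send()/end() afterwards.
export const zipFiles = async (name, params, response) => {
  const zip = archiver('zip', { zlib: { level: 9 } });
  const [files] = await storage.bucket(bucketName)
    .getFiles({ prefix: `${params.agent}/${params.id}/deliverables` });

  if (!files.length) {
    return response.status(404).send('Not Found');
  }

  response.status(200);
  response.attachment(`${name}.zip`);
  response.setHeader('Content-Type', 'application/zip');
  response.setHeader('Access-Control-Allow-Origin', '*');

  zip.pipe(response);

  files.forEach((file, i) => {
    const reader = storage.bucket(bucketName).file(file.name).createReadStream();
    zip.append(reader, { name: `${name}-${i + 1}.jpg` });
  });
  zip.finalize();
};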

Failing to get a response back from a web api call in node

This particular Node issue has been driving me crazy for going on a week.
I have to create a layer of friction (a modal that asks the user if they're sure) in the process of a CSV file upload. Essentially, the flow will be:
User clicks 'UPLOAD SPREADSHEET' > file uploads to S3 > S3 returns a reference key > pass the reference key into a microservice web API to evaluate > if true => ask the user if they're sure > if the user is sure, continue uploading > pass the reference key onward to another endpoint on the same service to finish the upload. A false return would continue on to the upload with no modal.
It's kind of a silly product-driven feature that makes a show of alerting the user to potential duplicate entries in their spreadsheet, since we can't currently detect duplicate entries ourselves.
The problem is, I can't get a response to return from the evaluation to save my life. If I console.log the response, I can see it in Node's terminal window, but nothing comes back in the network tab for the response. I'm not sure whether it's because it's a file upload, whether it's busboy, or whether it's just not the right syntax for the response type, but endless googling has brought me no answers and I'd love it if someone more experienced with Node and Express could take a look.
router.post('/import/csv',
  // a bunch of aws s3 stuff to upload the file and return the key
  s3.upload(uploadParams, (err, data) => {
    if (err) {
      res.status(500).send({
        error_message: 'Unable to upload csv. Please try again.',
        error_data: err
      });
    } else if (data) {
      // creating the key object to pass in
      const defaultImportCheck = {
        body: data.Key
      };
      // endpoint that will evaluate the s3 reference key
      SvcWebApiClient.guestGroup.defaultImportCheck(defaultImportCheck)
        .then((response) => {
          if (response.status === 'success') {
            // where the response should be. this works but doesn't actually send anything.
            res.send(response);
          } else {
            const errorJson = {
              message: response.message,
              category: response.category,
              trigger: response.trigger,
              errors: response.errors
            };
            res.status(500).send(errorJson);
          }
        })
        .catch((error) => {
          res.status(500).send({
            error_message: 'Unable to upload csv. Please try again.',
            error_data: error
          });
        });
    }
  });
  });
  req.pipe(busboy);
}
);
Got it, for anyone who ends up having my kind of problem. It's a two-parter, so buckle up.
1) The action function that handles the response on the React side didn't convert the response into JSON. Apparently, what gets returned is a "readable stream", which should then have been converted to JSON. It wasn't.
2) The response itself needed to be JSON as well.
So, from the action function:
export function csvUpload(file) {
  // do some stuff
  return fetch(/* fetch some stuff */, { /* with some parameters */ })
    .then(/* some error stuff */)
    .then(response => response.response.json());
}
Then, from the POST request:
if (response.status === "success") {
  res.json({ valid: response.data, token: data.Key });
}
This returns an object with what I need back to the client. Hope this helps someone else.
