Azure functions messing up gzipped POST data - node.js

Currently I'm implementing a webhook whose documentation states that the request sent to the configured endpoint will be gzipped, and I'm running into a weird bug with that.
I created a middleware to handle the gunzip of the request data:
const buffer: Buffer[] = [];
request
  .on("data", (chunk) => {
    buffer.push(Buffer.from(chunk));
  })
  .on("end", () => {
    const concatBuff: Buffer = Buffer.concat(buffer);
    zlib.gunzip(concatBuff, (err, buff) => {
      if (err) {
        console.log("gunzip err", err);
        return next(err);
      }
      request.body = buff.toString();
      next();
    });
  });
I added this middleware before all the other body-parser middlewares to avoid any incompatibility with them.
I'm testing it with this curl command:
cat webhook.txt | gzip | curl -v -i --data-binary @- -H "Content-Encoding: gzip" http://localhost:3334
On this server, which uses azure-function-express, I'm getting this error:
[1/9/2020 22:36:21] gunzip err Error: incorrect header check
[1/9/2020 22:36:21] at Zlib.zlibOnError [as onerror] (zlib.js:170:17) {
[1/9/2020 22:36:21] errno: -3,
[1/9/2020 22:36:21] code: 'Z_DATA_ERROR'
[1/9/2020 22:36:21] }
[1/9/2020 22:36:21]
It seems the error occurs because the buffer does not start with the gzip magic number (1f 8b):
<Buffer 1f ef bf bd 08 00 ef bf bd ef bf bd 4e 5f 00 03 ef bf bd 5d 6d 73 db b8 11 ef bf bd ef bf bd 5f ef bf bd e1 97 bb 6b 7d 16 ef bf bd 77 ef bf bd 73 ef ... 4589 more bytes>
Bytes that are not valid UTF-8 have been replaced with ef bf bd (the UTF-8 encoding of the replacement character), which suggests the binary body was decoded as text somewhere before it reached the middleware.
But here is the weird thing: I created a new Express application to test this with exactly the same curl command, and it works perfectly there. So it seems there is some problem with createAzureFunctionHandler, or I'm missing something.
Has anyone experienced this kind of problem with Azure Functions?
Any idea what Azure is doing to the gzipped data?

I just got an answer from the Azure team: they recommended setting up a proxy inside proxies.json as a workaround. So if anyone is having the same issue, you can add a proxy that overrides the Content-Type.
In my case I was always expecting gzipped JSON, so if you don't know the content type beforehand this might not work for you.
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "RequireContentType": {
      "matchCondition": {
        "route": "/api/HttpTrigger"
      },
      "backendUri": "https://proxy-example.azurewebsites.net/api/HttpTrigger",
      "requestOverrides": {
        "backend.request.headers.content-type": "application/octet-stream",
        "backend.request.headers.request-content-type": "'{request.headers.content-type}'"
      }
    }
  }
}
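For completeness, here is a minimal sketch (not part of the original answer) of how the middleware could use the request-content-type header that the proxy forwards, assuming an Express-style setup where the payload is always gzipped JSON:
const zlib = require("zlib");

// Hypothetical middleware: the proxy override above forwards the body as
// application/octet-stream and copies the original Content-Type into the
// "request-content-type" header, so it can still be inspected here.
function gunzipJsonBody(request, response, next) {
  const originalType = request.headers["request-content-type"] || "";
  const chunks = [];
  request
    .on("data", (chunk) => chunks.push(Buffer.from(chunk)))
    .on("end", () => {
      zlib.gunzip(Buffer.concat(chunks), (err, buff) => {
        if (err) return next(err);
        try {
          // In my case the webhook always sends gzipped JSON.
          request.body = originalType.includes("json")
            ? JSON.parse(buff.toString())
            : buff.toString();
          next();
        } catch (parseErr) {
          next(parseErr);
        }
      });
    });
}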

Related

Gulp image-min - How to continue after an error?

I am trying to run an image optimisation task on a folder with thousands of images. Unfortunately some of them are corrupted, so the task keeps failing.
I have tried to capture the error and continue, but it's not working: once it hits a corrupted image the task aborts. Is there a way for it to keep running?
gulp.task('imgCompress', (done) => {
  gulp.src(imgMedia)
    .pipe(imagemin([
      mozjpeg({
        quality: 75
      }),
      pngquant({
        quality: [0.65, 0.80]
      })
    ]))
    .on('error', gutil.log)
    .pipe(gulp.dest(imgMediaDest))
  done();
});
Error:
{ Error: write EPIPE
at WriteWrap.afterWrite [as oncomplete] (net.js:789:14)
errno: 'EPIPE',
code: 'EPIPE',
syscall: 'write',
stdout:
<Buffer ff d8 ff e0 00 10 4a 46 49 46 00 01 01 00 00 01 00 01 00 00 ff e1 2b 18 45 78 69 66 00 00 4d 4d 00 2a 00 00 00 08 00 0b 01 0f 00 02 00 00 00 06 00 00 ... >,
stderr: <Buffer >,
failed: true,
signal: null,
cmd:
'/private/var/www/website/m2/tools-media/node_modules/mozjpeg/vendor/cjpeg -quality 75',
timedOut: false,
killed: false,
message: 'write EPIPE',
name: 'Error',
stack:
'Error: write EPIPE\n at WriteWrap.afterWrite [as oncomplete] (net.js:789:14)',
__safety: undefined,
_stack: undefined,
plugin: 'gulp-imagemin',
showProperties: true,
showStack: false,
fileName:
'/private/var/www/website/m2/pub/media/catalog/product/1/i/1img_5232.jpg' }
You should be fine with gulp-plumber, which lets the stream continue after an error and even has an option to log error messages automatically, without you having to handle error events yourself.
Also note that you should not call done right after creating the stream, but only after the stream has terminated (you could, for example, listen for finish events; see the sketch after the code below). It's easier, though, to simply return the stream from the task function.
const plumber = require('gulp-plumber');

gulp.task('imgCompress', () => {
  return gulp.src(imgMedia)
    .pipe(plumber({ errorHandler: true }))
    .pipe(imagemin([
      mozjpeg({
        quality: 75
      }),
      pngquant({
        quality: [0.65, 0.80]
      })
    ]))
    .pipe(gulp.dest(imgMediaDest));
});
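If you do need the done callback (for example because the task has more work to do after the images are written), a minimal sketch of the finish-event approach mentioned above; this variant is illustrative, not from the original answer:
const plumber = require('gulp-plumber');

gulp.task('imgCompress', (done) => {
  gulp.src(imgMedia)
    .pipe(plumber({ errorHandler: true }))
    .pipe(imagemin([
      mozjpeg({ quality: 75 }),
      pngquant({ quality: [0.65, 0.80] })
    ]))
    .pipe(gulp.dest(imgMediaDest))
    // Signal completion only once everything has been written to disk.
    .on('finish', done);
});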

How to save a FormData image file locally with "fs" in Node

I have a React front end that lets the user send an image to the Node backend using FormData().
In the Express backend's controller, I am using multer as a middleware to grab the image into the files variable:
private initializeRoutes() {
  this.router.post(`${this.path}/image`, multerUpload.array("filesToUpload[]"), this.saveImage);
}
In the backend's service, I am trying to save files[0] into the uploads folder:
public saveImage(files: any) {
  console.log("OtherServiceLocal saveImage - files :");
  console.log(files[0]); // see screenshot at bottom for output
  let localPath = fs.createWriteStream("./uploads/image.png");
  files[0].buffer.pipe(localPath);
}
But I am getting the error:
I tried piping file[0] and file[0].buffer, to no avail, and I had difficulty understanding how to turn the buffer into a stream even after some research.
This is the output of console.log(files[0]):
{
  fieldname: 'filesToUpload[]',
  originalname: 'Screen Shot 2021-09-08 at 3.42.48 PM.png',
  encoding: '7bit',
  mimetype: 'image/png',
  buffer: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 03 85 00 00 02 c5 08 06 00 00 00 74 ff 46 78 00 00 00 01 73 52 47 42 00 ae ce 1c e9 00 00 00 62 ... 249405 more bytes>,
  size: 249455
}
Please note that I am aware that you can use multer's upload method to save the image as a middleware directly in the router, but I can't do this due to some requirements of my app.
Thanks,
files[0].buffer is an instance of Buffer, so you can use fs.writeFile directly:
fs.writeFile("./uploads/image.png", files[0].buffer, (err) => {
  if (err) console.error(err);
});
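If you would rather keep the write-stream approach from the question: a Buffer is not a stream, so it cannot be piped directly, but you can either end the stream with it or wrap it in a readable stream first. A minimal sketch, assuming Node's built-in stream module:
const fs = require("fs");
const { Readable } = require("stream");

// Option 1: write the buffer and close the stream in one call.
const out = fs.createWriteStream("./uploads/image.png");
out.end(files[0].buffer);

// Option 2: wrap the buffer in a readable stream and pipe it.
Readable.from([files[0].buffer]).pipe(fs.createWriteStream("./uploads/image.png"));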

How to use Nginx NJS with native nodejs modules and webpack?

There is a guide, https://nginx.org/en/docs/njs/node_modules.html, that describes how to use 'native' Node.js modules with njs.
I followed the guide until I reached the part I did not understand (marked in bold):
Note that in this example generated code is not wrapped into function
and we do not need to call it explicitly. The result is in the "dist" directory:
$ cat dist/wp_out.js code.js > njs_dns_bundle.js
Let's call our code at the end of a file: <<<--- HERE
var b = set_buffer(global.dns);
console.log(b);
And execute it using node:
$ node ./njs_dns_bundle_final.js
My question is: how do I include / require / import the webpack-generated njs_dns_bundle.js into njs_dns_bundle_final.js (the file where we "call our code at the end of a file")? Without it I get the error:
njs_dns_bundle_final.js:1
var b = set_buffer(global.dns);
ReferenceError: set_buffer is not defined
My code.js:
module.exports = {
  hello: function set_buffer(dnsPacket) {
    // create DNS packet bytes
    var buf = dnsPacket.encode({
      type: 'query',
      id: 1,
      flags: dnsPacket.RECURSION_DESIRED,
      questions: [{
        type: 'A',
        name: 'google.com'
      }]
    })
    return buf;
  }
}
My njs_dns_bundle_final.js:
var myModule = require('./njs_dns_bundle');
var b = myModule.hello(global.dns);
console.log(b);
Node runs it fine, I think:
node ./njs_dns_bundle_final.js
<Buffer 00 01 01 00 00 01 00 00 00 00 00 00 06 67 6f 6f 67 6c 65 03 63 6f 6d 00 00 01 00 01>
NJS does not:
njs ./njs_dns_bundle_final.js
Thrown:
Error: Cannot find module "./njs_dns_bundle"
at require (native)
at main (njs_dns_bundle_final.js:1)
Thanks

Node.js: Download file from s3 and unzip it to a string

I am writing an AWS Lambda function which needs to download a file from AWS S3, unzip it, and return the content as a string.
I am trying this:
function getObject(key){
  var params = {
    Bucket: "my-bucket",
    Key: key
  }
  return new Promise(function (resolve, reject){
    s3.getObject(params, function (err, data){
      if(err){
        reject(err);
      }
      resolve(zlib.unzipSync(data.Body))
    })
  })
}
But I am getting the error:
Error: incorrect header check
at Zlib._handle.onerror (zlib.js:363:17)
at Unzip.Zlib._processChunk (zlib.js:524:30)
at zlibBufferSync (zlib.js:239:17)
The data looks like this
{ AcceptRanges: 'bytes',
LastModified: 'Wed, 16 Mar 2016 04:47:10 GMT',
ContentLength: '318',
ETag: '"c3xxxxxxxxxxxxxxxxxxxxxxxxx"',
ContentType: 'binary/octet-stream',
Metadata: {},
Body: <Buffer 50 4b 03 04 14 00 00 08 08 00 f0 ad 6f 48 95 05 00 50 84 00 00 00 b4 00 00 00 2c 00 00 00 30 30 33 32 35 2d 39 31 38 30 34 2d 37 34 33 30 39 2d 41 41 ... >
}
The Body buffer contains zip-compressed data (identifiable by the PK signature, 50 4b 03 04, in the first few bytes), which is not plain zlib/gzip data, so zlib.unzipSync cannot handle it.
You will need to use a zip module to parse the data and extract the files within. One such library is yauzl, which has a fromBuffer() method that you can pass your buffer to in order to get the file entries.
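A minimal sketch of reading the first entry's content as a string with yauzl; the helper name and error handling are illustrative, not part of the original answer:
var yauzl = require("yauzl");

function unzipToString(buffer, callback) {
  yauzl.fromBuffer(buffer, { lazyEntries: true }, function (err, zipfile) {
    if (err) return callback(err);
    zipfile.readEntry();
    zipfile.on("entry", function (entry) {
      zipfile.openReadStream(entry, function (err, readStream) {
        if (err) return callback(err);
        var chunks = [];
        readStream.on("data", function (chunk) { chunks.push(chunk); });
        readStream.on("end", function () {
          callback(null, Buffer.concat(chunks).toString());
        });
      });
    });
  });
}

// Usage with the S3 response from getObject:
// unzipToString(data.Body, function (err, content) { ... });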

How to replicate a curl command with the nodejs request module?

How can I replicate this curl request:
$ curl "https://s3-external-1.amazonaws.com/herokusources/..." \
-X PUT -H 'Content-Type:' --data-binary @temp/archive.tar.gz
With the node request module?
I need to do this to PUT a file up on AWS S3 and to match the signature provided by Heroku in the put_url from Heroku's sources endpoint API output.
I have tried this (where source is the Heroku sources endpoint API output):
// PUT tarball
function(source, cb){
  putUrl = source.source_blob.put_url;
  urlObj = url.parse(putUrl);
  var options = {
    headers: {},
    method : 'PUT',
    url : urlObj
  }
  fs.createReadStream('temp/archive.tar.gz')
    .pipe(request(
      options,
      function(err, incoming, response){
        if (err){
          cb(err);
        } else {
          cb(null, source);
        }
      }
    ));
}
But I get the following SignatureDoesNotMatch error.
<?xml version="1.0"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <AWSAccessKeyId>AKIAJURUZ6XB34ESX54A</AWSAccessKeyId>
  <StringToSign>PUT\n\nfalse\n1424204099\n/heroku-sources-production/heroku.com/d1ed2f1f-4c81-43c8-9997-01706805fab8</StringToSign>
  <SignatureProvided>DKh8Y+c7nM/6vJr2pabvis3Gtsc=</SignatureProvided>
  <StringToSignBytes>50 55 54 0a 0a 66 61 6c 73 65 0a 31 34 32 34 32 30 34 30 39 39 0a 2f 68 65 72 6f 6b 75 2d 73 6f 75 72 63 65 73 2d 70 72 6f 64 75 63 74 69 6f 6e 2f 68 65 72 6f 6b 75 2e 63 6f 6d 2f 64 31 65 64 32 66 31 66 2d 34 63 38 31 2d 34 33 63 38 2d 39 39 39 37 2d 30 31 37 30 36 38 30 35 66 61 62 38</StringToSignBytes>
  <RequestId>A7F1C5F7A68613A9</RequestId>
  <HostId>JGW6l8G9kFNfPgSuecFb6y9mh7IgJh28c5HKJbiP6qLLwvrHmESF1H5Y1PbFPAdv</HostId>
</Error>
Here is an example of what the Heroku sources endpoint API output looks like:
{ source_blob:
{ get_url: 'https://s3-external-1.amazonaws.com/heroku-sources-production/heroku.com/2c6641c3-af40-4d44-8cdb-c44ee5f670c2?AWSAccessKeyId=AKIAJURUZ6XB34ESX54A&Signature=hYYNQ1WjwHqyyO0QMtjVXYBvsJg%3D&Expires=1424156543',
put_url: 'https://s3-external-1.amazonaws.com/heroku-sources-production/heroku.com/2c6641c3-af40-4d44-8cdb-c44ee5f670c2?AWSAccessKeyId=AKIAJURUZ6XB34ESX54A&Signature=ecj4bxLnQL%2FZr%2FSKx6URJMr6hPk%3D&Expires=1424156543'
}
}
Update
The key issue here is that the PUT request I send with the request module should be the same as the one sent with curl, because I know the curl request matches the expectations of the AWS S3 "Uploading Objects Using Pre-Signed URLs" API. Heroku generates the PUT URL, so I have no control over its creation. I do know that the curl command works, as I have tested it -- which is good, since it is the example provided by Heroku.
I am using curl 7.35.0 and request 2.53.0.
The Amazon API doesn't like chunked uploads; the file needs to be sent unchunked. Reading the file into memory and passing it as the request body lets request send a Content-Length instead of using chunked transfer encoding. So here is the code that works:
// PUT tarball
function(source, cb){
  console.log('Uploading tarball...');
  putUrl = source.source_blob.put_url;
  urlObj = url.parse(putUrl);
  fs.readFile(config.build.temp + 'archive.tar.gz', function(err, data){
    if (err){ cb(err); }
    else {
      var options = {
        body : data,
        method : 'PUT',
        url : urlObj
      };
      request(options, function(err, incoming, response){
        if (err){ cb(err); } else { cb(null, source); }
      });
    }
  });
},
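If the tarball is too large to read into memory, a possible variation (not from the original answer) is to keep the read stream but supply the Content-Length yourself, so the upload is not sent chunked; a sketch assuming the same fs, url, and request modules:
// PUT tarball, streamed with an explicit Content-Length
function(source, cb){
  var filePath = 'temp/archive.tar.gz';
  fs.stat(filePath, function(err, stats){
    if (err){ return cb(err); }
    var options = {
      method : 'PUT',
      url : url.parse(source.source_blob.put_url),
      headers: { 'Content-Length': stats.size }
    };
    fs.createReadStream(filePath)
      .pipe(request(options, function(err, incoming, response){
        if (err){ cb(err); } else { cb(null, source); }
      }));
  });
}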
