Reading and sending a PDF from Node to client - blank pages - node.js

I'm currently reading a PDF via a Node backend, sending it through an API gateway layer and on to the client. When the response hits the client, however, the PDF is downloaded with the correct number of pages but is completely blank. I've tried setting the encoding in a number of ways, but with no luck. When setting the encoding to binary and running a diff of the downloaded PDF against the original, there are no visible differences, even though the file sizes differ.
Node backend:
`
export async function generatePDF (req, res, next) {
  try {
    const fStream = fs.createReadStream(path.join(__dirname, 'businesscard.pdf'), { encoding: 'binary' }) // have also tried without the binary encoding
    return fStream.pipe(res)
  } catch (err) {
    res.send(err)
  }
}
`
The API Gateway simply sends a request to the node backend and sets the content type before sending it on:
`
res.setHeader('Content-Type', 'application/pdf')
`
Frontend:
`
function retrievePDF () {
  return fetch('backendurlhere', {
    method: 'GET',
    headers: { 'Content-Type': 'application/pdf' },
    credentials: 'include'
  })
    .then(response => {
      return response.text()
    })
    .catch(err => {
      console.log('ERR', err)
    })
}
`
retrievePDF is called, and then the following is performed in a React component:
`
generatePDF () {
  this.props.retrievePDF()
    .then(pdfString => {
      const blob = new Blob([pdfString], { type: 'application/pdf' })
      const objectUrl = window.URL.createObjectURL(blob)
      window.open(objectUrl)
    })
}
`
The string representation of the response looks a bit like this (simply a sample):
`
%PDF-1.4
1 0 obj
<<
/Title (þÿ)
/Creator (þÿ)
/Producer (þÿQt 5.5.1)
/CreationDate (D:20171003224921)
>>
endobj
2 0 obj
<<
/Type /Catalog
/Pages 3 0 R
>>
endobj
4 0 obj
<<
/Type /ExtGState
/SA true
/SM 0.02
/ca 1.0
/CA 1.0
/AIS false
/SMask /None>>
endobj
5 0 obj
[/Pattern /DeviceRGB]
endobj
6 0 obj
<<
/Type /Page
/Parent 3 0 R
/Contents 8 0 R
/Resources 10 0 R
/Annots 11 0 R
/MediaBox [0 0 142 256]
>>
endobj
10 0 obj
<<
/ColorSpace <<
/PCSp 5 0 R
/CSp /DeviceRGB
/CSpg /DeviceGray
>>
/ExtGState <<
/GSa 4 0 R
>>
/Pattern <<
>>
/Font <<
/F7 7 0 R
>>
/XObject <<
>>
>>
endobj
11 0 obj
[ ]
endobj
8 0 obj
<<
/Length 9 0 R
/Filter /FlateDecode
>>
stream
xåW]kÂ0}ϯ¸ÏÕ$mÆ`V6{{ºÊûûKÓ´vS¥N_f°WsÒ{ÏýÈMÛ»<ÑëzÙä¦Af&»q^©4MlE+6fcw-äUwp?ÖÓ%ëºX93Éî/tã¾·næ5Å¢trîeaiÎx-ù7vFËCí5nl¢¸Myláïmå·Ïgö²G±T ¹ïÒZk¢ð£¹¼)<äµµwm7ösÖ2¿P#¥ryëþèò]pÎÅ%åïÌDRqÿ)ôHTxpÄQOtjTI"ØBGd¤º
¢=¢£8Ú¶c¢téÑIþ¶c¡¶æ.ÇK»¾
ä¥.Inþ)(ÚbX¹Mqs«b²5B¡vÚ ò·ÚNeçmÇ.![¨±87¿ÜÂõ[H ¢à>ëRÄ]ZNæÚÂú¿·PWÒU4¢ØR]Ê®Kj±6\\ÐNØFG¬Ô;ÝRLüݱP[>·~'½%ä8M8丸0ýiiÕ}ت³S$=N*s'>¹³§VùGfûÉU`ËÁ¥wú®FéC^½"òºBcö
Ùå#endstream
endobj
`
The HTTP response looks as follows:
`
access-control-allow-credentials: true
access-control-allow-origin: http://frontend.dev.com
access-control-expose-headers: api-version, content-length, content-md5, content-type, date, request-id, response-time
Connection: keep-alive
Content-Encoding: gzip
Content-Type: application/octet-stream
Date: Wed, 09 May 2018 09:37:22 GMT
Server: nginx/1.13.3
Transfer-Encoding: chunked
vary: origin
`
I've also tried other methods of reading the file, such as readFileSync, and constructing chunks via fStream.on('data') and sending them back as a Buffer. Nothing seems to work.
Note: I'm using Restify (not Express).
Edit:
Running the file through a validator shows the following:
`
File teststring.pdf
Compliance pdf1.4
Result Document does not conform to PDF/A.
Details
Validating file "teststring.pdf" for conformance level pdf1.4
The 'xref' keyword was not found or the xref table is malformed.
The file trailer dictionary is missing or invalid.
The "Length" key of the stream object is wrong.
Error in Flate stream: data error.
The "Length" key of the stream object is wrong.
Error in Flate stream: data error.
The document does not conform to the requested standard.
The file format (header, trailer, objects, xref, streams) is corrupted.
The document does not conform to the PDF 1.4 standard.
Done.
`

For anyone having issues: I found out that in my gateway layer, the request was wrapped in a utility function that performed a text read on the response, i.e.
return response.text()
I removed this and instead piped the response from the backend:
fetch('backendurl')
  .then(({ body }) => {
    body.pipe(res)
  })
Hopefully this helps anyone employing the gateway pattern with similar issues.
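For context, here is a minimal sketch of what the fixed gateway handler might look like (the handler name and URL are placeholders from the question, not the author's actual code). Reading a binary body with response.text() decodes it as UTF-8, which mangles byte sequences that aren't valid UTF-8; piping the raw stream avoids any decoding:
`
const fetch = require('node-fetch')

async function proxyPDF (req, res, next) {
  // Stream the backend response straight through; never read it as text.
  const backendRes = await fetch('backendurlhere')
  res.setHeader('Content-Type', 'application/pdf')
  backendRes.body.pipe(res) // node-fetch exposes the body as a Node stream
}
`
The same pitfall exists client-side: response.blob() preserves the bytes, whereas response.text() does not.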


How to customize the format of the test output in Node.js native testing module with existing Node tap formatters?

I read an intriguing snippet here:
Back to the Node.js test runner: it outputs in TAP, which is the “test anything protocol”; here's the specification: testanything.org/tap-specification.html.
That means you can take your output and pipe it into existing formatters, and there was already node-tap as a userland runner implementation.
The default native Node.js test output is terrible. How can I wire up a basic hello-world test using a community TAP formatter?
This tap-parser seems like a good candidate, but what I'm missing is how to connect it to the node:test module. Can you show a quick hello-world script for writing a test and customizing the output formatting using something like this parser, or any other mechanism?
I have this in a test file:
import { run } from 'node:test'
const files = ['./example.test.js']
// --test-name-pattern="test [1-3]"
run({ files }).pipe(process.stdout)
And I have in example.test.js:
import test from 'node:test'
test('foo', () => {
console.log('start')
})
However, I am getting empty output:
$ node ./test.js
TAP version 13
# Subtest: ./example.test.js
ok 1 - ./example.test.js
---
duration_ms: 415.054557
...
1..1
# tests 1
# pass 1
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 417.190938
Any ideas what I'm missing? When I run the file directly, it works:
$ node ./example.test.js
TAP version 13
# Subtest: foo
ok 1 - foo
---
duration_ms: 1.786378
...
1..1
# tests 1
# pass 1
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 210.586371
It looks like if I throw an error in the nested example.test.js test, the error will surface in the test.js file. However, how do I get access to the underlying test data?
If anyone else is interested, I found another (a bit hacky) solution to get all the TAP data as a string (and then do whatever with it):
// getTapData.js
import { run } from 'node:test';

const getTapDataAsync = (testFiles) => {
  let allData = '';
  return new Promise((resolve, reject) => {
    const stream = run({
      files: testFiles,
    });
    stream.on('data', (data) => (allData += data.toString()));
    stream.on('close', () => resolve(allData));
    stream.on('error', (err) => reject(err));
  });
};
Then you can call that function and get all of the TAP/Node unit test data as a string:
// testRunner.js
const tapAsString = await getTapDataAsync(testFiles);
console.log(tapAsString);
Seems the best way so far is to just go low-level like this:
import cp from 'child_process'
import { Parser } from 'tap-parser'

const p = new Parser(results => console.dir(results))
p.on('pass', data => console.log('pass', data))
p.on('fail', data => console.log('fail', data))

const child = cp.spawn(
  'node',
  ['./host/link/test/parser/index.test.js'],
  {
    stdio: [null, 'pipe', 'inherit'],
  },
)
child.stdout.pipe(p)
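Since run() already emits TAP text when piped (as the output above shows), it may also be possible to skip the child process and pipe the run() stream straight into the parser. A minimal sketch, assuming a Node version whose run() stream produces TAP:

import { run } from 'node:test'
import { Parser } from 'tap-parser'

// Parser consumes the TAP text and emits structured events.
const parser = new Parser(results => console.dir(results))
parser.on('pass', data => console.log('pass', data))
parser.on('fail', data => console.log('fail', data))

run({ files: ['./example.test.js'] }).pipe(parser)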

node.js - Download PDF via HTTP POST and upload to Azure blob storage

I have an online API that returns PDF files via HTTP POST requests.
Once I get back the response with the file, I would like to upload the file to Azure blob storage.
I've tried everything on here and I'm unable to get it to work.
Where I'm at now:
async function getPDFfile(idSession) {
  let connectionJson = {
    "DeviceId": "device-id",
    "SessionId": idSession,
    "ContainerId": contaierID
  }
  axios({
    url: 'https://URL/exportPDF/',
    method: 'POST',
    Headers: connectionJson,
    responseType: "arraybuffer",
    responseEncoding: "binary"
  }).then((response) => {
    console.log(response.data)
  });
}
If I print this I get a <Buffer ...>, but as I cannot really see what's inside, I cannot use it.
If I use axios without params:
const response = await axios
  .post('https://URL/exportPDF/', connectionJson)
return response.data
Here I get loads of Unicode characters with some information. I've tried to upload this, but the PDF file is only a few bytes and obviously does not work.
I've also tried to get the response as a blob, but the same: it was not working.
Could you please help me figure this out?
This should run in an Azure Function. Thank you for your time.
---- Edit
While the first method doesn't return anything useful in the data, the second, without parameters, returns a very long string with:
%PDF-1.4
%´┐¢´┐¢´┐¢´┐¢
1 0 obj
<<
/CreationDate(D:20220423222622+01'00')
/Creator(empira MigraDoc 1.50.5147 \(www.migradoc.com\))
/Title(TrustID)
/Producer(PDFsharp 1.50.5147-gdi \(www.pdfsharp.com\))
>>
endobj
2 0 obj
<<
/Type/Catalog
/Pages 3 0 R
>>
endobj
3 0 obj
<<
/Type/Pages
/Count 3
/Kids[4 0 R 23 0 R 28 0 R]
>>
endobj
4 0 obj
<<
/Type/Page
/MediaBox[0 0 841.89 595.276]
/Parent 3 0 R
/Contents 5 0 R
/Resources
<<
/ProcSet [/PDF/Text/ImageB/ImageC/ImageI]
/ExtGState
<<
/GS0 6 0 R
/GS1 10 0 R
>>
/XObject
<<
/I0 9 0 R
>>
/Font
<<
/F0 14 0 R
/F1 18 0 R
/F2 22 0 R
>>
>>
/Group
<<
/CS/DeviceRGB
/S/Transparency
>>
>>
endobj
5 0 obj
<<
/Length 2094
/Filter/FlateDecode
>>
stream
x´┐¢´┐¢Y╦Ä´┐¢´┐¢´┐¢W´┐¢L:##´┐¢B´┐¢´┐¢´┐¢´┐¢Cw$´┐¢´┐¢~´┐¢´┐¢´┐¢╚¬´┐¢v´┐¢´┐¢´┐¢´┐¢3´┐¢*´┐¢8q"´┐¢´┐¢´┐¢´┐¢Io´┐¢´┐¢´┐¢´┐¢O´┐¢´┐¢´┐¢No?´┐¢qz´┐¢E´┐¢4'[^´┐¢´┐¢(Ì£W>´┐¢´┐¢9&´┐¢´┐¢´┐¢w█½´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢█ôVA´┐¢´┐¢O´┐¢´┐¢
>´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢~´┐¢S´┐¢´┐¢ ´┐¢´┐¢´┐¢█À_´┐¢K(_bt┬╣´┐¢´┐¢´┐¢!´┐¢´┐¢;9V&´┐¢G´┐¢´┐¢´┐¢;=´┐¢´┐¢´┐¢G´┐¢═┐]´┐¢E┌│Ì░´┐¢´┐¢[´┐¢´┐¢´┐¢fs´┐¢(p´┐¢╦Å´┐¢Z´┐¢´┐¢´┐¢>´┐¢´┐¢¤ô´┐¢.|´┐¢´┐¢´┐¢r´┐¢NQ ´┐¢Èû´┐¢´┐¢´┐¢)´┐¢d5´┐¢?´┐¢W´┐¢´┐¢´┐¢´┐¢´┐¢.:´┐¢$´┐¢´┐¢X´┐¢si´┐¢´┐¢´┐¢´┐¢´┐¢C´┐¢´┐¢´┐¢´┐¢Z5´┐¢j´┐¢U´┐¢Lr´┐¢qw3╚½3m)´┐¢>´┐¢´┐¢´┐¢Ðí´┐¢´┐¢´┐¢Yy´┐¢,/B´┐¢´┐¢´┐¢u´┐¢qW´┐¢ÐÅ´┐¢Ôö¿´┐¢E;´┐¢´┐¢ k´┐¢m=´┐¢´┐¢"´┐¢k´┐¢´┐¢´┐¢}´┐¢´┐¢m´┐¢´┐¢´┐¢\´┐¢*´┐¢y{´┐¢V=´┐¢%´┐¢8´┐¢´┐¢´┐¢k !D´┐¢´┐¢KGM´┐¢´┐¢´┐¢i´┐¢&´┐¢´┐¢
´┐¢´┐¢´┐¢´┐¢J*Y´¢ºX"6´┐¢|[j´┐¢´┐¢´┐¢´┐¢*´┐¢z)´┐¢´┐¢´┐¢>n&,w6´┐¢e´┐¢´┐¢5`Av´┐¢´┐¢´┐¢\ZD´┐¢e´┐¢I´┐¢´┐¢´┐¢´┐¢#´┐¢R ´┐¢a´┐¢´┐¢h2W´┐¢´┐¢´┐¢´┐¢´┐¢a´┐¢´┐¢´┐¢diT}T 3´┐¢,´┐¢il´┐¢´┐¢L´┐¢t}´┐¢E´┐¢´┐¢´┐¢v´┐¢´┐¢~f´┐¢R╚ôF0´┐¢´┐¢╔û
Su´┐¢´┐¢´┐¢36´┐¢´┐¢x´┐¢b´┐¢´┐¢´┐¢,´┐¢j´┐¢´┐¢v2R┘ÉÍô}W%´┐¢`F)´┐¢´┐¢c%´┐¢b´┐¢´┐¢´┐¢´┐¢)´┐¢6´┐¢/my$"5´┐¢\▄ƒ´┐¢T<´┐¢EN´┐¢´┐¢´┐¢´┐¢gXo7´┐¢´┐¢f´┐¢´┐¢
ng´┐¢╬ɦñnD|b}´┐¢¦░P´┐¢´┐¢´┐¢$($´┐¢Èæ´┐¢´┐¢r1
´┐¢═Æ´┐¢`´┐¢´┐¢c´┐¢´┐¢h´┐¢ ](9´┐¢´┐¢´┐¢´┐¢´┐¢AßÜé´┐¢J>:´┐¢´┐¢u$`´┐¢E´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢P´┐¢^0´┐¢´┐¢8h´┐¢´┐¢´┐¢8 g´┐¢´┐¢%zD´┐¢´┐¢´┐¢´┐¢7yRib´┐¢S´┐¢]´┐¢´┐¢´┐¢´┐¢´┐¢k´┐¢´┐¢A#´┐¢´┐¢´┐¢´┐¢-µ▒╣´┐¢´┐¢k´┐¢B´┐¢(:.´┐¢´┐¢)´┐¢po<´┐¢´┐¢Q´┐¢´┐¢0 ´┐¢´┐¢=*#Z´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢<Bs╩ö´┐¢_r ´┐¢g´┐¢G´┐¢db´┐¢6GT´┐¢´┐¢´┐¢´┐¢,´┐¢┘ÿ%´┐¢zy´┐¢´┐¢UD´┐¢e´┐¢1l´┐¢´┐¢q╬è´┐¢´┐¢9p´┐¢j´┐¢´┐¢´┐¢´┐¢]D´┐¢a)´┐¢´┐¢j´┐¢´┐¢9,´┐¢_9´┐¢%´┐¢c&´┐¢´┐¢´┐¢"z´┐¢´┐¢S=$´┐¢´┐¢´┐¢\▄ñ35´┐¢´┐¢´┐¢i´┐¢9Q´┐¢▀¢´┐¢´┐¢~´┐¢´┐¢_´┐¢´┐¢.´┐¢þ║║´┐¢´┐¢´┐¢^¤¡y´┐¢´┐¢iC´┐¢´┐¢´┐¢´┐¢´┐¢O´┐¢ÌÀn´┐¢m* ´┐¢´┐¢´┐¢u´┐¢kk´┐¢#´┐¢R´┐¢´┐¢tÈï W´┐¢´┐¢G(Í«h´┐¢´┐¢´┐¢D´┐¢´┐¢k>khX´┐¢%#´┐¢J]p´┐¢#´┐¢1´┐¢´┐¢ÈÑ´┐¢O´┐¢f´┐¢´┐¢´┐¢´┐¢gl´┐¢´┐¢´┐¢T~LF´┐¢´┐¢yG´┐¢=´┐¢-´┐¢´┐¢´┐¢╔×3A─¬L´┐¢´┐¢Zx´┐¢Jf´┐¢v´┐¢´┐¢´┐¢´┐¢:´┐¢´┐¢FH´┐¢nW1{lX´┐¢´┐¢ZYlF´┐¢´┐¢tPm:´┐¢*y´┐¢´┐¢.a´┐¢═×=´┐¢.´┐¢´┐¢´┐¢ ÃÄ{q´┐¢v´┐¢Y´┐¢0LE´┐¢´┐¢
yÎ▓´┐¢´┐¢´┐¢´┐¢1Tο´┐¢´┐¢k´┐¢RP´┐¢´┐¢w´┐¢;m´┐¢Da´┐¢A2´┐¢N´┐¢Xq´┐¢M´┐¢´┐¢´┐¢¤╝$´┐¢K´┐¢S´┐¢Ì╣9'´┐¢,U´┐¢├×´┐¢´┐¢"G´┐¢hWZ´┐¢´┐¢v─¼h´┐¢´┐¢EDr´┐¢`´┐¢Ae´┐¢ãÄ5$´┐¢´┐¢´┐¢Y´┐¢i´┐¢´┐¢e5´┐¢´┐¢(´┐¢8_
´┐¢2o´┐¢´┐¢´┐¢Pc´┐¢´┐¢) ´┐¢#´┐¢s´┐¢,2´┐¢">´┐¢´┐¢=)´┐¢´┐¢iM4j´┐¢´┐¢´┐¢O´┐¢´┐¢1
´┐¢´┐¢4´┐¢?´┐¢´┐¢R?{´┐¢´┐¢´┐¢´┐¢ÐÖCe´┐¢´┐¢µñ┐´┐¢_´┐¢´┐¢"}´┐¢´┐¢Ln´┐¢#TZK´┐¢´┐¢4´┐¢$´┐¢´┐¢"M´┐¢´┐¢´┐¢´┐¢JD´┐¢r$´┐¢´┐¢1G´┐¢´┐¢´┐¢´┐¢|´┐¢H2Îû´┐¢´┐¢´┐¢´┐¢´┐¢5G´┐¢´┐¢´┐¢#´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢O{<D´┐¢´┐¢#´┐¢;K´┐¢M´┐¢´┐¢╩©´┐¢a8SD6s´┐¢´┐¢´┐¢´┐¢ ´┐¢G´┐¢´┐¢f ´┐¢´┐¢´┐¢´┐¢whw!"#´┐¢´┐¢´┐¢-´┐¢3A´┐¢´┐¢q´┐¢´┐¢´┐¢/´┐¢`´┐¢]g´┐¢qi´┐¢´┐¢5O}N´┐¢(´┐¢´┐¢0 ´┐¢1´┐¢´┐¢´┐¢´┐¢h´┐¢´┐¢
Q´┐¢6├Ç´┐¢kU´┐¢´┐¢´┐¢!´┐¢g/(´┐¢´┐¢uq4´┐¢n´┐¢n´┐¢~´┐¢╔©UR'4´┐¢´┐¢;P´┐¢aK´┐¢5ð¢´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢ã¼R´┐¢´┐¢{´┐¢#7{´┐¢´┐¢t´┐¢¤╗J*´┐¢G´┐¢´┐¢´┐¢´┐¢´┐¢,:´┐¢´┐¢zB|´┐¢x´┐¢~-´┐¢´┐¢u´┐¢eH´┐¢vcwH"´┐¢c´┐¢´┐¢´┐¢ xC▄º´┐¢)O´┐¢´┐¢ny´┐¢yZEx´┐¢#e´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢}ÌÇ´┐¢´┐¢ZF0´┐¢fbT´┐¢´┐¢´┐¢´┐¢´┐¢.´┐¢Z(´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢╬í<´┐¢O|W´┐¢t!´┐¢´┐¢X=´┐¢U´┐¢'4$´┐¢´┐¢U6´┐¢´┐¢ ´┐¢w7f´┐¢$t´┐¢R´┐¢´┐¢­ƒëëI9´┐¢%!´┐¢═ë`G´┐¢´┐¢´┐¢´┐¢´┐¢y
´┐¢b´┐¢´┐¢´┐¢´┐¢´┐¢<L'A!´┐¢w%h=´┐¢´┐¢´┐¢Bom´┐¢´┐¢q´┐¢´┐¢~uP7Vg(c8Gt>´┐¢}´┐¢´┐¢xg ´┐¢t´┐¢´┐¢5ð¡z´┐¢(#´┐¢;!´┐¢▄ò´┐¢j´┐¢═ºk2c&´┐¢k´┐¢z´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢┬é´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢)´┐¢=´┐¢V´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢W\48´┐¢2´┐¢nG´┐¢´┐¢m´┐¢´┐¢ãü´┐¢Xs´┐¢´┐¢E*#´┐¢´┐¢#´┐¢Aj´┐¢´┐¢qJ╬┤c|w´┐¢´┐¢D´┐¢´┐¢´┐¢"7SËÿrL´┐¢u´┐¢1´┐¢´┐¢1´┐¢,]5´┐¢´┐¢´┐¢´┐¢ ´┐¢´┐¢´┐¢´┐¢´┐¢F´┐¢:´┐¢´┐¢\´┐¢´┐¢´┐¢$´┐¢m:q´┐¢6´┐¢´┐¢.}´┐¢ >0´┐¢´┐¢^╦À´┐¢HO´┐¢-´┐¢amO´┐¢´┐¢´┐¢E1´┐¢X╔äO´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢/´┐¢<´┐¢´┐¢V}'´┐¢ ´┐¢^´┐¢´┐¢Qqz#5´┐¢´┐¢´┐¢d/y´┐¢´┐¢´┐¢Ãë3Zi´┐¢´┐¢ G=i[/9
´┐¢G_´┐¢C´┐¢´┐¢|´┐¢3´┐¢T´┐¢´┐¢´┐¢´┐¢/´┐¢´┐¢´┐¢ZV´┐¢´┐¢A´┐¢´┐¢´┐¢´┐¢´┐¢´┐¢b´┐¢BaÌ×´┐¢´┐¢´┐¢´┐¢<
If I try to save response.data to a file on my disk, it creates a three-page PDF with some of the correct data showing in the PDF reader's tab, but the pages are blank:
const response = await axios
  .post('https://APIURL/exportPDF/', connectionJson)
//console.log(((response.data).toString()))
fs.writeFile('c:/temp/my.pdf', response.data, (err) => {
  // if (err) throw err;
  console.log('The file has been saved!');
});
As I've been reading that the file might not be completely downloaded, I've tried:
const finishedDownload = util.promisify(stream.finished);
const writer = fs.createWriteStream('c:/temp/myfile.pdf');
let connectionJson = {
  "Username": "",
  "Password": "",
}
const response = await axios({
  method: 'post',
  url: 'https://APIURL/exportPDF/',
  responseType: 'stream',
  headers: connectionJson
});
response.data.pipe(writer);
await finishedDownload(writer);
but it writes a 0-byte file.
Fixed it by including responseEncoding: 'binary' in the request config and passing 'binary' as the encoding option while writing the file.
Please read the comments in the code!
async function axaios(idSession) {
  let connectionJson = {
    "DeviceId": "device-id",
    "SessionId": idSession,
    "ContainerId": contaierID,
  }
  // important! these options are needed
  let conf = {
    "responseType": "arraybuffer",
    "responseEncoding": "binary"
  }
  const response = await axios
    .post('https://APIURL/exportPDF/', connectionJson, conf)
  //console.log(response.data)
  // 'binary' is needed as the encoding option; note fs.writeFile here is the
  // callback API, so there is nothing to await
  fs.writeFile('c:/temp/my.pdf', response.data, 'binary', (err) => {
    if (err) throw err;
    console.log('The file has been saved!');
  });
}
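Since the original goal was to push the file to Azure blob storage, here is a minimal sketch of that last step with @azure/storage-blob (the container name, blob name, and connection-string variable are assumptions, not from the thread). uploadData accepts the Buffer that axios returns for responseType: 'arraybuffer':

const { BlobServiceClient } = require("@azure/storage-blob");

async function uploadToBlob(pdfBuffer) {
  // Connection string, container name, and blob name are placeholders.
  const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);
  const container = service.getContainerClient("pdfs");
  const blockBlob = container.getBlockBlobClient("my.pdf");
  // uploadData takes a Buffer directly and sets the size for you.
  await blockBlob.uploadData(pdfBuffer, {
    blobHTTPHeaders: { blobContentType: "application/pdf" }
  });
}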
Thank you all for your suggestions!
Thanks a lot KJ

Unable to upload large files using nodejs/axios

I am writing a Node.js client that uploads files (both binary and text) from my local dev machine to my server, which is written in Java and whose configuration I cannot change. I am using the following code to upload files. It works fine for files up to 2 GB, but beyond that it throws the error mentioned below. Now you may think that the server might not allow files over 2 GB, but I have successfully uploaded files of up to 10 GB to the same instance using REST clients like Postman and Insomnia.
const fs = require("fs");
const path = require("path");
const axios = require("axios");
const FormData = require("form-data");
function uploadAxios({ filePath }) {
  let formData;
  try {
    formData = new FormData();
    formData.append("filedata", fs.createReadStream(filePath));
  } catch (e) {
    console.error(e)
  }
  axios
    .post(
      `https://myinstance.com`,
      formData,
      {
        headers: {
          ...formData.getHeaders(),
          "Content-Type": "multipart/form-data",
          Authorization: "Basic xyz==",
        },
        maxContentLength: Infinity,
        maxBodyLength: Infinity,
        // maxContentLength: 21474836480,
        // maxBodyLength: 21474836480, // I have tried setting these values with both numbers and the keyword Infinity but nothing works
      }
    )
    .then(console.log)
    .catch(console.error);
}

const filePath = "C:\\Users\\phantom007\\Documents\\BigFiles\\3gb.txt";
uploadAxios({ filePath });
Error I get:
#
# Fatal error in , line 0
# API fatal error handler returned after process out of memory
#
<--- Last few GCs --->
es[7844:0000023DC49CE190] 47061 ms: Mark-sweep 33.8 (41.8) -> 33.8 (41.8) MB, 417.2 / 0.1 ms (+ 947.1 ms in 34029 steps since start of marking, biggest step 431.0 ms, walltime since start of marking 15184 ms) finalize incremental marking via stack guard[7844:0000023D
C49CE190] 48358 ms: Mark-sweep 34.4 (41.8) -> 31.8 (40.5) MB, 1048.4 / 0.0 ms (+ 0.0 ms in 1 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 1049 ms) finalize incremental marking via task GC in old spac
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 000002E294C255E9 <JSObject>
0: builtin exit frame: new ArrayBuffer(aka ArrayBuffer)(this=0000022FFFF822D1 <undefined>,65536)
1: _read [fs.js:~2078] [pc=0000004AD942D301](this=0000039E67337641 <ReadStream map = 000002F26D804989>,n=65536)
2: read [_stream_readable.js:454] [bytecode=000002A16EB59689 offset=357](this=0000039E67337641 <ReadStream map = 000002F26D804989>,n=0)
3: push [_stream_readable.js:~201]...
FATAL ERROR: Committing semi space failed. Allocation failed - process out of memory
It looks like the error is because it has exceeded the memory limit. I know that by passing the flag --max-old-space-size I can overcome this, but I want this to be scalable and not hardcode an upper limit.
PS: My dev machine has 12 GB free memory
Edit: I added the error trace.
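One possible workaround (a sketch, not a verified fix for this exact server): let form-data's own submit() stream the multipart body over HTTP itself, so the file is read and sent chunk by chunk instead of being buffered in memory. The host, path, and Authorization value below are placeholders taken from the question:

const fs = require("fs");
const FormData = require("form-data");

function uploadStreaming(filePath) {
  const formData = new FormData();
  formData.append("filedata", fs.createReadStream(filePath));
  // form-data pipes each part to the socket as it is read from disk.
  formData.submit(
    {
      protocol: "https:",
      host: "myinstance.com",
      path: "/",
      headers: { Authorization: "Basic xyz==" },
    },
    (err, res) => {
      if (err) return console.error(err);
      console.log("status:", res.statusCode);
      res.resume(); // drain the response so the socket can close
    }
  );
}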
I'm using multer to define a limit; see the following code:
app.use(multer({
  storage: storage,
  dest: path.join(pathApp),
  limits: {
    fileSize: 5000000
  },
  fileFilter: function fileFilter(req, file, cb) {
    var filetypes = /json/;
    var mimetype = filetypes.test(file.mimetype);
    var extname = filetypes.test(path.extname(file.originalname));
    if (mimetype && extname) {
      console.log("Port ".concat(app.get('port')) + " - Uploading file " + file.originalname);
      return cb(null, true, req);
    }
    cb(JSON.stringify({
      "success": false,
      "payload": {
        "app": "upload",
        "function": "upload"
      },
      "error": {
        "code": 415,
        "message": 'File type not valid'
      }
    }));
  }
}).single('file1'));

How to convert req.files.resume.data to createReadStream in nodejs

I am uploading an .mp4/.mov file using express-uploader. In my API call I receive a req.files.resume object whose data value looks like <Buffer 00 98 09 99 88 77 66 ... > and is of type object.
I have tried so many things but am not able to convert that object to pass into fs.createReadStream(); here's my code:
router.post('/update-resume', authUtil.ensureAuthenticated,
  function(req, res, next){
    if(!req.files){
      res.status(400).json({message: "No file"});
    } else if(req.files.resume){
      const resume = req.files.resume;
      if(resume.mimetype === "video/mp4" || resume.mimetype === "video/quicktime"){
        // Create the streams
        var read = fs.createReadStream(Buffer.from(resume.data.toString())); // this is giving error of TypeError [ERR_INVALID_ARG_VALUE]
      }
    }
  });
I got stuck with this; if anyone can help me!
I am using this code as an AWS Lambda function with API Gateway, to upload a large file to an S3 bucket.
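A likely fix (a sketch, not from the thread): fs.createReadStream expects a file path, not a Buffer, which is why the call throws ERR_INVALID_ARG_VALUE. The in-memory Buffer can be wrapped in a readable stream directly:

const { Readable } = require('stream');

// resume.data is the Buffer the upload middleware provides (Node 12+).
const read = Readable.from(resume.data);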

File upload fails when file is over 60K in size

I have been working to move an in-house application away from FTP, as the security team has told us to get off FTP. So I've been using HTTP uploads instead, and for the most part it works very well. Our environment is a mishmash of Linux, HP-UX, Solaris, and AIX. On our Linux servers curl is universally available, so I have been using curl's POST capabilities for uploads, and it has worked flawlessly. Unfortunately, the Unix machines rarely have curl, or even wget, so I wrote a GET script in Perl, which works fine, and a POST script in Perl (lifted and adapted from elsewhere on the web) that works brilliantly for Unix, up until the data being uploaded is greater than about 60K (which curl handles fine on Linux, btw). Beyond that, the Apache error log starts spitting out:
CGI.pm: Server closed socket during multipart read (client aborted?).
No such error ever occurs when I use curl for the upload. Here's my POST script, using Socket, since LWP is not available on every server, and not at all on any of the Unix servers.
#!/usr/bin/perl -w
use strict;
use Socket;

my $v = 'dcsm';
my $upfile = $ARGV[0] or die 'Upload File not found or not specified.' . "\n";
my $hostname = $ARGV[1] or die 'Hostname not specified.' . "\n";
$| = 1;
my $host = "url.mycompany dot com";
my $url = "/csmtar.cgi";
my $start = times;
my ( $iaddr, $paddr, $proto );
$iaddr = inet_aton($host);
$paddr = sockaddr_in( 80, $iaddr );
$proto = getprotobyname('tcp');
unless ( socket( SOCK, PF_INET, SOCK_STREAM, $proto ) ) {
    die "ERROR : init socket: $!";
}
unless ( connect( SOCK, $paddr ) ) {
    die "no connect: $!\n";
}
my $length = 0;
open( UH, "< $upfile" ) or warn "$!\n";
$length += -s $upfile;
my $boundary = 'nn7h23ffh47v98';
my @head = (
    "POST $url HTTP/1.1",
    "Host: $host",
    "User-Agent: z-uploader",
    "Content-Length: $length",
    "Content-Type: multipart/form-data; boundary=$boundary",
    "",
    "--$boundary",
    "Content-Disposition: form-data; name=\"hostname\"",
    "",
    "$hostname",
    "--$boundary",
    "Content-Disposition: form-data; name=\"ren\"",
    "",
    "true",
    "--$boundary",
    "Content-Disposition: file; name=\"filename\"; filename=\"$upfile\"",
    "--$boundary--",
    "",
    "",
);
my $header = join( "\r\n", @head );
$length += length($header);
$head[3] = "Content-Length: $length";
$header = join( "\r\n", @head );
$length = -s $upfile;
$length += length($header);
select SOCK;
$| = 1;
print SOCK $header;
while ( sysread( UH, my $buf, 8196 ) ) {
    if ( length($buf) < 8196 ) {
        $buf = $buf . "\r\n--$boundary";
        syswrite SOCK, $buf, length($buf);
    } else {
        syswrite SOCK, $buf, 8196;
    }
    print STDOUT '.';
}
close UH;
shutdown SOCK, 1;
my @data = (<SOCK>);
print STDOUT "result->@data\n";
close SOCK;
Anybody see something that jumps out at them?
UPDATE:
I made the following updates, and the errors appear to be unchanged.
To address the Content-Length issue, and to eliminate the chance of the loop consuming exactly the file's byte count before the final boundary is appended, I made the following code update.
my $boundary = 'nn7h23ffh47v98';
my $content = <<EOF;
--$boundary
Content-Disposition: form-data; name="hostname"
$hostname
--$boundary
Content-Disposition: file; name="filename"; filename="$upfile"
--$boundary--
EOF
$length += length($content);
my $header = <<EOF;
POST $url HTTP/1.1
Host: $host
User-Agent: z-uploader
Content-Length: $length
Content-Type: multipart/form-data; boundary=$boundary
EOF
$header .= $content;
select SOCK;
$| = 1;
print SOCK $header;
my $incr = ($length + 100) / 20;
$incr = sprintf("%.0f", $incr);
while ( sysread( UH, my $buf, $incr ) ) {
    syswrite SOCK, $buf, $incr;
}
syswrite SOCK, "\n--$boundary", $incr;
You are asking if there's "something that jumps out" from looking at the code.
Two things jump out at me:
1) The Content-Length parameter in a POST HTTP message specifies the exact byte count of the entity portion of the HTTP message. See section 4.4 of RFC 2616.
You are setting the Content-Length: header to the exact size of the file you're uploading. Unfortunately, in addition to the file itself, you are also sending the MIME headers.
The "entity" portion of the HTTP message, as defined by RFC 2616, essentially consists of everything after the blank line following the last header of the HTTP message. Everything below that point must be included in the Content-Length: header. The Content-Length header is NOT the size of the file you're uploading, but the size of the HTTP message's entire entity portion, which follows the header.
2) Ignoring the broken Content-Length: header, if the size of the file happens to be an exact multiple of 8196 bytes, the MIME document you are constructing will most likely be corrupted. Your last sysread() call will get the last 8196 bytes in the file, which you will happily copy through; the next call to sysread() will return 0, and you will terminate the loop without emitting the trailing boundary delimiter. The MIME document will be corrupt.
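To make point 1 concrete, here is a small sketch of the arithmetic in Node.js (the language used elsewhere in this thread; the helper and parameter names are illustrative only). The Content-Length must be the byte count of the whole multipart entity, so every boundary line and part header counts alongside the file bytes:

const fs = require('fs');

// preamble: all boundary lines and part headers that precede the file bytes
// trailer: the closing boundary that follows them
function multipartContentLength(filePath, preamble, trailer) {
  return Buffer.byteLength(preamble)
       + fs.statSync(filePath).size
       + Buffer.byteLength(trailer);
}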
