Node writeFileSync encoding options for images - node.js

I'm using fs.writeFileSync(file, data[, options]) to save a file returned from http.get(options[, callback]).
This works fine for text files, but images, PDFs, etc. end up corrupted. From the searching I've done, it's apparently because fs.writeFileSync(file, data[, options]) defaults to UTF-8.
I've tried setting the encoding to 'binary', the mime-type, and the extension, to no avail. It feels like something really obvious that I'm overlooking; can anyone point me in the right direction?
Thank you in advance
Update
I'm running this through Electron. I didn't think it was worth mentioning, as Electron is just running Node, but I'm not a Node or Electron expert, so I'm not sure.

Create a Buffer from the image data using the 'binary' encoding, then write it into a stream.PassThrough and pipe that into a writable file stream:
var fs = require('fs');
var stream = require('stream');

// Wrap the raw image data in a Buffer and feed it through a pass-through stream.
var imgStream = new stream.PassThrough();
imgStream.end(Buffer.from(data, 'binary'));

var wStream = fs.createWriteStream('./<dest>.<ext>');

imgStream.once('end', () => {
  console.log('Image Written');
});
imgStream.once('error', (err) => {
  console.log(err);
});
// Pipe the buffered image data into the file.
imgStream.pipe(wStream);
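If the data is coming straight from http.get, the corruption usually happens one step earlier: the response gets decoded as text before it ever reaches writeFileSync. A minimal sketch of collecting the response as raw Buffers instead (the URL and filenames here are placeholders, not from the question):
var fs = require('fs');
var http = require('http');

// Placeholder URL and filename; substitute your own.
http.get('http://example.com/image.png', function (res) {
  var chunks = [];
  // No res.setEncoding() call: chunks stay raw Buffers.
  res.on('data', function (chunk) {
    chunks.push(chunk);
  });
  res.on('end', function () {
    // Concatenate and write the bytes untouched.
    fs.writeFileSync('./image.png', Buffer.concat(chunks));
  });
});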

Related

Redirect Readable object stdout process to file in node

I use an NPM library to parse markdown to HTML like this:
var Markdown = require('markdown-to-html').Markdown;
var md = new Markdown();
...
md.render('./test', opts, function (err) {
  md.pipe(process.stdout);
});
This outputs the result to my terminal as intended.
However, I need the result inside my Node program itself. I thought about writing the output stream to a file and reading it back in later, but I can't figure out a way to write the output to a file instead.
I tried playing around with var file = fs.createWriteStream('./test.html');, but Node.js streams give me more headaches than results.
I've also looked into the library's repo and Markdown inherits from Readable via util like this:
var util = require('util');
var Readable = require('stream').Readable;
util.inherits(Markdown, Readable);
Any resources or advice would be highly appreciated. (I'd also accept a different library for parsing the markdown, but this one gave me the best results so far.)
Actually, creating a writable file stream and piping the markdown into it should work just fine. Try it with:
const fs = require('fs');

const writeStream = fs.createWriteStream('./output.html');
md.render('./test', opts, function (err) {
  md.pipe(writeStream);
});
// in case of errors you should handle them
writeStream.on('error', function (err) {
  console.log(err);
});
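If you want the HTML in memory rather than in a file, you can also collect the stream yourself. A rough sketch, assuming the Markdown stream starts emitting data once render() has been called:
md.render('./test', opts, function (err) {
  if (err) throw err;
  var html = '';
  md.on('data', function (chunk) {
    html += chunk; // Buffer chunks are coerced to strings here
  });
  md.on('end', function () {
    // html now holds the rendered output, ready to use in-process
    console.log(html.length + ' characters rendered');
  });
});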

pipe file directly from file system

I'm using "express": "^4.13.3" on node 6.9.0
When i try to pipe data a jpeg image:
const path = config.storageRoot + '/' + req.params.originalFileName;
var mimetype = mime.lookup(req.params.originalFileName);
res.writeHead(200, { 'Content-Type': mimetype});
fs.createReadStream(path).pipe(res);
I get XML data inside the result:
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 5.5-c014 79.151739, 2013/04/03-12:12:15 ">
When I use res.end with the result from fs.readFile instead, the binary content is formatted correctly.
What am I doing wrong?
Take a look at how I'm piping files in the examples in this answer:
How to serve an image using nodejs
It's something like this:
// 'type' is the MIME type
var s = fs.createReadStream(file);
s.on('open', function () {
  res.set('Content-Type', type);
  s.pipe(res);
});
s.on('error', function () {
  res.set('Content-Type', 'text/plain');
  res.status(404).end('Not found');
});
So I'm letting Express set the header with res.set() instead of writing the headers explicitly with res.writeHead(). I'm also handling the stream events. Maybe you should try doing it similarly, because the way I did it works, according to Travis:
https://travis-ci.org/rsp/node-static-http-servers
Another thing, in addition to handling the stream events and errors, would be to make sure that you have the correct encoding, permissions, etc. I don't know what result you're expecting, what that XML means, or where it comes from, but handling the stream events may tell you more about what is happening.
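Put together as a complete route, that could look something like the sketch below. config.storageRoot and the mime lookup are taken from the question; the route path and module setup are illustrative assumptions:
var express = require('express');
var fs = require('fs');
var mime = require('mime');

var app = express();

app.get('/files/:originalFileName', function (req, res) {
  // config.storageRoot comes from the question's own setup.
  var filePath = config.storageRoot + '/' + req.params.originalFileName;
  var type = mime.lookup(req.params.originalFileName);
  var s = fs.createReadStream(filePath);
  s.on('open', function () {
    res.set('Content-Type', type);
    s.pipe(res);
  });
  s.on('error', function () {
    res.set('Content-Type', 'text/plain');
    res.status(404).end('Not found');
  });
});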

Buffer entire file in memory with Node.js

I have a relatively small file (some hundreds of kilobytes) that I want to keep in memory for direct access during the entire execution of the program.
I don't know the internals of Node.js exactly, so I'm asking: is an fs open enough, or do I have to read the whole file and copy it into a Buffer?
Basically, you need to use the readFile or readFileSync function from the fs module. They return the complete content of the given file, but differ in their behavior (asynchronous versus synchronous).
If blocking Node.js (e.g. on startup of your application) is not an issue, you can go with the synchronous version, which is as easy as:
var fs = require('fs');
var data = fs.readFileSync('/etc/passwd');
If you need to go asynchronous, the code is like that:
var fs = require('fs');
fs.readFile('/etc/passwd', function (err, data) {
  // ...
});
Please note that in either case you can give an options object as the second parameter, e.g. to specify the encoding to use. If you omit the encoding, the raw buffer is returned:
var fs = require('fs');
fs.readFile('/etc/passwd', { encoding: 'utf8' }, function (err, data) {
  // ...
});
Valid encodings are utf8, ascii, utf16le, ucs2, base64 and hex. There is also a binary encoding, but it is deprecated and should not be used any longer. You can find more details on how to deal with encodings and buffers in the appropriate documentation.
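To see the difference in practice, here is a small sketch contrasting the two return types (same /etc/passwd example as above):
var fs = require('fs');

var buf = fs.readFileSync('/etc/passwd');          // no encoding -> Buffer
var txt = fs.readFileSync('/etc/passwd', 'utf8');  // encoding -> string

console.log(Buffer.isBuffer(buf));          // true
console.log(typeof txt);                    // 'string'
console.log(buf.toString('utf8') === txt);  // true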
It's as easy as:
var buffer = fs.readFileSync(filename);
It's also possible to do this synchronously:
var fs = require('fs');
var path = require('path');

// Buffer mydata
var BUFFER = bufferFile('../public/mydata');

function bufferFile(relPath) {
  return fs.readFileSync(path.join(__dirname, relPath)); // zzzz....
}
fs is the file system module. readFileSync() returns a Buffer, or a string if you ask for one.
fs resolves relative paths against the process's current working directory, which can be a security issue; joining against __dirname with path works around that.
To load as a string, specify the encoding:
return fs.readFileSync(path.join(__dirname, relPath), { encoding: 'utf8' });

Why append rather than write when using knox / node.js to grab file from Amazon s3

I'm experimenting with the knox module for node.js as a way of managing some small files in an Amazon S3 bucket. Everything works fine stand-alone: I can upload a file, download a file, etc. However, I want to be able to download a file on a recurring schedule. When I modify the code to run on an interval, the downloaded file ends up appended to the previous download instead of overwriting it.
I'm not sure if I've made a mistake in the file-write code or in the knox handling code. I've tried several different write approaches (writeFile, writeStream, etc.) and I've looked at the knox source code. Nothing obvious stands out as a problem. Here's the code I'm using:
var knox = require('knox');
var fs = require('fs');

var downFile = DOWNFILE;
var downTxt = '';
var timer = INTERVAL;
var path = S3PATH + downFile;

setInterval(function () {
  var s3client = knox.createClient({
    key: '********************',
    secret: '**********************************',
    bucket: '********'
  });
  s3client.get(path).on('response', function (response) {
    response.setEncoding('ascii');
    response.on('data', function (chunk) {
      downTxt += chunk;
    });
    response.on('end', function () {
      fs.writeFileSync(downFile, downTxt, 'ascii');
    });
  }).end();
}, timer);
The problem is with your placement of var downTxt = '';. That is the only place downTxt is set to blank, so every time you retrieve more data you append it to the data from the previous request, because the old data is never cleared. The simplest fix is to move that line to just before the setEncoding line.
However, the way you are processing the data is unnecessarily complicated. Try something like the code below instead. You don't need to recreate the client every time. Setting the encoding will break things if you download non-text files, and it makes no difference for text files. You also shouldn't collect the data by hand; the response is a standard readable stream, so you don't need to listen for the 'data' event yourself, you can just pipe it straight into a file as it arrives.
var knox = require('knox'),
    fs = require('fs'),
    downFile = DOWNFILE,
    timer = INTERVAL,
    path = S3PATH + downFile,
    s3client = knox.createClient({
      key: '********************',
      secret: '**********************************',
      bucket: '********'
    });

(function downloadFile() {
  // A fresh write stream truncates the file, so each download overwrites the last.
  var str = fs.createWriteStream(downFile);
  str.on('close', function () {
    setTimeout(downloadFile, timer);
  });
  s3client.get(path).on('response', function (res) {
    // Pipe the S3 response (a readable stream) straight into the file.
    res.pipe(str);
  }).end();
})();
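The sketch above has no error handling; since both the knox request and the write stream are ordinary Node streams, you could attach 'error' listeners like this (a possible addition, not part of the original answer):
(function downloadFile() {
  var str = fs.createWriteStream(downFile);
  str.on('close', function () {
    setTimeout(downloadFile, timer);
  });
  str.on('error', function (err) {
    console.error('Write failed:', err);
  });
  s3client.get(path).on('response', function (res) {
    res.pipe(str);
  }).on('error', function (err) {
    // e.g. network trouble reaching S3
    console.error('S3 request failed:', err);
  }).end();
})();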

NodeJS: Asynchronous file read problems

New to NodeJS.
Yes, I know I could use a framework, but I want to get a good grasp of it before delving into the myriad of fine tools that are out there.
My problem:
var img = fs.readFileSync(path);
The above works.
fs.readFile(path, function (err, data) {
  if (err) throw err;
  console.log(data);
});
The above doesn't work.
The input path is 'C:\NodeSite\chrome.jpg'.
Oh, and I'm working on Windows 7.
Any help would be much appreciated.
Fixed
Late-night/early-morning programming introduces errors that are hard to spot. The path was being set from two different places, so the source path was different in each case. Thank you for your help. I am a complete numpty. :)
If you do not set an encoding when reading a file, you will get the raw binary content.
So, for example, the following snippet will output the content of the test file using UTF-8 encoding. If you don't use an encoding, you will get the raw Buffer logged to your console instead (something like <Buffer 48 65 6c 6c 6f>).
var fs = require('fs');
var path = "C:\\tmp\\testfile.txt";
fs.readFile(path, 'utf8', function (err, data) {
  if (err) throw err;
  console.log(data);
});
Another issue (especially on Windows-based OSs) can be the correct escaping of the target path. The example above shows how paths on Windows have to be escaped (backslashes doubled in string literals).
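If the escaping bites you, two common ways around it (assuming a standard Node setup) are forward slashes, which Node accepts on Windows as well, or building the path with the path module:
var path = require('path');

// Forward slashes need no escaping and work on Windows too:
var p1 = 'C:/NodeSite/chrome.jpg';

// Or assemble the path portably:
var p2 = path.join('C:', 'NodeSite', 'chrome.jpg'); // 'C:\NodeSite\chrome.jpg' on Windows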
Java folks can use this synchronous JavaScript call much as they would in plain Java, trouble-free:
var fs = require('fs');
var Contenu = fs.readFileSync(fILE_FULL_Name, 'utf8');
console.log( Contenu );
That should take care of small and big files alike.
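One caveat: readFileSync throws on failure (roughly the moral equivalent of a checked exception), so you may want to wrap it in a try/catch. A minimal sketch, keeping the placeholder name from the answer above:
var fs = require('fs');

var Contenu;
try {
  Contenu = fs.readFileSync(fILE_FULL_Name, 'utf8');
  console.log(Contenu);
} catch (err) {
  // e.g. err.code === 'ENOENT' if the file does not exist
  console.error('Could not read file:', err.message);
}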
