Node.js fs.writeFile() not creating new files?

I need to create many .json files for the system I am trying to develop. To do this, I ran a for loop over the file names I needed, then used fs.writeFileSync('filename.json', [data]).
However, when I try to open these files later, or look for them in the project folder, I cannot find them anywhere.
I have tried writing to a simpler file name that should have appeared in the same directory as my script, but that was fruitless as well. To my understanding, even if the file name wasn't what I expected, I should still end up with something, somewhere; instead, nothing changes.
My current code looks like this:
function addEmptyDeparture(date) {
  fs.readFileSync(
    __dirname + '/reservations/templates/wkend_dep_template.json',
    (err, data) => {
      if (err) throw err
      fs.writeFileSync(
        getDepartureFileName(date),
        data
      )
    }
  )
}

function getDepartureFileName(date) {
  return __dirname + '/reservations/' +
    date.getFullYear() +
    '/departing/' +
    (date.getMonth() + 1).toString().padStart(2, "0") +
    date.getDate().toString().padStart(2, "0") +
    '.json'
}
Where data is the JSON object returned from fs.readFileSync() and is immediately written into fs.writeFileSync(). I don't think I need to stringify this, since it's already a JSON object, but I may be wrong.
The only reason I think it's not working at all (as opposed to simply not showing up in my project) is that, in a later part of the code, we have this:
fs.readFileSync(
  getDepartureFileName(date)
).toString()
which is where I get an error for not having a file by that name.
It is also worth noting that date is a valid date object, as I was able to test that part in a fiddle.
Is there something I'm misunderstanding in the effects of fs.writeFile(), or is this not the best way to write .json files for use on a server?

You probably are forgetting to stringify the data:
fs.writeFileSync('x.json', JSON.stringify({id: 1}))
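Separately, note that fs.readFileSync() in the question is being passed a callback it will never call: the synchronous API returns the data directly, so the inner fs.writeFileSync() never executes. A minimal sketch of the question's function rewritten around the synchronous return value (same paths as in the question):
const fs = require('fs')

function addEmptyDeparture(date) {
  // readFileSync is synchronous: it returns the file contents
  // (a Buffer) directly instead of invoking a callback
  const data = fs.readFileSync(
    __dirname + '/reservations/templates/wkend_dep_template.json'
  )
  // Note: writeFileSync does not create missing directories, so the
  // reservations/<year>/departing folders must already exist
  fs.writeFileSync(getDepartureFileName(date), data)
}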

I have tried to create a similar case in a demo, using writeFileSync() in a for loop to create different files and add JSON data to them. In my case it works; a new file name is created each time. Here is my GitHub for the same:
var fs = require('fs');

// Write four files, writeMe0.json through writeMe3.json,
// each containing a small stringified JSON object
for (let i = 0; i < 4; i++) {
  var readMe = JSON.stringify({ "data": i });
  fs.writeFileSync('writeMe' + i + '.json', readMe, "utf8");
}
Let me know if this is what you have been trying at your end.

Related

adm-zip not adding all files

I'm noticing a strange behavior while using this library. I'm trying to compress multiple EML files; to do so, I first convert them to buffers and add them to the adm-zip instance using the addFile() method. Here's my code:
const zip = new AdmZip();

assetBodies.forEach((body) => {
  // emlData to buffer
  let emlBuffer = Buffer.from(body);
  zip.addFile(`${new Date().getTime()}.eml`, emlBuffer);
});

zip.getEntries().forEach((entry) => {
  console.log("entry name", entry.entryName);
});

const willSendthis = zip.toBuffer();
The problem is that sometimes it compresses all the files and sometimes it doesn't.
For example, I received 5 items in the assetBodies array, but when I log the entries of the zip file I only see 1 or 2, sometimes 5.
Am I missing something or there's an issue with the library?
EDIT:
It's worth mentioning that some of the files are quite large in terms of text, so I wonder if that could be the issue.
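One plausible cause (an assumption on my part; the thread does not confirm it): new Date().getTime() only has millisecond resolution, so several loop iterations can produce the same entry name, and a later addFile() with a duplicate name replaces the earlier entry. A sketch that keeps every entry name unique by appending the loop index:
const AdmZip = require("adm-zip");

const zip = new AdmZip();

assetBodies.forEach((body, i) => {
  // Appending the index prevents two bodies added in the same
  // millisecond from colliding on the entry name
  const emlBuffer = Buffer.from(body);
  zip.addFile(`${Date.now()}-${i}.eml`, emlBuffer);
});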

Should I hard code references to my node_modules folder?

I'm creating an npm package called ctgen and I need to reference a folder within that package. Should I hard code the reference to that folder?
e.g. src = './node_modules/ctgen/scaffold'
or is there a pre-made function that will do this for me?
For a bit more clarity on my issue. I have the following function in my package:
var createFile = function (fileName, componentName, doc) {
  // Tell the user what is happening
  console.log(chalk.blue('\nCreating', fileName, '...'))

  // Bring in the scaffold files (note this should point to the scaffold folder in node modules/ctgen)
  var scaffold = './scaffold/' + fileName

  fs.readFile(scaffold, 'utf8', function (err, data) {
    if (err) return console.log(chalk.red(err))
    var result = data.replace(/%cname%/g, componentName)
    if (doc) {
      var d = new Date()
      result = result.replace(/%cfname%/g, doc.componentName)
      result = result.replace(/%cdesc%/g, doc.componentDesc)
      result = result.replace(/%cauthor%/g, doc.userName)
      result = result.replace(/%cagithub%/g, doc.userGithub)
      result = result.replace(/%creationdate%/g, d.getDate() + '/' + d.getMonth() + '/' + d.getFullYear())
    }
    fs.writeFile('./src/components/' + componentName + '/' + fileName, result, function (err) {
      if (err) return console.log(chalk.red(err))
      console.log(chalk.green(fileName, 'created!'))
    })
  })
}
It looks in a folder called scaffold for the following files:
view.php
style.styl
component.json
It then pulls the file into a cache, performs a find-and-replace on some strings, and then writes the output to a file in the user's project.
It seems, though, that whenever I try to reference the 'scaffold' folder, it tries to find it in the user's project folder and not in my package folder.
I'm very hesitant to reference the scaffold folder by writing '/node_modules/ctgen/scaffold' as that seems like the wrong thing to do to me.
What you want is __dirname.
If I understand your question, you have a resource folder contained in your module, and you have trouble accessing it because the current path is the path of the app, not of your module.
__dirname contains the path of the script file being executed, and so will point inside your module.
I presume your module is named ctgen and the files you want to access are in ctgen/scaffold, so in your code, try to access __dirname + '/scaffold'.
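Applied to the createFile function above, that would mean building the scaffold path along these lines (a sketch; path.join is used here for cross-platform safety, which the original code does not do):
var path = require('path')

// Resolve scaffold files relative to this script's own directory
// (inside the installed ctgen package), not the user's working directory
var scaffold = path.join(__dirname, 'scaffold', fileName)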
In short, NO. If you want to read more about how Node looks for required files, please read this: https://nodejs.org/dist/latest-v7.x/docs/api/modules.html#modules_addenda_package_manager_tips
If you are creating an npm package, specify the npm modules you depend on in your package.json and use them as intended in Node (with require('your-dependent-module-name')). After you publish your module and someone runs npm install --save your-module, npm will also install your dependencies into [usrDir]/node_modules/[your-module]/node_modules, where your module will find what it needs.
That being said: yes, you could reference __dirname + '/node_modules/[something]', but that wouldn't be smart, as it makes assumptions about how the user is using your module. At the same time, Node and npm solve this problem for you, so there is no reason to want to do this.
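As a minimal illustration of that setup (version number hypothetical; chalk is the dependency the question's code already uses):
{
  "name": "ctgen",
  "version": "1.0.0",
  "dependencies": {
    "chalk": "^1.1.3"
  }
}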

Creating multiple files from Vinyl stream with Through2

I've been trying to figure this out by myself, but have had no success yet. I don't even know how to start researching this (though I've tried some Google searches already, to no avail), so I decided to ask this question here.
Is it possible to return multiple Vinyl files from a Through2 Object Stream?
My use case is this: I receive an HTML file via stream. I want to isolate two different sections of the file (using jQuery) and return them in two separate HTML files. I can do it with a single section (and a single resulting HTML file), but I have absolutely no idea how I would generate two different files.
Can anyone give me a hand here?
Thanks in advance.
The basic approach is something like this:
1. Create as many output files from your input file as you need using the clone() function.
2. Modify the .contents property of each file depending on what you want to do. Don't forget that this is a Buffer, not a String.
3. Modify the .path property of each file so your files don't overwrite each other. This is an absolute path, so use something like path.parse() and path.join() to make things easier.
4. Call this.push() from within the through2 transform function for every file you have created.
Here's a quick example that splits a file test.txt into two equally large files test1.txt and test2.txt:
var gulp = require('gulp');
var through = require('through2').obj;
var path = require('path');

gulp.task('default', function () {
  return gulp.src('test.txt')
    .pipe(through(function (file, enc, cb) {
      var c = file.contents.toString();
      var f = path.parse(file.path);

      // Clone the incoming file once per output file
      var file1 = file.clone();
      var file2 = file.clone();

      // Contents must be Buffers, not strings
      // (Buffer.from replaces the deprecated new Buffer())
      file1.contents = Buffer.from(c.substring(0, c.length / 2));
      file2.contents = Buffer.from(c.substring(c.length / 2));

      // Give each clone a distinct path so they don't overwrite each other
      file1.path = path.join(f.dir, f.name + '1' + f.ext);
      file2.path = path.join(f.dir, f.name + '2' + f.ext);

      // Push both files downstream instead of calling cb(null, file)
      this.push(file1);
      this.push(file2);
      cb();
    }))
    .pipe(gulp.dest('out'));
});

Obfuscate RequireJS module name [duplicate]

This question already has an answer here: Replace module ids with fake names (1 answer). Closed 7 years ago.
My project layout is:
-bin\
-out.closeure.js // (compiled from out.js using the closure compiler)
-out.js // this is the requirejs output file, no minification done)
-src\
-lib\
-library.js
-main.js
-somefile.js
Now when I use RequireJS to combine my project to a single file, is there a way to mangle the names of the module? For example in main.js instead of:
require(['somefile'], function(SomeFile){
  //blah
});
I'll have
require(['a6fa7'], function(SomeFile){
  //blah
});
Since I am using the closure compiler to obfuscate everything, the only thing not being mangled is the module names, and I want to mangle those as well.
I looked at the https://github.com/stevensacks/grunt-requirejs-obfuscate plugin but it's not working and I'm not sure if that plugin does what I want.
Edit 1: I would encourage you to use AMDclean instead; there may be no reason to keep the footprint AMD produces in an optimized build. If you want to keep it, the following is the workaround I made:
Use the onBuildWrite and onBuildRead as the documentation states:
onBuildWrite A function that will be called for every write to an optimized bundle of modules. This allows transforms of the content before serialization.
"replaceModuleNames": [],
//executed on every file read
onBuildRead: function(moduleName, path, contents){
this["replaceModuleNames"].push({
"name": moduleName,
"with": makeid()
});
//Always return a value.
return contents;
},
onBuildWrite: function (moduleName, path, contents) {
console.log("moduleName: " + moduleName + " ,path: " + path + " ,contents: ... ");
var text = contents;
RegExp.escape = function(text) {
return text.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, "\\$&");
};
this["replaceModuleNames"].forEach(function(element){
var regE = new RegExp('(?:define\\(|require\\().*(?:"|\')('+RegExp.escape(element["name"])+')(?:"|\')');
text = text.replace(regE, function(a, b){
return a.replace(b, element["with"]);
});
console.log("moduleName: "+moduleName+ " replaceWith: " + element["with"] + " result: " + text);
});
//Always return a value.
return text;
}
Place this in your build.js. You have to implement the makeid() method yourself (a possible sketch follows); I tested this, but it might fail, so don't rely on it for production.
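For completeness, a possible makeid() implementation (a sketch, not part of the original answer):
// Generate a short random identifier to stand in for a module name.
// Purely random, so with very many modules you should check the
// generated ids for collisions before using them.
function makeid() {
  var chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  var id = '';
  for (var i = 0; i < 8; i++) {
    id += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return id;
}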
Edit 2: text.replace(element["name"], element["with"]) could produce inconsistent code, so it has been replaced with regexp groups. I am not experienced with regex, so this is open to improvements.

Meteor/Node writeFile crashes server

I have the following code:
Meteor.methods({
  saveFile: function (blob, name, path, encoding) {
    var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
        name = cleanName(name || 'file'), encoding = encoding || 'binary',
        chroot = Meteor.chroot || 'public';

    // Clean up the path. Remove any initial and final '/' -we prefix them-,
    // any sort of attempt to go to the parent directory '..' and any empty directories in
    // between '/////' - which may happen after removing '..'
    path = chroot + (path ? '/' + path + '/' : '/');

    // TODO Add file existence checks, etc...
    fs.writeFile(path + name, blob, encoding, function (err) {
      if (err) {
        throw (new Meteor.Error(500, 'Failed to save file.', err));
      } else {
        console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
      }
    });

    function cleanPath(str) {
      if (str) {
        return str.replace(/\.\./g, '').replace(/\/+/g, '').
          replace(/^\/+/, '').replace(/\/+$/, '');
      }
    }

    function cleanName(str) {
      return str.replace(/\.\./g, '').replace(/\//g, '');
    }
  }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and it saves the file, but it repeats the call several times, and each time it causes Meteor to reset (using Windows version 0.5.4). The F12 console ends up filling with 503 errors, and the Meteor console loops over the startup code each time a 503 happens, repeating the console logs in the saveFile function.
Furthermore, in the target directory, the image thumbnail keeps displaying, then shows as broken, then as a valid thumbnail again, as if fs were writing it multiple times.
Here is the code that calls the function:
"click .savePhoto":function(e, template){
e.preventDefault();
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var id = e.srcElement.id;
var item = Session.get("employeeItem");
var file = template.find('input[name='+id+']').files[0];
// $(template).append("Loading...");
var dataURL = '/.bgimages/'+file.name;
Meteor.saveFile(file, file.name, "/.bgimages/", function(){
if(id=="goodPhoto"){
EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
}else{
EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
}
// Update an image on the page with the data
$(template.find('img.'+id)).delay(1000).attr('src', dataURL);
});
},
What's causing the server to reset?
My guess would be that since Meteor has a built-in automatic directory scan that watches for file changes (in order to automatically relaunch the application on the newest code base), the file you are creating is what is causing the server reset.
Meteor doesn't scan directories beginning with a dot ("hidden" directories such as .git, for example), so you could use this behaviour to your advantage by writing your files to a .directory of your own.
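Applied to the method above, that just means pointing the base directory at a dot-folder (the name .uploads here is hypothetical):
// Meteor's file watcher skips dot-directories, so files written
// here will not trigger an application restart
chroot = Meteor.chroot || '.uploads'; // instead of 'public'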
You should also consider using writeFileSync, since Meteor methods are intended to run synchronously (inside Node fibers), contrary to the usual Node style of asynchronous calls. In this code it's no big deal, but, for example, you couldn't use any Meteor mechanics inside the writeFile callback.
asynchronousCall(function (error, result) {
  if (error) {
    // handle error
  } else {
    // do something with result
    Collection.update(id, result); // error! Meteor code must run inside a fiber
  }
});

var result = synchronousCall();
Collection.update(id, result); // good to go!
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/future, but that's beyond the scope of this question; I recommend this EventedMind episode on node Futures to understand this specific area.
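For reference, the wrapping pattern looks roughly like this (a sketch, assuming the fibers/future package that Meteor of that era bundled; not from the original answer):
var Future = Npm.require('fibers/future');

// Wrap the async fs.writeFile in a Future so the Meteor method
// blocks inside its fiber until the write has finished
var fut = new Future();
fs.writeFile(path + name, blob, encoding, function (err) {
  if (err) {
    fut.throw(err);
  } else {
    fut.return(true);
  }
});
fut.wait(); // yields the fiber until return() or throw() is called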
