I have a Node server running, and I am trying to figure out how to map request paths to files on my server (and how to respond to GET requests for these resources) so that I basically have a static file server, but one that I can control based on request parameters (POST, GET, etc.). Right now my file structure is set up as follows (dir_ means directory):
Main Folder:
    server.js
    dir_content:
        home.html
        style.css
        dir_uploads:
            dir_finished:
                file1.txt
                file2.txt
To respond to my requests, my code looks like the following:
var http = require('http');
var fs = require('fs');

http.createServer(function(request, response) {
    if (request.method.toLowerCase() == 'get') {
        var filePath = './dir_content' + request.url;
        if (filePath == './dir_content/') {
            filePath = './dir_content/home.html';
        }
        fs.exists(filePath, function (exists) {
            if (exists) {
                fs.readFile(filePath, function (error, content) {
                    if (error) {
                        response.writeHead(500);
                        response.end();
                    } else {
                        // contentType is derived from the file extension (not shown here)
                        response.writeHead(200, {'Content-Type': contentType});
                        response.end(content, 'utf-8');
                    }
                });
            }
        });
    }
}).listen(8080);
This allows me to respond to any GET request for a web page with the correct page (if it exists), but it mangles the path whenever someone requests a resource deeper in the directory tree.
Example: Someone trying to retrieve file1.txt would navigate to localhost:8080/dir_content/dir_uploads/dir_finished/file1.txt
but my code would add the additional ./dir_content to their request, making it look like they were trying to visit:
localhost:8080/dir_content/dir_content/dir_uploads/dir_finished/file1.txt
Is there a simpler way to provide accurate absolute paths to resources in folders WITHOUT external node modules (by setting a sort of base directory somehow)?
At first glance it appears you aren't accounting for the request.url possibly including the dir_content prefix. If that is indeed true, then I would recommend a simple regular expression test to see if it is there:
var filePath = request.url;
if (!/^[\/]?dir_content\//i.test(filePath)) {
    filePath = '/dir_content' + filePath;
}
filePath = '.' + filePath;
Simple fiddle to demonstrate the regular expression:
http://jsfiddle.net/97Q43/
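If you would rather set a true base directory instead of juggling the prefix, here is a minimal sketch (BASE_DIR and resolvePath are hypothetical names; it only assumes the built-in path module) that resolves every request URL against one folder and rejects anything that escapes it:

var path = require('path');

var BASE_DIR = path.join(__dirname, 'dir_content'); // the one folder you serve from

function resolvePath(requestUrl) {
    // Resolve the URL against the base directory
    var filePath = path.join(BASE_DIR, decodeURIComponent(requestUrl));
    // Refuse anything that escapes the base directory (e.g. via '..')
    if (filePath.indexOf(BASE_DIR) !== 0) {
        return null;
    }
    return filePath;
}

With this in place, a request for /dir_uploads/dir_finished/file1.txt always resolves relative to dir_content, so the prefix can never be doubled.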
Bit of context: I am learning Node.js/Express and have a small application that should eventually function as an API. I have a routes directory with a few subdirectories containing files such as Post.js or Users.js, each defining a few routes for Posts, Users, etc.
I have the following bit of code in my index.js, which sits directly in the routes directory:
public readDir(path, app) {
    let dir = path != null ? path : __dirname;
    fs.readdir(dir, (err, elements) => {
        if (err) throw err;
        if (!elements) return;
        elements.forEach(element => {
            if (element === "init.js") return;
            let new_path = x.join(dir, "/", element); // 'x' is the path module
            fs.lstat(new_path, (err, stat) => {
                if (err) throw err;
                if (stat.isDirectory()) {
                    this.readDir(new_path, app);
                } else if (stat.isFile()) {
                    require(new_path)(app);
                }
            });
        });
    });
}
What it does is the following: it reads the routes directory and each of its subdirectories by calling itself recursively, and requires every file it finds (the path module is imported as x, which I should probably change sometime). Fortunately this works: every route is mapped properly and can be reached by making a call with Postman / Insomnia.
My question would be how this could be done better, primarily performance-wise, whilst still keeping the structure of multiple files and/or directories.
I have already seen this answer and this one, and though both seem like great, functional answers, I was wondering which would be the better option.
Any pointers would be great!
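Since the routes are only discovered once at startup, one alternative worth considering is doing the walk synchronously; blocking calls cost nothing at request time. A minimal sketch (loadRoutes is a hypothetical name; it assumes each route file exports a function that takes app, as in the code above):

const fs = require('fs');
const path = require('path');

function loadRoutes(dir, app) {
    fs.readdirSync(dir).forEach(element => {
        if (element === 'init.js') return;
        const fullPath = path.join(dir, element);
        const stat = fs.lstatSync(fullPath);
        if (stat.isDirectory()) {
            loadRoutes(fullPath, app);  // recurse into subdirectories
        } else if (stat.isFile()) {
            require(fullPath)(app);     // register the routes defined in this file
        }
    });
}

Whichever variant you pick, require's module cache means each file is read only once, so after startup the directory walk has no effect on request-handling performance.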
I'm trying to write some code that makes the server unzip a requested file using Node.js (Express)...
var path = require('path');

app.post('/unzip', function(req, res) {
    // Get user information
    var id = req.body.id;
    // Get ZIP information
    var rendering_ready_file_unzip = req.body.filename + '.zip';
    var rendering_ready_file_unzip_nonext = req.body.filename;
    // Extract the zip ('date' is defined elsewhere and not shown here)
    var extract = require('extract-zip');
    var unzip_route = path.join(__dirname, '../unzip/' + id + '/' + date + '/');
    extract(path.join(__dirname, '../upload/' + rendering_ready_file_unzip), {dir: unzip_route}, function (err) {
        if (err) {
            console.log(err);
        }
        res.end();
    });
});
It works... but text in other languages, like Korean, comes out damaged after unzipping. So I want to know about an unzip module that lets me specify the encoding type.
Do you know of one?
The problem may not be with the module. It helps to reduce the troublesome code to a minimum, and in this case that might be the following:
const path = require('path');
const extract = require('extract-zip');
const file_unzip = 'test.zip';
extract(path.join(__dirname, file_unzip), {dir: __dirname}, function (err) {
    if (err) {
        console.log(err);
    }
});
After putting that in index.js and installing extract-zip, a same-directory test case is possible in bash. Echo Korean characters to a file and make sure they are there:
$echo 안녕하세요>test
$cat test
안녕하세요
Zip the file, remove the original and make sure it is gone:
$zip test.zip test
adding: test (stored 0%)
$rm test
$ls test*
test.zip
Run the script and see that the file has been extracted and contains the same characters:
$node index.js
$ls test*
test test.zip
$cat test
안녕하세요
I got the same results with characters from several other languages. So at least in this setup, the module unzips without changing the characters in the inner files. Try running the same tests on your system, and take a good look at what happens prior to unzipping. Problems could lurk in how the files are generated, encoded, zipped, or uploaded. Investigate one step at a time.
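If you want to narrow it down further, checking the raw bytes on disk tells you whether the damage happened before or during extraction. A minimal sketch (assuming the extracted file is named test, as in the session above):

const fs = require('fs');

const buf = fs.readFileSync('test');
console.log(buf);                  // the raw bytes stored in the extracted file
console.log(buf.toString('utf8')); // how those bytes decode as UTF-8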
I wanted to ask if anyone knows of a good way to use Node.js to look in a .scss file, grab all the classes listed, and then put them in either an object or an array.
The thing with this is that you are going to need the Sass folder to be available to your server, which is not a recommended practice: you normally publish only the compiled CSS file, so there is no need to also publish the dev assets.
However, if you do so, you will need to read the .scss file using Node and then use a regex to match the .class strings inside the file.
This will handle reading the file:
var fs = require('fs');

function readSassFile() {
    fs.readFile('./public/scss/components/_styles.scss', 'utf8', function (err, data) {
        if (err) {
            console.log(err);
            return;
        }
        regexArray(data);
    });
}
As you can see, if readFile retrieves the file successfully, I call a regexArray() function and pass it the data from the loaded file.
In the regexArray function you define a regex to evaluate the contents of the loaded file.
function regexArray(data) {
    var re = /\.\S*/g;
    var m;
    var classArray = [];
    while ((m = re.exec(data)) !== null) {
        if (m.index === re.lastIndex) {
            re.lastIndex++;
        }
        classArray.push(m[0]);
    }
    console.log(classArray);
}
The variable re holds the regular expression, which matches a . followed by a run of non-whitespace characters; this will match your CSS class names.
Then we loop while m is not null and store each match in the classArray array, which you can then log to see the results.
I made the test with the path that is in the fs.readFile call; you can change it to your own path.
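As a quick illustration (the SCSS string here is hypothetical, just to show the output), calling the function directly gives:

// Hypothetical SCSS content for demonstration
regexArray('.btn { color: red; }\n.btn--large { padding: 2em; }');
// logs: [ '.btn', '.btn--large' ]

Keep in mind the pattern is deliberately simple: anything that starts with a dot and runs to the next whitespace is captured, so values like .5em would be picked up as well, and you may want to tighten the regex for real stylesheets.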
I have the following code:
Meteor.methods({
    saveFile: function(blob, name, path, encoding) {
        var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
            name = cleanName(name || 'file'), encoding = encoding || 'binary',
            chroot = Meteor.chroot || 'public';
        // Clean up the path. Remove any initial and final '/' -we prefix them-,
        // any sort of attempt to go to the parent directory '..' and any empty directories in
        // between '/////' - which may happen after removing '..'
        path = chroot + (path ? '/' + path + '/' : '/');

        // TODO Add file existance checks, etc...
        fs.writeFile(path + name, blob, encoding, function(err) {
            if (err) {
                throw (new Meteor.Error(500, 'Failed to save file.', err));
            } else {
                console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
            }
        });

        function cleanPath(str) {
            if (str) {
                return str.replace(/\.\./g,'').replace(/\/+/g,'').
                    replace(/^\/+/,'').replace(/\/+$/,'');
            }
        }

        function cleanName(str) {
            return str.replace(/\.\./g,'').replace(/\//g,'');
        }
    }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and it saves the file; however, it repeats the call several times, and each time it causes Meteor to reset (I'm on the Windows version, 0.5.4). The F12 console fills up with repeated 503 errors, and the Meteor console loops over the startup code each time the 503 happens, repeating the console logs in the saveFile function.
Furthermore, in the target directory the image thumbnail keeps alternating between a valid thumbnail and a broken one, as if the file system were writing it multiple times.
Here is the code that calls the function:
"click .savePhoto":function(e, template){
e.preventDefault();
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var id = e.srcElement.id;
var item = Session.get("employeeItem");
var file = template.find('input[name='+id+']').files[0];
// $(template).append("Loading...");
var dataURL = '/.bgimages/'+file.name;
Meteor.saveFile(file, file.name, "/.bgimages/", function(){
if(id=="goodPhoto"){
EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
}else{
EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
}
// Update an image on the page with the data
$(template.find('img.'+id)).delay(1000).attr('src', dataURL);
});
},
What's causing the server to reset?
My guess would be that, since Meteor has built-in automatic directory scanning for file changes (so it can relaunch the application on the newest code base), the file you are creating is itself triggering the server reset.
Meteor doesn't scan directories beginning with a dot (so-called "hidden" directories), such as .git for example, so you could use this behaviour to your advantage by setting the path of your files to a .directory of your own.
You should also consider using writeFileSync, insofar as Meteor methods are intended to run synchronously (inside Node fibers), contrary to the usual Node style of asynchronous calls. In this code it's no big deal, but, for example, you couldn't use any Meteor mechanics inside the writeFile callback.
asynchronousCall(function(error, result) {
    if (error) {
        // handle error
    } else {
        // do something with result
        Collection.update(id, result); // error ! Meteor code must run inside fiber
    }
});

var result = synchronousCall();
Collection.update(id, result); // good to go !
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/futures, but that's beyond the scope of this question: I recommend watching this EventedMind episode on node futures to understand this specific area.
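Putting the two suggestions together, a minimal sketch (the .uploads directory name is hypothetical and is assumed to already exist; cleanName is copied from the question):

Meteor.methods({
    saveFile: function(blob, name, path, encoding) {
        var fs = __meteor_bootstrap__.require('fs');
        encoding = encoding || 'binary';
        // A dot-directory is ignored by Meteor's file watcher,
        // so writing into it does not trigger a server restart
        var dir = '.uploads';
        var fileName = cleanName(name || 'file');
        // writeFileSync keeps the method synchronous, so any Meteor
        // calls made after this line still run inside the fiber
        fs.writeFileSync(dir + '/' + fileName, blob, encoding);
        console.log('The file ' + fileName + ' was saved to ' + dir);

        function cleanName(str) {
            return str.replace(/\.\./g,'').replace(/\//g,'');
        }
    }
});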
I'm a beginner in Node.js, and was having trouble with this piece of code.
var fs = require('fs');

Framework.Router = function() {
    this.run = function(req, res) {
        fs.exists(global.info.controller_file, function(exists) {
            if (exists) {
                // Here's the problem
                res.writeHead(200, {'Content-Type': 'text/html'});
                var cname = App.ucfirst(global.info.controller) + 'Controller';
                var c = require(global.info.controller_file);
                var c = new App[cname]();
                var action = global.info.action;
                c[action].apply(global.info.action, global.info.params);
                res.end();
            } else {
                App.notFound();
                return false;
            }
        });
    };
};
The problem lies in the part after checking whether global.info.controller_file exists; I can't seem to get the code to work properly inside the if (exists) { ... } branch.
I tried logging the values of all the variables in that section, and they have their expected values; however, the line c[action].apply(global.info.action, global.info.params); is not running as expected.
It is supposed to call a function in the controller_file that does a simple res.write('hello world');. I wasn't having this problem before I started checking for the file with fs.exists; everything inside the if statement worked perfectly fine before this check.
Why is the code not running as expected? Why does the request just time out?
Does it have something to do with the whole synchronous vs asynchronous thing? (Sorry, I'm a complete beginner)
Thank you
Like others have commented, I would suggest you rewrite your code to bring it more in line with Node.js design patterns, then see if your problem still exists. In the meantime, here's something that may help:
The advice about not using require dynamically at "run time" should be heeded, and calling fs.exists() on every request is tremendously wasteful. However, say you want to load all *.js files in a directory (perhaps a "controllers" directory). This is best accomplished using an index.js file.
For example, save the following as app/controllers/index.js
var fs = require('fs');
var files = fs.readdirSync(__dirname);
var dotJs = /\.js$/;
for (var i in files) {
    if (files[i] !== 'index.js' && dotJs.test(files[i]))
        exports[files[i].replace(dotJs, '')] = require('./' + files[i]);
}
Then, at the start of app/router.js, add:
var controllers = require('./controllers');
Now you can access the app/controllers/test.js module by using controllers.test. So, instead of:
fs.exists(controllerFile, function (exists) {
    if (exists) {
        ...
    }
});
simply:
if (controllers[controllerName]) {
    ...
}
This way you can retain the dynamic functionality you desire without unnecessary disk IO.
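To tie it back into the router, a short sketch of the dispatch (controllerName and actionName are hypothetical names standing in for whatever you parse out of global.info):

var controller = controllers[controllerName];
if (controller && typeof controller[actionName] === 'function') {
    // call the action with the controller module as 'this'
    controller[actionName].apply(controller, global.info.params);
    res.end();
} else {
    App.notFound();
}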