I am working on a Node.js-powered system that runs within a local network, and I need to check whether files exist on a different local drive of the computer the Node.js app runs on.
I have tried using the fs.exists function, but that doesn't work.
Is this possible? I am guessing there are security risks involved, but because the system runs 100% on a local network, is there any workaround to achieve this?
The reason I need to check that the files exist is that the file name holds the version number, and I need to get the latest version (highest number).
This is what I tried:
// the example looks for example#1.wav in the V:\public folder
var fs = require("fs");
var filename = "example";
var versionCount = 1;
if (fs.existsSync("V:\public\"+filename+"#"+versionCount+".wav")) {
    console.log("V:\public\"+filename+"#"+versionCount+".wav Found!");
} else {
    console.log("V:\public\"+filename+"#"+versionCount+".wav does not exist");
}
I am running Node.js on Windows.
Any suggestions would be greatly appreciated! TIA!
Posting an answer in case anyone runs into the same problem in the future.
I resolved this problem by using forward slashes (/) instead of backslashes (\). The backslash is an escape character in JavaScript string literals, so "V:\public\" does not produce the path you might expect; escaping the backslashes ("V:\\public\\") works as well.
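For illustration, here is a minimal sketch of the working version scan, assuming the same V:\public layout as above (the loop that walks the version numbers is my addition, not part of the original post):
var fs = require("fs");

var filename = "example";
var versionCount = 0;

// forward slashes avoid the escape-character problem on Windows;
// "V:\\public\\" (escaped backslashes) would work too
while (fs.existsSync("V:/public/" + filename + "#" + (versionCount + 1) + ".wav")) {
    versionCount++;
}

if (versionCount > 0) {
    console.log("latest version of " + filename + " is #" + versionCount);
} else {
    console.log("no versions of " + filename + " found");
}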
Problem:
I am trying to get an image out of an Azure file share for manipulation. I need to read the file as a Drawing.Image. I cannot create a valid FileInfo object or Image using the UNC path (which I need to do in order to use it over IIS).
Current Setup:
Attach a virtual directory called Photos in the IIS website, pointing to the UNC path of the Azure file share (e.g. \\myshare.file.core.windows.net\sharename\pathtoimages)
This works as http://example.com/photos/img.jpg so I know it is not a permissions or authentication issue.
For some reason, though, I cannot get a reference to the file.
var imgpath = Path.Combine(Server.MapPath("~/Photos"), "img.jpg");
// resolves as \\myshare.file.core.windows.net\sharename\pathtoimages\img.jpg
var fi = new FileInfo(imgpath);
if (fi.Exists) // this returns false 100% of the time
{
    var img = System.Drawing.Image.FromFile(fi.FullName);
}
The problem is that the file is never found to exist, even though I can take that path, put it in an Explorer window, and get img.jpg back 100% of the time.
Does anyone have any idea why this would not be working?
Do I need to be using a CloudFileShare object just to read a file I know is there?
It turns out the issue is that I needed to wrap my code in an impersonation of the Azure file share user ID, since the virtual directory is not really in play at all at this point.
using (new impersonation("UserName","azure","azure pass"))
{
//my IO.File code
}
I used this guy's impersonation script, found here.
Can you explain why DirectoryInfo.GetFiles produces this IOException?
I'm working on an app that uses the file system in the server's root directory instead of a database. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored at the following path: \Data\Notes\MyUsername\Title.txt
When I'm testing this on localhost through IIS Express everything works fine and I can easily go step by step there.
However, once I publish the app to Azure, the folder structure is still there (I made a test Controller that uses Directory.GetFiles() and .GetDirectories() to simulate folder browsing, so I'm sure the files are there), but the file simply doesn't get loaded.
The loading method that's being called:
public T Load<T>(string filePath) where T : new()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader(filePath);
        var RawDB = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(RawDB);
    }
    catch
    {
        return default(T);
    }
    finally
    {
        if (reader != null)
            reader.Dispose();
    }
}
Since I can't normally debug the app on Azure, I tried to dump as much info as I can through ViewData, and even there everything looks okay and the paths match, but the deserialized object is still null. This happens only when trying to open an existing note WITHOUT creating a new one first (more on that later).
Additionally, like I said, those new notes get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure, this works normally and if I were to delete one manually it'd be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all Views display only that new note regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // works fine, reads the right directory on local and Azure
};
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If a new note is not created but an existing note is opened, the correct note gets loaded (based on the URL param) on localhost; on Azure it breaks and loads a default Note object (not null, just the default-constructor data, since that's required by JsonConvert).
If a new note is created, you'll see it on localhost and you'll still be able to open all the other notes; on Azure you will see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure requests being handled differently, so maybe the controller pushes the View before the model is initialized completely, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't exist on the server is even more absurd, and I have no explanation for that.
Additionally, this issue is not linked to a session. I logged in through my phone and it showed the fake note there as well, right away.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, a simple lowest-tier App Service and a 5 DTU SQL server, and 99% of the rest is locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.
I have been searching for why this has been happening. I recently reinstalled Ubuntu 16.04, copied a Node + Express project to a flash drive, then pasted it back to the exact same corresponding location (~/Programming/project/). Since then, everything else works as I would expect, but Mongo is not responding when I make requests to it through Mongoose. I have no reason to believe Mongoose is the cause of the failure: I have a couple of routes that I know should work, and the exact same code works on my friend's machine (same Ubuntu, same version and everything). I have uninstalled and reinstalled everything (including Ubuntu) multiple times.
The only thing that works is a call to find a document by a specific ID, and that returns if and only if the ID does not exist. Mongo won't return all the records or anything else; the website just spins endlessly (locally hosted on my machine). However, using Mongo in the terminal works fine. I can query and get results as if everything were normal. Has this happened to anyone else, or does anyone have any ideas? I can include some code.
This does not work:
Greeting.find({}, function(err, greetings) {
    res.status(200).json(greetings);
});
This does work:
Greeting.findById(req.params.id, function(err, greeting) {
    if (err)
        return res.status(404).json({"error": "Greeting with that ID does not exist"});
    res.status(200).json(greeting);
});
EDIT:
Sorry, I am new to Stack Overflow, so I am still getting the hang of what should be added or not...
mongoose.connect(database.url);
mongoose.connection.on('error', function() {
    console.info("Could not run mongodb, did you forget to run mongod?");
});
The database.url is what it needs to be, and the connection is open as far as I can tell...
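Not from the original post, but a quick diagnostic sketch that might help verify that: log Mongoose's connection state, since a query that spins forever usually means the driver never actually reached the server (the 'connected' event and readyState are standard Mongoose APIs; the 5-second timeout is an arbitrary choice):
mongoose.connection.on('connected', function() {
    console.log('Mongo connected, readyState =', mongoose.connection.readyState);
});

// readyState: 0 = disconnected, 1 = connected, 2 = connecting, 3 = disconnecting
setTimeout(function() {
    if (mongoose.connection.readyState !== 1) {
        console.error('Still not connected after 5s, readyState =',
                      mongoose.connection.readyState);
    }
}, 5000);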
I should also mention that while installing Ubuntu, I wiped my previous dual boot in favor of just having Ubuntu, and I opted in for the hard-drive encryption... Could that be preventing Mongo from working properly? If so, how would I fix that?
The issue was in fact the encrypted hard drive. I reinstalled Ubuntu and that fixed it. I'm still not sure how to make it work with an encrypted disk.
I am working with Electron and Node.js. We have developed an application that works fine on Windows, and as a requirement I had to package it for macOS. I packaged the application using electron-packager; the packaging process completes and the package is generated. Double-clicking it throws a permission-denied error for mkdir, as I am using node-localstorage to maintain some settings on the user's local machine. Apparently macOS doesn't allow node-localstorage to create a folder in the root of the application. Any help in this matter would be great. Thanks!
First off, is the code in question in the main process or in a renderer process? If it is the latter, you don't need to use 'node-localstorage', because you can use the renderer's native LocalStorage. If you are in the main process, then you need to provide your own storage strategy so using 'node-localstorage' is a viable option.
In any case, you need to carefully consider where to store the data. For starters, let's look at where Electron's renderer processes store their LocalStorage data: this differs based on the OS, but you can get and set the paths using the app module. The path in question is userData, which on OS X defaults to ~/Library/Application Support/<App Name>. Electron uses that folder to persist cookies, caches, LocalStorage etc., so I would suggest using that folder as well. (Otherwise, refer to the XDG defaults for good defaults.)
What your example above was trying to do is store your 'errorLogDb' in the current working directory, which might depend on your OS, where your app is installed, how you executed it, etc.
Finally, it's a good idea to differentiate between your 'production' app and your app during development and testing, because you might not want to use the same storage folders for every environment. In any case, just writing to './errorLogDb' is likely to cause lots of headaches, so I'd be thankful for the permission-denied error.
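For instance, a minimal sketch of that idea (the folder names and the use of app.isPackaged, available in recent Electron versions, are my assumptions, not part of the original answer):
const { app } = require('electron');
const path = require('path');

// keep development data separate from production data;
// userData is e.g. ~/Library/Application Support/<App Name> on macOS
function getStorageDir() {
    const base = app.getPath('userData');
    return app.isPackaged
        ? path.join(base, 'storage')        // packaged/production build
        : path.join(base, 'storage-dev');   // running from source
}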
This strategy worked for me:
const { LocalStorage } = require('node-localstorage');
let ls;

mb.on('ready', () => {
    let prefsPath = mb.app.getPath('userData') + '/prefs';
    ls = new LocalStorage(prefsPath);
    loadPrefs();
});

mb.on('after-create-window', () => { /* ls... */ });

exports.togglePref = () => { /* ls... */ };
Is there a service that basically creates a one-time download link for a file, preferably something I can use from Node.js?
I've done some research on FilePicker and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with Node.js, but I'm using Meteor at the same time, so many Node things will probably conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// create a link
downloads.insert({url: "/mydownload.zip", downloaded: false});

Meteor.Router.add('/file/:id', 'GET', function(id) {
    var download = downloads.findOne(id);
    if (download) {
        if (download.downloaded) {
            this.response.send("You've already downloaded me");
        } else {
            // flag it as downloaded so the link only works once, then redirect
            // (I guess you could also stream the file for an extra layer of surety)
            downloads.update(id, {$set: {downloaded: true}});
            this.response.redirect(download.url);
        }
    }
});
On the client you can use /file/{{_id}}, with the _id of the file from downloads, as the person's download link.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using its security policies to restrict downloading the file to a specific time interval, as sketched below.
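For reference, a sketch of how such a time-limited policy is signed (the parameter names follow Filepicker/Filestack's policy scheme as I understand it; treat the details, and the placeholder secret, as assumptions and check their docs):
var crypto = require('crypto');

// hypothetical app secret from your Filepicker/Filestack account
var secret = 'YOUR_APP_SECRET';

// policy: this link is only valid for the next 60 seconds
var policyJson = JSON.stringify({ expiry: Math.floor(Date.now() / 1000) + 60 });
var policy = Buffer.from(policyJson).toString('base64');
var signature = crypto.createHmac('sha256', secret).update(policy).digest('hex');

// append ?policy=<policy>&signature=<signature> to the file URL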
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require( 'fs' );
The fs variable should be declared on the server only. The fs package is used by Meteor and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete the file after a certain amount of time by using Meteor.setTimeout, as in the sketch below.
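A minimal sketch of that approach (the /tmp location, the .zip extension, and the fileContents variable are placeholders of mine; Meteor.uuid() and Meteor.setTimeout are from the old pre-0.9 Meteor API this answer assumes):
var fileId = Meteor.uuid();                  // unique, hard-to-guess name
var filePath = '/tmp/' + fileId + '.zip';    // hypothetical scratch location

fs.writeFileSync(filePath, fileContents);    // fileContents assumed to exist

// expire the link: delete the file after 10 minutes
Meteor.setTimeout(function() {
    if (fs.existsSync(filePath)) {
        fs.unlinkSync(filePath);
    }
}, 10 * 60 * 1000);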
The question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: it offers a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
This, in combination with something like $.download(), makes for a seamless experience, which in turn prevents unauthorized downloads.
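As an illustration, here is a minimal sketch of the serve-once-then-delete idea (the use of Express, the /tmp/one-time directory, and the token parameter are my assumptions for the example, not part of the original answer):
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const TMP_DIR = '/tmp/one-time'; // hypothetical scratch directory

app.get('/file/:token', (req, res) => {
    // basename() prevents path traversal via the token
    const filePath = path.join(TMP_DIR, path.basename(req.params.token));
    if (!fs.existsSync(filePath)) {
        return res.status(410).send('This download link has expired.');
    }
    res.download(filePath, (err) => {
        // delete after the first successful send so the link cannot be reused
        if (!err) fs.unlink(filePath, () => {});
    });
});

app.listen(3000);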