ssh2-sftp-client get() request giving 'denied permission - error' - node.js

I am using this code in my Electron app to connect to an SFTP server where I need to collect some data. I have no problem listing the files in the /out folder, but it fails to get the SFTP file with a 'permission denied' error. Ideally I would like to be able to get() the file and access the text data within it directly in the function, without saving it to a local file first.
const fs = require('fs');
const path = require('path');
let Client = require('ssh2-sftp-client');
let sftp = new Client();

var root = '/out';
var today = new Date();
var mon = ((today.getMonth() + 1) < 10) ? "0" + (today.getMonth() + 1) : (today.getMonth() + 1);
var date = (today.getDate() < 10) ? "0" + today.getDate() : today.getDate();
var fileDate = mon + date;

sftp.connect({
  host: '<server-address>',
  port: 2222,
  username: 'XXXXXXXX',
  password: 'xxxxxxxx',
  privateKey: fs.readFileSync(path.join(__dirname, '../rsa/<file-name-here>.pem'))
})
  .then(() => {
    return sftp.list(root, 'SN5M' + fileDate);
  })
  .then((fileInfo) => {
    if (fileInfo) {
      var filePath = root + '/' + fileInfo[fileInfo.length - 1].name;
      return sftp.get(filePath)
        .then((file) => {
          console.log(file);
          event.returnValue = file;
          sftp.end();
        })
        .catch((err) => {
          console.log('File get error', err);
          event.returnValue = err;
          sftp.end();
        });
    }
  })
  .catch((err) => {
    console.log('File info error', err);
    event.returnValue = err;
    sftp.end();
  });

Try this and see if it works:
get() returns a String, Stream or Buffer. If you pass a destination write stream as the second argument, the remote file is written into it:
let dst = fs.createWriteStream('/local/file/path/data.txt');
sftp.get(filePath, dst);
Refer to https://www.npmjs.com/package/ssh2-sftp-client#orga0dfcd5

Looking at your code, you have two problems.
If you call get() with only 1 argument, it returns a buffer, not a file. To get the file, just do
client.get(sourceFilePath, localFilePath)
and the file will be saved locally as localFilePath. Both arguments are strings and need to be full paths, i.e. they must include the filename, not just the directory. The filename in the second argument can be different from the first. However, if all you want is to retrieve the file, you are better off using fastGet() rather than get(). The get() method is good for when you want to do something with the data in code, e.g. work with a buffer or pipe it through a write stream for processing. The fastGet() method is faster than get() because it does the transfer using concurrent requests, but it does not permit use of buffers or streams for further processing (see the sketch below).
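For illustration, here is a minimal sketch of both approaches. It assumes sftp is an already-connected ssh2-sftp-client instance and that the two paths are hypothetical full paths (directory plus filename), so adjust them for your server:
async function fetchReport(sftp) {
  const remotePath = '/out/SN5M0101.csv'; // hypothetical remote file
  const localPath = '/tmp/SN5M0101.csv';  // hypothetical local destination
  // Option 1: fastGet() just downloads the remote file to a local path.
  await sftp.fastGet(remotePath, localPath);
  // Option 2: get() with a single argument resolves to a Buffer, which is
  // what you want if you need the text in code without saving a file first.
  const data = await sftp.get(remotePath);
  return data.toString('utf8');
}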
The error message you are seeing is either due to the way you are calling get(), or it is an indication that you don't have permission to read the file you're trying to access (as the user you're connected as). The easiest way to check this is to use the OpenSSH sftp program (available on Linux, macOS and Windows) with the key you're using (pass it with the -i switch) and try to download the file. If that also fails with a permission error, then you know it is a permissions problem and not a problem with your code or the ssh2-sftp-client module.
EDIT: I just noticed you are also using both a password and a key file. You don't need both; either one will work. I tend to use a key file when possible as it avoids having to store a password somewhere. Make sure not to add a passphrase to your key. Alternatively, you can use something like the dotenv module and keep your credentials and other config in a .env file which you do not check into version control.
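As a rough illustration of the dotenv approach (the variable names and the .env contents below are made up for the example, not anything your server requires):
// .env (kept out of version control):
//   SFTP_HOST=<server-address>
//   SFTP_PORT=2222
//   SFTP_USER=XXXXXXXX
//   SFTP_KEY_PATH=../rsa/<file-name-here>.pem
require('dotenv').config();
const fs = require('fs');
const path = require('path');
const Client = require('ssh2-sftp-client');
const sftp = new Client();
sftp.connect({
  host: process.env.SFTP_HOST,
  port: Number(process.env.SFTP_PORT),
  username: process.env.SFTP_USER,
  privateKey: fs.readFileSync(path.resolve(__dirname, process.env.SFTP_KEY_PATH))
});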

Related

Liquibase lib file has a console.log which prints the DB credentials (see line 47 of the attached liquibase lib file). How do I stop it?

const LiquibaseTS = require('node-liquibase').Liquibase;
const POSTGRESQL_DEFAULT_CONFIG = require('node-liquibase').POSTGRESQL_DEFAULT_CONFIG;
const myConfig = {
...POSTGRESQL_DEFAULT_CONFIG,
changeLogFile: './changelog.xml',
url: 'jdbc:postgresql://localhost:5432/node_liquibase_testing',
username: 'postgres',
password: 'postgres123',
logLevel: 'info'
}
const instTs = new LiquibaseTS(myConfig);
instTs.update();
I have the above code in my index.js file.
I get the console output below when the line instTs.update(); gets executed:
Running /Users/path1/path2/sampleapp/node_modules/node-liquibase/dist/liquibase/liquibase --changeLogFile="./changelog.xml" --url="jdbc:postgresql://localhost" --username="Postgres" --password="postgres123" --classpath="/Users/path1/path2/sampleapp/node_modules/node-liquibase/dist/drivers/postgresql-42.2.8.jar" --logLevel="info" update ...
When I debugged the node-liquibase library, I found that this console print comes from line 47 of the file /Users/path1/path2/sampleapp/node_modules/node-liquibase/dist/node-liquibase.cjs.development.js.
The code snippet in the node-liquibase.cjs.development.js file is:
CommandHandler.spawnChildProcess = function spawnChildProcess(commandString) {
  console.log("Running " + commandString + "...");
  return new Promise(function (resolve, reject) {
    child_process.exec(commandString, function (error, stdout, stderr) {
      console.log('\n', stdout);
      if (error) {
        console.error('\n', stderr); // error.stderr = stderr;
        return reject(error);
      }
      resolve(stdout);
    });
  });
};
return CommandHandler;
}();
As per that file, line 47 is the very first line of the function:
console.log("Running " + commandString + "...");
This code prints the DB details/credentials to the console. I also noticed that liquibase prints the DB details/credentials to the console along with the error in the changeset error scenario.
What I need:
I want to stop the DB credentials from being printed to the console during a normal run.
In the error scenario, I want only the liquibase error printed to the console; it should not print the error together with the DB credentials.
The instTs.update(); call in index.js does not throw any exception in the error scenario. How can we detect/distinguish the error scenario, e.g. a changeset that inserts a value into a table that doesn't exist?
Any help greatly appreciated.
Since it prints out the configuration, your only option is probably to move the sensitive information somewhere else that Liquibase can read configuration from.
If you make a liquibase.properties file with your password in it, I don't think that will show up in the logs. I've not used the node-liquibase frontend much, so I don't know if you need to explicitly add a liquibasePropertiesFile setting to your config.
Alternatively, if you use a newer version of liquibase (probably newer than what node-liquibase ships with by default, but there is a way to override that with the liquibase argument), you can set the password and other fields with environment variables as well, and those will not be logged.
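As a rough sketch of what such a liquibase.properties file could look like (keys shown in the usual Liquibase key=value format; whether node-liquibase picks the file up automatically or needs to be pointed at it is something I have not verified):
# liquibase.properties - kept out of version control
url=jdbc:postgresql://localhost:5432/node_liquibase_testing
username=postgres
password=postgres123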

fs.createReadStream getting a different path than what's being passed in

I'm using NodeJS on a VM. One part of it serves up pages, and another part is an API. I've run into a problem where fs.createReadStream attempts to access a different path than what is being passed into the function. I made a small test server to see if something else in the server was affecting path usage, for whatever reason, but it's happening on my test server as well. First, here's the code:
const fs = require('fs');
const path = require('path');
const csv = require('csv-parser');
const readCSV = (filename) => {
  console.log('READ CSV GOT ' + filename); // show me what you got
  return new Promise((resolve, reject) => {
    const arr = [];
    fs.createReadStream(filename)
      .pipe(csv())
      .on('data', row => {
        arr.push(row);
      })
      .on('error', err => {
        console.log(err);
      })
      .on('end', () => {
        resolve(arr);
      });
  });
};
// tried this:
// const dir = path.relative(
// path.join('path', 'to', 'this', 'file'),
// path.join('path', 'to', 'CONTENT.csv')
// );
// tried a literal relative path:
// const dir = '../data/CONTENT.csv';
// tried a literal absolute path:
// const dir = '/repo/directory/server/data/CONTENT.csv';
// tried an absolute path:
const dir = path.join(__dirname, 'data', 'CONTENT.csv');
const content = readCSV(dir)
.then(result => {console.log(result[0]);})
.catch(err => {console.log(err);});
...but any way I slice it, I get the following output:
READCSV GOT /repo/directory/server/data/CONTENT.csv
throw er; // Unhandled 'error' event
^
Error: ENOENT: no such file or directory, open '/repo/directory/data/CONTENT.csv'
i.e., is fs.createReadStream somehow stripping the server directory out of the path, and for what reason? I suppose I could hard-code the directory into the call to createReadStream, maybe? I just want to know why this is happening.
Some extra info: I'm stuck on node v8.11 and can't go any higher. On the server itself, I believe I'm using the older function(param) {...} syntax instead of arrow functions -- but the behavior is exactly the same.
Please help!!
The code is working fine.
I think your file CONTENT.csv should be in the data folder, i.e. at "/repo/directory/data/CONTENT.csv".
I'm answering my own question because I found an answer; I'm not entirely sure why it works, but at least it's interesting. To the best of my estimation, it has something to do with the call stack and what NodeJS identifies as the origin of the function call. I've got my server set up in an MVC pattern, so my main app.js is in the root dir and the function being called is in the /controllers folder, and I had been trying to use relative paths from that folder -- I'm still not sure why absolute paths didn't work.
The call stack goes:
app.js:
app.use('/somepath', endpointRouter);
...then in endpointRouter.js:
router.get('/request/file', endpointController.getFile);
...then finally in endpointController.js:
const readCSV = filename => {
//the code I shared
}
exports.getFile = (req, res, next) => {
// code that calls readCSV(filename)
}
...and I believe that because Node views the chain as originating from app.js, it treats all relative paths as relative to app.js in my root folder (relative paths passed to fs are resolved against the process's working directory, i.e. where node was started, not against the file that contains the call). Basically, when I switched to the super unintuitive single-dot relative path './data/CONTENT.csv', it worked with no issue.
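To make the difference concrete, here is a minimal sketch (with hypothetical paths) of how the same relative name resolves depending on what it is resolved against:
const path = require('path');
// Relative paths handed to fs are resolved against process.cwd(), i.e. the
// directory node was started from (the project root here), not against the
// file that contains the fs call.
const fromCwd = path.resolve('./data/CONTENT.csv');
// __dirname is the directory of the current module (e.g. /controllers), so a
// path built from it is stable no matter where the process was started from.
const fromModule = path.join(__dirname, '..', 'data', 'CONTENT.csv');
console.log(fromCwd, fromModule);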

NodeJS fs.unlink() not releasing file handles

I am using the following call to delete existing files in my NodeJS app running on Linux (RHEL).
fs.unlink(downloadsFolder + '/' + file)
However, after a few days I noticed the files are still in the system since the file handles were not released. I restarted the node server and those files were eventually gone. How do I fix this issue programmatically?
dzdo lsof -L | grep -i deleted
node 48782 root 600743243 403197165 /mnt/downloads/file_1516312894734.csv (deleted)
node 48782 root 14999 403197166 /mnt/downloads/file_1516729327306.csv (deleted)
I also get this warning in the logs for fs.unlink(); could this be causing it?
(node:48782) [DEP0013] DeprecationWarning: Calling an asynchronous function without callback is deprecated.
As the warning says, the fs.unlink method is asynchronous, which means you have to provide a callback function that will be executed when the delete operation completes:
fs.unlink(downloadsFolder + '/' + file, (err) => {
if (err) throw err;
console.log('File successfully deleted');
});
Or you could use the synchronous version, fs.unlinkSync:
fs.unlinkSync(downloadsFolder + '/' + file);
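If you are on a Node version that ships fs.promises (10+), here is a small sketch of the promise-based equivalent, using path.join instead of string concatenation (names taken from the question):
const fsp = require('fs').promises;
const path = require('path');
async function removeDownload(downloadsFolder, file) {
  try {
    await fsp.unlink(path.join(downloadsFolder, file));
    console.log('File successfully deleted');
  } catch (err) {
    console.error('Failed to delete file', err);
  }
}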
I bumped into the same problem, and since this question has no proper solution yet, here is how I handled it.
In my case, I changed this code:
const uploaded = await s3.send(new PutObjectCommand({
Bucket: `myBucketName-${module}`,
Key: pusher === 'storage'
? user + '/' + filename
: `${user}/${fName}`,
Body: fileStream
}));
for this one:
const uploaded = ctx === 'p'
? await s3.send(new PutObjectCommand({
Bucket: `ek-knote-${module}`,
Key: pusher === 'storage'
? user + '/' + filename
: `${user}/${fName}`,
Body: fileStream
}))
: {$metadata: {httpStatusCode: 200}};
And that is when the bug appeared.
This piece of logic just uploads a file to AWS S3. The only difference is that the version below only does so when ctx (the context) has the value 'p' (production). I rewrote it that way so that in 'd' (development) I don't actually upload the file (because AWS charges for the resources used). The Body of the object I'm sending is a constant named fileStream, which is just a read stream:
const fileStream = fs.createReadStream(fileRoute);
So the problem didn't happen before. What was the bug?
For some reason I don't understand, when the fileStream const is actually used it somehow gets consumed/destroyed, but when I skipped that part, the file, once unlinked, kept hanging around until the server was rebooted. I think it's because the stream is still being held open in memory (you can't delete the file manually either), but don't quote me on that.
So I fixed it by actually destroying, for lack of a better word, the fileStream:
fork.on('close', async code => {
obj.doc.destroy();
await unlink(obj.file);
});
obj is an object that holds, among other things, the file path (file) and the actual read stream (doc). Just destroying the stream before unlinking the file fixed the problem.
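In short, the pattern is to destroy any read stream you opened on the file before unlinking it; a minimal sketch with hypothetical names:
const fs = require('fs');
const { unlink } = require('fs').promises;
async function uploadAndRemove(filePath) {
  const fileStream = fs.createReadStream(filePath);
  try {
    // ... hand fileStream to whatever consumes it (e.g. an S3 upload) ...
  } finally {
    fileStream.destroy();   // release the open file handle
    await unlink(filePath); // now the disk space is actually freed
  }
}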

neo4j, nodejs, session expire error, how to fix it?

I am trying to use neo4j on the backend. First I want to import a CSV into neo4j (to start, I just tried to see how many lines the CSV file has), but I'm having a problem.
The code is the following:
var neo4j = require('neo4j-driver').v1;
var driver = neo4j.driver("bolt://localhost", neo4j.auth.basic("neo4j", "neo4j"));
function createGraphDataBase(csvfilepath)
{
var session = driver.session();
return session
.run( 'LOAD CSV FROM {csvfilepath} AS line RETURN count(*)',
{csvfilepath}
)
.then(result => {
session.close();
console.log(' %d lines in csv.file', result);
return result;
})
.catch(error => {
session.close();
console.log(error);
return error;
});
}
the "csvfilepath" is the path of csv file, it is as follows.
'/Users/.../Documents/Project/.../test/spots.csv';
Is there something wrong with giving the path like this?
I am calling that function from another module as
var api = require('./neo4j.js');
const csvFile = path.join(__dirname,csvFileName);
api.createGraphDataBase(csvFile);
I am getting the error
Error: Connection was closed by server
....
I am new to these, please help!
The URL that you specify in a LOAD CSV clause must be a legal URL.
As stated in this guide:
Make sure to use the right URLs, esp. file URLs. On OSX and Unix use file:///path/to/data.csv; on Windows, please use file:c:/path/to/data.csv
In your case, csvfilepath needs to specify the file:/// protocol (since you seem to be running on OSX) for a local file. Based on your example, the value should be something like this:
'file:///Users/.../Documents/Project/.../test/spots.csv'
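On the calling side, one way to build such a URL from the existing path.join result (a sketch; on Node 10.12+ url.pathToFileURL(csvFile).toString() does the same thing more robustly, including escaping):
const path = require('path');
const csvFile = path.join(__dirname, csvFileName); // e.g. /Users/.../test/spots.csv
const csvUrl = 'file://' + csvFile;                // -> file:///Users/.../test/spots.csv
api.createGraphDataBase(csvUrl);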

Meteor/Node writeFile crashes server

I have the following code:
Meteor.methods({
  saveFile: function(blob, name, path, encoding) {
    var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
        name = cleanName(name || 'file'), encoding = encoding || 'binary',
        chroot = Meteor.chroot || 'public';
    // Clean up the path. Remove any initial and final '/' -we prefix them-,
    // any sort of attempt to go to the parent directory '..' and any empty directories in
    // between '/////' - which may happen after removing '..'
    path = chroot + (path ? '/' + path + '/' : '/');
    // TODO Add file existance checks, etc...
    fs.writeFile(path + name, blob, encoding, function(err) {
      if (err) {
        throw (new Meteor.Error(500, 'Failed to save file.', err));
      } else {
        console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
      }
    });
    function cleanPath(str) {
      if (str) {
        return str.replace(/\.\./g,'').replace(/\/+/g,'').
          replace(/^\/+/,'').replace(/\/+$/,'');
      }
    }
    function cleanName(str) {
      return str.replace(/\.\./g,'').replace(/\//g,'');
    }
  }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and it saves the file; however, it repeats the call several times, and each time it causes Meteor to reset (using Windows version 0.5.4). The F12 console ends up filled with repeated errors, and the Meteor console loops over the startup code each time the 503 happens and repeats the console logs from the saveFile function.
Furthermore, in the target directory the image thumbnail keeps displaying, then shows as broken, then as a valid thumbnail again, as if fs is writing it multiple times.
Here is the code that calls the function:
"click .savePhoto":function(e, template){
e.preventDefault();
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var id = e.srcElement.id;
var item = Session.get("employeeItem");
var file = template.find('input[name='+id+']').files[0];
// $(template).append("Loading...");
var dataURL = '/.bgimages/'+file.name;
Meteor.saveFile(file, file.name, "/.bgimages/", function(){
if(id=="goodPhoto"){
EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
}else{
EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
}
// Update an image on the page with the data
$(template.find('img.'+id)).delay(1000).attr('src', dataURL);
});
},
What's causing the server to reset?
My guess would be that since Meteor has a built-in automatic directory scan that watches for file changes (in order to automatically relaunch the application on the newest code base), the file you are creating is actually what causes the server reset.
Meteor doesn't scan directories beginning with a dot (so-called "hidden" directories), such as .git for example, so you can use this behaviour to your advantage by setting the path of your files to a .directory of your own.
You should also consider using writeFileSync, insofar as Meteor methods are intended to run synchronously (inside node fibers), contrary to the usual node way of asynchronous calls. In this code it's no big deal, but, for example, you couldn't use any Meteor mechanics inside the writeFile callback.
asynchronousCall(function(error,result){
if(error){
// handle error
}
else{
// do something with result
Collection.update(id,result);// error ! Meteor code must run inside fiber
}
});
var result=synchronousCall();
Collection.update(id,result);// good to go !
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/futures, but that's beyond the scope of this question: I recommend watching this EventedMind episode on node futures to understand this specific area.
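For completeness, here is a rough sketch of that fibers/future pattern applied to the writeFile call from the question (untested; it assumes a Meteor/Node setup where the bundled fibers package can be required, e.g. via Npm.require on newer versions or __meteor_bootstrap__.require on older ones):
var Future = Npm.require('fibers/future');
function writeFileInFiber(filePath, blob, encoding) {
  var future = new Future();
  fs.writeFile(filePath, blob, encoding, function (err) {
    if (err) future.throw(err);
    else future.return();
  });
  return future.wait(); // blocks this fiber only, not the event loop
}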
