Using the sudo command in a child process with node - node.js

I am trying to copy a bundle directory into a root-owned directory on a remote server. I am doing this with Node, and so far I have managed to pipe the tar content to the server and untar it there. However, when I try to move the directory into the root-owned folder it requires sudo access, and I just couldn't find a way to do that. I tried the -t option for a pseudo-terminal, but I guess that only works when running from a shell. Here is what I have done so far; any help is highly appreciated:
const path = require("path");
const exec = require('child_process').exec;
var absolutePath = path.resolve(__dirname, "../");
const allCommands = [];
/*
 *
 * 1-) cd to the root folder of the app
 * 2-) tar the dist folder and pipe the result to the ssh connection
 * 3-) connect to the server with ssh
 * 4-) try to create the dist and old_dists folders; if they don't exist they will be created,
 *     otherwise the mkdir calls error out and the rest of the script continues running
 * 5-) cp the contents of the dist folder to old_dists/dist_$(dateofmoment) so that if something
 *     goes wrong you still have a backup of the existing config
 * 6-) untar the piped tar content into the dist folder; the --strip-components=1 flag strips the
 *     first parent directory from the extracted paths (with 2 it would strip two levels)
 *
 */
allCommands.push("cd " + absolutePath);
allCommands.push("tar -czvP dist | ssh hostnameofmyserver 'mkdir dist ; mkdir old_dists; cp -R dist/ old_dists/dist_$(date +%Y%m%d_%H%M%S) && tar -xzvP -C dist --strip-components=1'");
// I would like to untar the incoming file into /etc/myapp, for example, rather than my home directory; this requires sudo and I don't know how to handle it
exec(allCommands.join(" && "),
    (error, stdout, stderr) => {
        console.log(`stdout: ${stdout}`);
        console.log(`stderr: ${stderr}`);
        if (error !== null) {
            console.log(`exec error: ${error}`);
        }
    });
Also, what's the best place to store a web application folder on an Ubuntu server where multiple users can deploy apps? Is it good practice to make root the owner of the directory, or does it just not matter?

As noted in the man page for ssh, you can specify multiple -t arguments to force pty allocation even if the OpenSSH client's stdin is not a tty (which it won't be by default when you spawn a child process in node).
From there you should be able to simply write the password to the child process's .stdin stream when you see the sudo prompt on the .stdout stream.
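For example, here is a minimal sketch of that approach, assuming the dist folder has already been copied to the remote home directory as in the question; the target path, prompt detection, and hard-coded password are placeholders to adapt, not a drop-in solution:
const { spawn } = require('child_process');

// -tt forces pty allocation even though stdin here is not a tty
const ssh = spawn('ssh', ['-tt', 'hostnameofmyserver', 'sudo mv ~/dist /etc/myapp']);

ssh.stdout.on('data', (chunk) => {
    const text = chunk.toString();
    process.stdout.write(text);
    // sudo's prompt normally contains "password"; adjust the match if your prompt differs
    if (/password/i.test(text)) {
        ssh.stdin.write('yourSudoPassword\n'); // placeholder - don't hard-code real credentials
    }
});
ssh.stderr.on('data', (chunk) => process.stderr.write(chunk));
ssh.on('close', (code) => console.log(`ssh exited with code ${code}`));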
On a semi-related note, if you want more (programmatic) control over the ssh connection or you don't want to spin up a child process, there is the ssh2 module. You could even do the tarring within node too if you wanted, as there are also tar modules on npm.
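A rough sketch with ssh2 (the host, username, key path, and target directory below are placeholders) might look like this:
const { Client } = require('ssh2');
const fs = require('fs');

const conn = new Client();
conn.on('ready', () => {
    // request a pty so sudo prompts on the stream instead of failing outright
    conn.exec('sudo mv ~/dist /etc/myapp', { pty: true }, (err, stream) => {
        if (err) throw err;
        stream.on('data', (data) => {
            if (/password/i.test(data.toString())) {
                stream.write('yourSudoPassword\n'); // placeholder
            }
        }).on('close', () => conn.end());
    });
}).connect({
    host: 'hostnameofmyserver',
    username: 'deploy',                          // placeholder
    privateKey: fs.readFileSync('/path/to/key')  // placeholder
});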

Related

Is it possible to check if a folder exists inside a shell command running in exec?

I'm trying to create a one-liner in node using exec. The idea is to create a folder called admin and untar a file into it, so:
mkdir admin
tar xvfz release.tar.gz -C admin/
The problem is that sometimes admin already exists (that's OK, I want to overwrite its contents), and in that case exec will trigger an error:
exec('mkdir admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
    if (err) { /* mkdir fails when the folder exists */ }
});
Is there a way to elegantly continue if mkdir fails? Ideally, I want to clean out the contents of admin with something like rm -rf admin/ so the new untar starts fresh, but then again, that command will fail.
PS: I know I can check for the folder with fs before launching exec, but I'm interested in an all-in-one exec solution (if possible).
EDIT: The question How to mkdir only if a dir does not already exist? is similar, but it is about the specific use of mkdir alone; this one is about command concatenation and error propagation.
You don't need to have mkdir fail on an existing target; you can use the --parents flag:
-p, --parents
no error if existing, make parent directories as needed
turning your one-liner into:
exec('mkdir -p admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
    // continue
});
Alternatively, you could use ; instead of && to chain the calls, which will always continue no matter the exit code:
exec('mkdir admin; tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
    // continue
});

Loop in shell script not working on remote server

The code tries to ssh from my local server to a remote server and run some commands.
ssh root@$remoteip 'bash -s' <<END3
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
    directory=$(dirname ${dir})
    echo $dir >> dirstr.txt
    mkdir -p $directory
    chown $root:$root $directory
    chmod 777 $directory
done
END3
The above creates a directory structure on the remote server, which is working fine.
I want to tar up the same directory structure, so I'm using the same logic as above.
ssh root@$remoteip 'bash -s' <<END3
touch emptyfile
tar -cf gcda.tar emptyfile
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
    tar -rf gcda.tar $dir
done
END3
The above piece of code should create a tar with all the directories returned by the for loop included. I tried the logic by copying the code to the remote server and running it there, and it worked. But if I connect from my local server to the remote server over ssh and try it, it does not enter the for loop: nothing gets appended to the tar file created with the empty file on the second line.
Try <<'END3'
Note the quotes around END3: they prevent shell substitutions inside the here-document. You want the $-signs to be transferred to the other side of the ssh connection, not interpreted locally. The same goes for the backticks.
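If you are driving this from Node anyway, another way to sidestep the local-expansion problem is to pass the remote script over the child process's stdin with spawn, since no local shell is involved at all (the host below is a placeholder):
const { spawn } = require('child_process');

const remoteScript = [
    'gcdadirs=$(strings binary | egrep \'.gcda$\')',
    'for dir in ${gcdadirs[@]}; do',
    '    tar -rf gcda.tar $dir',
    'done'
].join('\n');

// the script travels as data, so $dir and the command substitution expand on the remote side only
const ssh = spawn('ssh', ['root@remotehost', 'bash -s']);
ssh.stdin.end(remoteScript + '\n');
ssh.stdout.pipe(process.stdout);
ssh.stderr.pipe(process.stderr);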

Rsync to Amazon Linux EC2 instance - failed: No such file or directory

I want to upload the content of one directory to my Amazon EC2 instance with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync usually defaults to ssh when one of the destinations contains a host.
So if you want to preserve modification times you can keep -t and get rid of the -e and -s options.
Your command could be written as rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/ - notice the trailing /'s - provided that you have ssh configured to read the private key from your home directory.
On Ubuntu you can add the key to the keychain by running
ssh-add [key-file]
And this will save you having to specify the keyfile every time you ssh into the AWS machine.
The errors seem to say that the source directory doesn't exist on the local machine and that the destination path doesn't exist on the remote side.
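If you end up scripting this from Node like the other examples in this thread, the corrected command can be wrapped in child_process the same way (paths and host are the ones from the question, and this assumes your ssh config or ssh-add already provides the key):
const { exec } = require('child_process');

exec('rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/',
    (err, stdout, stderr) => {
        if (err) {
            console.error(`rsync failed: ${stderr}`);
            return;
        }
        console.log(stdout);
    });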
I completed this task with FileZilla instead; it's easier to use.
You are in your home directory (~); if you cd ../ up towards root you will be able to run the command.

Vagrant Puppet using wget to fetch and extract .zip file

Hi, I am creating a Vagrant setup and I need to fetch a .zip file which will be put in the /vagrant/www directory.
The way I am trying to do this is:
exec { 'setup octobercms':
    command   => "/bin/sh -c 'wget -P /vagrant/www https://github.com/octobercms/install/archive/master.zip'",
    timeout   => 900,
    logoutput => true
}
When vagrant up is triggered I can see that the file is downloading, but it does not appear in the /vagrant/www directory. The file doesn't really have anything to do with Vagrant itself; it will be used to install October CMS.
When using Puppet, what would be the best way to fetch a zipped file, extract its contents into a directory, and remove the archive afterwards?
Any help would be much appreciated.
The exec command is run in a generic shell construct and doesn't obey the constraints of the user account invoking it. Try:
exec { 'setup octobercms':
    command   => "cd /vagrant/www/; /bin/sh -c 'wget https://github.com/octobercms/install/archive/master.zip'",
    timeout   => 900,
    logoutput => true
}

shell cmd works but child_process.exec does not

unzip -o /path/to/my.zip
successfully creates a new directory with the inflated archive but
require('child_process').exec('unzip -o /path/to/my.zip', function(err, stdout, stderr){…})
doesn't create the new directory, even though there is no error and the stdout is the same as when I execute it directly in a shell. What am I doing wrong?
