unzip -o /path/to/my.zip
successfully creates a new directory with the inflated archive but
require('child_process').exec('unzip -o /path/to/my.zip', function(err, stdout, stderr){…})
doesn't create the new directory, even though there is no error and the stdout is the same as when I execute it directly in a shell. What am I doing wrong?
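One thing worth checking: exec inherits the Node process's working directory unless you pass a cwd option, so the archive may be getting extracted somewhere other than the directory you are inspecting. A minimal sketch, assuming a hypothetical target directory:
require('child_process').exec(
  'unzip -o /path/to/my.zip',
  { cwd: '/path/where/you/want/it' }, // hypothetical; defaults to process.cwd()
  function (err, stdout, stderr) {
    if (err) return console.error(err);
    console.log(stdout); // should match what you see in a shell
  }
);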
I tried to execute a simple shell script with a curl command.
#!/usr/bin/env bash
. ~/.foo
curl -n ${URL}/submit/foo/netrc
I executed this shell script with sudo rights and got the error message /root/.foo: No such file or directory. The hidden file .foo contains a variable URL. If I execute this script without sudo rights, I get the error forbidden. Where is my mistake?
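Worth noting: the error message suggests that under sudo, $HOME is /root, so ~/.foo expands to /root/.foo. A sketch that sources the file by absolute path instead, with /home/youruser as a placeholder for the real account:
#!/usr/bin/env bash
# Source the file by absolute path so it does not depend on $HOME,
# which sudo typically resets to /root ("/home/youruser" is a placeholder).
. /home/youruser/.foo
curl -n "${URL}/submit/foo/netrc"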
I'm learning about s6 and I've come to a point where I want to use s6-log. I have the following Dockerfile
FROM alpine:3.10
RUN apk --no-cache --update add s6
WORKDIR /run/service
COPY \
./rootfs/run \
./rootfs/app /run/service/
CMD ["s6-supervise", "."]
with ./rootfs/app being just a simple sh script
#!/bin/sh
while true;
do
sleep 1
printf "Hello %s\n" "$(date)"
done
and run being
#!/bin/execlineb -P
fdmove -c 2 1
s6-log -b n20 s1000000 t /var/log/app/
/run/service/app
Why do I keep getting the following error? Without the s6-log line it all works fine.
s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory
So it seems that I've been doing this incorrectly: I should have used s6-svscan instead of s6-supervise.
Using s6-svscan I can create a log/ subdirectory in my service's directory so that my app's stdout is redirected to the logger's stdin, as described in s6-svscan's documentation:
For every new subdirectory dir it finds, the scanner spawns a s6-supervise process on it. If dir/log exists, it spawns a s6-supervise process on both dir and dir/log, and maintains a never-closing pipe from the service's stdout to the logger's stdin.
I've written the log/run script like so:
#!/bin/execlineb -P
s6-log -b n20 s512 T /var/log/app
and with that I've changed the CMD to
CMD ["s6-svscan", "/run/"]
where /run/service/ contains both the run script for my service (without the s6-log call) and a log subdirectory with the run script above.
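A sketch of the layout this implies, using the paths from the question (and assuming the app script stays at /run/service/app as in the Dockerfile):
/run/                 <- scan directory passed to s6-svscan
└── service/          <- supervised service directory
    ├── run           <- starts the app (no s6-log call)
    ├── app           <- the sh script printing the date
    └── log/
        └── run       <- the s6-log run script shown above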
I'm trying to create a one-liner in Node using exec. The idea is to create a folder called admin and untar a file into it, so:
mkdir admin
tar xvfz release.tar.gz -C admin/
The problem is that sometimes admin already exists (that's OK, I want to overwrite its contents), and then exec will trigger an error:
exec('mkdir admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
  if (err) { /* mkdir fails when the folder exists */ }
});
Is there a way to elegantly continue if mkdir fails? Ideally, I want to clean the contents of admin with something like rm -rf admin/ so the new untar starts fresh, but then again, that command will fail.
PS: I know I can check for the folder with fs before launching exec, but I'm interested in an all-in-one exec solution (if possible).
EDIT: The question How to mkdir only if a dir does not already exist? is similar, but it is about the specific use of mkdir alone; this one is about concatenation and error propagation.
You don't need to have mkdir fail on an existing target; you can use the --parents flag:
-p, --parents
no error if existing, make parent directories as needed
turning your one-liner into:
exec('mkdir -p admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
// continue
});
Alternatively, you could use ; instead of && to chain the calls, which will always continue, no matter the exit code:
exec('mkdir admin; tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
// continue
});
I've created a bash script batch-create-users.sh and I want to execute it from the terminal. I'm in the folder where the script is and I run the command
./batch-create-users.sh
and I get the error
the file './batch-create-users.sh' is not executable by this user
I entered sudo -s and gave the password, but it doesn't help. Any idea?
Give the file execute permission before executing it:
chmod +x batch-create-users.sh
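You can then verify the permission bits and run it:
ls -l batch-create-users.sh   # the mode should now include x, e.g. -rwxr-xr-x
./batch-create-users.sh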
I am trying to copy a bundle directory into a root-owned directory on a remote server. I am doing this with Node, and so far I have managed to pipe the tar content to the server and untar it there. However, when I try to move the directory into the root-owned folder it requires sudo access, and I just couldn't find a way to do that. I tried the -t option for a pseudo-terminal, but I guess that only works when running from a shell. Here is what I have done so far; any help is highly appreciated:
const path = require("path");
const exec = require('child_process').exec;
var absolutePath = path.resolve(__dirname, "../");
const allCommands = [];
/*
 * 1) cd to the root folder of the app
 * 2) tar the dist folder and pipe the result to the ssh connection
 * 3) connect to the server with ssh
 * 4) try to create the dist and old_dists folders; if they don't exist they are
 *    created, otherwise mkdir errors and the rest of the command keeps running
 * 5) cp the contents of the dist folder to old_dists/dist_$(dateofmoment), so if
 *    something goes wrong you still have a backup of the existing config
 * 6) untar the piped tar content into the dist folder; --strip-components=1 strips
 *    the first parent directory (with 2 it would strip two levels from the root folder)
 */
allCommands.push("cd " + absolutePath);
allCommands.push("tar -czvP dist | ssh hostnameofmyserver 'mkdir dist ; mkdir old_dists; cp -R dist/ old_dists/dist_$(date +%Y%m%d_%H%M%S) && tar -xzvP -C dist --strip-components=1'");
// I would like to untar the incoming file into /etc/myapp, for example, rather than my home directory; this requires sudo and I don't know how to handle it
exec(allCommands.join(" && "),
(error, stdout, stderr) => {
console.log(`stdout: ${stdout}`);
console.log(`stderr: ${stderr}`);
if (error !== null) {
console.log(`exec error: ${error}`);
}
});
Also, what's the best place for storing a web application folder on an Ubuntu server where multiple users can deploy an app? Is it good practice to make root the owner of the directory, or does it just not matter?
As noted in the man page for ssh, you can specify multiple -t arguments to force pty allocation even if the OpenSSH client's stdin is not a tty (which it won't be by default when you spawn a child process in node).
From there you should be able to simply write the password to the child process's .stdin stream when you see the sudo prompt on the .stdout stream.
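A minimal sketch of that approach, assuming the hostnameofmyserver alias from the question and a purely illustrative remote command; the prompt detection and password handling here are assumptions:
const { spawn } = require('child_process');

// -tt forces pty allocation even though stdin is not a tty here.
const ssh = spawn('ssh', ['-tt', 'hostnameofmyserver', 'sudo mv dist /etc/myapp']);

ssh.stdout.on('data', (chunk) => {
  const text = chunk.toString();
  process.stdout.write(text);
  if (/password for/i.test(text)) {        // the sudo prompt
    ssh.stdin.write('yourSudoPassword\n'); // placeholder; load from a secure source
  }
});
ssh.stderr.pipe(process.stderr);
ssh.on('close', (code) => console.log(`ssh exited with code ${code}`));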
On a semi-related note, if you want more (programmatic) control over the ssh connection or you don't want to spin up a child process, there is the ssh2 module. You could even do the tarring within node too if you wanted, as there are also tar modules on npm.
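For the ssh2 route, a rough sketch along the same lines (host, user, key path, and the remote command are placeholders):
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  // Request a pty so sudo will prompt for the password on the stream.
  conn.exec('sudo mv dist /etc/myapp', { pty: true }, (err, stream) => {
    if (err) throw err;
    stream.on('data', (data) => {
      process.stdout.write(data);
      if (/password for/i.test(data.toString())) {
        stream.write('yourSudoPassword\n'); // placeholder
      }
    });
    stream.on('close', () => conn.end());
  });
}).connect({
  host: 'hostnameofmyserver',
  username: 'youruser',
  privateKey: require('fs').readFileSync('/home/youruser/.ssh/id_rsa')
});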