schroot: pass a command to be executed as if it’s in a shell - linux

I want to do something like:
schroot -c name -u root "export A=3 && export B=4"
but I get the error:
Failed to execute “export”: No such file or directory
In other words, I want to be able to programmatically execute shell commands inside the schroot environment. What is the right way to get this behavior?

I recommend:
schroot -c name -u root sh -c "export A=3 && export B=4"
or better:
schroot -c name -u root -- sh -c "export A=3 && export B=4"
This runs the shell with the '-c' option, which tells the shell to read the following argument as the command (script) to be executed. The same technique works with other analogous commands: 'su', 'nohup', ...
The -- option terminates the arguments to schroot and ensures that any options on the rest of the command line are passed to, and interpreted by, the shell rather than by schroot. This was suggested by SR_ in a comment, and the schroot man page also recommends it (search for 'Separator'). By default, the GNU getopt() function permutes arguments, which is not wanted here; the -- stops it from permuting anything that follows it.
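For illustration, here is the kind of mix-up the separator avoids (hypothetical; exactly what schroot does with a permuted -c depends on its option parsing):
# without '--', GNU option permutation may let schroot grab the second -c,
# which was meant for the inner shell:
schroot -c name -u root sh -c "ls -la /"
# with '--', everything after the separator is left alone and reaches sh:
schroot -c name -u root -- sh -c "ls -la /"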

schroot -c name -u root -- export A=3 && export B=4
while ensuring that /etc/schroot/schroot.conf has:
run-exec-scripts=true
run-setup-scripts=true

You could try
schroot -c name -u root "/bin/bash -c 'export A=3; export B=4'"
but this is the first time I've heard of schroot. And the exports look useless anyway... even running the double-quoted part directly from the command line, the child shell doesn't affect the parent's environment.
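A quick illustration of that last point (plain shell, nothing schroot-specific):
A=1
sh -c 'export A=3; echo "child sees A=$A"'   # prints: child sees A=3
echo "parent still sees A=$A"                # prints: parent still sees A=1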

Related

running sudo -i command in bash script? [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: the su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am I etc. start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical question for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, and some in the parent.
Then the delimiter of the here document must be unquoted, and you must escape the expansion expressions that should be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is to assume that some command will take over from the shell and itself start executing the following commands in the script file, instead of the shell which is currently running the script. That's not how it works.
Basically, scripts work just like interactive commands, but exactly how they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
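For example, this sketch returns to the calling script as soon as the here document has been consumed, with no explicit exit needed:
sh <<'____HERE'
echo "inside the child sh"
____HERE
echo "back in the parent script; the child sh has already exited"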
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: the answer proposed by @tripleee only works if standard input can be read once at the start of the command, or if a tty has been allocated; it won't work for any interactive program.
Example of errors if you use a pipe
echo "su whoami" |ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" |ssh remotehost
--> sudo: no tty present and no askpass program specified
With SSH you can force TTY allocation by passing -t more than once, but when sudo asks for the password it will still fail.
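For reference, the forced-TTY form mentioned here looks roughly like this (illustrative only; as stated above, the password prompt still has nothing interactive to read from when stdin is a pipe):
echo "sudo whoami" | ssh -t -t remotehost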
Without a program like expect, any call to a function/program which might read from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> The `echo "ok."` string will be passed to the "read" command

Bash on Windows: script only progresses in -v mode

I have a script which starts like this:
#!/bin/bash
echo "Running on OSTYPE: '$OSTYPE'"
DISTRO=""
CODENAME=""
SUDO=$(command -v sudo 2>/dev/null)
echo foo
When I run it as-is or with bash -x, it stops after the assignment to SUDO; the only output I get is:
+ DISTRO=
+ CODENAME=
++ command -v sudo
+ SUDO=
When I add -v to the bash invocation to get even more verbose output, the script runs normally and I see my "foo". I'm using bash 4.4.23 as shipped on https://git-scm.com/downloads
What is going wrong on my system and how can I debug this?
This happens because $(command ...) runs in a subshell.
Instead of set -x or bash -x, create a file like the following inside your home folder:
.bash_env
set -x
That way the set -x will be applied to every new bash instance and sub-instance.
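A minimal sketch of how such a file gets picked up, assuming a stock bash where the BASH_ENV variable must be exported to point at it (whether the Git for Windows bash wires this up for you is an assumption to verify):
cat > ~/.bash_env <<'EOF'
set -x
EOF
export BASH_ENV=~/.bash_env
bash ./myscript.sh    # hypothetical script name; every non-interactive bash now traces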

How to parse json data correctly using jq to set to var inside shell script [duplicate]

I do this in a script:
read direc <<< $(basename `pwd`)
and I get:
Syntax error: redirection unexpected
on an Ubuntu machine
/bin/bash --version
GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
while I do not get this error on another SUSE machine:
/bin/bash --version
GNU bash, version 3.2.39(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.
Why the error?
Does your script reference /bin/bash or /bin/sh in its shebang (hash-bang) line? The default system shell on Ubuntu is dash, not bash, so if you have #!/bin/sh then your script will be run by a different shell than you expect. Dash does not have the <<< redirection operator.
Make sure the shebang line is:
#!/bin/bash
or
#!/usr/bin/env bash
And run the script with:
$ ./script.sh
Do not run it with an explicit sh as that will ignore the shebang:
$ sh ./script.sh # Don't do this!
If you're using the following to run your script:
sudo sh ./script.sh
Then you'll want to use the following instead:
sudo bash ./script.sh
The reason for this is that Bash is not the default shell on Ubuntu. So, if you use "sh" it will just use the default system shell, which is actually Dash. This happens regardless of whether you have #!/bin/bash at the top of your script. As a result, you need to explicitly specify bash as shown above, and your script should then run as expected.
Dash doesn't support the same set of redirections as Bash.
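A quick way to see the difference (the exact wording of dash's error message may vary slightly):
bash -c 'cat <<< "here-string"'   # prints: here-string
dash -c 'cat <<< "here-string"'   # fails: Syntax error: redirection unexpected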
Docker:
I was getting this problem from my Dockerfile as I had:
RUN bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
However, according to this issue, it was solved:
The exec form makes it possible to avoid shell string munging, and to RUN commands using a base image that does not contain /bin/sh.
Note
To use a different shell, other than /bin/sh, use the exec form passing in the desired shell. For example,
RUN ["/bin/bash", "-c", "echo hello"]
Solution:
RUN ["/bin/bash", "-c", "bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)"]
Notice the quotes around each parameter.
You can get the output of that command and put it in a variable, then use a heredoc. For example:
nc -l -p 80 <<< "tested like a charm";
can be written like:
nc -l -p 80 <<EOF
tested like a charm
EOF
and like this (this is what you want):
text="tested like a charm"
nc -l -p 80 <<EOF
$text
EOF
Practical example in busybox inside a docker container:
kasra@ubuntu:~$ docker run --rm -it busybox
/ # nc -l -p 80 <<< "tested like a charm";
sh: syntax error: unexpected redirection
/ # nc -l -p 80 <<EOL
> tested like a charm
> EOL
^Cpunt! => socket listening, no errors. The ^Cpunt! is the result of the CTRL+C signal.
/ # text="tested like a charm"
/ # nc -l -p 80 <<EOF
> $text
> EOF
^Cpunt!
Do it the simpler way:
direc=$(basename `pwd`)
Or use the shell's parameter expansion, which strips everything up to the last slash:
$ direc=${PWD##*/}
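For example, ${PWD##*/} removes everything up to and including the last / of the current directory path:
cd /usr/local/bin
echo "${PWD##*/}"    # prints: bin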
Another reason for the error may be that you are running a cron job that updates a Subversion working copy and then tries to run a versioned script that was left in a conflicted state after the update...
On my machine, if I run a script directly, the default is bash.
If I run it with sudo, the default is sh.
That’s why I was hitting this problem when I used sudo.
In my case the error was because I had put ">>" twice:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> >> $LOG_PATH
I just corrected it to:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> $LOG_PATH
Before running the script, check the first line of the shell script for the interpreter.
E.g.:
if the script starts with #!/bin/bash, run it with:
bash script_name.sh
if the script starts with #!/bin/sh, run it with:
sh script_name.sh
./sample.sh - this will detect the interpreter from the first line of the script and run it accordingly.
Different Linux distributions have different default shells.
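A quick way to check that first line (the script name is just a placeholder):
head -n 1 script_name.sh    # e.g. prints: #!/bin/bash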

Can I use source test1.sh with gnome-terminal -x?

I am executing two shell scripts from the main script using source.
main.sh
#!/bin/sh
a=1
b=2
c=3
gnome-terminal -x sh -c ". ./test1.sh|less" (note the source ".")
gnome-terminal -x sh -c ". ./test2.sh|less"
...
...
test1.sh
#!/bin/sh
echo "a="$a #doesn't print anything
I was able to do the following two things separately, but when I combine them I cannot access the variables of main.sh in the other files:
1. gnome-terminal -x sh -c "test1.sh|less" #able to execute in separate terminal
2. . ./test1.sh #able to access variables from main.sh in test1.sh
There are two problems here. The first is that you do not export the variables.
In this case, you must add:
export a b c
after setting the variables.
The second problem is that the terminal windows will be launched reusing a pre-existing gnome-terminal session if one exists. This pre-existing session has no idea of these environment variables. As a result, you need to pass the --disable-factory option to the gnome-terminal command, e.g.
gnome-terminal --disable-factory -x sh -c ". ./test1.sh|less"
and then you will see the proper value in the window.
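Putting both fixes together, main.sh would look roughly like this (a sketch assembled from the snippets above):
#!/bin/sh
a=1
b=2
c=3
export a b c
gnome-terminal --disable-factory -x sh -c ". ./test1.sh | less"
gnome-terminal --disable-factory -x sh -c ". ./test2.sh | less"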
Your variables have to be exported for them to be accessible to child processes:
export a=1
etc.

How to get standard output from subshell?

I have a script like this:
command='scp xxx 192.168.1.23:/tmp'
su - nobody -c "$command"
The main shell doesn't print any output.
How can I get the output from the sub-command?
You can get all of its output by just redirecting the corresponding output channel:
command='scp ... '
su - nobody -c "$command" > file
or
var=$(su - nobody -c "$command")
But if you don't see anything, maybe the diagnostics output of scp is disabled?
Is there a "-q" option somewhere in your real command?
You aren't actually running the scp. When you use the
VAR=value cmd ...
syntax, the VAR=value assignment goes into the environment of cmd, but it is not available in the current shell. The command after your -c is therefore empty, or the previous value of $command if there was one.
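A minimal sketch of the pitfall this answer describes, assuming the assignment and the su were actually written on one line:
# one-line form: "$command" is expanded by the current shell (to empty, or to
# a previous value) before the temporary assignment takes effect, and the
# assignment itself only lands in su's environment, so the scp never runs
command='scp xxx 192.168.1.23:/tmp' su - nobody -c "$command"
# two-line form: the assignment happens first, then su runs the scp
command='scp xxx 192.168.1.23:/tmp'
su - nobody -c "$command"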
