I am running a bash script with the shebang line #!/bin/bash on RHEL 7.6. The bash version is 4.2, which I have verified by running /bin/bash --version. But when the script executes, the BASH_VERSION variable reported inside it is 3.X. I have also printed the binary location while the script is running, via $(ls -l /proc/$$/exe), and it is indeed /bin/bash:
lrwxrwxrwx 1 xx yy 0 Jan 28 21:03 /proc/6855/exe -> /bin/bash
I am really confused: how can this happen?
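For reference, a minimal diagnostic block that consolidates these checks inside the script (a sketch; $BASH is one extra data point, set by bash itself to the path it believes it was invoked from):
#!/bin/bash
echo "BASH_VERSION=$BASH_VERSION"   # version string set by the bash interpreting this script
echo "BASH=$BASH"                   # path bash believes it was invoked from
ls -l /proc/$$/exe                  # the binary actually backing this process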
wsl -h shows the following:
--exec, -e <CommandLine> Execute the specified command without using the default Linux shell.
-- Pass the remaining command line as is.
What does "without using the default Linux shell" mean (i.e. what else is it going to use, if not the default shell!?)?.
Additionally, by way of an example, I now have three possible ways to run Linux ls from my PowerShell prompt (i.e. this will not be Get-ChildItem aliased to ls, but instead a Linux command via WSL):
PS C:\> wsl -e ls # Alternatively, wsl --exec ls
PS C:\> wsl -- ls
PS C:\> wsl ls
But the output appears to be identical in all three cases. How would you explain the differences between these three ways of running a WSL Linux command from a PowerShell prompt?
I think it means wsl runs the command directly, instead of spawning a shell process to run the command.
For example, if I run:
wsl -e sleep 10
From another terminal, I see:
root 1482 1 0 11:32 tty3 00:00:00 /init
ubuntu 1483 1482 0 11:32 tty3 00:00:00 sleep 10
We can see /init is the parent of sleep 10, without a shell in between.
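By contrast, wsl ls and wsl -- ls both hand the command line to the default shell; -- merely stops wsl from parsing what follows as its own options (useful when the Linux command itself starts with a dash). One way to see the shell's involvement is globbing, which only a shell performs (a sketch; the filenames are hypothetical):
PS C:\> wsl -- ls *.txt      # the default Linux shell expands *.txt
notes.txt  todo.txt
PS C:\> wsl -e ls *.txt      # no shell, so ls receives the literal string
ls: cannot access '*.txt': No such file or directory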
A cool trick is using this to set the X11 $DISPLAY variable, letting you use Windows Terminal to get remote shells with X forwarding via WSLg.
# in microsoft terminal or powershell use this command line
wsl.exe -- ssh -a -X -Y $hostname
then on the remote system
# DISPLAY will show something like localhost:10.0 on the remote system
echo $DISPLAY
# use a program like xeyes to test
xeyes
The Perl Shell module, Shell.pm, does not seem to run shell commands on CentOS 7.4.
For instance, the following script works on CentOS 6.4:
#!/usr/bin/perl
use Shell qw(ps);
$cmd=ps;
print $cmd . "\n";
Result is as expected:
PID TTY TIME CMD
29090 pts/1 00:00:00 bash
29325 pts/1 00:00:00 test.pm
29326 pts/1 00:00:00 ps
But with CentOS 7.4:
#!/usr/bin/perl -I /usr/share/perl5/CPAN
use Shell qw(ps);
$cmd=ps;
print $cmd . "\n";
Result is:
ps
If I add the following to the previous script:
cat("/etc/passwd");
The following error is raised:
Undefined subroutine &main::cat called at ./test.pm line 10
With a real script, none of the system commands are interpreted correctly. Should I rewrite everything with system('command')?
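Worth noting: with Shell.pm, every command called as a bare function has to appear in the import list, so even on a working installation cat() would fail unless imported. A sketch:
perl -e 'use Shell qw(ps cat); print cat("/etc/passwd");'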
I finally succeeded in making it work!
The installation was not quite right.
I had to run:
cpan App::cpanminus
then
cpanm Shell
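To confirm the fix, a quick one-liner check (a sketch): if Shell.pm is now working, this prints a real process listing rather than the literal string "ps":
perl -MShell=ps -e 'print ps()'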
Alright, so I am trying to install rvm in a Docker container based on ubuntu:14.04. During the process, I discovered that some people do something like this to ensure that RUN commands in the Dockerfile are executed with bash:
RUN ln -fs /bin/bash /bin/sh
Now the weirdness happens, and I hope one of you can explain it to me:
→ docker run -it --rm d81ff50de1ce /bin/bash
root@e93a877ab3dc:/# ls -lah /bin
....
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh -> /bin/bash
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh.distrib -> /bin/bash
...
root@e93a877ab3dc:/# /bin/sh
sh-4.3# echo $0
/bin/sh
Can someone explain what's going on here? I know I could just prefix my commands in the Dockerfile with bash -c, but I would like to understand what is happening and, if possible, still ditch the bash -c prefix.
Thanks a lot,
Robin
It's because bash has a compatibility mode where it tries to emulate sh if it is started via the name sh, as the manpage says:
If bash is invoked with the name sh, it tries to mimic the startup
behavior of historical versions of sh as closely as possible, while
conforming to the POSIX standard as well. When invoked as an
interactive login shell, or a non-interactive shell with the --login
option, it first attempts to read and execute commands from
/etc/profile and ~/.profile, in that order. The --noprofile option
may be used to inhibit this behavior. When invoked as an interactive
shell with the name sh, bash looks for the variable ENV, expands its
value if it is defined, and uses the expanded value as the name of a
file to read and execute. Since a shell invoked as sh does not
attempt to read and execute commands from any other startup files, the
--rcfile option has no effect. A non-interactive shell invoked with the name sh does not attempt to read any other startup files. When
invoked as sh, bash enters posix mode after the startup files are
read.
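With the container above, you can observe this mode directly (a sketch; the exact formatting of shopt's output may differ across versions):
root@e93a877ab3dc:/# bash -c 'shopt -o posix'
posix          off
root@e93a877ab3dc:/# sh -c 'shopt -o posix'
posix          on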
I am trying to submit a script to slurm that runs m4 on an input file. m4 is installed on our cluster, and if I run the script by itself, everything works as expected. But when I submit a run to slurm via a slurm script, I get an error.
Here is the script I would like to run (named m4it.sh).
[Note that I'm printing PATH and SHELL in an attempt to debug.]
#!/usr/bin/env bash
echo "Beginning m4it.sh"
echo "PATH=$PATH"
echo "SHELL=$SHELL"
echo
m4 file.m4 > fileout.txt
And here is my slurm script (named m4it.slurm):
#!/usr/bin/env bash
#
#SBATCH --job-name=m4it
### Account name (req'd)
#SBATCH --account=MyAccount
### Redirect .o and .e files to the logs dir
#SBATCH -o m4it.out
#SBATCH -e m4it.err
#
#SBATCH --ntasks=1
#SBATCH --time=00:01:00
#SBATCH --mem-per-cpu=125
echo "PATH=$PATH"
echo "SHELL=$SHELL"
echo
echo "running m4it.sh"
echo
./m4it.sh
which submits successfully to slurm via
sbatch m4it.slurm
When it executes, I get the following error in my m4it.err logfile:
./m4it.sh: line 8: m4: command not found
The PATH and the SHELL variables (printed to m4it.out by the m4it.slurm and by the m4it.sh scripts) are identical. The PATH contains my PATH when I login, and SHELL is /bin/bash, as expected.
Even if I include a symlink to the m4 executable from a directory in my PATH, I still get this error. Also, it is not just m4 that is the problem. The script will report the command "apropos" as an unknown command, even though it runs fine on the command line. The script can "cd" and "ls" just fine though.
I've checked read/write/execute permissions.
ls -ld / /usr /usr/bin /usr/bin/m4
yields the following:
dr-xr-xr-x. 30 root root 4096 Apr 8 11:11 /
drwxr-xr-x. 14 root root 4096 Feb 17 20:24 /usr
dr-xr-xr-x. 2 root root 36864 Apr 29 11:14 /usr/bin
-rwxr-xr-x 1 root root 212440 Jun 3 2010 /usr/bin/m4
It seems that the node on which the m4it.sh script executes is different from the front node, and somehow information (environment variables or paths) is not coming across. I have also tried to export all my settings with the argument --export=ALL as follows:
sbatch m4it.slurm --export=ALL
but this didn't work either (same result).
Can anyone help here?
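Update: one detail I noticed while debugging: sbatch parses its own options only before the script name, and everything after the script is passed to the script as arguments, so the flag above likely never reached sbatch at all. The placement should be:
sbatch --export=ALL m4it.slurm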
I was able to log in to the compute node in an interactive session. Indeed, that node's /usr/bin is significantly different from the front node's, and m4 is not installed.
This also explains why the symlink from a directory in my PATH no longer worked. It was pointing to /usr/bin/m4, but as soon as the job was executed on that compute node, /usr/bin/m4 no longer existed, and thus the symlink was invalid.
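For reference, the interactive check looked roughly like this (a sketch; the srun flags are site-specific assumptions):
srun --pty --account=MyAccount --time=00:10:00 bash -i
which m4 || echo "m4 not found on this node"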
If I want to use m4, the solution is either to ask the admins to install m4 on the compute nodes or, alternatively, to copy a local version of the executable to a directory in my home directory that is on my PATH.
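A sketch of that second option (the paths are assumptions, and this presumes the compute nodes can run a binary copied from the front node):
mkdir -p "$HOME/bin"
cp /usr/bin/m4 "$HOME/bin/"       # run this on the front node, where m4 exists
export PATH="$HOME/bin:$PATH"     # add to ~/.bashrc so batch jobs inherit it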
My user-data script:
#!
set -e -x
echo `whoami`
su root
yum update -y
touch ~/PLEASE_WORK.txt
which is fed in from the command:
ec2-run-instances ami-05355a6c -n 1 -g mongo-group -k mykey -f myscript.sh -t t1.micro -z us-east-1a
but when I check the file /var/log/cloud-init.log, the last five lines (tail -n 5) are:
[CLOUDINIT] 2013-07-22 16:02:29,566 - cloud-init-cfg[INFO]: cloud-init-cfg ['runcmd']
[CLOUDINIT] 2013-07-22 16:02:29,583 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
[CLOUDINIT] 2013-07-22 16:02:29,686 - cloud-init-cfg[DEBUG]: handling runcmd with freq=None and args=[]
[CLOUDINIT] 2013-07-22 16:02:33,691 - cloud-init-run-module[INFO]: cloud-init-run-module ['once-per-instance', 'user-scripts', 'execute', 'run-parts', '/var/lib/cloud/data/scripts']
[CLOUDINIT] 2013-07-22 16:02:33,699 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
I've also verified that curl http://169.254.169.254/latest/user-data returns my file as intended.
No other errors appear, and the output of my script never shows up. How do I get the user-data script to execute correctly on boot?
Actually, cloud-init allows a single shell script as an input (though you may want to use a MIME archive for more complex setups).
The problem with the OP's script is that the first line is incorrect. You should use something like this:
#!/bin/sh
The reason for this is that, while cloud-init uses #! to recognize a user script, the operating system needs a complete shebang line in order to execute the script.
So what's happening in the OP's case is that cloud-init behaves correctly (i.e. it downloads and tries to run the script) but the operating system is unable to actually execute it.
See: Shebang (Unix) on Wikipedia
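For example, the OP's script with a complete shebang would look like this (a sketch; note that cloud-init runs user-data scripts as root already, so the su step is unnecessary and ~ resolves to /root):
#!/bin/sh
set -e -x
whoami
yum update -y
touch /root/PLEASE_WORK.txt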
Cloud-init does not accept plain bash scripts just like that. It's a beast that eats a YAML file defining your instance (packages, ssh keys, and other stuff).
Using MIME you can also send arbitrary shell scripts, but you have to MIME-encode them.
$ cat my-boothook.txt
#!/bin/sh
echo "Hello World!"
echo "This will run as soon as possible in the boot sequence"
$ cat my-user-script.txt
#!/usr/bin/perl
print "This is a user script (rc.local)\n"
$ cat my-include.txt
# these urls will be pulled in as if they were part of user-data
# comments are allowed. The format is one url per line
http://www.ubuntu.com/robots.txt
http://www.w3schools.com/html/lastpage.htm
$ cat my-upstart-job.txt
description "a test upstart job"
start on stopped rc RUNLEVEL=[2345]
console output
task
script
echo "====BEGIN======="
echo "HELLO From an Upstart Job"
echo "=====END========"
end script
$ cat my-cloudconfig.txt
#cloud-config
ssh_import_id: [smoser]
apt_sources:
- source: "ppa:smoser/ppa"
$ ls
my-boothook.txt my-include.txt my-user-script.txt
my-cloudconfig.txt my-upstart-job.txt
$ write-mime-multipart --output=combined-userdata.txt \
my-boothook.txt:text/cloud-boothook \
my-include.txt:text/x-include-url \
my-upstart-job.txt:text/upstart-job \
my-user-script.txt:text/x-shellscript \
my-cloudconfig.txt
$ ls -l combined-userdata.txt
-rw-r--r-- 1 smoser smoser 1782 2010-07-01 16:08 combined-userdata.txt
combined-userdata.txt is the file you then paste in as your user-data.
More info here:
https://help.ubuntu.com/community/CloudInit
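If write-mime-multipart is not on your machine, on Ubuntu it ships with the cloud-utils tooling (the exact package name is an assumption and may vary by release):
sudo apt-get install cloud-utils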
Also note that this depends heavily on the image you are using. But you say it really is a cloud-init based image, so this applies. There are other cloud initializers that are not named cloud-init, and for those things could be different.
This is a couple of years old now, but for others' benefit: I had the same issue, and it turned out that cloud-init was running twice, from inside /etc/rc3.d. Deleting these files inside the folder allowed the user data to run correctly:
lrwxrwxrwx 1 root root 22 Jun 5 02:49 S-1cloud-config -> ../init.d/cloud-config
lrwxrwxrwx 1 root root 20 Jun 5 02:49 S-1cloud-init -> ../init.d/cloud-init
lrwxrwxrwx 1 root root 26 Jun 5 02:49 S-1cloud-init-local -> ../init.d/cloud-init-local
The problem is that cloud-init runs user scripts only once per instance, so it does not allow the user script to run again on the next start-up.
First remove the cloud-init artifacts by executing:
rm /var/lib/cloud/instances/*/sem/config_scripts_user
And then your userdata must look like this:
#!/bin/bash
echo "hello!"
Then start your instance. It now works (tested).
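On newer cloud-init releases there is also a built-in command that resets this state instead of deleting semaphore files by hand (a sketch; check that your version supports it):
cloud-init clean --logs
# clears /var/lib/cloud state (and logs) so user-data runs again on next boot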