'command not found' when trying to run bash script [duplicate]

This question already has answers here:
Shell script not running, command not found
(12 answers)
Closed 3 years ago.
I am trying to run a bash script from a script called dev_ro; here is how it's being called:
export SUBNET="$(first_available_docker_network --lock-seconds 7200)"
I am calling dev_ro by ./dev_ro
I have confirmed that I have
#!/bin/bash
at the top of both files.
Here are the permissions for both files:
$ ls -lh dev_ro
-rwxrwxr-x 1 ME ME 423 Aug 21 15:57 dev_ro
$ ls -lh first_available_docker_network
-rwxrwxr-x 1 ME ME 2.2K Aug 21 15:55 first_available_docker_network
This is the output from running ./dev_ro
++ first_available_docker_network --lock-seconds 7200
compose/everest-compose: line 25: first_available_docker_network: command not found
Additionally, when I try to run the script directly:
ME#SERVER:~/Rosetta/compose$ first_available_docker_network
first_available_docker_network: command not found
ME#SERVER:~/Rosetta/compose$
I have the same setup running on a different server and it's working. The code was pulled from Git, so it's the same codebase.
Any help is much appreciated.
ME#OTHER_SERVER:~/Rosetta/compose$ first_available_docker_network
DEBUG:root:Docker subnets: [IPv4Network(... etc
ME#OTHER_SERVER:~/Rosetta/compose$ ^C

first_available_docker_network is not a standard Linux command; this must be your custom script. Try executing it using its absolute path. For example, instead of
ME#SERVER:~/Rosetta/compose$ first_available_docker_network
use
ME#SERVER:~/Rosetta/compose$ absolute_path_of_script/first_available_docker_network
Alternatively, you can add the directory containing first_available_docker_network to your PATH variable.
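For example, assuming the script lives in ~/Rosetta/compose (the directory shown in your prompt; adjust the path if it actually lives elsewhere), either of these should work:
$ ~/Rosetta/compose/first_available_docker_network --lock-seconds 7200
$ export PATH="$PATH:$HOME/Rosetta/compose"   # after this, the bare name resolves
The same applies inside dev_ro (or whichever script makes the call): either invoke the helper by its full path there, or extend PATH before the export SUBNET=... line runs.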

Related

How to enable debugging for all bash scripts at system-wide level [duplicate]

This question already has answers here:
bash recursive xtrace
(2 answers)
Closed 11 months ago.
I have a Linux system that uses lots of bash scripts as boot-up scripts. I want to print every bash statement that is being executed, in order to debug an issue. How can I do that at a system-wide level?
I had the following as init on my system:
$ ls -la /init
lrwxrwxrwx. 1 root root 20 Feb 10 04:46 /init -> /lib/systemd/systemd
I replaced init with a script like below :
#!/bin/bash
set -x
/lib/systemd/systemd
but I got no trace output.
Download the bash sources and change https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/flags.c#L88 to echo_command_at_execute = 1. echo_command_at_execute might be reset later, in reset_shell_flags() and via change_flag('x', 0), so make echo_command_at_execute const and remove all assignments to it. Recompile and install. Create sh -> bash symlinks appropriate for your system.
Note that some scripts may parse the stderr output of shell programs, so this modification may result in an unstable system.
Copy /bin/bash to, for example, /bin/bash2 (and do the same for sh), then replace /bin/bash with an executable wrapper script:
#!/bin/bash2
/bin/bash2 -x "$@"
Repeat the process for sh. The paths have to be absolute.
Note that for the initrd you will have to regenerate it and include the copied executables and the wrapper scripts in the image. Research your distribution's specific way of generating the initrd.
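A minimal sketch of that wrapper approach for bash (the temp-file-and-mv step is used because overwriting a running executable in place fails with "Text file busy"; repeat the same steps for sh if it points at bash on your system):
$ cp /bin/bash /bin/bash2
$ cat > /tmp/bash-wrapper <<'EOF'
#!/bin/bash2
exec /bin/bash2 -x "$@"
EOF
$ chmod 755 /tmp/bash-wrapper
$ mv /tmp/bash-wrapper /bin/bash
The exec keeps each script running as a single bash2 process instead of leaving an extra wrapper shell behind.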
Here is an idea that I have not confirmed:
mv /bin/bash /bin/bashx
vi /bin/bash    # give the new /bin/bash the following contents:
#!/bin/bashx
/bin/bashx -x "$@"
(Remember to chmod +x the new /bin/bash.)

Script command: separate input and output

I'm trying to monitor command execution on a shell.
I need to separate each input command from its output, for example:
input:
ls -l /
output:
total 76
lrwxrwxrwx 1 root root 7 Aug 11 10:25 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Aug 11 11:18 boot
drwxr-xr-x 17 root root 3200 Oct 11 11:10 dev
...
Also, I want to do the same if I open another shell, for example, after connecting through ssh to another server.
I've been using the script command to do this and it works just fine!
It logs all command input and output even if the shell changes (through ssh, or entering a msfconsole, for example).
Nevertheless, I found two main issues:
For my project, I need to separate (using a decoder) each command from the rest, and it would also be great to be able to separate command input from output, for example:
cmd1. pwd ---> /var/
cmd2. echo "hello world" ---> "hello world"
....
Sometimes the script command generates output containing garbage due to shell escape sequences (colors, for example), which I would like to filter out.
So I've been thinking about this, and I guess I could create a simple script that reads the file written by the script command and processes the data.
Nevertheless, I'm not sure what the best approach would be.
I'm evaluating different solutions and would like to hear proposals from the community.
Maybe I'm missing something and you know a better tool than script, or have an idea I've not considered.
Best regards,
A useful util for distinguishing stdout from stderr is annotate-output (from the "devscripts" package), which sends both stdout and stderr to its own stdout along with helpful little prefixes. For example, let's try counting the characters of a file that exists, plus one that doesn't exist:
annotate-output wc -c /bin/bash /bin/nosuchshell
Output:
00:29:06 I: Started wc -c /bin/bash /bin/nosuchshell
00:29:06 E: wc: /bin/nosuchshell: No such file or directory
00:29:06 O: 1099016 /bin/bash
00:29:06 O: 1099016 total
00:29:06 I: Finished with exitcode 1
That output could be parsed separately using sed, awk, or even a tee and a few greps.
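A minimal sketch of that parsing, relying on the fixed "HH:MM:SS X:" prefixes shown above; the sed expression for stripping color codes is a generic ANSI-escape pattern (GNU sed), not something annotate-output itself provides:
$ annotate-output wc -c /bin/bash /bin/nosuchshell > annotated.log
$ grep ' O: ' annotated.log | cut -d' ' -f3-     # stdout lines only
$ grep ' E: ' annotated.log | cut -d' ' -f3-     # stderr lines only
$ sed 's/\x1b\[[0-9;]*[A-Za-z]//g' typescript    # strip ANSI escapes from a "script" log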

Bash binary reports a different version at different times

I am running a bash script with the shebang line #!/bin/bash on RHEL 7.6. The bash version is 4.2, which I have verified by running /bin/bash --version. But when this script is executed, the BASH_VERSION variable reported inside the script is 3.X. I have checked that the binary running the script is indeed /bin/bash by printing $(ls -l /proc/$$/exe), which reports:
lrwxrwxrwx 1 xx yy 0 Jan 28 21:03 /proc/6855/exe -> /bin/bash
I am really confused as to how this can happen.
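A small diagnostic sketch that may help narrow this down, comparing what the kernel says is executing against what the shell reports about itself (only standard bash variables are used; nothing here is specific to your setup):
#!/bin/bash
# which binary is actually interpreting this script?
echo "exe:           $(readlink /proc/$$/exe)"
# what does that interpreter say about itself?
echo "BASH:          $BASH"
echo "BASH_VERSION:  $BASH_VERSION"
echo "BASH_VERSINFO: ${BASH_VERSINFO[*]}"
Running this with the same shebang should show whether the 3.X number is coming from the binary at /proc/$$/exe or from something else in the environment.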

user-data (cloud-init) script not executing on EC2

My user-data script:
#!
set -e -x
echo `whoami`
su root
yum update -y
touch ~/PLEASE_WORK.txt
It is fed in via the command:
ec2-run-instances ami-05355a6c -n 1 -g mongo-group -k mykey -f myscript.sh -t t1.micro -z us-east-1a
But when I check the file /var/log/cloud-init.log, the last five lines (tail -n 5) are:
[CLOUDINIT] 2013-07-22 16:02:29,566 - cloud-init-cfg[INFO]: cloud-init-cfg ['runcmd']
[CLOUDINIT] 2013-07-22 16:02:29,583 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
[CLOUDINIT] 2013-07-22 16:02:29,686 - cloud-init-cfg[DEBUG]: handling runcmd with freq=None and args=[]
[CLOUDINIT] 2013-07-22 16:02:33,691 - cloud-init-run-module[INFO]: cloud-init-run-module ['once-per-instance', 'user-scripts', 'execute', 'run-parts', '/var/lib/cloud/data/scripts']
[CLOUDINIT] 2013-07-22 16:02:33,699 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
I've also verified that curl http://169.254.169.254/latest/user-data returns my file as intended.
No other errors appear, and none of my script's output shows up. How do I get the user-data script to execute correctly on boot?
Actually, cloud-init allows a single shell script as an input (though you may want to use a MIME archive for more complex setups).
The problem with the OP's script is that the first line is incorrect. You should use something like this:
#!/bin/sh
The reason for this is that, while cloud-init uses #! to recognize a user script, the operating system needs a complete shebang line in order to execute the script.
So what's happening in the OP's case is that cloud-init behaves correctly (i.e. it downloads and tries to run the script) but the operating system is unable to actually execute it.
See: Shebang (Unix) on Wikipedia
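Concretely, a corrected version of the user-data from the question might look like the sketch below; dropping the su line (cloud-init user scripts already run as root) and using an explicit path instead of ~ are my assumptions about the intent, not part of the original answer:
#!/bin/bash
set -e -x
whoami                       # should print root
yum update -y
touch /root/PLEASE_WORK.txt  # explicit path instead of ~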
Cloud-init does not accept plain bash scripts just like that. It's a beast that eats a YAML file defining your instance (packages, ssh keys and other stuff).
Using MIME you can also send arbitrary shell scripts, but you have to MIME-encode them.
$ cat my-boothook.txt
#!/bin/sh
echo "Hello World!"
echo "This will run as soon as possible in the boot sequence"
$ cat my-user-script.txt
#!/usr/bin/perl
print "This is a user script (rc.local)\n"
$ cat my-include.txt
# these urls will be pulled in as if they were part of user-data
# comments are allowed. The format is one url per line
http://www.ubuntu.com/robots.txt
http://www.w3schools.com/html/lastpage.htm
$ cat my-upstart-job.txt
description "a test upstart job"
start on stopped rc RUNLEVEL=[2345]
console output
task
script
echo "====BEGIN======="
echo "HELLO From an Upstart Job"
echo "=====END========"
end script
$ cat my-cloudconfig.txt
#cloud-config
ssh_import_id: [smoser]
apt_sources:
- source: "ppa:smoser/ppa"
$ ls
my-boothook.txt my-include.txt my-user-script.txt
my-cloudconfig.txt my-upstart-job.txt
$ write-mime-multipart --output=combined-userdata.txt \
my-boothook.txt:text/cloud-boothook \
my-include.txt:text/x-include-url \
my-upstart-job.txt:text/upstart-job \
my-user-script.txt:text/x-shellscript \
my-cloudconfig.txt
$ ls -l combined-userdata.txt
-rw-r--r-- 1 smoser smoser 1782 2010-07-01 16:08 combined-userdata.txt
The combined-userdata.txt is the file you want to paste there.
More info here:
https://help.ubuntu.com/community/CloudInit
Also note that this depends heavily on the image you are using. You say it really is a cloud-init based image, so this applies; other cloud initializers that are not named cloud-init could behave differently.
This is a couple of years old now, but for others' benefit: I had the same issue, and it turned out that cloud-init was running twice, from inside /etc/rc3.d. Deleting these files inside that folder allowed the user data to run correctly:
lrwxrwxrwx 1 root root 22 Jun 5 02:49 S-1cloud-config -> ../init.d/cloud-config
lrwxrwxrwx 1 root root 20 Jun 5 02:49 S-1cloud-init -> ../init.d/cloud-init
lrwxrwxrwx 1 root root 26 Jun 5 02:49 S-1cloud-init-local -> ../init.d/cloud-init-local
The problem is that cloud-init marks the user script as already run for the instance, so it will not run it again on the next start-up.
First remove the cloud-init artifacts by executing:
rm /var/lib/cloud/instances/*/sem/config_scripts_user
And then your userdata must look like this:
#!/bin/bash
echo "hello!"
Then start your instance. It now works (tested).

Crontab Permission Denied [duplicate]

This question already has answers here:
CronJob not running
(19 answers)
Closed last month.
I'm having a problem with crontab when running a script.
My sudo crontab -e looks like this:
05 00 * * * /opt/mcserver/backup.sh
10 00 * * * /opt/mcserver/suspend.sh
05 08 * * * /sbin/shutdown -r +1
11 11 * * * /opt/mcserver/start.sh <--- This isn't working
And the start.sh looks like this:
#!/bin/sh
screen java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
and have these permissions (ls -l output)
-rwxr-xr-x 1 eve eve 72 Nov 24 14:17 start.sh
I can run the command from the terminal, either using sudo or not
./start.sh
But it won't start with crontab.
If I do
grep -iR "start.sh" /var/log
I get the following output
/var/log/syslog:Nov 27 11:11:01 eve-desk CRON[5204]: (root) CMD (eve /opt/mcserver/start.sh)
grep: /var/log/btmp: Permission denied
grep: /var/log/lightdm/x-0-greeter.log: Permission denied
grep: /var/log/lightdm/lightdm.log: Permission denied
grep: /var/log/lightdm/x-0.log: Permission denied
So my question is, why isn't it working?
And since my script runs without sudo, do I even need to put it in the sudo crontab?
( and I'm using Ubuntu 12.10 )
Thanks in advance,
Philip
Response to twalberg's answer
1. Changed the owner of craftbukkit.jar to root, to see if that fixed the problem:
-rw-r--r-- 1 root root 12084211 Nov 21 02:14 craftbukkit.jar
and also added an explicit cd in my start.sh script as such:
#!/bin/sh
cd /opt/mcserver/
screen java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
2. I'm not quite sure what you mean here. Should I use the following path in my start.sh file when I start java?
(output from which java)
/usr/bin/java
3. When my server closes, screen is terminated. Is it a good idea to start screen in "detached mode" anyway?
Still got the same "Permission denied" error.
Problem solved!
By using the proper flags on screen, as below, it is now working as it should!
screen -d -m java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
Thanks a lot to those who answered, and especially twalberg!
Here are some things to check:
root obviously has read/execute permissions on start.sh, but what are the permissions on craftbukkit.jar - can root read it? You may also want to add an explicit cd /path/to/where/craftbukkit.jar/is in your start.sh script.
Is java in root's default path within cron? Note that this path is not necessarily the same as the one that you get via sudo, su or directly logging in as root - it's typically much more restricted. Use full path names to both java and craftbukkit.jar to work around that.
Since screen will not start without a terminal available, you may need screen -d -m ... instead (a combined sketch follows at the end of this answer). Hopefully, you intend to eventually attach to each screen instance and terminate it later, or you have arranged for it to terminate automatically when the script is done...
The /var/log/syslog entry shows that cron did in fact execute the script, so it must have failed for one of the above reasons (or something else I haven't noticed yet)
The other errors from grep are simply due to the fact that your non-root user does not have permission to read those specific files (this is normal, and a good thing).
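Putting those points together, a cron-friendly start.sh would look something like the sketch below; the /usr/bin locations for java and screen are assumptions, so confirm them with which java and which screen:
#!/bin/sh
# run from the server directory so craftbukkit.jar is found
cd /opt/mcserver/ || exit 1
# full paths because cron's PATH is minimal; -d -m starts screen detached
/usr/bin/screen -d -m /usr/bin/java -d64 -Xincgc -Xmx2048M -jar /opt/mcserver/craftbukkit.jar nogui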
start.sh is owned by "eve:eve" and your crontab is running as root.
You can solve this by running the following command:
chown root:root /opt/mcserver/start.sh
Your crontab will be running as root though.
Tip: when running bash scripts from crontab, always use absolute paths (it will make debugging a lot easier).
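For example, a crontab entry that spells out the script's path and also captures its output for debugging might look like this (the log file location is only an illustration):
11 11 * * * /opt/mcserver/start.sh >> /var/log/mcserver-cron.log 2>&1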
The grep output shows that your user has no access to some files under /var/log/; you would need to adjust those log files' permissions for the user running the command.
