GPIO access rights under Linux Debian Buster in bash

Sorry for the beginner's question, but I'm a bit stuck.
Debian Buster on an OrangePi Zero+.
From root, # echo "6" > /sys/class/gpio/export works fine.
The same line in a bash script executed as root fails with Permission denied.
The script is owned by root and its mode is 2777.
From user level, $ sudo echo "6" > /sys/class/gpio/export fails as well.
I need to execute this statement within a shell script (it can run as root), so what can I do?

The culprit was CR/LF line endings imported from Windows. That's why the command worked at the command line but not in the script.
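A quick way to confirm and then strip the carriage returns (a sketch; export.sh stands in for whichever script misbehaves):
cat -A export.sh                      # CR shows up as ^M before each line-ending $
file export.sh                        # reports "with CRLF line terminators" if affected
dos2unix export.sh                    # strips the CRs
sed -i 's/\r$//' export.sh            # alternative if dos2unix is not installed
Separately, the plain sudo echo "6" > /sys/class/gpio/export form fails because the redirection is performed by the unprivileged calling shell, not by sudo; a common workaround is:
echo 6 | sudo tee /sys/class/gpio/export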

Related

Execute shell script within another script prompts: No such file or directory

(I'm new to shell scripting.)
I've been stuck on this issue for a while and have tried different methods, but without luck.
Description:
When my script attempts to run another script (SiebelMessageCreator.sh, which I don't own), it prompts:
-bash: ./SiebelMessageCreator.sh: No such file or directory
But the file exists and has execute permissions:
-rwxr-xr-x 1 owner ownergrp 322 Jun 11 2015 SiebelMessageCreator.sh
The code that performs the script execution is:
(cd $ScriptPath; su -c './SiebelMessageCreator.sh' - owner; su -c 'nohup sh SiebelMessageSender.sh &' - owner;)
It's within a subshell because I first thought the message was thrown because my script was running in my home directory. (When I run the script I'm root, and I've moved to my non-root home directory to run it, because I can't move my script [policies] to the directory where the other script resides.)
I've also tried sh SCRIPT.sh and ./SCRIPT.sh, and changing the shebang from bash to ksh because SiebelMessageCreator.sh uses that shell.
The su -c 'sh SCRIPT.sh' - owner is necessary: if the script runs as root rather than as owner it breaks something (that's what my partners told me from their experience running it as root), so I execute it as the owner.
Another thing I've found in my research is that it can throw that message if a symbolic link is involved. I'm really not sure whether the script involves one. Here it is:
#!/bin/ksh
BASEDIRROOT=/path/to/file/cpp-plwsutil-core-runtime.jar (path changed on purpose for this question)
java -classpath $BASEDIRROOT com.hp.cpp.plwsutil.SiebelMessageCreator
exitCode=$?
echo "`date -u '+%Y-%m-%d %H:%M:%S %Z'` - Script execution finished with exit code $exitCode."
exit $exitCode
As you can see, it's a very simple script that just calls a .jar. But again, I can't add it to my script [policies].
If I run ./SiebelMessageCreator.sh manually it works just fine, but not from my script. I suppose that rules out the 64-bit/32-bit issue I also found when I googled?
By the way, I'm automating some tasks; the ./SiebelMessageCreator.sh and nohup sh SiebelMessageSender.sh & steps are just the last ones.
Any ideas :( ?
Did you try:
. ./SiebelMessageCreator.sh
You can also run which sh or which ksh, then modify the first line (#!/bin/ksh) to match the interpreter that is actually installed.
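Both suggestions can be checked quickly before editing anything (a sketch, run from the script's directory; the classic causes of this error for an existing, executable script are a missing interpreter or a carriage return at the end of the shebang line):
head -1 SiebelMessageCreator.sh | cat -A   # a trailing ^M means CRLF line endings
ls -l /bin/ksh                             # does the interpreter named in the shebang exist?
which ksh                                  # where ksh actually lives, if anywhere
sed -i 's/\r$//' SiebelMessageCreator.sh   # strips carriage returns if that was the cause (needs write access)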

bash redirect to /dev/stdout: Not a directory

I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work with CentOS 5.8 no longer works with CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on underneath the bash hood that is causing the different behavior. Maybe it is a new bug in bash 4.1.2 or an old bug that was fixed and the new behavior is expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this is a problem related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have ruled out environment variables as the culprit. Any ideas?
On all the machines these commands give the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: it seems the sub-shell is inheriting the redirected stdout of the parent shell. That is not too surprising, I guess, but then why does it work on one machine and fail on the other when they are running the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but probably one has a slightly different filesystem version or configuration that caused the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not be redirecting it.
UPDATE: This problem arose after upgrade to NFS v4 from v3. After downgrading back to v3 this behavior went away.
Good morning, user1999165, :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a native Linux filesystem (ext3 or similar).
On many Linux systems, /dev/stdout is an alias (a symlink or similar) for file descriptor 1 of the current process. Seen from C, the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1, i.e. a redirect of a file descriptor to itself. I wouldn't expect this to work, since the semantics are "close the descriptor being redirected, then clone the new target onto it". So for it to work, there must be special code which notices that the two file descriptors are actually the same and skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11 and here, the redirect works (i.e. I don't get an error). Maybe it's a bug in BASH 4.1 which was fixed, since.
I was seeing issues writing piped stdin input to AWS EFS (NFSv4) that paralleled this issue. (Using CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this, here's their response --
This problem is not related to EFS itself, the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try to create a file handle before running the echo command
within a subshell; after that, the same file handle can be used as the redirect target, as in the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
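An equivalent workaround, if you control the inner command, is to duplicate the already-open descriptor with >&1 instead of going through the /dev/stdout path at all (a sketch of the same test case):
(echo "hi" >&1) > test.txt   # >&1 duplicates fd 1; no filesystem lookup of /dev/stdout is involved
cat test.txt
hi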

Bash script works but not when executed from crontab

I am new to Linux and the script below is just an example of my issue:
I have a script which works as expected when I execute it; however, when I set it to run via crontab it doesn't work, because it doesn't read the file content into the variable.
I have a file test.txt which contains abc. My script puts the text into the variable var and then echoes it out to a log file:
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
This works perfectly fine when I execute it, and it echoes into the log file, but not when it runs via crontab:
* * * * * /home/pi/MyScripts/test.sh
The cron job runs, and it sent me the following error message:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
But I have given it 777 permissions:
-rwxrwxrwx 1 pi pi 25 Jun 10 15:31 test.txt
-rwxrwxrwx 1 pi pi 77 Jun 10 15:34 test.sh
Any ideas?
This happens when you run the script with a different shell. It's especially relevant for systems where /bin/sh is dash:
$ cat myscript
echo "$(< file)"
$ bash myscript
hello world
$ sh myscript
$
To fix it, add #!/bin/bash as the first line in your script.
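If the script has to stay /bin/sh-compatible instead, a portable sketch is to use cat rather than the bash-only $(< file) form:
#!/bin/sh
var=$(cat /home/pi/MyScripts/test.txt)   # works in dash and bash alike
echo "$var" > /home/pi/MyScripts/log.log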
Others have provided answers, but I will give you a big clue from your error message; emphasis mine:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
Note how the cron job was trying to use /bin/sh to run the script. That's solved by always indicating which shell you want to use at the top of your script, like this:
#!/bin/bash
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
If your script is using bash, then you must explicitly set /bin/bash in some way.
Also, regarding permissions you say this:
But I have given it 777 permissions:
First, 777 permissions are a massive security risk: they mean that anyone or anything on the system can read, write & execute the file. Don't do that. In the case of a cron job, the only entity that needs full (7) permissions on the file is the owner of the crontab running it.
Meaning, if this is your crontab, just change the permissions to 755, which allows others to read & execute but not write. Or, better yet, change them to 700 so that only you, as the owner of the file, can do anything with it. But avoid 777 permissions if you want to keep your system safe, stable & sane.
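For example (assuming pi owns the crontab that runs the script):
chmod 755 /home/pi/MyScripts/test.sh   # owner may write; everyone else may only read and execute
chmod 700 /home/pi/MyScripts/test.sh   # stricter: only the owner may read, write or execute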
You have two options. In the first line of your file, specify which program should interpret the script:
#!/bin/bash
...more code...
Or, in your crontab, specify which program should interpret the script:
* * * * * bash /home/pi/MyScripts/test.sh
With this option, you do not need to make the script executable.

bashdb startup error: bashdb/lib/setshow.sh: line 91: /dev/pts/2: Permission denied

I'm trying to use bashdb on CentOS 4.1 (unfortunately I can't choose a different/newer OS).
I installed bash 4.2 and then bashdb 4.2-0.8. There were no complaints from configure, make, make check, or make install: everything looked peachy.
But trying to use bashdb, either as bash --debugger myscript or as bashdb myscript, always produces this error:
[bot#sjbld1 bin]$ bashdb -- putxen.sh
bash debugger, bashdb, release 4.2-0.8
Copyright 2002, 2003, 2004, 2006, 2007, 2008, 2009, 2010, 2011 Rocky Bernstein
This is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
/usr/local/share/bashdb/lib/setshow.sh: line 91: /dev/pts/2: Permission denied
/usr/local/share/bashdb/lib/setshow.sh: line 91: /dev/pts/2: Permission denied
/usr/local/share/bashdb/lib/setshow.sh: line 91: /dev/pts/2: Permission denied
/usr/local/share/bashdb/lib/setshow.sh: line 91: /dev/pts/2: Permission denied
[bot#sjbld1 bin]$
There's no line 91 in setshow.sh, and there's no /dev/pts in a directory listing of /dev.
Any suggestions on how to proceed will be very much appreciated. I'm taking on a broken mess of shell script, I'm not great at bash (or Linux), and I'm hoping for more intimate debugging than set -x and echo statements.
Thanks
For completeness I should have added the bash script I was trying to use as a bashdb test, as requested by konsolebox, though the "Permission denied" problem occurs with any code and is solved by using sudo as suggested by Red Cricket. Here's the script:
[bot#sjcpbrvpxbld1 bin]$ cat putxen.sh
if [ x$1 == x ]
then
echo must have filename as parameter
exit 1
fi
if [ -e $1 ]
then
echo $1 found
else
echo cannot find ./$1
exit 1
fi
FTPTGT=10.10.10.25
DIRTGT=xva
echo ftp upload file to $DIRTGT directory on $FTPTGT
ftp -n $FTPTGT <<EOF
user anonymous pass
hash
bin
cd $DIRTGT
put "$1"
bye
EOF
Since programs often have stdout and stderr redirected, bashdb tries to write its output to a tty unless directed otherwise; it determines the console by running the tty command.
Normally you don't need to run bashdb as root. But for reasons that are a mystery here, the user you are running bashdb as is not able to write to the tty registered to it. That is:
echo hi > $(tty)
will probably give you the same "Permission denied". ls -l $(tty) will probably tell you what's going on there.
However, as suggested in the comments, you can work around this by running as root, e.g. via sudo:
sudo bashdb -- putxen.sh
Another workaround is to add your user to the group (e.g. tty) that is listed when you run ls -l $(tty).
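A sketch of that second workaround (the group and user names here are only placeholders; use whatever ls -l $(tty) actually reports on your machine):
ls -l $(tty)               # e.g. crw--w---- 1 someuser tty 136, 2 ... /dev/pts/2
sudo usermod -aG tty bot   # add your own user ("bot" is hypothetical) to that group
# log out and back in so the new group membership takes effect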

Crontab Permission Denied [duplicate]

I'm having a problem with crontab when running a script.
My sudo crontab -e looks like this:
05 00 * * * /opt/mcserver/backup.sh
10 00 * * * /opt/mcserver/suspend.sh
05 08 * * * /sbin/shutdown -r +1
11 11 * * * /opt/mcserver/start.sh <--- This isn't working
And the start.sh looks like this:
#!/bin/sh
screen java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
and has these permissions (ls -l output):
-rwxr-xr-x 1 eve eve 72 Nov 24 14:17 start.sh
I can run the command from the terminal, with or without sudo:
./start.sh
But it won't start from crontab.
If i do
grep -iR "start.sh" /var/log
I get the following output
/var/log/syslog:Nov 27 11:11:01 eve-desk CRON[5204]: (root) CMD (eve /opt/mcserver/start.sh)
grep: /var/log/btmp: Permission denied
grep: /var/log/lightdm/x-0-greeter.log: Permission denied
grep: /var/log/lightdm/lightdm.log: Permission denied
grep: /var/log/lightdm/x-0.log: Permission denied
So my question is, why isn't it working?
And since my script runs without sudo, I don't necessarily need to put it in the sudo crontab?
( and I'm using Ubuntu 12.10 )
Thanks in advance,
Philip
Response to twalberg's answer
1. Changed the owner of craftbukkit.jar to root, to see if that fixed the problem:
-rw-r--r-- 1 root root 12084211 Nov 21 02:14 craftbukkit.jar
and also added an explicit cd in my start.sh script as such:
#!/bin/sh
cd /opt/mcserver/
screen java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
2. I'm not quite sure what you mean here. Should I use the following path in my start.sh file when I start java?
(output from which java)
/usr/bin/java
3. When my server closes, screen is terminated. Is it a good idea to start screen in "detached mode" anyway?
Still got the same "Permission denied" error.
Problem solved!
By using the proper flag on screen, as below, it is now working as it should!
screen -d -m java -d64 -Xincgc -Xmx2048M -jar craftbukkit.jar nogui
Thanks a lot to those who answered, and especially twalberg!
Here are some things to check:
root obviously has read/execute permissions on start.sh, but what are the permissions on craftbukkit.jar? Can root read it? You may also want to add an explicit cd /path/to/where/craftbukkit.jar/is in your start.sh script.
Is java in root's default path within cron? Note that this path is not necessarily the same as the one you get via sudo, su or logging in directly as root - it's typically much more restricted. Use full path names to both java and craftbukkit.jar to work around that.
Since screen will not start without a terminal available, you may need screen -d -m ... instead (see the sketch after this answer). Hopefully you intend to eventually attach to each screen instance and terminate it later, or you have arranged for it to terminate automatically when the script is done...
The /var/log/syslog entry shows that cron did in fact execute the script, so it must have failed for one of the above reasons (or something else I haven't noticed yet)
The other errors from grep are simply due to the fact your non-root user does not have permission to read those specific files (this is normal, and a good thing).
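Putting those checks together, a cron-friendly start.sh might look like this (a sketch; the directory comes from the question and the java path from the which java output above):
#!/bin/sh
cd /opt/mcserver/ || exit 1
# -d -m starts screen detached, which is what cron needs since it provides no terminal
screen -d -m /usr/bin/java -d64 -Xincgc -Xmx2048M -jar /opt/mcserver/craftbukkit.jar nogui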
start.sh is owned by "eve:eve" and your crontab is running as root.
You can solve this by running the following command:
chown root:root /opt/craftbukkit/start.sh
Your crontab will be running as root though.
Tip: when running scripts from crontab, always use absolute paths (it will make debugging a lot easier).
The log shows that the user has no access to some files under /var/log/; you should set the log files' permissions so the cron job's owner can read them.
