I'm using Ubuntu server and I have a cgi-bin script doing the following:
#!/bin/bash
echo Content-type: text/plain
echo ""
cat /home/user/.program/logs/file.log | tail -400 | col -b > /tmp/o.txt
cat /tmp/o.txt
Now, if I run this script while I am su, the script fills o.txt, and host.com/cgi-bin/script runs but only shows output up to the point where I last ran the script from the CLI.
My Apache error log is showing "permission denied" errors, so I know that the user Apache runs as somehow cannot cat this file. I tried using chown to no avail. Since this file is in a user directory, what is the best way to handle it: duplicate it, symbolically link it, or something else?
I even considered running the script as root in a crontab to sort of "update" the file in /tmp, but that did not work for me. How would somebody experienced with cgi-bin handle access to a file in a user's directory?
The Apache user www-data does not have write access to a temporary file owned by another user.
But in this particular case, no temporary file is required.
tail -n 400 logfile | col -b
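Putting it together, and assuming the log path from the question, the whole CGI script reduces to:
#!/bin/bash
echo "Content-type: text/plain"
echo ""
tail -n 400 /home/user/.program/logs/file.log | col -b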
However, if Apache is running in a restricted chroot, it also has no access to /home.
The log file needs to be chmod o+r and all directories leading down to it should be chmod o+x. Make sure you understand the implications of this! If the user has a reason to want to prevent access to an intermediate directory, having read access to the file itself will not suffice. (Making something have www-data as its group owner is possible in theory, but impractical and pointless, as anybody who finds the CGI script will have access to the file anyway.)
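Concretely, assuming the paths from the original question, that would be something like:
chmod o+r /home/user/.program/logs/file.log
chmod o+x /home/user /home/user/.program /home/user/.program/logs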
More generally, if you do need a temporary file, the simple fix (not even workaround) is to generate a unique temporary file name, and remove it afterwards.
temp=$(mktemp -t cgi.XXXXXXXX) || exit $?
trap 'rm -f "$temp"' 0
trap 'exit 127' 1 2 15
tail -n 400 logfile | col -b >"$temp"
The first trap makes sure the file is removed when the script terminates. The second makes sure the first trap runs if the script is interrupted or killed.
I would be inclined to change the program that creates the log in the first place and have it write to some place visible to Apache, maybe through symbolic links.
For example:
ln -s /var/www/cgi-bin/logs /home/user/.program/logs
So your program continues to write to /home/user/.program/logs but the data actually lands in /var/www/cgi-bin/logs where Apache can read it.
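Note that /home/user/.program/logs must not already exist as a real directory when you create the symlink, and the target must exist. One way to set that up, as a sketch, assuming the paths from the question and that you want to keep the existing logs:
mkdir -p /var/www/cgi-bin/logs
mv /home/user/.program/logs/* /var/www/cgi-bin/logs/
rmdir /home/user/.program/logs
ln -s /var/www/cgi-bin/logs /home/user/.program/logs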
In my script I want to open a specific (device driver) file as FD 3.
exec 3< works fine for this in regular cases.
However the device driver file is only readable as root, so I'm looking for a way to open the FD as root using sudo.
So my question is: how can I open a file (descriptor) with sudo rights?
Unfortunately I have to keep the file open for the runtime of the script, so tricks like piping in or out do not work.
Also I don't want to run the whole script under sudo rights.
If sudo + exec is not possible at all, an alternative would be to call a program in the background, like sudo tail -f, but this poses another set of problems:
how to determine whether the program call was successful
how to get error messages if the call was not successful
how to "kill" the program at the end of execution.
EDIT:
To clarify what I want to achieve:
open /dev/tpm0 which requires root permissions
execute my commands with user permissions
close /dev/tpm0
The reason behind this is that opening /dev/tpm0 blocks other commands from accessing the tpm which is critical in my situation.
Thanks for your help
Can you just do something like the following?
# open the file with root privileges for reading
exec 3< <(sudo cat /dev/tpm0)
# read three characters from open file descriptor
read -n3 somechars <&3
# read a line from the open file descriptor
read line <&3
# close the file descriptor
exec 3<&-
In order to detect a failed open, you could do something like this:
exec 3< <(sudo cat /dev/tpm0 || echo FAILEDCODE)
Then when you first read from fd 3, see if you get FAILEDCODE. Or you could do something like this:
rm -f /tmp/itfailed
exec 3< <(sudo cat /dev/tpm0 || touch /tmp/itfailed)
Then check for /tmp/itfailed; if it exists, the sudo command failed.
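A sketch of that check, following the exec line above (the sleep is a crude way to give sudo a moment to fail before looking; adjust as needed):
sleep 1
if [ -e /tmp/itfailed ]; then
    echo "sudo could not open /dev/tpm0" >&2
    exit 1
fi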
I am doing an ftp and I want to check the status. I don't want to use '$?' because it mostly returns 0 (success) for ftp even when the transfer didn't actually go through.
I know I can check the log file and grep it for "Transfer complete" (status 226). That works fine, but I don't want to do it that way: I have many different reports doing ftp, and creating separate log files for all of them is what I want to avoid.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example of how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv "${HOST}" 2>&1)
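You can then inspect the captured output (including stderr, thanks to the 2>&1 above) right inside the script. A sketch, assuming a successful upload logs a "226 Transfer complete" line:
if echo "${OUTPUT}" | grep -q "226 Transfer complete"; then
    echo "FTP upload succeeded"
else
    echo "FTP upload failed:" >&2
    echo "${OUTPUT}" >&2
    exit 1
fi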
I'm asked to take away the permission for the root user to execute a bash script. Is that even possible? Actually, how would one take away any permission from some user? With chmod I can modify it for the current logged-in user, but not for root.
If you are simply looking for a small safeguard, an obstacle to accidentally running the script as root, write the script to voluntarily exit if run as root. Add the following to the beginning of the script:
if [[ $EUID == 0 ]]; then
printf 'Do not run this script as root\n' >&2
exit 1
fi
You can't do that; root can do everything. But there are some measures to make it difficult. On ext2/ext3/ext4 filesystems you can use the following command:
chattr +i script.sh
Doing this, the file can't be modified, but root can unlock it again with chattr -i script.sh.
Another thing you can do: put the script you want to be unchangeable by root on a remote server and mount it via NFS. If the server does not offer write permissions, that locks out the local root account. Of course, the local root account could just copy the files over locally, unmount the remote share, put the copy in place, and change that.
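A sketch of the NFS variant, with hypothetical server and path names; exporting with root_squash additionally maps the client's root to an unprivileged user on the server:
# on the NFS server, in /etc/exports:
/export/scripts client.example.com(ro,root_squash)
# on the client, mount the export read-only:
mount -t nfs -o ro fileserver:/export/scripts /mnt/scripts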
You cannot lock out root from deleting your files. If you cannot trust your root to keep files intact, you have a social problem, not a technical one.
I am new to Linux and the script below is just an example of my issue:
I have a script which works as expected when I execute it manually; however, when I set it to run via crontab it doesn't work, because it doesn't read the file content into the variable.
I have a file 'test.txt' which has 'abc' in it. My script puts the text into a variable 'var' and then I echo it out to a log file:
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
This works perfectly fine when I execute it, and it echoes into the log file, but not when I set it via crontab:
* * * * * /home/pi/MyScripts/test.sh
The cron job runs, but it sends me the following error message:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
But I have given it 777 permissions:
-rwxrwxrwx 1 pi pi 25 Jun 10 15:31 test.txt
-rwxrwxrwx 1 pi pi 77 Jun 10 15:34 test.sh
Any ideas?
This happens when you run the script with a different shell. It's especially relevant for systems where /bin/sh is dash:
$ cat myscript
echo "$(< file)"
$ bash myscript
hello world
$ sh myscript
$
To fix it, add #!/bin/bash as the first line in your script.
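For example, the fixed script:
#!/bin/bash
echo "$(< file)"
Note that the shebang only takes effect when the script is executed directly (e.g. ./myscript, or from cron); running sh myscript explicitly will still use sh.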
Others have provided answers, but I will give you a big clue from your error message; emphasis mine:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
Note how the cron job was trying to use /bin/sh to run the script. That's solved by always indicating which shell you want to use at the top of your script, like this:
#!/bin/bash
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
If your script is using bash, then you must explicitly set /bin/bash in some way.
Also, regarding permissions you say this:
But I have given it 777 permissions:
First, 777 permissions is a massive security risk. It means that anyone or anything on the system can read, write and execute the file. Don't do that. In the case of a cron job, the only entity that needs full (7) permissions on the file is the owner of the crontab running it.
Meaning if this is your crontab, just change the permissions to 755, which allows others to read and execute but not write. Or, better yet, change it to 700 so that only you, as the owner of the file, can do anything to it. But avoid 777 permissions if you want to keep your system safe, stable and sane.
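For example:
# allow others to read and execute, but not write:
chmod 755 /home/pi/MyScripts/test.sh
# or, stricter, keep the file entirely to yourself:
chmod 700 /home/pi/MyScripts/test.sh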
You have two options. In the first line of your file, tell which program should interpret the script:
#!/bin/bash
...more code...
Or, in your crontab, tell which program should interpret the script:
* * * * * bash /home/pi/MyScripts/test.sh
With this option, you do not need to make the script executable.
I would like to know if there is any way to send a mail as soon as someone tries su -, su or su root. I know the mail command, and I am trying to write a script, but I am very confused as to
where to write it - whether in root's .bashrc or in /etc/process
how to invoke the mail on the use of su
I've tried the usual Google search etc., but only got links on usage of su, disabling it, securing ssh and so on, none of which answered this question.
Thanks in advance
I guess that your underlying requirement is that you have a bunch of people to whom you have given root privilege, but you don't completely trust them, so you want to keep an eye on them. Your solution is to have mail sent to you whenever they become root.
The problem with this solution is that the root user has unlimited privilege, so there's nothing to stop them from counteracting this mechanism. They could, for instance, edit the /etc/login.defs file in one session, do the good thing that you want them to do, then later su to root, do the bad thing that you fear, and at the end of that session edit /etc/login.defs back to its original state, leaving you none the wiser. Alternatively, they could just make a copy of /usr/bin/bash and make the copy a setuid file that gives them privilege whenever they run it.
You might be able to close any of the vulnerabilities I've just suggested but there will be many, many more. So you either trust them or else don't use su at all and give them sudo permission to run just those commands that they need to do the thing you want them to do.
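For example, a hypothetical sudoers entry (always edit it with visudo) that grants a single command instead of full root:
# fred may restart Apache, and nothing else
fred ALL=(root) /usr/sbin/service apache2 restart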
There's a log file called /var/log/secure which receives an entry any time su is executed. It gets entries under other conditions as well. It's described in the Linux Administrator's Security Guide.
If user "fred" executes su -, an entry will appear which looks something like this:
Jul 27 08:57:41 MyPC su: pam_unix(su-l:session): session opened for user root by fred(uid=500)
A similar entry would appear with su or su root.
So you could set up a script which monitors /var/log/secure as follows:
#!/bin/sh
while inotifywait -e modify /var/log/secure; do
    # if the newest entry came from su, mail it
    if tail -n1 /var/log/secure | grep -q " su: "; then
        tail -n1 /var/log/secure | mail -s "su occurred" you@email.com
    fi
done
Note that you need to have the inotify-tools package installed to use inotifywait.
If this script is running in the background, it should send an email to you@email.com any time an su entry occurs.
Now, where to run the script? One approach would be to put it into an executable script file (say, watchsu) and call it from your rc.local file:
nohup /path/to/watchsu 2>&1 &
I'm sure there are other ideas for where to start it. I'm not familiar with CentOS.
According to the man page for su, in /etc/login.defs you can set either SULOG_FILE file or SYSLOG_SU_ENAB yes to log all su activity. Then you just need something like inotifywait to watch the log file for su events.
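For example, the relevant lines in /etc/login.defs might look like this (a sketch; check man login.defs on your system, since not every option is honored everywhere):
# log su attempts to syslog
SYSLOG_SU_ENAB yes
# or write them to a dedicated file
SULOG_FILE /var/log/sulog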