bash: open file descriptor with sudo rights - linux

In my script I want to open a specific (device driver) file as FD 3.
exec 3<file works fine for this in regular cases.
However, the device driver file is only readable as root, so I'm looking for a way to open the FD as root using sudo.
-> How can I open a file (descriptor) with sudo rights?
Unfortunately I have to keep the file open for the runtime of the script, so tricks like piping in or out do not work.
Also, I don't want to run the whole script with sudo rights.
If sudo + exec is not possible at all, an alternative would be to call a program in the background, e.g. sudo tail -f, but this poses another set of problems (a rough sketch follows the list below):
how to determine whether the program call was successful
how to get error messages if the call was not successful
how to "kill" the program at the end of execution.
EDIT:
To clarify what I want to achieve:
open /dev/tpm0 which requires root permissions
execute my commands with user permissions
close /dev/tpm0
The reason behind this is that opening /dev/tpm0 blocks other commands from accessing the TPM, which is critical in my situation.
Thanks for your help

Can you just do something like the following?
# open the file with root privileges for reading
exec 3< <(sudo cat /dev/tpm0)
# read three characters from open file descriptor
read -n3 somechars <&3
# read a line from the open file descriptor
read line <&3
# close the file descriptor
exec 3<&-
In order to detect a failed open, you could do something like this:
exec 3< <(sudo cat /dev/tpm0 || echo FAILEDCODE)
Then when you first read from fd 3, see if you get FAILEDCODE. Or you could do something like this:
rm -f /tmp/itfailed
exec 3< <(sudo cat /dev/tpm0 || touch /tmp/itfailed)
Then check for /tmp/itfailed; if it exists, the sudo command failed.
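For example, a minimal sketch of that check (the short sleep is only there to give sudo a moment to fail before the marker file is tested):
sleep 1
if [ -e /tmp/itfailed ]; then
    echo "opening /dev/tpm0 as root failed" >&2
    exit 1
fi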

Related

How do I copy the output from ttyS0 to a file but still see it on ttyS0 in PuTTY

I can use this at the beginning of the script to send the ttyS0 output to a log.txt file:
exec >> /mnt/Carousel_Games/systeminfo/pcuae-log.txt
exec 2>&1
That's OK, but now it will not show on ttyS0 in PuTTY. I can use this instead:
exec >> /dev/ttyS0
exec 2>&1
And it will show it in PuTTY but not in the log.txt.
Is there a way of getting it to do both: show it in PuTTY and send it to the log.txt file?
It's so I can see it booting on ttyS0, and I can look at the log.txt file if I need to; if someone is having a problem with it booting properly, I can see it booting on their machine from the log.txt file they send me.
One way to do it is to open another terminal and tail the file the data is being written to:
tail -f /mnt/Carousel_Games/systeminfo/pcuae-log.txt
Then, from the 2nd terminal, run your command (or your script containing the command):
exec >> /mnt/Carousel_Games/systeminfo/pcuae-log.txt
and your 1st terminal will show you what's coming into the file.
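A different technique, not part of the answer above but often used for exactly this "both places at once" requirement, is to duplicate the script's output with tee through process substitution (bash-specific; the paths are taken from the question):
exec > >(tee -a /mnt/Carousel_Games/systeminfo/pcuae-log.txt > /dev/ttyS0) 2>&1
# from here on, everything written to stdout/stderr is appended to the log and also sent to ttyS0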

Running a process with the TTY detached

I'd like to run a Linux console command from a terminal, preventing it from accessing the TTY by itself (which will, for example, often happen when the command tries to request a password from the user - this should just fail). The closest I get to a solution is this wrapper:
temp=$(mktemp -d)
echo "$@" > "$temp/run.sh"                                        # write the command line into a helper script
mkfifo "$temp/out" "$temp/err"
setsid sh -c "sh '$temp/run.sh' > '$temp/out' 2> '$temp/err'" &   # run it with no controlling TTY
cat "$temp/err" 1>&2 &                                            # forward the command's stderr
cat "$temp/out"                                                   # forward the command's stdout
rm -f "$temp/out" "$temp/err" "$temp/run.sh"
rmdir "$temp"
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
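For the record, a hypothetical usage of the wrapper above (assuming it is saved as notty.sh):
sh notty.sh echo hello      # plain commands work; "hello" appears on stdout
sh notty.sh sudo whoami     # sudo typically cannot prompt for a password without a TTY, so it fails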

Unable to write on /dev/* files

I'm writing a basic char device driver for the Linux kernel.
For this, the code flow I have considered is as follows:
alloc_chrdev_region() -> to use dynamic allocation of major number
class_create() -> to create device class in sysfs
device_create() -> to create device under /dev/
cdev_init() -> to initialize char device structure
cdev_add() -> to add my device structure in kernel
I have added read, write, open, release methods in code.
When I try to read the device file under /dev/, my read method is called.
But when I try to write to the /dev/ file using echo, it gives the error
"bash: /dev/scull: Permission denied"
I have checked the permissions of the file using ls -l, and I have permission to read and write this file.
This problem occurs for every device driver module I have written. It works well on another machine.
I'm working on Ubuntu 15.10, with a custom compiled kernel 4.3.0.
the result of ls -l /dev/scull:
crw------- 1 root root 247, 0 Dec 30 18:06 /dev/scull
the exact command I used to open the file
$ sudo echo 54 > /dev/scull
the source code for the write implementation
ssize_t scull_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos)
{
        pr_alert("Device Written\n");
        return 0;
}
The behavior I'm seeking here is that I should be able to see 'Device Written' in the dmesg output.
I assume that you are normally not root in your bash shell. Then this command line
sudo echo 54 > /dev/scull
does not do what you think. The command is executed in two steps:
Bash sets up the output redirection, i.e., it tries to open /dev/scull with the current user's privileges.
The command sudo echo 54 is executed with stdout connected to the file.
As you have no write permission as a non-root user, the first step fails and bash reports
"bash: /dev/scull: Permission denied"
You must already be root when the output redirection is set up. Thus execute
sudo -i
which gives you an interactive shell with root privileges. Then you can execute
echo 54 > /dev/scull
within that root shell.
I know the thread is old, but in case someone wants an alternative method without switching to the root user, here is a solution (the redirection is inside the quotes, so it is performed by the root shell that sudo starts):
sudo bash -c 'echo "54" > /dev/my_dev'
I wanted to note that on your system only root (the file owner) has read/write permissions. Your (normal) user account does not! So another (fast) solution would be to give all users read/write permissions.
Probably this is not the safest solution! Only do this in your test environment!
sudo chmod a+rw /dev/scull
Now you can test your module with your user account (without sudo):
echo "hello, world!" > /dev/scull
cat < /dev/scull
You can also do this by becoming root with the command
sudo su
and then going into the /dev directory and entering your command (to write data to /dev/scull):
cd /dev
echo 54 > scull

Cgi-bin script to cat a file owned by a user

I'm using Ubuntu server and I have a cgi-bin script doing the following . . .
#!/bin/bash
echo Content-type: text/plain
echo ""
cat /home/user/.program/logs/file.log | tail -400 | col -b > /tmp/o.txt
cat /tmp/o.txt
Now, if I run this script while I am "su", the script fills o.txt, and then host.com/cgi-bin/script runs, but it only shows output up to the point where I last ran it from the CLI.
My Apache error log is showing "permission denied" errors, so I know the user Apache runs as cannot cat this file. I tried using chown to no avail. Since this file is in a user's home directory, what is the best way to handle it: duplicate it, symlink it, or something else?
I even considered running the script as root from a crontab to sort of "update" the file in /tmp/, but that did not work for me. How would somebody experienced with cgi-bin handle access to a file in a user's directory?
The Apache user www-data does not have write access to a temporary file owned by another user.
But in this particular case, no temporary file is required.
tail -n 400 logfile | col -b
However, if Apache is running in a restricted chroot, it also has no access to /home.
The log file needs to be chmod o+r and all directories leading down to it should be chmod o+x. Make sure you understand the implications of this! If the user has a reason to want to prevent access to an intermediate directory, having read access to the file itself will not suffice. (Making something have www-data as its group owner is possible in theory, but impractical and pointless, as anybody who finds the CGI script will have access to the file anyway.)
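Using the paths from the question, that would be roughly:
chmod o+x /home/user /home/user/.program /home/user/.program/logs
chmod o+r /home/user/.program/logs/file.log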
More generally, if you do need a temporary file, the simple fix (not even workaround) is to generate a unique temporary file name, and remove it afterwards.
temp=$(mktemp -t cgi.XXXXXXXX) || exit $?
trap 'rm -f "$temp"' 0
trap 'exit 127' 1 2 15
tail -n 400 logfile | col -b >"$temp"
The first trap makes sure the file is removed when the script terminates. The second makes sure the first trap runs if the script is interrupted or killed.
I would be inclined to change the program that creates the log in the first place and write it to some place visible to Apache - maybe through symbolic links.
For example:
ln -s /var/www/cgi-bin/logs /home/user/.program/logs
So your program continues to write to /home/user/.program/logs, but the data actually lands in /var/www/cgi-bin/logs, where Apache can read it. (You would have to move the existing logs directory aside first so the symlink can be created in its place.)

pipe stderr to syslog inside automated ftp script

I'm using simple script to automate ftp. The script looks like this:
ftp -nv $FTP_HOST<<END_FTP
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
But I would like to pipe STDERR to syslog and STDOUT to a logfile. Normally I would do something like this: ftp -nv $FTP_HOST 1>>ftp.log | logger <<END_FTP, but in this case that won't work because of <<END_FTP. How should I do it properly to make the script work? Note that I want to redirect only the output from the ftp command inside my script, not the whole script.
This works without using a temp file for the error output. The 2>&1 sends the error output to where standard output is going — which is the pipe. The >> changes where standard output is going — which is now the file — without changing where standard error is going. So, the errors go to logger and the output to ftp.log.
ftp -nv $FTPHOST <<END_FTP 2>&1 >> ftp.log | logger
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
How about:
exec > mylogfile; exec 2> >(logger -t myftpscript)
in front of your ftp script.
Another way of doing this I/O redirection is with the { ... } operations, thus:
{
ftp -nv $FTPHOST <<END_FTP >> ftp.log
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
# Optionally other commands here...stderr will go to logger too
} 2>&1 | logger
This is often the best mechanism when more than one command, but not all commands, need the same I/O redirection.
In context, though, I think this solution is the best (but that's someone else's answer, not mine):
ftp -nv $FTPHOST <<END_FTP 2>&1 >> ftp.log | logger
...
END_FTP
Why not create a netrc file and let that do your login and put the file for you?
The netrc file will let you login and allow you to define an init macro that will make the needed directory and put the file you want over there. Most ftp commands let you specify which netrc file you'd like to use, so you could use various netrc files for various purposes.
Here's an example netrc file called my_netrc:
machine ftp_host
login ftp_user
password swordfish
macdef init
binary
mkdir my_dir
cd my_dir
put my_file
bye
Then, you could do this (note that a macdef block is terminated by a blank line, and a macro named init runs automatically after login):
$ ftp -v -Nmy_netrc $FTPHOST 2>&1 >> ftp.log | logger
