Pipe command with sudo - linux

I have a script which runs this command successfully. I am using the same command in another script, which gives me an error on this line (.md5: Permission denied).
I am running the previous script with sudo.
for i in ${NAME}*
do
sudo md5sum $i | sed -e "s/$i/${NAME}/" > ${NAME}.md5${i/#${NAME}/}
done

So you want to redirect output as root. It doesn't matter that you executed the command with sudo, because redirection is not part of the execution, so it's not performed by the executing user of the command, but by your current user.
The common trick is to use tee:
for i in ${NAME}*
do
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/}
done
Note: I dropped the sudo from md5sum, as you probably don't need it.
Note: tee outputs in two directions: the specified file and stdout. If you want to suppress the output on stdout, redirect it to /dev/null.
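For example, the same loop with tee's standard output discarded:
for i in ${NAME}*
do
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/} > /dev/null
done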

You take the output of sudo md5sum $i and pipe it to a sed which is not running as root. sudo doesn't even know this sed exists.
But that's not the problem, because the sed does not need root permissions. The problem is > ${NAME}.... This redirects the output of sed to the file with this name. But the redirection is actually performed by your shell, which is running as your user. And because > is a shell operator, not a command, you cannot prefix it with sudo.
The simple solution is to use tee. tee is a program (so you can run it with sudo) which writes its input to standard output and also to a file (like a T-pipe, hence the name).
So you can just:
for i in ${NAME}*
do
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/}
done
Note this will also dump all hashes to your standard output.
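If you would rather have the redirection itself run as root instead of using tee, another common approach (just a sketch; the file names are placeholders) is to let a root shell perform the redirection:
sudo sh -c 'md5sum somefile > /protected/dir/somefile.md5'
Here the single quotes make sure the > is interpreted by the sh that sudo starts, not by your own shell.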

Related

Get file count at remote location during FTP in shell script on linux server

Requirement: I need to get a file count based on a wildcard entry present at a remote location (Linux server) and store it in a variable for validation purposes.
I tried the code below:
export ExpectedFileCount=$(ftp -inv $FTPSERVER >> $FTPLOGFILE <<END_SCRIPT
user $FTP_USER $FTP_PASSWORD
passive
cd $PATH
ls -ltr ${WILDCARD}*xml| wc -l | sed 's/ *//g'
quit
END_SCRIPT)
But the code stores the code snippet itself in the variable and executes the commands every time I call the variable.
Please suggest changes to the script so that it executes once and stores the value in the variable.
This seems to work (on Ubuntu, no promises about portability):
export ExpectedFileCount=`ftp -in $FTPSERVER << END_SCRIPT | tee -a $FTPLOGFILE | egrep -c '\.xml$'
user $FTP_USER $FTP_PASSWORD
passive
cd $REMOTE_PATH
ls -l
quit
END_SCRIPT`
Issues:
$REMOTE_PATH is used in place of $PATH for the remote directory (as $PATH has a special meaning)
only a simple ls -l is performed inside the ftp session, and the output is parsed locally, as ftp does not support arbitrary shell commands
I can't see how to capture the output of a command with a heredoc using $(...), but it seems to work with backticks if the closing backtick comes after the final delimiter

How to read a file using cat with Perl -e parameters?

I've set up a penetration testing VM and am trying to practice privilege escalation.
I'm currently trying to read a file. I do not have access to the user's home directory where the file is located but I have permissions to run /usr/bin/perl as the user/admin.
My understanding is that I could run the following command to essentially cat the file and see what's inside, using the perl permissions granted to me, but it doesn't seem to work and gives no result back:
james@linuxtest:~$ sudo -l
Matching Defaults entries for james on linuxtest:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User james may run the following commands on linuxtest:
(james2) /usr/bin/perl
james@linuxtest:~$ sudo -u james2 perl -e 'print 'cat /home/james/test.txt''
I expected the result to be the contents of the file or at least an error of some sort but no result. Am I making a stupid mistake here?
I think you wanted
sudo -u james2 perl -e 'print `cat /home/james/test.txt`'
Backticks are used to execute a shell command and capture its output.
That's a weird way of doing
sudo -u james2 perl -e 'system "cat /home/james/test.txt"'
which is a weird way of doing
sudo -u james2 cat /home/james/test.txt
And if you're running as root anyway, that's a weird way of doing
cat /home/james/test.txt

"stdin: is not a tty" from cronjob

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I don't use the bash -l syntax, the script hangs during the wget call. So my guess would be that it has something to do with wget writing its output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, i.e. the kind that is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (i.e. the isatty(0) call should return 1), which is not true when it is run by cron, hence the warning.
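You can reproduce the same check from a shell with test's -t flag (a small sketch):
[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"
Run interactively it prints the first message; run from cron or with stdin redirected, the second.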
Another easy, and very common, way to reproduce this warning is to run a command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when it is called with a command as a parameter (use the -t option of ssh to force terminal allocation in this case).
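For example, forcing a pseudo-terminal makes the warning go away:
$ ssh -t user@example.com 'bash -l -c "echo test"'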
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile and the first of ~/.bash_profile, ~/.bash_login, and ~/.profile that exists (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc, which is correct, but you could also have defined it in ~/.bashrc and it would have worked.
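As a sketch (the proxy URL is a placeholder, and this assumes a cron that accepts environment assignments in its crontab files, as Vixie cron and cronie do), you could also set the variable directly in the cron.d file and drop -l entirely:
http_proxy=http://proxy.example.com:3128
* * * * * root /bin/bash -c "/opt/get.sh > /tmp/file"
That way the script no longer depends on any shell startup file being read.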
In your .profile, change
mesg n
to
if `tty -s`; then
mesg n
fi
I ended up putting the proxy configuration in ~/.wgetrc. There is no longer any need to execute the script in a login shell.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether you are getting all the environment variables set as you expect. Thanks to Cyrus for pointing me in the right direction.
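A quick way to check (just a sketch; the file path is arbitrary) is a temporary cron entry that dumps cron's environment to a file:
* * * * * root env > /tmp/cron-env.txt
Then compare it with your interactive shell, for example from bash:
diff <(sort /tmp/cron-env.txt) <(env | sort)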

Command output redirect to file and terminal [duplicate]

This question already has answers here:
How to redirect output to a file and stdout
I am trying to send command output to a file and also to the console, because I want to keep a record of the output in a file. I am doing the following, and it writes to the file but does not print the ls output on the terminal.
$ ls 2>&1 > /tmp/ls.txt
Yes, if you redirect the output, it won't appear on the console. Use tee.
ls 2>&1 | tee /tmp/ls.txt
It is worth mentioning that 2>&1 means that standard error will be redirected too, together with standard output. So
someCommand | tee someFile
gives you just the standard output in the file, but not the standard error: standard error will appear in the console only. To get standard error into the file too, you can use
someCommand 2>&1 | tee someFile
(source: In the shell, what is " 2>&1 "? ). Finally, both of the above commands will truncate the file and start from scratch. If you run a sequence of commands, you may want to collect the output and errors of all of them, one after another. In this case you can use the -a flag of the tee command:
someCommand 2>&1 | tee -a someFile
In case somebody needs to append the output rather than overwrite it, it is possible to use the "-a" or "--append" option of the tee command:
ls 2>&1 | tee -a /tmp/ls.txt
ls 2>&1 | tee --append /tmp/ls.txt

How to log output in bash and see it in the terminal at the same time?

I have some scripts where I need to see the output and log the result to a file, with the simplest example being:
$ update-client > my.log
I want to be able to see the output of the command while it's running, but also have it logged to the file. I also log stderr, so I would want to be able to log the error stream while seeing it as well.
update-client 2>&1 | tee my.log
2>&1 redirects standard error to standard output, and tee sends its standard input to standard output and the file.
Just use tail to watch the file as it's updated. Background your original process by adding & after the command above. Then, after you execute the command, just use
$ tail -f my.log
It will update continuously. (Note that it won't tell you when the command has finished, so you can output something to the log yourself to signal that it finished. Press Ctrl-C to exit tail.)
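Put together with the command from the question (a sketch; stderr is included so errors land in the log as well):
$ update-client > my.log 2>&1 &
$ tail -f my.log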
You can use the tee command for that:
command | tee /path/to/logfile
The equivalent without writing to the shell would be:
command > /path/to/logfile
If you want to append (>>) and show the output in the shell, use the -a option:
command | tee -a /path/to/logfile
Please note that the pipe catches stdout only; errors written to stderr are not passed to tee. If you want to log errors (from stderr) as well, use:
command 2>&1 | tee /path/to/logfile
This means: run command and redirect the stderr stream (2) to stdout (1). That will be passed to the pipe with the tee application.
Learn more about this on the Ask Ubuntu site.
Another option is to use block-based output capture from within the script (not sure if that is the correct technical term).
Example
#!/bin/bash
{
echo "I will be sent to screen and file"
ls ~
} 2>&1 | tee -a /tmp/logfile.log
echo "I will be sent to just terminal"
I like to have more control and flexibility, so I prefer this way.
