Pseudo-terminal will not be allocated because stdin is not a terminal - sudo - linux

There are other threads with this same topic but my issue is unique. I am running a bash script that has a function that sshes to a remote server and runs a sudo command on the remote server. I'm using the ssh -t option to avoid the requiretty issue. The offending line of code works fine as long as it's NOT being called from within the while loop. The while loop basically reads from a csv file on the local server and calls the checkAuthType function:
while read inputline
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done < configfile.csv
This is the function that sits at the top of the script (outside of any while loops):
function checkAuthType()
{
if [ $2 == linux ]; then
LINE=`ssh -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
fi
if [ $2 == unix ]; then
LINE=`ssh -n $1 'grep "PasswordAuthentication" /usr/local/etc/sshd_config | grep -v "yes\|Yes\|#"'`
fi
<more irrelevant code>
}
So, the offending line is the line that has the sudo command within the function. I can change the command to something simple like "sudo ls -l" and I will still get the "stdin is not a terminal" error. I've also tried "ssh -t -t" but to no avail. But if I call the checkAuthType function from outside of the while loop, it works fine. What is it about the while loop that changes the terminal and how do I fix it? Thank you one thousand times in advance.

Another way to get around the problem is to redirect the file to a different file descriptor and make read read from it instead:
while read inputline <&3
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done 3< configfile.csv

I am guessing you are testing with linux. You should try adding the -n flag to your (linux) ssh command to keep ssh from reading stdin - since it normally reads from stdin, the while loop is feeding it your csv.
UPDATE
You should (usually) use the -n flag when scripting with SSH, and the flag is typically needed for 'expected behavior' when using a while read-loop. It does not seem to be the main issue here, though.
There are probably other solutions to this, but you could try adding another -t flag to force pseudo-tty allocation when stdin is not a terminal:
ssh -n -t -t
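As a sketch of what that suggestion looks like applied to the linux branch of checkAuthType from the question (untested; -n keeps ssh from swallowing the loop's csv on stdin, and the doubled -t forces pty allocation even though stdin is no longer a terminal):
LINE=$(ssh -n -t -t "$1" 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"')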

BroSlow's approach with a different file descriptor seems to work! Since the read command reads from fd 3 rather than stdin, ssh (and hence sudo) still gets the tty/pty as its stdin.
# simple test case
while read line <&3; do
sudo -k
echo "$line"
ssh -t localhost 'sudo ls -ld /'
done 3<&- 3< <(echo 1; sleep 3; echo 2; sleep 3)

Related

/proc/meminfo not updating when reading from ssh script

I have the following script in bash:
ssh user@1.1.1.1 "echo 'start'
mkdir -p /home/user/out
cp /tmp/big_file /home/user/out
echo 'syncing flash'
sync
while [[ $(cat /proc/meminfo | grep Dirty | awk '{print $2}') -ne 0 ]] ; do
echo \"$(cat /proc/meminfo)\"
sleep 1
sync
done
echo 'done'"
I have my host PC and a target PC which I am copying to. Before I run this script I have already scp'd a big file into /tmp on the target.
When I run this script it copies /tmp/big_file OK, but when it enters the loop to sync the flash and wait for the Dirty value in meminfo to reach zero, what I see is always Dirty: 74224 kB repeated in the loop.
However, in a different ssh session logged in to the target I have this running:
watch -n1 "cat /proc/meminfo | grep Dirty"
And there I see the count go down from ~74000 kB to 0 kB.
The difference is that the ssh session doing the watch is logged in as root, while the ssh in the script is logged in as a normal user.
So I did the same test with an interactive ssh shell logged in as the user, and there I always saw 0 kB for Dirty...
Does this imply that the user can't read meminfo relating to the whole system? How can I tell when the flash has sync'd as a non-root user?
Since the argument to ssh is in double quotes, variables and command substitutions are expanded locally on the client before the command is sent; they are not evaluated on the remote machine. Because they're substituted on the client, you'll obviously get the same result each time through the loop (the client isn't looping).
You should either escape the $ characters so they're sent to the server, or put the command inside single quotes (but the latter makes it difficult to include single quotes in the command).
ssh user@1.1.1.1 "echo 'start'
mkdir -p /home/user/out
cp /tmp/big_file /home/user/out
echo 'syncing flash'
sync
while [[ \$(awk '/Dirty/ {print \$2}' /proc/meminfo) -ne 0 ]] ; do
cat /proc/meminfo
sleep 1
sync
done
echo 'done'"
There's also no need for cat /proc/meminfo and grep Dirty in the command substitution. awk can do pattern matching and take a filename argument.
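For example, these two print the same thing (and inside the double-quoted ssh argument above, the $2 still needs to be escaped as \$2):
cat /proc/meminfo | grep Dirty | awk '{print $2}'
awk '/Dirty/ {print $2}' /proc/meminfo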

bash script loop breaks [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it only seems to work with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands, and by default ssh reads from stdin, which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
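For example, inside do_work.sh (host and command are placeholders):
# -n makes ssh take its stdin from /dev/null instead of the loop's input file
ssh -n user@somehost 'some remote command'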
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and the redirection operator for < $FILENAME.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
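A sketch of what such nesting looks like (the file names are placeholders):
while read -u 9 outer; do
  while read -u 8 inner; do
    echo "$outer / $inner"
  done 8< inner_list.txt
done 9< outer_list.txt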
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
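To see the backslash point concretely:
printf '%s\n' 'a\tb' | { read line; echo "$line"; }     # prints atb  (backslash consumed)
printf '%s\n' 'a\tb' | { read -r line; echo "$line"; }  # prints a\tb (backslash kept)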
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option gets in the way of checking the exit status of ssh when you feed it a here-document and pipe its output to another program, so using /dev/null as stdin is preferred:
#!/bin/bash
while read ONELINE ; do
ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
if [ ${PIPESTATUS[0]} -ne 0 ] ; then
echo "aborting loop"
exit ${PIPESTATUS[0]}
fi
done < input_list.txt
This was happening to me because I had set -e enabled and a grep inside the loop was finding no matches (which makes grep return a non-zero exit status).
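A minimal sketch of that failure mode and a guard (the pattern and file name are made up):
#!/bin/bash
set -e
while read -r line; do
  # grep exits non-zero when it finds nothing; under set -e that aborts the script,
  # so guard it with || true (or test it in an if)
  echo "$line" | grep 'pattern' || true
done < input.txt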

"stdin: is not a tty" from cronjob

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I'm not using the bash -l syntax the script hangs during the wget process. So my guess would be that it has something to do with wget and putting the output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, e.g. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (e.g. the isatty(0) call should return 1), and that's not true when it is run by cron, hence this warning.
Another easy way to reproduce this warning, and the very common one, is to run this command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use the -t option to force terminal allocation in this case).
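If you force allocation with -t (run from an interactive terminal), the remote stdin is a tty again and the warning should go away:
$ ssh -t user@example.com 'bash -l -c "echo test"'
Password:
test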
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc and that works, but you could also define it in ~/.bashrc and it would have worked.
In your .profile, change
mesg n
to
if tty -s; then
mesg n
fi
I ended up putting the proxy configuration in the wgetrc. There is now no need to execute the script on a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether you are getting all the environment variables set as you expect. Thanks to Cyrus for pointing me in the right direction.
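As a sketch of that wgetrc route (the proxy URL is a placeholder), the relevant lines in ~/.wgetrc would look something like:
# ~/.wgetrc
use_proxy = on
http_proxy = http://proxy.example.com:3128/
https_proxy = http://proxy.example.com:3128/
wget reads this file regardless of which shell startup files were sourced, so the login-shell workaround is no longer needed.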
