"stdin: is not a tty" from cronjob - linux

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I'm not using the bash -l syntax, the script hangs during the wget process. So my guess would be that it has something to do with wget writing its output to stdout. But I have no idea how to fix it.

You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, i.e. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (i.e. the isatty(0) call should return 1), and that is not true when it is run by cron, hence this warning.
Another easy way to reproduce this warning, and a very common one, is to run this command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use the -t option for ssh to force terminal allocation in this case).
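For comparison, forcing terminal allocation with -t makes the warning disappear (a sketch of the same command; the exact output may vary):
$ ssh -t user@example.com 'bash -l -c "echo test"'
Password:
test
Connection to example.com closed.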
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc and that is correct, but you could also have defined it in ~/.bashrc and it would have worked.

In your .profile, change
mesg n
to
if tty -s; then
  mesg n
fi
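Equivalently, the POSIX test built-in can check the file descriptor directly, which avoids running an external command (a minimal alternative sketch):
# run mesg only when stdin is attached to a terminal
if [ -t 0 ]; then
  mesg n
fi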

I ended up putting the proxy configuration in ~/.wgetrc, so there is no need to execute the script in a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether all the environment variables are set as you expect. Thanks to Cyrus for pointing me in the right direction.
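For reference, a minimal ~/.wgetrc proxy configuration could look like this (the proxy host and port below are placeholders, not values from the question):
# ~/.wgetrc -- picked up by wget regardless of shell type
use_proxy = on
http_proxy = http://proxy.example:3128/
https_proxy = http://proxy.example:3128/
With the proxy configured there, the cron.d entry no longer needs a login shell:
* * * * * root /bin/bash -c "/opt/get.sh > /tmp/file"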

Related

Get file count at remote location during FTP in shell script on linux server

Requirement: I need to get a file count, based on a wildcard pattern, at a remote location (Linux server) and store it in a variable for validation purposes.
I tried the code below:
export ExpectedFileCount=$(ftp -inv $FTPSERVER >> $FTPLOGFILE <<END_SCRIPT
user $FTP_USER $FTP_PASSWORD
passive
cd $PATH
ls -ltr ${WILDCARD}*xml| wc -l | sed 's/ *//g'
quit
END_SCRIPT)
But the code is storing the code snippet in the variable and executing the commands every time I call the variable.
Please suggest changes to the script so that it executes once and stores the value in the variable.
This seems to work (on Ubuntu, no promises about portability):
export ExpectedFileCount=`ftp -in $FTPSERVER << END_SCRIPT | tee -a $FTPLOGFILE | egrep -c '\.xml$'
user $FTP_USER $FTP_PASSWORD
passive
cd $REMOTE_PATH
ls -l
quit
END_SCRIPT`
Issues:
$REMOTE_PATH used in place of $PATH for remote directory (as $PATH has a special meaning)
only a simple ls -l is performed inside the ftp session, and the output parsed locally, as it does not support arbitrary shell commands
I can't see how to capture the output of a command with a heredoc using $(...), but it seems to work with backticks if the closing backtick is after the final delimiter
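For what it's worth, $(...) can capture a here-document too, as long as the entire here-document, including the closing delimiter, sits inside the substitution. A sketch under the same assumptions as above:
export ExpectedFileCount=$(
ftp -in "$FTPSERVER" <<END_SCRIPT | tee -a "$FTPLOGFILE" | egrep -c '\.xml$'
user $FTP_USER $FTP_PASSWORD
passive
cd $REMOTE_PATH
ls -l
quit
END_SCRIPT
)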

How do I make my bash script on download automatically turn into a terminal command? [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work like it would if I saved to a file and then executed. For example readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternately, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
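On a typical Linux system the process substitution expands to a path under /dev/fd (the exact descriptor number may differ):
$ echo <(cat /dev/null)
/dev/fd/63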
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bitly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight variation of the answer by @user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often use the following, and it is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems, and the following solved them; I tried many other ways, but only the following works:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or a bash version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding the execution of scripts directly from URLs. You should be sure the URL is safe and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before running it.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
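To tie this to the checksum suggestion above, a minimal verification step might look like this (the expected checksum is a placeholder you would compute yourself from a trusted copy):
# refuse to run the script unless it matches a known-good SHA256
EXPECTED_SHA256='<known-good-checksum-here>'
echo "${EXPECTED_SHA256}  ./my_sample.sh" | sha256sum -c - && ./my_sample.sh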
This way is good and conventional; note that the words after the process substitution become the sourced script's positional parameters:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
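The reason this form stays interactive is that the script text arrives as an argument rather than on stdin, so stdin remains connected to the user's terminal. A comparison sketch (same placeholder URL as above):
# script comes in on the pipe, so `read` competes with the script text:
curl -s http://mywebsite.example/myscript.txt | sh         # prompts break
# script comes in as a -c string, so stdin stays free for the user:
sh -c "$(curl -s http://mywebsite.example/myscript.txt)"   # prompts work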
curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl -s https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

Bash for loops on a remote server

I am attempting to run multiple commands via a bash script on a remote server. Specifically, the for loop to be run on the remote server is giving me issues. I suspect it is because I don't know how to properly escape characters or use $().
Below is the code.
ssh (user)@(server) <<EOF
sudo su - (username)
whoami
'for e in $(`ls -lrt /usr/jboss/jbosseap | awk '{print $9}' | grep multichannel`);
do
echo "$e";
done'
EOF
Removing user and server names for obvious reasons. Just concentrate on the for loop. When I run that for loop on the command line (without the $()) it works fine. I'm just not sure how to nest it in a remote call.
Thanks very much for any and all help!
If you've got a complex script that you're trying to run over ssh you're going to be better off putting that script in a file and piping that file into ssh like:
cat remote_script.sh | ssh user@host
or:
cat remote_script.sh | ssh user@host sudo -u username bash
And now you don't have to worry about N levels of escaping.
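For example, the loop from the question could live in remote_script.sh along these lines (a sketch that swaps the fragile ls | awk | grep pipeline for a glob; the path comes from the question):
#!/bin/bash
# list entries under /usr/jboss/jbosseap whose names contain "multichannel"
for e in /usr/jboss/jbosseap/*multichannel*; do
    echo "${e##*/}"   # strip the directory prefix, as awk '{print $9}' did
done
Then run it with cat remote_script.sh | ssh user@host bash.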
You can run it as below.
Here the file "list" contains your list of nodes, and the script must be present on all nodes:
for i in $(cat list); do ssh -o StrictHostKeyChecking=no "$i" "/path/your_script"; done

pseudo-terminal will not be allocated because stdin is not a terminal - sudo

There are other threads with this same topic but my issue is unique. I am running a bash script that has a function that sshes to a remote server and runs a sudo command on the remote server. I'm using the ssh -t option to avoid the requiretty issue. The offending line of code works fine as long as it's NOT being called from within the while loop. The while loop basically reads from a csv file on the local server and calls the checkAuthType function:
while read inputline
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done < configfile.csv
This is the function that sits at the top of the script (outside of any while loops):
function checkAuthType()
{
if [ $2 == linux ]; then
LINE=`ssh -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
fi
if [ $2 == unix ]; then
LINE=`ssh -n $1 'grep "PasswordAuthentication" /usr/local/etc/sshd_config | grep -v "yes\|Yes\|#"'`
fi
<more irrelevant code>
}
So, the offending line is the line that has the sudo command within the function. I can change the command to something simple like "sudo ls -l" and I will still get the "stdin is not a terminal" error. I've also tried "ssh -t -t" but to no avail. But if I call the checkAuthType function from outside of the while loop, it works fine. What is it about the while loop that changes the terminal and how do I fix it? Thank you one thousand times in advance.
Another option to try to get around the problem would be to redirect the file to a different file descriptor and force read to read from it instead.
while read inputline <&3
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done 3< configfile.csv
I am guessing you are testing with linux. You should try adding the -n flag to your (linux) ssh command to avoid having ssh read from stdin; since it normally reads from stdin, the while loop is feeding it your csv.
UPDATE
You should (usually) use the -n flag when scripting with SSH, and the flag is typically needed for 'expected behavior' when using a while read-loop. It does not seem to be the main issue here, though.
There are probably other solutions to this, but you could try adding another -t flag to force pseudo-tty allocation when stdin is not a terminal:
ssh -n -t -t
BroSlow's approach with a different file descriptor seems to work! Since the read command reads from fd 3 and not stdin, ssh (and hence sudo) still has a tty/pty as stdin.
# simple test case
while read line <&3; do
sudo -k
echo "$line"
ssh -t localhost 'sudo ls -ld /'
done 3<&- 3< <(echo 1; sleep 3; echo 2; sleep 3)
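Another workaround, assuming the outer script is itself started from a real terminal, is to point ssh's stdin back at the controlling terminal so the loop's redirection never reaches it (a sketch, not from the original answers):
while IFS=, read -r HOSTNAME OS_TYPE; do
  # /dev/tty is the controlling terminal of the script itself
  ssh -t "$HOSTNAME" 'sudo ls -ld /' < /dev/tty
done < configfile.csv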

Shell script to compare remote directories

I have a shell script that I am using to compare directory contents. The script has to ssh to different servers to get a directory listing. When I run the script below, I get the /tmp directory listing of the server I am logged into, not that of the servers I am trying to ssh to. Could you please tell me what I am doing wrong?
The config file used in the script is as follows (called config.txt):
server1,server2,/tmp
The script is as follows
#!/bin/sh
CONFIGFILE="config.txt"
IFS=","
while read a b c
do
SERVER1=$a
SERVER2=$b
COMPDIR=$c
`ssh user@$SERVER1 'ls -l $COMPDIR'`| sed -n '1!p' >> server1.txt
`ssh user@$SERVER2 'ls -l $COMPDIR'`| sed -n '1!p' >> server2.txt
done < $CONFIGFILE
When I look at the outputs of server1.txt and server2.txt, they are both exactly the same: the contents of /tmp on the server the script is running on (not server1 or server2). Doing the ssh + directory listing on the command line works just fine. I am also getting the error "Pseudo-terminal will not be allocated because stdin is not a terminal". Adding -t -t to the ssh command isn't helping either.
Thank you
I have the back ticks in order to execute the command.
Backticks are not needed to execute a command - they are used to expand the standard output of the command into the command line. Certainly you don't want the output of your ssh commands to be interpreted as commands. Thus, it should work fine without the backticks:
ssh user@$SERVER1 "ls -l $COMPDIR" | sed -n '1!p' >>server1.txt
ssh user@$SERVER2 "ls -l $COMPDIR" | sed -n '1!p' >>server2.txt
(note the double quotes, which allow $COMPDIR to be expanded).
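Putting it together, and adding -n so ssh does not swallow the rest of the config file from stdin inside the while loop, a corrected sketch of the whole script:
#!/bin/sh
CONFIGFILE="config.txt"
while IFS=, read -r SERVER1 SERVER2 COMPDIR; do
  # -n: keep ssh from reading (and consuming) the config file on stdin
  ssh -n "user@$SERVER1" "ls -l $COMPDIR" | sed -n '1!p' >> server1.txt
  ssh -n "user@$SERVER2" "ls -l $COMPDIR" | sed -n '1!p' >> server2.txt
done < "$CONFIGFILE"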
First you need to generate keys, to log in to the remote host without a password:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
Then try to ssh without a password:
ssh remote-host
Then invoke it in your script, but first make a sanity check (a trivial echo serves as the probe command here):
var1=$(ssh remote-host 'echo ok') || { echo "Cannot connect to remote host" >&2; exit 1; }
