Having an issue passing variables to subshell - linux

So here is my problem: I have a script in which I'm exporting two variables, but they're not making it into the subshell.
The point of this script is to change a user's password and clear out their pam_tally for CentOS and Ubuntu hosts.
A little background: this environment's users are managed by Puppet, but the passwords are all local, and ssh keys are not allowed either (this is set in stone and can't be changed, so I have to work with what I've got). The reason is that every login has to be manual (even the number of sessions is limited to two, so you can't use csshX effectively).
Here is my script:
#!/bin/bash
echo "Please enter user whose password you want to change"
read NEWUSER
echo "Please enter new password for user"
read -s -p "Temp Password:" TEMPPASSWORD
PASSWORD=$TEMPPASSWORD
export PASSWORD
NEWUSER2=$NEWUSER
export NEWUSER2
for i in HOST{cluster1,cluster2,cluster3}0{1..9}
do
ping -c 2 $i && (echo $i ; ssh -t $i '
sudo pam_tally2 --user=$NEWUSER2 --reset
echo -e "$PASSWORD\n$PASSWORD" | sudo passwd $NEWUSER2
sudo chage -d 0 $NEWUSER2
')
done

You are using ssh to connect to a remote host and run a script on that host. ssh does not export the local environment to the remote session but instead performs a login on the remote host which sets the environment according to the remote user's configuration on the remote host.
I suggest you pass all needed values via the command you want to execute. This could be done like this:
ssh -t $i '
sudo pam_tally2 --user='"$NEWUSER2"' --reset
echo -e "'"$PASSWORD"'\n'"$PASSWORD"'" | sudo passwd '"$NEWUSER2"'
sudo chage -d 0 '"$NEWUSER2"
Watch closely how this uses quotes. At each place where you used a variable, I terminate the single-quoted string (using '), add a double-quoted use of the variable (e.g. "$PASSWORD"), and then start the single-quoted string again (using ' again). This way, the shell executing the ssh command expands the variables already, so there is no need to pass them into the remote shell.
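To convince yourself of what the remote side will receive, you can echo the assembled string locally first; alice is just a made-up value for illustration:
NEWUSER2=alice
echo 'sudo pam_tally2 --user='"$NEWUSER2"' --reset'
# prints: sudo pam_tally2 --user=alice --reset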
But be aware that special characters in the password (like " or ' or maybe a bunch of other characters) can mean trouble with this simple mechanism. To be safe against this as well, you would need to use the %q format specifier of the printf command to quote your values before passing them:
ssh -t "$i" "$(printf '
sudo pam_tally2 --user=%q --reset
{ echo %q; echo %q; } | sudo passwd %q
sudo chage -d 0 %q' \
"$NEWUSER2" "$PASSWORD" "$PASSWORD" "$NEWUSER2" "$NEWUSER2")"

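To get a feel for what %q does with awkward characters, you can try it locally; the password here is an invented example, not one from the question:
printf '%q\n' 'pa$$w0rd "with spaces"'
# prints something like: pa\$\$w0rd\ \"with\ spaces\"
# every character the shell would treat specially is escaped, so the value
# survives being embedded in the remote command string unchanged.
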
Related

Is there a way to automatically answer for user prompt when doing ssh in a shell script without using expect or spawn?

I'm trying to test ssh trust between a linux box against 12 other linux boxes using a shell script and I'm trying to pass user input as 'yes' for the question below automatically.
Are you sure you want to continue connecting (yes/no)?
but the script is failing with the error 'Host key verification failed'. I manually executed the ssh command with << EOT on one of the servers, but I still get the prompt. Is there any other way to pass input for user prompts automatically while running the ssh command?
Note: I cannot use spawn or expect due to a system limitation, and I cannot install them because of the organisation's access restrictions.
I tried the following options, but none of them worked for me:
[command] << [EOT, EOL, EOF]
echo 'yes'
[EOT, EOL, EOF]
yes | ./script.sh
printf "yes" | ./script.sh
echo "yes" | ./script.sh
./script.sh 'read -p "Are you sure you want to continue connecting (yes/no)?";echo "yes"'
for server in `cat server_list` ; do
UPPER_MACHINE_NAME=`echo $server | cut -d '.' -f 1`
UPPER_MACHINE_NAME=${UPPER_MACHINE_NAME^^}
ssh -tt user@$UPPER_MACHINE_NAME << EOT
echo 'yes'
touch /usr/Finastra/sshtest.txt
EOT
done
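(Not an answer from the original thread, but worth noting: the yes/no prompt comes from ssh's host key check, which reads from the terminal rather than stdin, so piping or here-documenting "yes" can never reach it. The usual workaround is to set the host key policy on the ssh command line, for example:)
ssh -tt -o StrictHostKeyChecking=accept-new user@$UPPER_MACHINE_NAME 'touch /usr/Finastra/sshtest.txt'
# On older OpenSSH releases that lack accept-new, StrictHostKeyChecking=no behaves similarly,
# but it also ignores changed host keys, which is less safe.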

How to execute a command over ssh and store the value in a variable

I am trying to get the value of uname -r from a remote machine over ssh and use that value in my local script flow.
kern_ver=`uname -r`
sshpass -p "$passwd" ssh -o StrictHostKeyChecking=no root@$c 'kern_ver=`echo \$kern_ver`'
But it looks like the value is not getting passed back to the local script flow.
Storing the command in the variable cmd is optional; you can hard-code the command as a string argument to ssh. The key is that you simply run the command on the remote host via ssh, and capture its output on the local host.
cmd="uname -r"
kern_ver=$(sshpass -p "$passwd" ssh -o StrictHostKeyChecking=no root@"$c" "$cmd")
Capture is var=$(...), as always.
ssh is a bit interesting because it unconditionally invokes a remote shell, so working with completely arbitrary commands (as opposed to simple things like uname -r) requires a different technique:
filename="/path/to/name with spaces/and/ * wildcard characters *"
printf -v cmd_str '%q ' ls -l "$filename"
output=$(ssh "$host" "$cmd_str")
This way you can use arguments with spaces, and they'll be passed with correct quoting through to the remote system (with the caveat that non-printable characters may be quoted with bash-only syntax, so this is only guaranteed to work in cases where the remote shell is also bash).
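To see what the quoted command string actually looks like before ssh sends it, you can print it; the filename below is a made-up example:
filename='/tmp/name with spaces'
printf -v cmd_str '%q ' ls -l "$filename"
echo "$cmd_str"
# ls -l /tmp/name\ with\ spaces
# the remote shell re-parses this string, so ls receives the filename as a single argument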

Run scripts remotely via SSH

I need to collect user information from 100 remote servers. We have a public/private key infrastructure for authentication, and I have configured ssh-agent to forward my key, meaning I can log in on any server without a password prompt (auto login).
Now I want to run a script on all servers to collect user information (how many user accounts we have on all servers).
This is my script to collect user info.
#!/bin/bash
_l="/etc/login.defs"
_p="/etc/passwd"
## get min UID limit ##
l=$(grep "^UID_MIN" $_l)
## get max UID limit ##
l1=$(grep "^UID_MAX" $_l)
awk -F':' -v "min=${l##UID_MIN}" -v "max=${l1##UID_MAX}" '{ if ( $3 >= min && $3 <= max && $7 != "/sbin/nologin" ) print $0 }' "$_p"
I don't know how to run this script using ssh without any interaction.
Since you need to log into the remote machine, there is AFAICT no way to do this without ssh. However, ssh accepts a command to execute on the remote machine once logged in (instead of the shell it would start). So if you can save your script on the remote machine, e.g. as ~/script.sh, you can execute it without starting an interactive shell with
$ ssh remote_machine ~/script.sh
Once the script terminates the connection will automatically be closed (if you didn't configure that away purposely).
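A minimal sketch of that workflow, assuming key-based login is already set up and using placeholder file names:
scp ./collect_users.sh remote_machine:script.sh   # copy the script to the remote home directory once
ssh remote_machine 'bash ~/script.sh'             # run it without an interactive shell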
Sounds like something you can do using expect.
http://linux.die.net/man/1/expect
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be.
If you've got a key on each machine and can ssh remotehost from your monitoring host, you've got all that's required to collect the information you've asked for.
#!/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
count='awk "END {print NR}" /etc/passwd' # avoids whitespace problems from `wc`
highest="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print n}' /etc/passwd"
for server in ${servers[@]}; do
printf "$fmt" "$server" "$(ssh "$server" "$count")" "$(ssh "$server" "$highest")"
done
Results for me:
$ ./doit.sh
Host UIDs Highest
---- ---- -------
wopr 40 2020
gerty 37 9001
mother 32 534
Note that this makes TWO ssh connections to each server to collect each datum. If you'd like to do this a little more efficiently, you can bundle the information into a single, slightly more complex collection script:
#!/usr/local/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
gather="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print NR,n}' /etc/passwd"
for server in ${servers[@]}; do
read count highest < <(ssh "$server" "$gather")
printf "$fmt" "$server" "$count" "$highest"
done
(Identical results.)
ssh remoteserver.example /bin/bash < localscript.bash
(Note: the "proper" way to authenticate without manually entering a password is to use SSH keys. Storing a password in plaintext, even in your local scripts, is a potential security vulnerability.)
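Combined with the server list from the earlier answer, that one-liner can be looped over every host; a sketch with placeholder hostnames and script name:
for server in wopr gerty mother; do
echo "== $server =="
ssh "$server" /bin/bash < ./collect_users.sh   # feed the local script to a remote bash
done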
You can run expect as part of your bash script. Here's a quick example that you can hack into your existing script:
login=user
IP=127.0.0.1
password='your_password'
expect_sh=$(expect -c "
spawn ssh $login@$IP
expect \"password:\"
send \"$password\r\"
expect \"#\"
send \"./$remote_side_script\r\"
expect \"#\"
send \"cd /lib\r\"
expect \"#\"
send \"cat file_name\r\"
expect \"#\"
send \"exit\r\"
")
echo "$expect_sh"
You can also use pscp to copy files back and forth as part of a script so you don't need to manually supply the password as part of the interaction:
Install putty-tools:
$ sudo apt-get install putty-tools
Using pscp in your script:
pscp -scp -pw $password file_to_copy $login@$IP:$dest_dir
Maybe you'd like to try the expect command as follows:
#!/usr/bin/expect
set timeout 30
spawn ssh -p ssh_port -l ssh_username ssh_server_host
expect "password:"
send "your_passwd\r"
interact
The expect command will catch the "password:" prompt and then automatically fill in the password you send above.
Remember to replace ssh_port, ssh_username, ssh_server_host and your_passwd with your own configuration.

Pseudo-terminal will not be allocated because stdin is not a terminal

I am trying to write a shell script that creates some directories on a remote server and then uses scp to copy files from my local machine onto the remote. Here's what I have so far:
ssh -t user@server <<EOT
DEP_ROOT='/home/matthewr/releases'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR=$DEP_ROOT"/"$datestamp
if [ ! -d "$DEP_ROOT" ]; then
echo "creating the root directory"
mkdir $DEP_ROOT
fi
mkdir $REL_DIR
exit
EOT
scp ./dir1 user@server:$REL_DIR
scp ./dir2 user@server:$REL_DIR
Whenever I run it I get this message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
And the script just hangs forever.
My public key is trusted on the server and I can run all the commands outside of the script just fine. Any ideas?
Try ssh -t -t (or ssh -tt for short) to force pseudo-tty allocation even if stdin isn't a terminal.
See also: Terminating SSH session executed by bash script
From ssh manpage:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary
screen-based programs on a remote machine, which can be very useful,
e.g. when implementing menu services. Multiple -t options force tty
allocation, even if ssh has no local tty.
Also with option -T from manual
Disable pseudo-tty allocation
Per zanco's answer, you're not providing a remote command to ssh, given how the shell parses the command line. To solve this problem, change the syntax of your ssh command invocation so that the remote command is comprised of a syntactically correct, multi-line string.
There are a variety of syntaxes that can be used. For example, since commands can be piped into bash and sh, and probably other shells too, the simplest solution is to just combine ssh shell invocation with heredocs:
ssh user@server /bin/bash <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
Note that executing the above without /bin/bash will result in the warning Pseudo-terminal will not be allocated because stdin is not a terminal. Also note that EOT is surrounded by single-quotes, so that bash recognizes the heredoc as a nowdoc, turning off local variable interpolation so that the command text will be passed as-is to ssh.
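A small side-by-side to make the nowdoc point concrete (user@server is a placeholder; REL_DIR is only set on the local side):
REL_DIR=/tmp/build
ssh user@server /bin/bash <<EOT    # unquoted delimiter: $REL_DIR expands locally, the remote shell sees /tmp/build
echo "$REL_DIR"
EOT
ssh user@server /bin/bash <<'EOT'  # quoted delimiter (nowdoc): the text is sent as-is and $REL_DIR expands remotely (empty there)
echo "$REL_DIR"
EOT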
If you are a fan of pipes, you can rewrite the above as follows:
cat <<'EOT' | ssh user@server /bin/bash
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
The same caveat about /bin/bash applies to the above.
Another valid approach is to pass the multi-line remote command as a single string, using multiple layers of bash variable interpolation as follows:
ssh user@server "$( cat <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
)"
The solution above fixes this problem in the following manner:
ssh user@server is parsed by bash, and is interpreted to be the ssh command, followed by an argument user@server to be passed to the ssh command
" begins an interpolated string, which when completed, will comprise an argument to be passed to the ssh command, which in this case will be interpreted by ssh to be the remote command to execute as user@server
$( begins a command to be executed, with the output being captured by the surrounding interpolated string
cat is a command to output the contents of whatever file follows. The output of cat will be passed back into the capturing interpolated string
<< begins a bash heredoc
'EOT' specifies that the name of the heredoc is EOT. The single quotes ' surrounding EOT specifies that the heredoc should be parsed as a nowdoc, which is a special form of heredoc in which the contents do not get interpolated by bash, but rather passed on in literal format
Any content that is encountered between <<'EOT' and <newline>EOT<newline> will be appended to the nowdoc output
EOT terminates the nowdoc, resulting in a nowdoc temporary file being created and passed back to the calling cat command. cat outputs the nowdoc and passes the output back to the capturing interpolated string
) concludes the command to be executed
" concludes the capturing interpolated string. The contents of the interpolated string will be passed back to ssh as a single command line argument, which ssh will interpret as the remote command to execute as user@server
If you need to avoid using external tools like cat, and don't mind having two statements instead of one, use the read built-in with a heredoc to generate the SSH command:
IFS='' read -r -d '' SSH_COMMAND <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
ssh user@server "${SSH_COMMAND}"
I'm adding this answer because it solved a related problem that I was having with the same error message.
Problem: I had installed cygwin under Windows and was getting this error: Pseudo-terminal will not be allocated because stdin is not a terminal
Resolution: It turns out that I had not installed the openssh client program and utilities. Because of that cygwin was using the Windows implementation of ssh, not the cygwin version. The solution was to install the openssh cygwin package.
All relevant information is in the existing answers, but let me attempt a pragmatic summary:
tl;dr:
DO pass the commands to run using a command-line argument:
ssh jdoe@server '...'
'...' strings can span multiple lines, so you can keep your code readable even without the use of a here-document:
ssh jdoe@server '
  ...
'
Do NOT pass the commands via stdin, as is the case when you use a here-document:
ssh jdoe@server <<'EOF' # Do NOT do this
  ...
EOF
Passing the commands as an argument works as-is, and:
the problem with the pseudo-terminal will not even arise.
you won't need an exit statement at the end of your commands, because the session will automatically exit after the commands have been processed.
In short: passing commands via stdin is a mechanism that is at odds with ssh's design and causes problems that must then be worked around.
Read on, if you want to know more.
Optional background information:
ssh's mechanism for accepting commands to execute on the target server is a command-line argument: the final operand (non-option argument) accepts a string containing one or more shell commands.
By default, these commands run unattended, in a non-interactive shell, without the use of a (pseudo) terminal (option -T is implied), and the session automatically ends when the last command finishes processing.
In the event that your commands require user interaction, such as responding to an interactive prompt, you can explicitly request the creation of a pty (pseudo-tty), a pseudo terminal, that enables interacting with the remote session, using the -t option; e.g.:
ssh -t jdoe@server 'read -p "Enter something: "; echo "Entered: [$REPLY]"'
Note that the interactive read prompt only works correctly with a pty, so the -t option is needed.
Using a pty has a notable side effect: stdout and stderr are combined and both reported via stdout; in other words: you lose the distinction between regular and error output; e.g.:
ssh jdoe@server 'echo out; echo err >&2' # OK - stdout and stderr separate
ssh -t jdoe@server 'echo out; echo err >&2' # !! stdout + stderr -> stdout
In the absence of this argument, ssh creates an interactive shell - including when you send commands via stdin, which is where the trouble begins:
For an interactive shell, ssh normally allocates a pty (pseudo-terminal) by default, except if its stdin is not connected to a (real) terminal.
Sending commands via stdin means that ssh's stdin is no longer connected to a terminal, so no pty is created, and ssh warns you accordingly:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Even the -t option, whose express purpose is to request creation of a pty, is not enough in this case: you'll get the same warning.
Somewhat curiously, you must then double the -t option to force creation of a pty: ssh -t -t ... or ssh -tt ... shows that you really, really mean it.
Perhaps the rationale for requiring this very deliberate step is that things may not work as expected. For instance, on macOS 10.12, the apparent equivalent of the above command, providing the commands via stdin and using -tt, does not work properly; the session gets stuck after responding to the read prompt:
ssh -tt jdoe@server <<<'read -p "Enter something: "; echo "Entered: [$REPLY]"'
In the unlikely event that the commands you want to pass as an argument make the command line too long for your system (if its length approaches getconf ARG_MAX - see this article), consider copying the code to the remote system in the form of a script first (using, e.g., scp), and then send a command to execute that script.
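For instance (paths and names here are illustrative), you can check the limit and fall back to copying the script over first:
getconf ARG_MAX                              # prints the limit, e.g. 2097152 on many Linux systems
scp ./deploy.sh jdoe@server:/tmp/deploy.sh
ssh jdoe@server 'bash /tmp/deploy.sh && rm /tmp/deploy.sh'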
In a pinch, use -T, and provide the commands via stdin, with a trailing exit command, but note that if you also need interactive features, using -tt in lieu of -T may not work.
The warning message Pseudo-terminal will not be allocated because stdin is not a terminal. is due to the fact that no command is specified for ssh while stdin is redirected from a here document.
Due to the lack of a specified command as an argument ssh first expects an interactive login session (which would require the allocation of a pty on the remote host) but then has to realize that its local stdin is no tty/pty. Redirecting ssh's stdin from a here document normally requires a command (such as /bin/sh) to be specified as an argument to ssh - and in such a case no pty will be allocated on the remote host by default.
Since there are no commands to be executed via ssh that require the presence of a tty/pty (such as vim or top) the -t switch to ssh is superfluous.
Just use ssh -T user@server <<EOT ... or ssh user@server /bin/bash <<EOT ... and the warning will go away.
If <<EOT is not escaped or single-quoted (i.e. <<\EOT or <<'EOT'), variables inside the here document will be expanded by the local shell before it executes ssh .... The effect is that the variables inside the here document will remain empty because they are defined only in the remote shell.
So, if $REL_DIR should be both accessible by the local shell and defined in the remote shell, $REL_DIR has to be defined outside the here document before the ssh command (version 1 below); or, if <<\EOT or <<'EOT' is used, the output of the ssh command can be assigned to REL_DIR, provided the only output of the ssh command to stdout is generated by echo "$REL_DIR" inside the escaped/single-quoted here document (version 2 below).
A third option would be to store the here document in a variable and then pass this variable as a command argument to ssh -t user@server "$heredoc" (version 3 below).
And, last but not least, it would be no bad idea to check if the directories on the remote host were created successfully (see: check if file exists on remote host with ssh).
# version 1
unset DEP_ROOT REL_DIR
DEP_ROOT='/tmp'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR="${DEP_ROOT}/${datestamp}"
ssh localhost /bin/bash <<EOF
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
#echo "$REL_DIR"
exit
EOF
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
# version 2
REL_DIR="$(
ssh localhost /bin/bash <<\EOF
DEP_ROOT='/tmp'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR="${DEP_ROOT}/${datestamp}"
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
echo "$REL_DIR"
exit
EOF
)"
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
# version 3
heredoc="$(cat <<'EOF'
# -onlcr: prevent the terminal from converting bare line feeds to carriage return/line feed pairs
stty -echo -onlcr
DEP_ROOT='/tmp'
datestamp="$(date +%Y%m%d%H%M%S)"
REL_DIR="${DEP_ROOT}/${datestamp}"
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
echo "$REL_DIR"
stty echo onlcr
exit
EOF
)"
REL_DIR="$(ssh -t localhost "$heredoc")"
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
I don't know where the hang comes from, but redirecting (or piping) commands into an interactive ssh is in general a recipe for problems. It is more robust to use the command-to-run-as-a-last-argument style and pass the script on the ssh command line:
ssh user@server 'DEP_ROOT="/home/matthewr/releases"
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR=$DEP_ROOT"/"$datestamp
if [ ! -d "$DEP_ROOT" ]; then
echo "creating the root directory"
mkdir $DEP_ROOT
fi
mkdir $REL_DIR'
(All in one giant '-delimited multiline command-line argument).
The pseudo-terminal message is because of your -t which asks ssh to try to make the environment it runs on the remote machine look like an actual terminal to the programs that run there. Your ssh client is refusing to do that because its own standard input is not a terminal, so it has no way to pass the special terminal APIs onwards from the remote machine to your actual terminal at the local end.
What were you trying to achieve with -t anyway?
After reading a lot of these answers I thought I would share my resulting solution. All I added is /bin/bash before the heredoc and it doesn't give the error anymore.
Use this:
ssh user@machine /bin/bash <<'ENDSSH'
hostname
ENDSSH
Instead of this (gives error):
ssh user@machine <<'ENDSSH'
hostname
ENDSSH
Or use this:
ssh user@machine /bin/bash < run-command.sh
Instead of this (gives error):
ssh user@machine < run-command.sh
EXTRA:
If you still want a remote interactive prompt (e.g. if the script you're running remotely prompts you for a password or other information), use the following, because the previous solutions won't let you type into the prompts:
ssh -t user@machine "$(<run-command.sh)"
And if you also want to log the entire session in a file logfile.log:
ssh -t user@machine "$(<run-command.sh)" | tee -a logfile.log
I was having the same error under Windows, using Emacs 24.5.1 to connect to some company servers through /ssh:user@host. What solved my problem was setting the "tramp-default-method" variable to "plink"; whenever I connect to a server I omit the ssh protocol. You need to have PuTTY's plink.exe installed for this to work.
Solution
M-x customize-variable (and then hit Enter)
tramp-default-method (and then hit Enter again)
On the text field put plink and then Apply and Save the buffer
Whenever I try to access a remote server I now use C-x C-f /user@host: and then input the password. The connection is now correctly made under Emacs on Windows to my remote server.

Using the passwd command from within a shell script

I'm writing a shell script to automatically add a new user and update their password. I don't know how to get passwd to read from the shell script instead of interactively prompting me for the new password. My code is below.
adduser $1
passwd $1
$2
$2
from "man 1 passwd":
--stdin
This option is used to indicate that passwd should read the new
password from standard input, which can be a pipe.
So in your case
adduser "$1"
echo "$2" | passwd "$1" --stdin
[Update] a few issues were brought up in the comments:
Your passwd command may not have a --stdin option: use the chpasswd
utility instead, as suggested by ashawley.
If you use a shell other than bash, "echo" might not be a builtin command,
and the shell will call /bin/echo. This is insecure because the password
will show up in the process table and can be seen with tools like ps.
In this case, you should use another scripting language. Here is an example in Perl:
#!/usr/bin/perl -w
open my $pipe, '|chpasswd' or die "can't open pipe: $!";
print {$pipe} "$username:$password";
close $pipe
The only solution that works on Ubuntu 12.04:
echo -e "new_password\nnew_password" | (passwd user)
But the second option only works when I change from:
echo "password:name" | chpasswd
To:
echo "user:password" | chpasswd
See explanations in original post: Changing password via a script
Nowadays, you can use this command:
echo "user:pass" | chpasswd
Read the wise words from:
http://mywiki.wooledge.org/BashFAQ/078
I quote:
Nothing you can do in bash can possibly work. passwd(1) does not read from standard input. This is intentional. It is for your protection. Passwords were never intended to be put into programs, or generated by programs. They were intended to be entered only by the fingers of an actual human being, with a functional brain, and never, ever written down anywhere.
Nonetheless, we get hordes of users asking how they can circumvent 35 years of Unix security.
It goes on to explain how you can set your shadow(5) password properly, and shows you the GNU-I-only-care-about-security-if-it-doesn't-make-me-think-too-much-way of abusing passwd(1).
Lastly, if you ARE going to use the silly GNU passwd(1) extension --stdin, do not pass the password putting it on the command line.
echo $mypassword | passwd --stdin # Eternal Sin.
echo "$mypassword" | passwd --stdin # Eternal Sin, but at least you remembered to quote your PE.
passwd --stdin <<< "$mypassword" # A little less insecure, still pretty insecure, though.
passwd --stdin < "passwordfile" # With a password file that was created with a secure `umask(1)`, a little bit secure.
The last is the best you can do with GNU passwd. Though I still wouldn't recommend it.
Putting the password on the command line means anyone with even the remotest hint of access to the box can monitor ps or the like and steal the password. Even if you think your box is safe, it's something you should really get in the habit of avoiding at all costs (yes, even at the cost of a bit more trouble getting the job done).
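A sketch of that last, file-based variant, assuming a passwd that supports --stdin and using illustrative paths and user names:
( umask 077; printf '%s\n' "$mypassword" > /root/passwordfile )   # file readable by root only
passwd --stdin someuser < /root/passwordfile
shred -u /root/passwordfile                                       # overwrite and remove the file afterwards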
A here-document works if your passwd doesn't support --stdin and you don't want to (or can't) use chpasswd for some reason.
Example:
#!/usr/bin/env bash
username="user"
password="pass"
passwd ${username} << EOD
${password}
${password}
EOD
Tested under Arch Linux. This passwd is an element of shadow-utils and installed from the core/filesystem package, which you usually have by default since the package is required by core/base.
You could use chpasswd
echo $1:$2 | chpasswd
For those who need to 'run as root' remotely through a script, logging into a user account in the sudoers file, I found an evil, horrible hack that is no doubt very insecure:
sshpass -p 'userpass' ssh -T -p port user@server << EOSSH
sudo -S su - << RROOT
userpass
echo ""
echo "*** Got Root ***"
echo ""
#[root commands go here]
useradd -m newuser
echo "newuser:newpass" | chpasswd
RROOT
EOSSH
I stumbled upon the same problem and for some reason the --stdin option was not available on the version of passwd I was using (shipped in Ubuntu 14.04).
If any of you happen to experience the same issue, you can work around it as I did, by using the chpasswd command like this:
echo "<user>:<password>" | chpasswd
Tested this on a CentOS VMWare image that I keep around for this sort of thing. Note that you probably want to avoid putting passwords as command-line arguments, because anybody on the entire machine can read them out of 'ps -ef'.
That said, this will work:
user="$1"
password="$2"
adduser $user
echo $password | passwd --stdin $user
This is the definitive answer for a Teradata node admin.
Go to your /etc/hosts file and create a list of IP's or node names in a text file.
SMP007-1
SMP007-2
SMP007-3
Put the following script in a file.
#set a password across all nodes
printf "User ID: "
read MYUSERID
printf "New Password: "
read MYPASS
while read -r i; do
echo changing password on "$i"
ssh root@"$i" "echo \"$MYUSERID:$MYPASS\" | chpasswd"
echo password changed on "$i"
done < /usr/bin/setpwd.srvrs
Okay I know I've broken a cardinal security rule with ssh and root
but I'll let you security folks deal with it.
Now put this in your /usr/bin subdir along with your setpwd.srvrs config file.
When you run the command it prompts you once for the user ID and once for the password. Then the script traverses all nodes in the setpwd.srvrs file, does a passwordless ssh to each node, and sets the password without any user interaction or secondary password validation.
For me on Raspbian it works only this way (old password added):
#!/usr/bin/env bash
username="pi"
password="Szevasz123"
new_ps="Szevasz1234"
passwd ${username} << EOD
${password}
${new_ps}
${new_ps}
EOD
Have you looked at the -p option of adduser (which AFAIK is just another name for useradd)? You may also want to look at the -P option of luseradd which takes a plaintext password, but I don't know if luseradd is a standard command (it may be part of SE Linux or perhaps just an oddity of Fedora).
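As an illustration of that route (note that useradd -p expects an already-encrypted password; the hashing step below assumes openssl's passwd subcommand is available, so treat this as a sketch):
hashed=$(openssl passwd -6 "$password")   # SHA-512 crypt hash; needs OpenSSL 1.1.1 or newer
useradd -m -p "$hashed" "$username"       # -p takes the encrypted password, not the plaintext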
Sometimes it is useful to set a password which nobody knows. This seems to work:
tr -dc A-Za-z0-9 < /dev/urandom | head -c44 | passwd --stdin $user
echo 'yourPassword' | sudo -S yourCommand
If -S doesn't work, try -kS instead.
You can use the expect utility to drive any program that reads from a tty rather than stdin, which is what passwd does. Expect comes with ready-to-run examples for all sorts of interactive problems, like passwd entry.
