Why doesn't the login command accept piped stdin? - linux

While
echo pwd | bash -i
works
echo pwd | login -f root
doesn't work. I expected the login command to set some environment variables and start an interactive shell, but apparently it is somehow special.
What does the login command do that makes the example above fail? And are there any alternatives to login that can be used this way?

The login command checks whether it is connected to a tty before doing anything. You can simulate a tty with the script command, as answered here: https://stackoverflow.com/a/1402389/3235192
echo pwd | script -qc "login -f root" /dev/null
This also works with a heredoc:
script -qc "login -f root" /dev/null << EOF
pwd
EOF
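As for alternatives: if you are already root, su - also accepts piped stdin and starts a login shell, so something like the following should behave similarly (a rough sketch; if you are not root, su will prompt for a password on the terminal instead):
echo pwd | su - root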

Related

Commands don't echo after sudo as another user

I have a single command to ssh to a remote Linux host and execute a shell script.
ssh -t -t $USER@somehost 'bash -s' < ./deploy.sh
Inside deploy.sh I have this:
#!/bin/bash
whoami; # I see this command echo
sudo -i -u someoneelse # I see this command echo
whoami; # I DON'T see this command echo, but the response is correct
# subsequent commands don't echo
When I run the deploy.sh script locally all commands echo.
How do I get commands to echo after I sudo as another user over ssh?
I had to set -x AFTER the sudo to the other user:
#!/bin/bash
whoami;
sudo -i -u someoneelse
set -x # make sure echoing is on
whoami; # command echoed
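Presumably this is because sudo -i starts a fresh shell for the target user, so shell options such as -x from the outer bash do not carry over. An alternative sketch (not from the original answer) is to start that shell with tracing already enabled:
sudo -i -u someoneelse bash -xs # -x traces commands, -s keeps reading the remaining script lines from stdin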

shell script for remote connection to other system and execute bunch of command in it

I need a shell script that can log in to a remote system and execute a bunch of commands on that system.
I made a script, and it actually works:
#!/bin/bash
USERNAME=KRUNAL
IP=10.61.162.241
ssh -l ${USERNAME} ${IP} "pwd "
ssh -l ${USERNAME} ${IP} "ls -la"
ssh -l ${USERNAME} ${IP} ./a.out
My problem is that if I write a script like
ssh -l ${USERNAME} ${IP} "pwd " # this executes on the remote system
ls -la # this executes on the current system
then every command that has to run on the remote system needs its own ssh call.
Is there any way to run a bunch of commands on the remote system with a single login?
You can send as many commands to ssh as you want, provided that you separate them with ; or linebreaks. So this should work:
ssh -l ${USERNAME} ${IP} "pwd; ls -la"
@Joao's suggestion works fine; however, it's impractical when writing many lines.
If that is the case you can do
ssh -l ${USERNAME} ${IP} bash << 'EOF'
cd /some/directory
./a.out
who am i
for i in `seq 1 10`
do
echo $i
done
EOF
Anything between the first 'EOF' and the final EOF will be executed on the server side.
You can also replace bash with csh or python and write code for that interpreter instead.
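Note that the quotes around the first 'EOF' matter: they stop your local shell from expanding $i and the backticks before the script is sent, so the loop runs remotely exactly as written. With an unquoted delimiter, the local shell expands variables first, as in this quick illustration:
ssh -l ${USERNAME} ${IP} bash << EOF
echo $HOME # expanded by your local shell before the script is sent
EOF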
If you want the output of the ssh session to be stored in a file (say session.log), then replace
ssh -l ${USERNAME} ${IP} bash << 'EOF'
with
ssh -l ${USERNAME} ${IP} bash << 'EOF' > session.log
The rest remains unchanged.

How to log non-interactive bash command sent through ssh

I'm sending a command through ssh:
ssh server.org 'bash -s' << EOF
ls -al
whoami
uptime
EOF
How to log it in the system (remote server)? I'd like to log those commands in some file (.bash_history or /tmp/log).
I've tried to add the line below to sshd_config:
ForceCommand if [[ -z $SSH_ORIGINAL_COMMAND ]]; then bash; else echo "$SSH_ORIGINAL_COMMAND" >> .bash_history; bash -c "$SSH_ORIGINAL_COMMAND"; fi
But it logs "bash -s" only.
I'd appreciate any help.
When a login bash shell exits, it reads and executes commands from the ~/.bash_logout file. You could probably run the history command at the end of .bash_logout (on the server) and save its output to some location.
If it suffices to work with the given command, we can put the necessary additions to enable and log command history at the beginning and end, e.g.:
ssh server.org bash <<EOF
set -o history
ls -al
whoami
uptime
history|sed 's/ *[0-9]* *//' >>~/.bash_history
EOF
Or we could put them into the awfully long ForceCommand line:
… if [[ "$SSH_ORIGINAL_COMMAND" == bash* ]]; then echo "set -o history"; cat; echo "history|sed 's/ *[0-9]* *//' >>~/.bash_history"; else cat; fi | bash -c "$SSH_ORIGINAL_COMMAND"; fi
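For readability, the same logic can also live in a small wrapper script that ForceCommand points to. The path below is just an illustrative name, not something sshd provides, and this is a sketch of the one-liner above rather than a tested drop-in:
#!/bin/bash
# hypothetical wrapper, referenced as: ForceCommand /usr/local/sbin/log-remote-commands
if [[ -z "$SSH_ORIGINAL_COMMAND" ]]; then
    exec bash # interactive login: just hand over to a shell
fi
if [[ "$SSH_ORIGINAL_COMMAND" == bash* ]]; then
    # wrap the piped script with the history bookkeeping, as in the one-liner
    { echo "set -o history"; cat; echo "history|sed 's/ *[0-9]* *//' >>~/.bash_history"; } | bash -c "$SSH_ORIGINAL_COMMAND"
else
    bash -c "$SSH_ORIGINAL_COMMAND"
fi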

"stdin: is not a tty" from cronjob

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I don't use the bash -l syntax, the script hangs during the wget call. So my guess would be that it has something to do with wget writing its output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, i.e. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (i.e. the isatty(0) call should return 1), and that is not true when it is run by cron, hence the warning.
Another easy, and very common, way to reproduce this warning is to run such a command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (use the -t option of ssh to force terminal allocation in this case).
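For example, forcing a pseudo terminal with -t makes the warning go away (example.com is just a placeholder host):
$ ssh -t user@example.com 'bash -l -c "echo test"'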
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. For login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc, which is correct, but you could also have defined it in ~/.bashrc and it would have worked.
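A concrete sketch of one alternative (the proxy URL below is made up): cron lets you set environment variables at the top of the cron.d file itself, so the script sees them without needing a login shell at all:
# /etc/cron.d entry with the proxy exported to the job's environment
http_proxy=http://proxy.example.com:3128
* * * * * root /bin/bash -c "/opt/get.sh > /tmp/file"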
In your .profile, change
mesg n
to
if `tty -s`; then
mesg n
fi
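An equivalent form that avoids the command substitution (a small sketch, same effect):
tty -s && mesg n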
I ended up putting the proxy configuration in ~/.wgetrc. There is no need to execute the script in a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether all the environment variables are set as you expect. Thanks to Cyrus for pointing me in the right direction.

Change linux password in a script, quietly

As part of trying to implement a security measure in my root ssh session, I'm trying to devise a method that starts a script n seconds after root logs in, changes the user's password, and logs the user out automatically.
I'm getting stuck at trying to change the password silently. I have the following code:
echo -e "new\nnew" | passwd -q
Instead of changing the password "quietly" as mentioned in the man page, this outputs:
~/php-pastebin-v3 #echo -e "new\nnew" | passwd -q
Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully
which doesn't help much.
I tried to redirect stdout and stderr, but I think I have misunderstood redirection.
~/php-pastebin-v3 #echo -e "new\nnew" | passwd -q > /dev/null
Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully
~/php-pastebin-v3 #echo -e "new\nnew" | passwd -q /dev/null 2>&1
passwd: user '/dev/null' does not exist
What's the correct method to change the password via a script, quietly?
If you want to redirect both stdout and stderr:
echo "..." | passwd &> /dev/null
which is the equivalent of
echo "..." | passwd > /dev/null 2>&1
which means "redirect stdout to /dev/null and then redirect (duplicate) stderr to stdout". This way you redirect both stdout and stderr to null ... but it might not be enough (it will be in this case I believe). But theoretically the program might write directly to terminal. For example this script
$ cat test.sh
echo stdout
echo stderr 1 1>&2
echo stderr 2 >/dev/stderr
echo stderr 3 >/dev/fd/2
echo bad luck > /dev/tty
$ ./test.sh &> /dev/null
bad luck
To get rid of even that output you must force the program to run in a pseudo terminal, for example with http://empty.sourceforge.net/ . But that is just a side note; &> /dev/null will work fine here.
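If you ever do need to swallow output that a program writes straight to the terminal, the script trick from the first answer above can allocate a pseudo terminal for it, roughly like this (a sketch; not required for passwd here):
echo -e "new\nnew" | script -qc "passwd" /dev/null > /dev/null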
You can also do it this way:
mkpasswd
# Password:blah
# BVR2Pnr3ro5B2
echo "user:BVR2Pnr3ro5B2" | chpasswd -e
This way the password is already encrypted in the script.
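If you would rather not pre-hash the password, chpasswd also accepts the plain-text user:password form on stdin (a sketch; run it as root):
echo "user:newpassword" | chpasswd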
This worked for me:
echo "passssssword" | passwd root --stdin > /dev/null
Note: --stdin works for the root user only.
