How to read hidden input from terminal and pipe it to another command - linux

I would like to be able to securely type text in the terminal and pipe it to another command, so that the text will:
Not be recorded in terminal history
Be hidden as I type it
Not be recorded in any file or environment variable
Be in memory for the shortest possible time
Ideally:
Using commonly installed tools on Linux
As easy to use as echo
Not having to create any scripts/files
Can be piped to other commands
Example of non-secure input:
echo "secret" | wc -c
Almost what I want:
read -s | wc -c
Basically the same way you input a password to sudo and similar tools.
My use case:
echo "secret" | gpg --encrypt --armor -r 1234567890ABCDEF | xclip
I am looking for a way that meets the restrictions I mentioned in the points above. Knowing that what I am looking for doesn't exist is also an answer I will accept and mark.
I created an alias from the accepted answer:
alias secnote="{ read -s; printf %s $REPLY; } | gpg --encrypt --armor -r 123467890ABCDEF | pbcopy"

Is this what you wanted to achieve?
$ read -s # I type `secret`
$ echo $REPLY
secret
$ printf %s $REPLY | wc -c
6
$ unset REPLY
$ echo $REPLY
# empty now
Or do you want a one-liner like this:
{ read -s -p "Input a secret: "; printf %s $REPLY; } | wc -c
If you define an alias:
alias readp='{ read -s -p "Input a secret: "; printf %s $REPLY; }'
then you can do readp | wc -c
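Applied to the gpg/xclip use case from the question, the same pattern might look like this (a sketch; the key ID is the placeholder from the question, and quoting "$REPLY" is an extra touch that preserves any spaces in the secret):
{ read -s -p "Secret: "; printf %s "$REPLY"; } | gpg --encrypt --armor -r 1234567890ABCDEF | xclip
Because each stage of the pipeline runs in its own subshell, REPLY is not left set in the interactive shell afterwards.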

Related

Run a command on a remote machine and store its output in a variable on the remote machine

I want to capture the rule numbers of iptables rules that have a specific pattern in their comment, and then delete those rules. This is what I want to achieve. Here is my bash script:
ssh -o "StrictHostKeyChecking no" root@$ip_address << EOF
echo "Now Removing your IPTables";
#storing output in input variable
input=$(iptables -nL INPUT --line-number | grep ip.* | cut -d " " -f1 | xargs)
#converting variable into an array
arr1=($input);
#loop through each element of array
echo "length:${#arr1[#]}";
for (( i="${#arr1[#]}"-1;i >=0; i-- ));
do
echo "$i:${arr1[$i]}"
iptables -D INPUT $i;
done;
EOF
The problem is that the iptables command is not being executed on the remote machine, and the output shows the length of arr1 is 0. But I am sure iptables has rules matching my desired pattern.
Error being shown in terminal:
-bash: line 9: 3: command not found
Adding 2>&1 at the end of the command also does not work:
input=$(iptables -nL INPUT --line-number | grep ip.* | cut -d " " -f1 | xargs 2>&1)
TL;DR: Use <<"EOF" instead of <<EOF.
Your Here-Document will expand all variables and evaluate all subshells before the script is even sent to your ssh server.
Consider the following script:
ssh user@servername <<EOF
echo "$(hostname)"
EOF
This will not print servername (the name of the computer you are connecting to) but the name of your localhost instead (the name of the computer you are working on).
Before ssh is executed, the command substitution $(hostname) is evaluated locally. The resulting string "echo localhostname" is then passed to ssh and executed on the remote server.
To fix the problem you have to escape the $ inside the Here-Document or use a literal Here-Document:
ssh user@servername <<"EOF"
echo "$(hostname)"
EOF
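The escaping variant mentioned above would look like this (a sketch; only the dollar sign that should be expanded remotely is escaped):
ssh user@servername <<EOF
echo "\$(hostname)"
EOF
Here \$ prevents local expansion, so $(hostname) is evaluated on the remote side and prints servername.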

Testing active ssh keys on the local network

I am currently trying to write a bash script that will check whether the SSH keys on a server are still linked to known hosts that are active on the local area network. Below is the beginning of my bash script:
#!/bin/bash
# LAN SSH KEYS DISCOVERY SCRIPT
# TRYING TO FIND THOSE SSH KEYS NOW
cat /etc/passwd | grep /bin/bash > bash_users
cat bash_users | cut -d ":" -f 6 > cutted.bash_users_home_dir
for bash_users in $(cat cutted.bash_users_home_dir)
do
ls -al $bash_users/.ssh/*id_* >> ssh-keys.txt
done
# DISCOVERING THE KNOWN_HOSTS NOW
for known_hosts in $(cat cutted.bash_users_home_dir)
do
cat $bash_users/.ssh/known_hosts | awk '{print $1}' | sort -u >> hosts_known.txt
sleep 2
done
hosts_known=$(wc -l hosts_known.txt)
echo "We have $hosts_known known hosts that could be still active via SSH
keys"
# TIME TO TEST WHICH SSH servers are still active with the SSH keys
# AND THIS IS WHERE I AM FROZEN...
# Would love to have bash script that could
# ssh -l $users_that_have_/bin/bash -i $ssh_keys $ssh_servers
# Would also be very nice if it could save active
# SSH servers with the valid keys in output.txt in the format
# username:local-IP:/path/to/SSH_key
Please feel free to edit/modify the bash script above if that better serves the goals described.
Any help would be very much appreciated,
Thanks
The following works nicely:
</etc/passwd \
grep /bin/bash |
cut -d: -f6 |
sudo xargs -i -- sh -c '
[ -e "$1" ] && cat "$1"
' -- {}/.ssh/known_hosts |
cut -d' ' -f1 |
tr ',' '\n' |
sed '
/^\[/{
s/\[\(.*\)\]:\(.*\)/\1 \2/;
t;
};
s/$/ 22/;
' |
sort -u |
xargs -l1 -- sh -c '
if echo "~" | nc -q1 -w3 "$1" "$2" | grep -q "^SSH"; then
echo "#### SUCCESS $1 $2";
else
echo "#### ERROR $1 $2";
fi
' --
So:
Start with /etc/passwd
Filter all "bash_users" as you call them
Filter user home directories only cut -d: -f6
For each user home directory sudo xargs -i -- run
Check if the file .ssh/known_hosts inside the user home directory exists
If it does, print it
Filter only host names
Multiple host names may share the same key and are separated by commas, so replace commas with newlines
Now a sed script:
If a line starts with a [ that means it has a format of [host]:port and I want to replace it with host port
If the line does not start with a [ I add 22 to the end of the line so it's host 22
Then I sort -u
Now for each line:
I get the SSH version banner: echo "~" | nc hostname port returns something like "SSH-2.0-OpenSSH_6.0" + newline + "Protocol mismatch".
So if the line returned by nc hostname port starts with SSH, that means there is an SSH server running on the other side
I added a timeout for unresponsive hosts; the nc -w timeout option may also be used, and nc -q 1 should probably be specified as well.
Now the real fun: when you add the max-procs option to the last xargs, you can check all hosts simultaneously. On my host I have 47 unique addresses and xargs -P30 checks them ALL in about 2 seconds.
But there really are some problems. The script needs root to read all users' known_hosts files. Worse, the known_hosts entries may be hashed. It would be better to first know the list of hosts on your network and then generate known_hosts from it, with something like ssh-keyscan -f list_of_hosts > ~/.ssh/known_hosts. Generally ssh-keygen -F hostname should be used to check whether a host exists in known_hosts; sadly there is no listing command. The known_hosts file format is described in the ssh documentation.
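A minimal sketch of those two commands (the host name and list file are placeholders, and note that the redirection overwrites ~/.ssh/known_hosts):
# check whether a specific host already has an entry in known_hosts
ssh-keygen -F server.example.com
# rebuild a readable known_hosts from a list of hosts, one per line
ssh-keyscan -f list_of_hosts > ~/.ssh/known_hosts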

Why can't this script execute the other script

This script looks for all users whose /etc/passwd entry contains the string read into RECHERCHE. I tried running it with sudo and it worked, but then it stopped at line 8 (permission denied). Even when removing sudo from the script, this issue still happens.
#!/bin/bash
#challenge : user search and permission rewriting
echo -n "Enter string to search : "
read RECHERCHE
echo $(cat /etc/passwd | grep "/home" | cut -d: -f5 | grep -i "$RECHERCHE" | sed s/,//g)
echo "Changing permissions"
export RECHERCHE
sudo ./challenge2 $(/etc/passwd) &
The second script then changes the permissions of each file belonging to each user that RECHERCHE matched, in the background. If you could help me figure out what this isn't doing right, it would be of great service.
#!/bin/bash
while read line
do
if [-z "$(grep "/home" | cut -d: -f5 | grep -i "$RECHERCHE")" ]
then
user=$(cut -f: -f1)
file=$(find / -user $(user))
if [$(stat -c %a file) >= 700]
then
chmod 700 file 2>> /home/$(user)/challenge.log
fi
if [$(stat -c %a file) < 600]
then
chmod 600 file 2>> /home/$(user)/challenge.log
fi
umask 177 2>> /home/$(user)/challenge.log
fi
done
I have no idea what I'm doing.
The $(...) syntax means command substitution, that is: it will be replaced by the output of the command within the parentheses.
Since /etc/passwd is not a command but just a text file, you cannot execute it.
So if you want to pass the contents of /etc/passwd to your script, you would just call it like this:
./challenge2 < /etc/passwd
or, if you need special permissions to read the file, something like
sudo cat /etc/passwd | ./challenge2
Also, in your challenge2 script you are using $(user), which is wrong, as you really only want to expand the user variable: use curly braces for this, like ${user}.
/etc/passwd?
Not what you were asking, but you probably should not read /etc/passwd directly anyhow.
If you want to get a list of users, use the following command:
$ getent passwd
This will probably give you more users than those stored in /etc/passwd, as your system might use other NSS backends (LDAP, ...).
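Putting those pieces together, the calling line and the read loop inside challenge2 might look roughly like this (a sketch only; it keeps the question's script name and shows just the parts discussed above):
# caller: feed the user list to the script on stdin (the ./challenge2 < /etc/passwd form works the same way)
getent passwd | ./challenge2
# inside challenge2: read one passwd entry per line and expand the variable with braces
while read -r line; do
    user=$(echo "$line" | cut -d: -f1)
    echo "processing files owned by ${user}"
done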

Executing a string as a command in bash that contains pipes

I'm trying to list some ftp directories. I can't work out how to make bash execute a command that contains pipes correctly.
Here's my script:
#/bin/sh
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
cmd='echo "ls /mydir/'"$d"'/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1'
$cmd
done
This just outputs:
"ls /mydir/dir1/*.tar*" | sftp -b - -i ~/mykey user#example.com 2>&1 | tail -n1
"ls /mydir/dir2/*.tar*" | sftp -b - -i ~/mykey user#example.com 2>&1 | tail -n1
How can I make bash execute the whole string including the echo? I also need to be able to parse the output of the command.
I don't think that you need to be using the -b switch at all. It should be sufficient to specify the commands that you would like to execute as a string:
#!/bin/bash
dirs=("/dir1" "/dir2")
for d in "${dirs[@]}"
do
printf -v d_str '%q' "$d"
sftp -i ~/mykey user@example.com "ls /mydir/$d_str/*.tar*" 2>&1 | tail -n1
done
As suggested in the comments (thanks @Charles), I've used printf with the %q format specifier to protect against characters in the directory name that may be interpreted by the shell.
First, you need to use /bin/bash as the shebang in order to use bash arrays.
Then remove echo and use command substitution to capture the output:
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
output=$(ls /mydir/"$d"/*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
echo "$output"
done
I will however advise you not to use ls's output in the sftp command. You can replace that with:
output=$(echo "/mydir/$d/"*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
Don't store the command in a string; just use it directly.
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user#example.com 2>&1 | tail -n1
done
Usually, people store the command in a string so they can both execute it and log it, as a misguided form of factoring. (I'm of the opinion that it's not worth the trouble required to do correctly.)
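If you do want to both run and log the same command, a small shell function is a safer substitute for a command string (a sketch based on the loop above; the host and key path are the question's placeholders):
list_tars() {
    echo "ls /mydir/$1/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
}
list_tars "$d"
The function can be called and its name logged without ever re-parsing a string through the shell.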
Note that sftp reads from standard input by default, so you can just use
echo "ls ..." | sftp -i ~/mykey user#example.com 2>&1 | tail -n1
You can also use a here document instead of a pipeline.
sftp -i ~/mykey user@example.com 2>&1 <<EOF | tail -n1
ls /mydir/$d/*.tar.*
EOF

How to get the command line args passed to a running process on unix/linux systems?

On SunOS there is the pargs command that prints the command line arguments passed to the running process.
Is there is any similar command on other Unix environments?
There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without xargs -0 there would be no spaces between the arguments, because in /proc/<pid>/cmdline they are separated by NUL bytes rather than spaces.
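A quick way to see the difference (a sketch; sleep 300 is just a throwaway background process to inspect):
$ sleep 300 &
$ cat /proc/$!/cmdline; echo
sleep300
$ xargs -0 < /proc/$!/cmdline
sleep 300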
Full commandline
On Linux and Unix systems you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want to get the full command line, you can use
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become super user.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of the arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find any similar command for Linux, but you can get similar output (one argument per line) with:
tr '\0' '\n' < /proc/<pid>/cmdline
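For example, against a throwaway background process this prints one argument per line, much like the pargs output above (a sketch):
$ sleep 300 &
$ tr '\0' '\n' < /proc/$!/cmdline
sleep
300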
You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference from any of the other responses: it works on CygWin, so you can use it to obtain the full command line of any process running under Windows (execute as elevated if you want data about any elevated/admin process). Any other method for doing this on Windows is more awkward (for example).
Furthermore: in my tests, the pgrep way has been the only system that worked to obtain the full path for scripts running inside CygWin's python.
On Linux
cat /proc/<pid>/cmdline
outputs the commandline of the process <pid> (command including args) each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[#]}:" "${args[#]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release
Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/ /g' && echo
In this way cat prints NUL characters as ^@ and then you replace them with a space using sed; echo prints a newline.
Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline
ps -eo pid,args prints the PID and the full command line.
You can simply use:
ps -o args= -f -p ProcessPid
In addition to all the above ways to convert the text, if you simply use 'strings', it will put the output on separate lines by default, with the added benefit that it may also prevent any characters that could scramble your terminal from appearing.
Both outputs in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is... is there a way to see the real command line of a process in Linux that has been altered so that the cmdline contains the altered text instead of the actual command that was run.
On Solaris
ps -eo pid,comm
Something similar can be used on Unix-like systems.
On Linux, with bash, to output as quoted args so you can edit the command and rerun it
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
If you want to get as long a command line as possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux & OSX:
ps -ww -o pid,command [-p <pid> ... ]
Try ps -n in a Linux terminal. This will show:
All processes RUNNING, their command lines and their PIDs
The program that initiated each process
Afterwards you will know which process to kill
