How to modify standard Linux commands?

I am looking for a way to edit the source code of common Linux commands (passwd, cd, rm, cat)
For example: every time the 'cat' command is called (by any user), it performs its regular function, but also prints "done" to stdout afterwards.

If you're only looking to "augment" the commands as in your example, you can create e.g. /opt/bin/cat.sh:
#!/bin/sh
/bin/cat "$@" && echo "done"
and then either:
change the default PATH (in /etc/bash.bashrc in Ubuntu) as follows:
PATH=/opt/bin:$PATH
or rename cat to e.g. cat.orig and move cat.sh to /bin/cat.
(If you do the latter, your script needs to call /bin/cat.orig, not /bin/cat, or it will end up calling itself.)
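For the rename route, the steps look roughly like this (run as root; same paths as above, and the script is assumed to have already been changed to call /bin/cat.orig):
mv /bin/cat /bin/cat.orig
install -m 755 /opt/bin/cat.sh /bin/cat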
If you want to actually change the behavior, then you need to look at the sources here:
https://ftp.gnu.org/gnu/coreutils/
and then build and replace them.
All this assumes, of course, that you have root permissions, seeing how you want to change that behavior for any user.

The answer to how to modify the source is: don't, unless you have a REALLY good reason to. There's a plethora of reasons why you shouldn't, but a big one is that you should avoid modifying the source of anything that could receive an update. An update breaks, if not erases, your changes, and you're left with a lot of rework.
Alternatively, you can use things like aliases for quick customizations (see the sketch below), and write scripts that call and rely on the command being available instead of worrying about its implementation. I've over-explained, but that's because I'm coming to you as someone with only a little experience with Linux but much more in development, and what I've said extends beyond an operating system's CLI capabilities and lands further into general concepts of development.
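For instance, a one-line function in ~/.bashrc covers the cat example from the question for a single user, without touching any system files (a sketch, assuming bash; a function is used because an alias can't append anything after the arguments):
# Wrap cat for this user only; 'command cat' bypasses the function so it doesn't call itself
cat() { command cat "$@" && echo "done"; }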

Related

How to protect my script from copying and modifying in it?

I created an expect script for a customer, and I fear he will customize it however he wants without coming back to me, so I tried to encrypt it, but I didn't find a way to do that.
Then I tried to convert it to an executable, but ActiveTcl had problems with some commands, like "send", even though the script works perfectly on Red Hat.
So is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler out of the ActiveState TclDevKit. It's a code-obscurer formally (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to keep the essential code running on your server as a back-end, and just give the user a front-end application that makes requests to it. That way the essential processes stay under your control, and the user cannot access that code.

Are there any security issues in using environment variables like this?

I'm writing a collection of utilities in bash that has a common library. Each script I write has to have a block of code that determines the path of the library relative to the executable. Not the actual code, but an example:
#!/bin/bash
DIR="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. "$DIR/../lib/utilities/functions"
Instead of a preamble that tries to search for the library, I had the bright idea to use an environment variable to indicate the library's location.
#!/bin/bash
. "$TOOLS_LIBRARY_PATH"
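where TOOLS_LIBRARY_PATH would have been exported somewhere earlier, e.g. (the path here is purely illustrative):
export TOOLS_LIBRARY_PATH=/opt/mytools/lib/utilities/functions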
I could use a wrapper program to set that environment variable, or I could set it in my own path. There may be better ways to organize a bash toolset, but the question is:
Can I trust my environment variables?
This is one of those I've-never-really-thought-about-it kinds of questions. When programming in other languages, paths are used to find libraries (e.g. LD_LIBRARY_PATH, PYTHONPATH, PERLLIB, RUBYLIB, CLASSPATH, NODE_PATH), but I've never had to stop and think about how that could be insecure.
In fact, there is a whole article, "Why LD_LIBRARY_PATH is bad", discouraging its use. The Ruby and Perl library path environment variables are ignored if their respective security mechanisms are invoked: $SAFE and -T (taint mode).
My thoughts so far...
The user could set TOOLS_LIBRARY_PATH to a library of their choosing, but the utility will run under their uid. They could simply run their malicious library directly with bash.
My tools use sudo for some things. Someone could set their TOOLS_LIBRARY_PATH to something that takes advantage of this. But the tools are not run via sudo; they only invoke sudo here and there. The user would have to be a sudoer in any case, and they could just call sudo directly.
If I can't trust TOOLS_LIBRARY_PATH, then I can't trust PATH either. All program invocations must use absolute paths.
I've seen shell programs put absolute paths to external programs into variables, so that instead of calling ls they call something like LS=/bin/ls. From what I've read, this is to protect against users redefining programs as aliases. See: PATH, functions and security. Bash scripting best practices.
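That pattern looks something like this (the paths are typical but vary by system):
# Pin external commands to absolute paths once, near the top of the script
LS=/bin/ls
AWK=/usr/bin/awk
"$LS" -ld "$HOME"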
Perl's taint mode treats all environment variables as "tainted", which is foreboding, and is why I'm trying to reason about the risks of the environment.
It is not possible for one user to change another user's environment unless that user is root. Thus, I'm only concerned about a user changing their own environment to escalate privileges. See: Is there a way to change another process's environment variables?
I've rubber-ducked this into an answer of sorts, but I'm still going to post it, since it isn't a pat answer.
Update: What are the security issues surrounding the use of environment variables to specify paths to libraries and executables?
While mechanisms exist in various programs to prevent the modification of environment variables, the bottom line is: no, you can't trust environment variables. The security concern is very basic:
Anytime a user can change what is expected to be executed, the possibility for a vulnerability to be exploited arises.
Case in point: have a look at CVE-2010-3847. With that, an unprivileged attacker with write access to a file system containing setuid or setgid binaries could use the flaw to escalate their privileges. It involves an environment variable being modified by the user.
CVE-2011-1095 is another example, and it doesn't involve SUID binaries. Do a Google search for "glibc cve environment" to see the kinds of things people were able to do with environment variable modifications. Pretty crafty.
Your concerns really boil down to your statement of:
The user could set TOOLS_LIBRARY_PATH to a library of their choosing, but the utility will run under their uid. They could simply run their malicious library directly with bash.
Key phrase here - run their malicious library. This assumes their library is owned by their UID as well.
This is where a security framework will do you lots of good. One that I have written, which focuses exclusively on this problem, can be found here:
https://github.com/cormander/tpe-lkm
The module stops execve/mmap/mprotect calls on files that are writable, or not owned by a trusted user (root). As long as they're not able to put malicious code into a file/directory owned by the trusted user, they can't exploit the system in this way.
If you're using SUID binaries or sudo that include those variables, you might want to consider enabling the "paranoid" and "strict" options to stop even root from trusting non-root owned binaries.
I should mention that this Trusted Path Execution method protects direct execution of binaries and shared libraries. It does little (if anything) against interpreted languages, since their scripts are read and run by an interpreter rather than executed directly. So you still need some degree of care with PYTHONPATH, PERLLIB, CLASSPATH, etc., and should use the languages' safety mechanisms that you mentioned.
Short answer:
Assuming the user is able to run programs and code of her own choice anyway, you do not have to trust anything they feed you, including the environment. If the account is limited in some ways (no shell access, no write access to file systems that allow execution), that may change the picture, but as long as your scripts are only doing things the user could do herself, why protect against malicious interference?
Longer answer:
There are at least two separate issues to consider in terms of security problems:
What and whom do we need to guard against?
(closely related question: What can our program actually do and what could it potentially break?)
How do we do that?
If and as long as your program is running under the user ID of the user starting the program and providing all the input (i.e., the only one who could mount any attack), there are only rare situations where it makes sense at all to harden the program against attacks. Anti-copy protection and sharing high-scores come to mind, but that group of things is not just concerned with inputs, but probably even more so with stopping the user from reading code and memory. Hiding the code of a shell script without some kind of suid/sgid trickery is nothing I'd know how to do; the best I could think of would be obscuring the code.
In this situation, anything the user could trick the program into doing, they could do without the help of the tool, too, so trying to “protect” against attacks by this user is moot. Your description does not read as if you'd actually need any protection against attacks.
Assuming you do need protection, you simply cannot rely on environment variables – and if you fail to reset things like LD_LIBRARY_PATH and LD_PRELOAD, even calling tools with absolute paths like /bin/id or /bin/ls won't give you a reliable answer, unless that tool happens to be statically compiled. This is why sudo has env_reset enabled by default and why running suid programs has to ignore certain environment variables. Note that this means your point that TOOLS_LIBRARY_PATH and PATH are equally trustworthy may be true in your situation, but is not necessarily true in border cases of other setups: a sysadmin may reset PATH for sudo usage, but let non-standard environment variables pass through.
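As a rough illustration of that kind of scrubbing (not sudo's actual implementation; /usr/local/bin/helper is just a placeholder), a wrapper can hand a tool a minimal, explicit environment:
# Start the helper with a clean environment; only what is listed gets through
env -i PATH=/usr/bin:/bin HOME="$HOME" /usr/local/bin/helper "$@"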
As pointed out above, argv[0] (or its bash equivalent ${BASH_SOURCE[0]}) is no more reliable than environment variables. Not only can the user simply make a copy or symlink of your original file; execve or bash's exec -a foo bar allows putting anything into argv[0].
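For instance, from an interactive bash session (the subshell keeps exec from replacing your login shell):
# argv[0] of the inner bash is whatever the caller claims it is
(exec -a /usr/bin/trusted-tool bash -c 'echo "started as: $0"')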
You can never trust someone else's environment.
Why not simply create a new user that owns all of this important code? Then you can either get the information directly from /etc/passwd or use the ~foo syntax to find the user's home directory.
# One way to get home directory of util_user
DIR=$(/usr/bin/awk -F: '$1 == "util_user" {print $6}' /etc/passwd)
# Another way which works in BASH and Kornshell
[ -d ~util_user ] && DIR=~util_user
# Make sure DIR is set!
if [ -z "$DIR" ]
then
    echo "Something's wrong!"
    exit 1
fi
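If the account might come from NIS or LDAP rather than the local passwd file, getent (which goes through NSS) is another option, using the same illustrative user name:
# Works for local and directory-service accounts alike
DIR=$(getent passwd util_user | cut -d: -f6)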

Shell command completion

User commands can be given the -complete=shellcmd option. This turns out to be quite disappointing, since instead of working in the same way as vim's built-in completion for its :! command, it just repeatedly completes the names of shell commands in the path.
I'd like to write a command that completes command names and then files. Is there a convenient way to do this, either with built-in vim functionality or via an addon?
Why not leverage bash-completion while you're at it? You should be able to fork a bash shell with /etc/bash_completion sourced and talk to it over pipes to interrogate it. That way you'd have all the shell goodies.
Of course, you could do as I do and C-z suspend Vim to drop back to your shell instead.
I ended up writing a vim addon called shell_complete. It's not really super-megazord-awesome, but it does do pretty much what I was looking for, which is to more or less emulate the completion used by :!. This is to say that it's sort of "pseudo shell completion": it completes commands for the first "word", and files from there out. Basic \ escaping of spaces seems to work okay.
pseudo shell completion
Installation of shell_complete is sort of complicated, especially for those not familiar with vim-addon-manager. The addon has a couple of dependencies, and none of them are "published", which is to say that VAM doesn't know about them yet. You can let VAM know about them by installing (via git clone; see the docs for more info) another addon I've called tt_addons. Anyway, once you've done this, you should be able to just :ActivateAddons tt_addons and then :ActivateAddons shell_complete.
If you're not using VAM, you'll have to download (or git clone, more likely) all of the related modules, and then mix them into your vim directory, or make them pathogen bundles, or what have you. If you actually end up wanting to do this, it's likely worth your while to start using VAM. If you're some sort of curmudgeonly Luddite who refuses to do so, let me know and I might put the thing up on vim.org if there's decent interest.
I think it's pretty obvious (at least from the docs) how to use shell_complete, but if you would like to see it In Action you can check out my reput addon, which uses it to do completion on its :RePutShell command. reput is also currently only available through github, and the same caveats apply with respect to installation via VAM.
Actual shell completion using the shell
For the record, I think sehe's suggestion about using the shell itself to do completion is totally flash. I actually spent quite a while figuring out how to do this, and have determined that it's possible, at least in theory. I initially thought it would be easier to do it this way than by doing what shell_complete does, but it turns out that bash (like vim) doesn't provide any programmatic way to access its completion facilities, so you end up basically having to reimplement them in bash, after scraping the configuration using grep and friends. Bash sucks a lot for this sort of thing, so I'm cowardly refusing to fight that battle, at least for the moment.
Should someone be so brave/foolish as to take up this standard, they may avail themselves of the chronicles of my travails. I managed to get it to do completions that are handled by custom completion functions. However, this is only a small part of the puzzle, because bash also provides about 6½ other ways to do completion. It does sort of complement the functionality provided by shell_complete, so it might be worthwhile to synergistically merge the two into a sort of awkward, drunken Voltron of vim shell completion.

How to learn your way through Linux's shell

I want to stop losing precious time when dealing with Linux/Unix's shell.
If I could get to understand it well, that would be great. Otherwise:
I may end up losing a day just for setting up a crontab.
I'll keep wondering why the shebang in this script doesn't work.
I'll keep wondering what the real difference is between:
. run.sh
./run.sh
. ./run.sh
sh run.sh
sh ./run.sh
You see, it's these kinds of things that cripple my Linux/Unix life.
As a programmer, I want to get much better at this. I suppose it's best to stay with the widely-used bash shell, but I might be wrong. Whatever tool I use, I need to understand it down to its guts.
What is the ultimate solution?
Just for fun:
. run.sh --- "Source" the code in run.sh. Usually, this is used to get environment variables into the current shell process. You probably don't want this for a script called run.sh. (A quick demonstration of the difference follows this list.)
./run.sh --- Execute the run.sh script in the current directory. Generally the current directory is not in the default path (see $PATH), so you need to call out the relative location explicitly. The . character is used differently than in item #1.
. ./run.sh --- Source the run.sh script in the current directory. This combines the use of . from items #1 and #2.
sh run.sh --- Use the sh shell interpreter on run.sh. The Bourne shell is usually the default for running shell scripts, so this is probably the same as item #2, except it finds the first run.sh in the $PATH rather than the one in the current directory.
sh ./run.sh --- And this is usually the same as #2 except wordier.
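A tiny experiment makes the sourcing-versus-executing difference visible (run it in a scratch directory; the file name matches the examples above):
printf 'FOO=from-run-sh\n' > run.sh
chmod +x run.sh
./run.sh;   echo "after ./run.sh:   FOO=${FOO:-unset}"   # executed in a child process, so FOO is untouched here
. ./run.sh; echo "after . ./run.sh: FOO=${FOO:-unset}"   # sourced into the current shell: from-run-sh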
Command line interfaces, such as the various shell interpreters, tend to be very esoteric, since they need to pack a lot of meaning into a small number of characters. Otherwise typing takes too long.
In terms of learning, I'd suggest using bash or ksh and don't let anyone talk you into something else until you are comfortable. Please don't learn with csh or you will need to unlearn too much when you start with a Bourne-type shell later.
Also, crontab entries are a bit trickier than other uses of shell. My guess is you lost time because your environment was set differently than on the command line. I would suggest starting somewhere else if possible.
Man pages are probably the "ultimate" solution. I'm always amazed at what gems they contain.
You can even use man bash to answer some of the questions you raise in this question.
Try the Linux Documentation Project's BASH pages here [tldp.org].
PS: The difference is that . is the same as the source command. This only requires read permission. To use ./run.sh, you need execute permission. When you use 'sh', you are explicitly specifying the command you want to run the script with (here, you only need read permission). If you are using a shell to execute it, there shouldn't be a problem there. If you want to use a different program, such as 'python', you could use python run.py. Another trick is to add a line #!<program> to the beginning of your script. In your case, #!/bin/sh would do, and for Python, #!/usr/bin/env python is best.
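Putting the shebang and permissions together, a minimal script (purely illustrative) looks like:
#!/bin/sh
# save as run.sh, then: chmod +x run.sh && ./run.sh
echo "hello from $0"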
I think immersing yourself in it is the answer. It's like learning to walk, to type, to use an ergonomic keyboard, or type in dvorak. Commit to it completely. And yes, you will absolutely slow down. And you'll be forced to look at your hands or google things constantly. But eventually it will come to you.
Funny story, I killed my apartment's internet access by doing an ifconfig release. Completely didn't know that ifconfig renew was not a command. Had to call a friend when google didn't load ;) dhcpcd later and I was back to googling everything.
O'Reilly has an older book that teaches you about bash, and bash scripting at the end. Most everything you need to know is on the web, but spread out.
The best way to learn is by reading, and then trying things out. MAN pages are often very helpful, and there are tons of Shell scripting tutorials out there. If shell scripting is what you are after, just read, and then practice the things you read by writing little scripts that do something neat or fun. If you are looking for more info on all of the different command line applications that can be run from a shell, those are more distro dependent, so look in the documentation for your favorite distro.
There are some decent online guides that will help you feel more comfortable in the shell(s). For example:
Steve Parker's Unix/Linux Shell Scripting Tutorial
UNIX Shell Programming, by Stephen G. Kochan and Patrick H. Wood, available online at Google Books. Also available in hardback. Amazon has it.
UNIX shell scripting with sh/ksh. Apparently used as part of a class at Dartmouth College. ksh is the Korn shell, and it's close enough to bash that the info will be useful.
Spend some time reading those kinds of tutorials and, above all, playing in the shell. Pretty soon, it'll start to feel like $HOME. (Okay, sorry for the bad pun...)
Use the shell a lot, type "[prog-name] --help" a lot, type "man [prog-name]" a lot, and make notes on what works and what does not -- even if those notes seem obvious at the time. By tomorrow, they might not be so obvious again. OTOH, in a couple of weeks they definitely should be!
Have a gander at some of the many books on working with shells, like From Bash to Z Shell (which covers Bash and Z shell), slog through a shell-scripting tutorial, or read GNU's Bash manual.
While sticking with Bash might be best, the fish shell might be a little bit easier to use while you're just learning. I played around with it for a few weeks, and while it didn't seem as powerful as bash, it did seem pretty user friendly.
The way I really learned my way around in Linux was to install gentoo. It takes forever but you begin to see how everything ties together.
Go grab the latest instructions and start following them. Do it enough times and it starts to stick.
After a while you get comfortable building everything from scratch.
I find trying to learn a new command from the man pages sometimes a bit overwhelming.
I prefer man pages for refreshing my memory.
When I want to learn a command, I look for examples first, then use the man pages to fine-tune what I want it to do.

Are there good reasons not to exploit '#!/bin/make -f' at the top of a makefile to give an executable makefile?

Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
#dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
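One caveat, echoing the one-argument limitation mentioned above: some kernels (Linux included) hand everything after the interpreter path to it as a single argument, so "make -f" may reach env as one word. GNU coreutils' env has a -S option to split it (a sketch; it needs a reasonably recent env):
#!/usr/bin/env -S make -f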
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic to building Makefile based projects. Imagine if every project you built you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
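For example (assuming the rebuild.mk from the question lives in $HOME/bin):
alias rebuild='make -f "$HOME/bin/rebuild.mk"'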
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.
