I am having trouble getting the ls command file back - linux

I want to move /bin/ls to /root, but I typed a wrong dir:
mv /bin/ls /roo
Now I can't find the ls command file; how can I get it back?

First of all, why do you want to do that?? Be careful with root privileges!!
Unless you have an extremely good reason and know exactly what you're doing, don't move unix commands from /bin. For one thing, other OS components and libraries may depend on them and you could totally hose your system.
ls is invoked as a subprocess by various other programs to list files.
If the command you show above is exactly what you ran, this will put it back:
mv /roo /bin/ls
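As a side note (not part of the original answer): even while ls is missing you can still look around with the shell's own globbing, which is handy for checking that the move back worked.
# globbing is done by the shell itself, so it works even with ls missing
printf '%s\n' /bin/*                      # list /bin one entry per line
[ -x /bin/ls ] && echo "ls is back"       # confirm the move back worked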

Related

Execute a bash script without typing ./ [duplicate]

I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├── bin
│   └── command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├── c (symlink to bin/command)
├── bin
│   └── command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
I must do?
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./, and without using a system-wide alias? It would be very convenient to give distributed executables and utility scripts one-letter, project-specific shortcuts.
It's not a matter of bash not allowing execution from the current directory; rather, the current directory isn't in the list of directories bash searches for executables.
export PATH=".:$PATH"
$ c
$
This can be a security risk, however: if the directory contains files you don't trust or don't know the origin of, a file in the current directory can be mistaken for a system command.
For example, say the current directory is your colleague's directory "foo", and he asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run "chmod 755 bar".
You assume chmod really is chmod, but if there is a file named chmod in the current directory and your colleague put it there, chmod is really a program he wrote and you are running it as root. Perhaps that "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
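To make the chmod scenario concrete, here is an illustrative session (not from the original answer; the fake chmod, the bar file, and the output are made up) showing how the lookup goes wrong once . sits at the front of PATH:
$ export PATH=".:$PATH"                  # current directory now searched first
$ printf '#!/bin/sh\necho "gotcha: running as $(id -un)"\n' > chmod
$ /bin/chmod +x chmod                    # use the real chmod explicitly to arm the trap
$ command -v chmod                       # which chmod would run now?
./chmod
$ chmod 755 bar                          # runs ./chmod instead of /bin/chmod
gotcha: running as root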
Beware that the accepted answer introduces a serious vulnerability!
You can add the current directory to your PATH, but not at the beginning of it; that would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot stays at the end of the PATH whenever you add new directories to it.
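One way to double-check the lookup order after setting this (not part of the original answer; the local ls file and the exact output are illustrative) is bash's type -a, which lists every match in the order PATH is searched:
$ PATH="$PATH":.
$ touch ls && chmod +x ls     # a harmless stand-in for a local command
$ type -a ls
ls is /bin/ls
ls is ./ls
The system binary is listed first, so it is the one that runs.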
You could add . to your PATH. (See kamituel's answer for details)
Also there is ~/.local/bin for user specific binaries on many distros.
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.
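If the real goal is just a short name for bin/command without putting . in PATH at all, the ~/.local/bin suggestion above can be combined with a symlink. Note this gives a user-wide c rather than a per-project one, and whether ~/.local/bin is already in your PATH depends on the distro:
$ mkdir -p ~/.local/bin
$ ln -s "$PWD/bin/command" ~/.local/bin/c   # absolute target, so the link works from anywhere
$ export PATH="$HOME/.local/bin:$PATH"      # only needed if it is not already in PATH
$ c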

bash: get path from current directory given sub-directory name

Trying to write a script to clean up environment files after a resource is deleted. The problem is all the script is given as input is the name of the resource (this cannot be changed) with zero identifying information beyond that. How can I find the path of the directory the resource is sitting in?
The directory is set up a bit like the following, although much more extensive. All of these are directories, not files. There can be as many as 40+ directories to search, but the desired one is generally not more than 2-3 directories deep.
foo
  aaa
    aaa_green
    aaa_blue
  bbb
  ccc
    ccc_green
bar
  ddd
  eee
    eee_green
    eee_blue
  fff
    fff_green
    fff_blue
    fff_pink
I might be handed input like aaa_green or just ddd.
As an example, given eee_blue as input, I need to know eee_blue's path from the working directory so I can cd there and delete the directory. I.e., I would expect it to return bar/eee/eee_blue/ or bar/eee/; either is acceptable.
The "best" option I can see currently is to cd down into each directory level by level, list each one's contents and look for a match, and once it (eventually) matches, record that chain of cds as the path. That frankly sounds awful and inefficient.
The only other alternative method I could think of was a straight recursive grep, but I tested it and at 8 minutes it still hadn't finished running.
This script needs to run on both mac and linux, although in a desperate pinch I could go linux only.
The standard Unix tool for doing this sort of task is the find command. The GNU version of find has more extensive options than the POSIX specification (by quite a margin). The version on macOS Sierra (and Mac OS X) is similar to the GNU version. I found an online manual for OS X 10.9 at Apple find, but there's probably a better location somewhere.
It looks like you might want to run:
find . -name 'eee_blue'
which will print the names of matching files or directories, or perhaps:
find . -name 'eee_blue' -exec rm -fr {} +
which will run the rm -fr command on each name. You can run a custom script of your own in place of rm -fr if you prefer; that's what I do when the logic is complex.
Be extremely cautious before using rm -fr automatically!
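If you want that custom-script route, a minimal sketch might look like the following (the cleanup.sh name, and the assumption that the resource name maps to a single directory, are mine, not the question's):
#!/bin/sh
# cleanup.sh -- delete the environment directory for a given resource name
# Usage: ./cleanup.sh eee_blue
resource=$1
[ -n "$resource" ] || { echo "usage: $0 resource-name" >&2; exit 1; }

# -type d restricts the match to directories; works with both GNU and BSD/macOS find
target=$(find . -type d -name "$resource" -print | head -n 1)
if [ -z "$target" ]; then
    echo "no directory named $resource found" >&2
    exit 1
fi
echo "removing $target"
rm -rf -- "$target"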

How to list recently deleted files from a directory?

I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility. debugfs is the ext2/ext3/ext4 file system debugger from e2fsprogs; it works directly on the underlying block device (not to be confused with the kernel's debugfs pseudo-filesystem of the same name).
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the command lsdel to list the inodes corresponding to deleted files.
When files are removed in Linux they are only unlinked, but their inodes (the addresses on disk where the file data actually lives) are not removed
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number and device with your own.
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
Great post about this here.
So a few things:
1. You may have zero success if your partition is ext2; it works best with ext4.
2. df / (to find the device backing the filesystem)
3. Run debugfs on the device from step 2, in my case: sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the inode number with one from step 4)
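If you get as far as an inode you care about, debugfs can also copy its contents back out with its dump command; the inode number and device below are just the ones from the example above, and recovery only works while the blocks haven't been reused:
# <528754> is debugfs filespec syntax for "the file at inode 528754"
sudo debugfs -R 'dump <528754> /tmp/recovered_file' /dev/sda2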
Thanks for your comments & answers guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for: it needs root access to the underlying block device and works at the ext filesystem level. Unfortunately, that won't really work for my use-case; I must be able to provide a solution for existing, "basic" systems and directories.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT));
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi;
Return to step 2.
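Tied together, the steps above amount to a small polling loop. This is just a sketch; the directory and the 60-second interval are placeholders:
#!/bin/sh
# Poll a directory and report whenever the file count drops.
DIR=/some/directory
DEL_SCAN_ORIG_AMOUNT=$(find "$DIR" -type f | wc -l)
while true; do
    sleep 60
    DEL_SCAN_NEW_AMOUNT=$(find "$DIR" -type f | wc -l)
    DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
    if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then
        echo "$DEL_SCAN_DEL_AMOUNT deleted files"
    fi
    DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
done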
Unfortunately, this solution won't report anything if the same number of files were created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
I'd also like to know if anyone has comments on either of the two approaches.
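For reference, the full (sorted) file list does fit in a shell variable, which makes a per-file comparison possible without writing to disk. A sketch, bash-specific because of the process substitution, with comm assuming sorted input:
# Keep the whole sorted file list in a variable instead of just the count
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)

# ...later, take a new snapshot and print names present before but gone now
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST
Like the counting approach, this only sees what changed between two snapshots.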
Try:
lsof -nP | grep -i deleted
Note that this only lists deleted files that are still held open by some running process.
history >> history.txt
Then look for rm commands. (Of course, this only covers deletions done interactively from that shell.)

When running a sh file in linux, why do I have to run ./name.sh?

I have a file called x.sh that I want to execute. If I run:
x.sh
then I get:
x.sh: command not found
If I run:
./x.sh
then it runs correctly. Why do I have to type in ./ first?
Because the current directory is not in the PATH environment variable by default, and executables given without a path qualification are looked up only in the directories listed in PATH. You can change this behavior by adding . to the end of PATH, but it's not common practice; you'll just get used to this UNIXism.
The idea behind this is that, if executables were searched for in the current directory first, a malicious user could put an executable named e.g. ls or grep (or some other commonly used command) in his home directory, tricking the administrator into using it, maybe with superuser powers. On the other hand, this is much less of a problem if you put . at the end of PATH, since then the system directories are searched first.
But: our malicious user could still name his dangerous scripts after common typos of frequently used commands, e.g. sl for ls (protip: bind it to Steam Locomotive and you won't be tricked anyway :D).
So it's still safer to know that, whenever you type an executable name without a path qualification, you are running something from the system directories (and thus presumably safe).
Because the current directory is normally not included in the default PATH, for security reasons: by NOT looking in the current directory, you avoid all kinds of nastiness that could be caused by someone planting a malicious program with the name of a legitimate utility. As an example, imagine someone manages to plant a script called ls in your directory, and that script executes rm *.
If you wish to include the current directory in your path, and you're using bash as your default shell, you can add the path via your ~/.bashrc file.
export PATH=$PATH:.
Based on the explanation above, the risk posed by rogue programs is reduced by looking in . last, so all well known legitimate programs will be found before . is checked.
You could also modify the systemwide settings via /etc/profile but that's probably not a good idea.
Because the current directory is not in PATH (unlike cmd on Windows). It is a security feature so that malicious scripts in your current directory are not accidentally run.
Though it is not advisable, to satisfy curiosity, you can add . to the PATH and then you will see that x.sh will work.
If you don't explicitly specify a directory then the shell searches through the directories listed in your $PATH for the named executable. If your $PATH does not include . then the current directory is not searched.
$ echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
This is on purpose. If the current directory were searched then the command you type could potentially change based on what directory you're in. This would allow a malicious user to place a binary named ls or cp into directories you frequent and trick you into running a different program.
$ cat /tmp/ls
rm -rf ~/*
$ cd /tmp
$ ls
*kaboom*
I strongly recommend you not add . to your $PATH. You will quickly get used to typing ./, it's no big deal.
You can't execute your file by typing simply
x.sh
because the present working directory isn't in your $PATH. To see your present working directory, type
$ pwd
To see your $PATH, type
$ echo $PATH
To add the current directory to your $PATH for this session, type
$ PATH=$PATH:.
To add it permanently, edit the file .profile in your home directory.
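For the permanent variant, the lines appended to ~/.profile would look something like this (keeping . at the end, for the reasons given in the other answers):
# appended to ~/.profile
PATH="$PATH:."
export PATH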

Inject parameter in hardcoded tar command

I'm using a linux software solution that uses the tar command to backup huge amounts of data.
The command, which is hardcoded into the binary that calls tar, is:
/bin/tar --exclude "/backup" --exclude / --ignore-failed-read -cvjf - /pbackup 2>>'/tar_err.log' | split -b 1000m - '/backup/temp/backup.tar.bz2'
There is no chance to change the command, as it is hardcoded. It uses bzip2 to compress the data. I experienced a strong performance improvement (up to 60%) when using the parameter --use-compress-prog=pbzip2, which utilizes all CPU cores.
I tried to trick the software by symlinking /bin/bzip2 to the pbzip2 binary, but when monitoring the process it still uses bzip2, as I think this is built into tar.
I know it is a tricky question but is there any way to utilize pbzip2 without changing this command that is externally called?
My system is Debian Squeeze.
Thanks very much!
Danger: ugly solution ahead; back up the binary before proceeding.
First of all, check if the hardcoded string is easily accessible: use strings on your binary, and see if it displays the string you said (probably it will be in several pieces, e.g. /bin/tar, --exclude, --ignore-failed-read, ...).
If this succeeds, grab your hex editor of choice, open the binary and look for the hardcoded string; if it's split in several pieces, the one you need is the one containing /bin/tar; overwrite tar with some arbitrary three-letter name, e.g. fkt (fake tar; a quick Google search didn't turn up any result for /usr/bin/fkt, so we should be safe).
The program should now call your /bin/fkt instead of the regular tar.
Now, put a script like this in /bin:
#!/bin/sh
# forward every argument to the real tar, adding the pbzip2 option up front
exec /bin/tar --use-compress-prog=pbzip2 "$@"
Call it with the name you chose before (fkt) and set the permissions correctly (they should be 755 and owned by root). This script just takes all the parameters it gets and calls the real tar, adding the option you need in front of them.
Another solution, that I suggested in the comments, may be creating a chroot just for the application, renaming tar to some other name (realtar, maybe?) and calling the script above tar (obviously now you should change the /bin/tar inside the script to /bin/realtar).
If the program is not updated very often and the trick works on the first try, I would probably go with the first solution; setting up and maintaining chroots is not fun.
Why not move /bin/tar to (say) /bin/tar-original?
Then create a script /bin/tar that does whatever you want it to do.
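A sketch of what that replacement /bin/tar could look like (the tar-original name comes from the suggestion above; GNU tar typically rejects -j combined with --use-compress-prog as conflicting compression options, so the wrapper also strips the j from short-option clusters like the hardcoded -cvjf):
#!/bin/sh
# Hypothetical wrapper installed as /bin/tar after moving the real
# binary to /bin/tar-original. It drops the 'j' from short-option
# clusters (e.g. -cvjf becomes -cvf) and lets pbzip2 do the compressing.
for arg in "$@"; do
    shift
    case "$arg" in
        --*) ;;                                      # long options: pass through untouched
        -*j*) arg=$(printf '%s' "$arg" | tr -d j) ;; # strip the bzip2 flag
    esac
    set -- "$@" "$arg"
done
exec /bin/tar-original --use-compress-prog=pbzip2 "$@"
A lone -j argument would need extra handling, but the hardcoded command only ever passes it inside -cvjf.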
