How to prevent accidentally deleting important files as root user [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I need to use root for the installation and maintenance of the database and application server that I am playing around with. The database and application server create files and directories under /usr that I need to manipulate. I have accidentally deleted important files in /usr in the past, making the system act funny. I was wondering how to prevent this from happening in the future; I have an rm -rf habit developed from years of use as a non-root user.
I thought of moving /usr to /usr_bkp and creating a symlink /usr pointing to /usr_bkp, but I am afraid that moving or removing /usr even temporarily will have unpredictable consequences. What is the best-known method to avoid such errors?

The purpose of root is to be able to do anything it wants. However, you could use the -i option of rm so that you have to confirm before files are deleted.

"I mostly work as root user these days and have accidentally deleted important files": those two don't go together. Removing files by mistake is one of the actions with immediate and visible consequences (there are others that might only surface later and require lots of time to debug).
Create a separate user (this is what I'd suggest until you gain more experience) and give it sudo rights (the good thing is that extreme care is then only required when using sudo, not by default).
As for the second part:
I'm pretty sure that symlinking /usr is not something any Linux administration manual recommends.
rm -rf /usr/SOME_FILES_OR_FOLDERS, when /usr is a symlink to /usr_bkp (or whatever its name could be), will still delete the files/folders located under /usr, so there is no protection there.
The only change I see is that every time something under /usr is accessed, the symlink will have to be resolved first instead of the path being followed directly (each time?); more operations require more time, so performance decreases.
(Unless I'm missing something obvious,) doing the symlink trick would also introduce some "traps": mv, ln, ... are binaries that (by default) reside in /usr/bin, so the first command (mv) will succeed, but then you'll no longer have ln available (unless you specify its full new path). In other words, you'd be sawing off the branch you're sitting on. Regarding running apps, I think you'll be OK (at most a reboot is required).
So, considering the four items above, I advise not using root (at least for now).

What x4rkz said, but also: if you're concerned about rm'ing files as root, create an alias in your shell's rc file to always confirm the removal of files. In Bourne-shell derivatives this can be accomplished with:
alias rm='/usr/bin/rm -i'
Test:
# alias rm='/usr/bin/rm -i'
# rm test
/usr/bin/rm: remove regular empty file ‘test’? y
Of course, this won't help you if you write a script that doesn't first load that alias.
Another failure mode I've seen with new admins is misusing output redirection and overwriting a file. In Bourne-shell derivatives, you can set the noclobber option to prevent this:
# set -o noclobber
This is not 100% portable so ensure this is supported before relying on it.
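A minimal sketch of how noclobber behaves in bash (the file name comes from mktemp and is purely illustrative):

```shell
set -o noclobber
f=$(mktemp)                        # mktemp leaves an existing (empty) file behind
echo second > "$f" 2>/dev/null || echo "overwrite blocked"   # plain > now refuses
echo second >| "$f"                # >| deliberately overrides noclobber
cat "$f"                           # prints: second
```

The `>|` operator is the documented escape hatch for the rare case where you really do want to clobber an existing file.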

OK so you are working with a database and an application.
Write a script. Then don't touch the files unless you use the script.
Just put into the script every operation that you need to do.
./my-script clean
./my-script install
./my-script run
etc...
This way the files will be deleted by clean and installed by install, and the application will be restarted the same way every time, with no danger of deleting the wrong thing.
Also, consistency is something you want as a developer. No more wondering why the update failed because you forgot to copy the right files into place or forgot to remove the old binaries.
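A hedged sketch of such a wrapper script; APP_DIR, the dist/ directory, and the server binary are placeholders for whatever your database/application server actually uses:

```shell
#!/bin/sh
# my-script: one entry point for every routine (and destructive) operation.
# APP_DIR is a placeholder; point it at your real installation directory.
APP_DIR="${APP_DIR:-$HOME/myapp}"

clean() {
    # Only ever delete this one, known directory -- never a free-form rm -rf.
    rm -rf "$APP_DIR/build"
}

do_install() {
    mkdir -p "$APP_DIR/build" && cp -r dist/. "$APP_DIR/build/"
}

run() {
    "$APP_DIR/build/server"
}

case "${1:-help}" in
    clean)   clean ;;
    install) do_install ;;
    run)     run ;;
    *)       echo "usage: $0 clean|install|run" ;;
esac
```

Because the only rm -rf in the system lives inside clean(), with a hard-coded target, a slip of the fingers at an interactive root prompt can no longer take /usr with it.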

Related

Opening a terminal ends in an infinite loop [closed]

Closed 1 year ago.
So I've been looking around on the web for a while now, but this seems to be a tricky task.
I intended to change my default terminal on a Mint system from GNOME Terminal to Alacritty.
I had Alacritty installed before on the same system and it seemed to work fine.
I have not set up my root user and do not know its password, which makes this extra hard!
To change the default global behavior (e.g. when pressing Ctrl+Alt+T), modifying /etc/passwd seemed reasonable to me.
This is what the last line looks like now: user:x:1000:1000:User,,,:/home/user:/usr/bin/alacritty
But: if I want to open a shell now, almost a thousand instances appear once the command is triggered, and after a short while the whole system crashes.
I don't know how to reset to the default setting since I need a shell and that tool is broken...
Here is what I have tried so far:
Trying to use the shell available at user login: the login ends in an infinite loop.
Trying to open /etc/passwd in the graphical environment: I cannot modify the file (read-only).
So here is what I wish for: to undo this without reinstalling the operating system.
Thanks for your help and advice!
The field you are trying to change in /etc/passwd is used to set the per-user login shell (usually /bin/bash on Linux). Changing the terminal emulator can instead be done either with update-alternatives (system-wide, if you have root) on Debian-based systems, or through window-manager-specific configuration in general (GNOME, KDE, Xmonad, etc.).
Log in as root and change the /etc/passwd file back to using a valid shell for the user in question. Not sure how you don't have a root user. If you don't have the root password, follow the normal password recovery process: boot from a live or rescue CD. If it doesn't mount the file system for you, mount it manually, then edit /mnt/etc/passwd (where /mnt is where the original file system was mounted). Unmount and reboot your system normally.
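A minimal sketch of the edit itself, demonstrated on a copy of the broken line from the question (on the rescue system you would run the sed command against the mounted /mnt/etc/passwd instead):

```shell
# Work on a scratch copy of the broken passwd line.
f=$(mktemp)
printf 'user:x:1000:1000:User,,,:/home/user:/usr/bin/alacritty\n' > "$f"
# Replace the last (login shell) field of the "user" line with /bin/bash.
sed -i 's#^\(user:.*:\)[^:]*$#\1/bin/bash#' "$f"
cat "$f"   # prints: user:x:1000:1000:User,,,:/home/user:/bin/bash
```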

What is the quickest way to change your current location to your home directory? [closed]

Closed 4 years ago.
I'm using Linux. I'm currently trying to change my current location to my home directory. Does anyone have a clue as to what command I need to use?
The home directory has two different meanings, which usually coincide: the value of the environment variable HOME (see environ(7)), and the pw_dir field given by password-related APIs like getpwuid_r(3) for your current user id (obtained by getuid(2)).
At login, the HOME environment variable is set to the pw_dir and the effective and real user ids are changed.
To change your working directory to your HOME use chdir(2) on the result of getenv("HOME"). Notice that the working directory is not related to your PATH variable (which might mention .; but this could be a security issue), and each process (including your shell) has its own working directory (see also credentials(7), fork(2), execve(2), path_resolution(7), glob(7)).
To change your home directory (a very unusual requirement) you could edit -with root permissions- the /etc/passwd file carefully (see passwd(5)) then reboot your machine (or at least restart some login shell).
The bash cd builtin does exactly that (it changes the working directory of your shell process with the chdir system call). When you use it without arguments, it changes your working directory to your home directory, obtained from getenv("HOME").
If performance matters that much inside some C program, you might cache (keep in some global variable, initialized once) the result of getenv("HOME") and use chdir on that.
If your question is simply about using your bash shell, just type:
cd
and that should (unless cd is badly aliased or redefined as some function) change the working directory of your shell to your home directory. It is done in a few milliseconds (so should be quick enough) at most (I can't easily think of a way to measure reliably how fast the cd shell builtin is; you could try time bash -c 'cd; pwd' or time bash -c 'cd; times' but that measures much more than just the cd and gives at most a few milliseconds on my desktop PC).
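For illustration, in a script or interactive shell (assuming HOME is set, as it normally is):

```shell
cd /var    # go somewhere else first
cd         # no argument: change to $HOME
pwd        # now prints the same path as "$HOME"
```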
PS. the use of "quickest way" and "current path location" in your question is unclear and confusing. I strongly invite you to edit your question to improve its wording and motivate it and give more context.
"I'm currently trying to set my current path location as my home directory"? If you actually mean the PATH variable, you can use export PATH.
For example, run the command below in a terminal:
export PATH=$PATH:$HOME/user
This change applies only to the current session; to make it permanent, add it to your .bashrc file.

On Linux, could any system event prevent the copy command from working? [closed]

Closed 7 years ago.
Trying to determine if there is a scenario in which the copy command may fail.
I don't mean something like $PATH not being set or the file missing, but more along the lines of: the file is being edited, the file is a binary being accessed by a system process, or it's a database file that is being accessed.
Some basic testing seems to indicate the cp command works fine even if the file is being edited, but I am not sure if there are any OS commands or scenarios in which a cp would fail. For example, what if it's a database file being updated/saved at the exact time the cp takes place? Something like this would be hard to test, yet may occur.
Would there be a list of scenarios in which the system prevents a cp command from executing?
There are plenty of ways cp might not do what you want.
A particular example that comes to mind: if you have a process that can read the destination of cp at any given time, there is no way to guarantee the reader won't start reading before cp is done copying, and so it may end up reading a partially written file. On small-ish files this race condition may always work out in your favor, but it is still there.
The only way you can have a file that is always updated "atomically" from the perspective of readers, such that they always get either the old version or the new version and never a partial new version, is via the rename system call, which is what mv uses for files on the same volume/partition.
Implementing cp takes, at the very least, five system calls: 2x open, 1x sendfile, 2x close.
So just be aware that even if cp succeeds, there can still be race conditions and unpredictable behavior.
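The rename-based update the answer alludes to can be sketched like this (file names are illustrative):

```shell
dir=$(mktemp -d)
printf 'old\n' > "$dir/target.txt"        # the file readers may open at any time
# Write the new version to a temp file on the SAME filesystem as the target...
tmp=$(mktemp "$dir/target.XXXXXX")
printf 'new\n' > "$tmp"
# ...then rename it into place: readers see either "old" or "new", never a mix.
mv "$tmp" "$dir/target.txt"               # mv uses rename(2) here; the swap is atomic
cat "$dir/target.txt"                     # prints: new
```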

How to store data permanently in the /tmp directory in Linux [closed]

Closed 1 year ago.
Is there any way to store data permanently in the Linux /tmp directory? As far as I know, Linux clears its /tmp directory when the system is rebooted, but I want the data to persist.
Like @bereal said, this defeats the purpose of the /tmp directory. Let me quote the Linux Filesystem Hierarchy Standard:
The /tmp directory must be made available for programs that require temporary files.
Programs must not assume that any files or directories in /tmp are preserved between invocations of the program.
You'll find a better place to store permanent data.
Since it's Linux, you are free to do what you want (as root). When /tmp is cleared depends on your system and can be changed; there is no particular magic involved. A good summary seems to be here: https://serverfault.com/questions/377348/when-does-tmp-get-cleared.
Of course, if you are root, you can set up an entirely different global directory, say "/not-quite-tmp" or such. But I assume that some programs not under your control write to /tmp, and you want to inspect or in any case persist those files.
While you are trying to do the wrong thing, it is still possible.
The /tmp directory is cleared according to the TMPTIME setting. The default is apparently 0, which means "clear on every startup".
The value can be changed in /etc/default/rcS (the value is given in days).
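For illustration, the setting might look like this (this applies to older Debian/Ubuntu releases that use /etc/default/rcS; newer systemd-based systems manage /tmp via tmpfiles.d instead):

```shell
# Excerpt from /etc/default/rcS (assumed layout):
# keep files in /tmp for 7 days instead of clearing them at every boot;
# a negative value such as -1 disables clearing entirely.
TMPTIME=7
```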

Creating temp files in scripts: Advantages of mktemp over touch-ing a file? [closed]

Closed 6 years ago.
Although I can create a temp file with either mktemp or touch, how specifically does mktemp benefit reliability and/or security in scripting over just manually touching a file?
mktemp randomizes the name. It is very important from the security point of view.
Just imagine that you do something like:
echo something > /tmp/temporary-file
in your root-running script.
And someone (who has read your script) does
ln -s /etc/passwd /tmp/temporary-file
before.
This results in /etc/passwd being overwritten, which can potentially mean various unpleasant things, ranging from the system becoming broken to the system becoming hacked (when the input something is carefully crafted).
The mktemp command could help you in this situation:
TEMP=$(mktemp /tmp/temporary-file.XXXXXXXX)
echo something > ${TEMP}
Now this ln -s /etc/passwd attack will not work.
A brief insight into the history of mktemp: the mktemp command was invented by the OpenBSD folks and first appeared in OpenBSD 2.1 back in 1997. Their goal was to improve the security of shell scripts. Previously the norm had been to append $$ to temporary file names, which was absolutely insecure. Now all UNIX/Linux systems have either mktemp or an alternative, and it has become a de facto standard. Funnily enough, the mktemp C function was deprecated for being insecure.
You often want a "scratchpad file" (or directory). Moreover, you might need several such files at the same time, and you don't want to bother figuring out how to name them so there's no conflict.
"mktemp" fits the bill :)
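A common scratchpad pattern, sketched with an automatic cleanup trap (the file names and the tr step are illustrative):

```shell
# Create a private scratch directory; mktemp -d picks an unpredictable name
# and creates the directory with mode 700.
scratch=$(mktemp -d)
# Remove it automatically when the script exits, even on error.
trap 'rm -rf "$scratch"' EXIT

printf 'intermediate data\n' > "$scratch/stage1"
tr 'a-z' 'A-Z' < "$scratch/stage1" > "$scratch/stage2"
```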
One more extra reason: not all systems use /tmp as temporary directory.
For example, https://termux.com/ , for technical reasons (it runs as a set of processes inside Android), has a different, long path as its tmp directory.
Scripts that create temporary files or directories using mktemp will be portable and also work in such special environments.
OK, actually it is written clearly in the man pages.
mktemp - create a temporary file or directory.
Create a temporary file or directory, safely, and print its name.
Creating a file or directory "safely" means no other user can access it; that's why its permissions are 600.
touch - change file timestamps
It simply changes the timestamps of a file if it already exists, and creates the file if it does not. But the file permissions are still 644 by default.
For more detail check following man pages:
http://linux.die.net/man/1/mktemp
http://linux.die.net/man/1/touch
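You can observe the difference yourself; the 644 result for touch assumes the common default umask of 022:

```shell
umask 022                            # assume the usual default umask
dir=$(mktemp -d)
f_mk=$(mktemp "$dir/mk.XXXXXX")      # created atomically with mode 600
touch "$dir/touched"                 # created with 666 minus umask, i.e. 644
stat -c '%a' "$f_mk" "$dir/touched"  # prints: 600 then 644
```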
At least in the bash shell you can do something like:
dirpath="/tmp/dir1-$$/dir2-$$"
mkdir -p "$dirpath"
chmod -R 0700 "/tmp/dir1-$$"
for instance.
