How to protect a Linux server from accidental file deletion - linux

About a week ago I ran this command on my server: mv /* .. I was trying to move all the files from my current directory to the parent directory, but ended up wrecking my whole server :)
Is there a way to prevent this from happening again?

I can recommend using a minimalistic file manager like Midnight Commander to transfer files.
However, the answer to your question is no: if you're working with root permissions, you have every ability to destroy your system.
With great power comes great responsibility - Benjamin Franklin Parker, known as Uncle Ben
You can limit the damage you can do by working as a user other than root.
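A minimal sketch of that approach, assuming a Debian/Ubuntu-style system; the account name deploy is a placeholder:
# Create an unprivileged account and do day-to-day file shuffling as that user;
# 'deploy' is a placeholder name.
sudo adduser deploy
su - deploy
# As this user, a stray 'mv /* ..' fails on system paths it does not own
# instead of taking the whole server with it; reach for sudo only when a
# single command genuinely needs root.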

Backups!!!
As mentioned by Chris, you can set up a system with user permissions, group permissions, .... On top of this, I have the impression that you missed a dot: mv ./... instead of mv /... (I hope you did not set the root directory / as your home directory? In case you did, change this immediately).
But most of all: regular backups!!! UNIX/Linux doesn't have a system restore the way Windows does, nor is there a recycle bin. Therefore, regular backups (to another machine, obviously) are a must.
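For example, a minimal sketch of a nightly copy with rsync over SSH, assuming a reachable second machine; the host, user, and paths are placeholders:
# crontab entry (add via 'crontab -e'): copy /etc and /home to another box at 02:00 every night
0 2 * * * rsync -a /etc /home backupuser@backuphost:/backups/myserver/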

Related

VSCode - what exactly --user-data-dir is specifying

What exactly is --user-data-dir specifying?
From the --help output:
--user-data-dir <dir> Specifies the directory that user data is kept in. Can be used to open multiple distinct instances of Code.
Is it storing some temporary files there?
Is it about the access path to config files?
I am asking because I want to run VSCode (or, to be exact, Codium) with sudo (I want to edit a system config file that is read-restricted), which requires this parameter for reasons unclear to me.
Since sudo-ing VS Code at command-line launch is only a thing on Linux, this question assumes you're on Linux, and restricts its context to Linux.
TL;DR
To answer your question directly: --user-data-dir points to a folder, unique to each user, where all personalisation except extensions resides.
Why does sudo-ing Code need --user-data-dir?
In a typical installation of the OS and VS Code, this folder is owned by the regular user and cannot be accessed by the superuser.
Hence a VS Code session running with effective UID 0 tries, and fails, to write to the invoking user's (not the superuser's) config folder. This is what the error message prevents, by forcing the user to provide an explicit root-accessible folder.
Detailed Explanation
There are two main folders that VS Code uses to store configuration data:
An extensions folder (self explanatory) — contained in ~/.vscode[1]
user-data-dir; a folder for all other personalisable things (settings, keybindings, GitHub/MS account credential caches, themes, tasks.json, you name it)[2]
On Linux the latter is located in ~/.config/Code, and has file permissions mode 0700 (unreadable and unwritable by anybody other than the owner).
This causes issues, as VS Code tries and fails to access said directory. The logical solution is to either modify the permissions (recursively) of ~/.config/Code to allow root access, or — arguably saner and objectively more privacy-respecting — to use a separate directory altogether for the sudo'ed VS Code to access.
The latter strategy is what the community decided to adopt at large; this commit from 2016 started making it compulsory to pass an explicit --user-data-dir when sudo-ing VS Code on Linux.
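A quick sketch of what that looks like in practice on Linux; the alternate data directory below is just an example path of my own, not a VS Code default:
# Default user data dir on Linux, normally mode 700 and owned by your user:
ls -ld ~/.config/Code
# Give the elevated instance its own, root-accessible folder instead.
# The path is just an example; recent builds also require --no-sandbox when run as root.
sudo code --user-data-dir=/root/.vscode-root --no-sandbox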
Should You be Doing This in the First Place?
Probably not! If your goal is to modify system config files, then you could stick to an un-elevated instance of Code, which would prompt you to Save as Admin... when you try to save. See this answer on Ask Ubuntu on why you probably want to avoid elevating VS Code without reason (unless you understand the risks and/or have to), and this one on the same thread on what you could do instead.
However, if the file in question is read-restricted to root as well, as in the OP's case, then you hardly have a choice 😕; sudo away! 😀
[1] & [2]: If you want to know more about the above two folder paths on different OSes, see [1] and [2]
Hope this was helpful!
It might be helpful to know how to find the default location of the user-data-dir on any OS. It can be found with this command:
Developer: Open User Data Folder
workbench.action.openUserDataFolder
which is in the Insiders Build v1.75 now and should reach Stable soon. It opens your OS file explorer app at that location.

Catching rm -rf (mac/linux) in script - protecting via password - not letting delete files

I am looking at building a script which eventually needs to install an application, copying folders/files to certain places on Mac and Linux. The user should not be able to delete those folders/files, or the script I pass to the user. Is there any way to achieve this?
For Mac, sudo chflags schg /path/to/file can be set.
For Linux, I can set the immutable attribute via chattr +i /path/to/file.
The problem is, if the user knows the administrator password (the root password, I mean), then they can change the permissions and then delete using rm -rf.
The question is: how do I catch them? How do I make sure that if the user runs rm -rf as root, it is caught and my files/folders do not get deleted?
Any pointers are greatly appreciated.
Thanks
EDITED:
Due to a clarifying note by the OP, the purpose here is to control network users who somehow got the root password, rather than subvert the will of the lawful owner of the machine.
You cannot do what you're trying to do, nor should you.
If the user has the root password, it means you trust her with the computer. If you want someone not to be able to do something, don't give them the root password.
The attributes you mention are good ways to prevent accidental deletion of files, and it is all you can expect to achieve.
Again, if you want them not to delete the files, don't give them root.
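That said, the flags from the question are a reasonable guard against accidental deletion (not against a determined root user); a short sketch, with /path/to/file as a placeholder:
# Linux (ext* filesystems): mark the file immutable; even root must clear the flag first
sudo chattr +i /path/to/file
lsattr /path/to/file           # verify that the 'i' attribute is set
sudo chattr -i /path/to/file   # undo when a change is genuinely needed

# macOS: the flag from the question; uchg is the user-level variant and is
# usually easier to undo than the system-level schg
sudo chflags schg /path/to/file
sudo chflags noschg /path/to/file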

WordPress unzip_file() results in mkdir_failed (permissions)

I am creating a WordPress framework that has an auto update facility. When the system updates the framework, it downloads a .zip file (works ok, stored in a temp folder), and afterwards tries to extract that zip file to a place within the theme. When unzipping, it throws an error complaining about not being able to create a directory ("mkdir_failed").
The parent of the target folder has permissions "775", with user "bitnami" and group "bitnami":
root@linux:/home/bitnami# ls -al /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
...
drwxrwxr-x 6 bitnami bitnami 4096 Oct 23 14:02 nexusframework
...
And I tried to put the "daemon" user in the "bitnami" group:
usermod -a -G bitnami daemon
Which indeed seems to be assigned correctly, as I see:
root@linux:/home/bitnami# id daemon
uid=1(daemon) gid=1(daemon) groups=1(daemon),1000(bitnami)
So: if the "daemon" user is in the "bitnami" group and the folder has 775 access rights, why does it fail with "mkdir_failed"?
(Note: assigning "777" to the parent folder solves the problem, but this is not an option for security reasons.)
Thanks!
- Gert-Jan
Update:
After doing more investigation on Linux in general, I read that Linux automatically creates a 'private' group for each user (so bitnami group for the bitnami user, etc.). I don't know if the problem is caused by the fact that I was trying (and apparently succeeded?) to add other users to the same group or not.
Update:
See my answer below on how I resolved my issue.
Ok, thanks for all the comments. I eventually decided not to continue my investigation and to head in another direction instead, as having to rely on the containing folder having "775" permissions would be unwise for the framework (many clients would have 755 instead, so getting this to work for a group is nice but would ultimately not solve my problem).
Instead I further investigated how WordPress themselves download and unzip themes and decided to follow that route.
The key problem I was trying to tackle was to have the unzipped files owned not by the 'daemon' user but by the 'bitnami' user. The reason it "impersonated" the daemon user was that I had manually told the code to use the "direct" FS_METHOD (as it turns out, WP offers various ways to interact with the filesystem, the easiest being 'direct', see here). However, using the 'direct' FS_METHOD is the core reason I had this problem, as that method uses the credentials of the webserver (the 'daemon' user in my case). By using a different FS_METHOD, I am now able to unzip the files into the folder as the correct 'bitnami' user (since that user owns the containing folder and has permissions on it; 775 or 755 wouldn't matter), so my problem is solved. Note that instead of writing directly to the filesystem, PHP will now use FTP (see here).
Does it work if you change the group of the folder to daemon?
chgrp -R daemon /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
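A fuller sketch of that suggestion, reusing the path from the question and assuming the web server really runs as 'daemon'; note that a process started before the earlier usermod only picks up its new group membership after it is restarted:
THEME_DIR=/opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
sudo chgrp -R daemon "$THEME_DIR"    # hand the tree to the web server's group
sudo chmod -R g+w "$THEME_DIR"       # and make sure that group may actually write
ls -ld "$THEME_DIR/nexusframework"   # expect something like: drwxrwxr-x ... bitnami daemon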

Where to put SVN repository directory in Linux?

I am setting up a new SVN server on Ubuntu Linux. Where is a good place (best practice) to put the repositories? Should I create a new user? The server will be accessed via http://, so there is no need to create user accounts etc. (as was the case for svn://).
Many thanks in advance
I like putting things under /srv, as it seems to match the definition in the FHS.
The new location for service data according to the FHS is /srv, so under there would probably be best.
I've always used /var/svn or /var/lib/svn. While it doesn't quite line up with the FHS, it matches more closely what other apps actually do (on RHEL5, Apache uses /var/www; PostgreSQL uses /var/lib/pgsql). As suggested, /srv/svn looks like another good spot, and you get to say "Look, I'm following the standard!"
Using either /usr/svn or /usr/local/svn would probably be considered bad form, and all your Linux friends will laugh at you behind your back :-)
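As a concrete sketch of the /srv suggestion, assuming Apache with mod_dav_svn on Ubuntu; the repository name myproject is a placeholder:
sudo mkdir -p /srv/svn
sudo svnadmin create /srv/svn/myproject
# Apache (www-data on Ubuntu) must be able to read and write the repository it serves over http://
sudo chown -R www-data:www-data /srv/svn/myproject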
I host my SVN via the Apache module, so I usually put it under my Apache user, at the same level as my htdocs (not under htdocs, but at the same level), and set up specific authentication just for SVN users.
If you have a lot of projects, dedicate another volume to SVN since it will grow.
I guess I'm kind of old school, but I like to put things (Apache, Tomcat, etc.) in /usr/local. So I will usually create repositories in /usr/local/svn and have the Apache module reference that path in httpd.conf.
/home/username/Dropbox
This way you back up the SVN repository and can access it on a Windows machine as well.

Not able to delete directory

I am having frequent problems with my (shared) web hosting.
I am not able to delete or change permissions for a particular directory. The response is:
Cannot delete. Directory may not be empty
I checked the permissions and they look OK. There are hundreds of files in this folder which I don't want.
I contacted support and they solved it, saying it was a permission issue. But it reappeared. Any suggestions?
The server is Linux.
You can't rmdir a directory with files in it. You must first rm all files and subdirectories. Many times, the easiest solution is:
$ rm -rf old_directory
It's entirely possible that some of the files or subdirectories have permission limitations that might prevent them from being removed. Occasionally, this can be solved with:
$ chmod -R +w old_directory
But I suspect that's what your support people did earlier.
This could also be because your FTP client might not be showing hidden files (like cache files, or any hidden files your application might create), while those hidden files are preventing you from deleting the directory. (Though in your case I am not sure whether this is the cause; it could also be a permission issue with your hosting provider, e.g. the web server running as another user (like apache or www) combined with your directories having global write perms.)
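If you have shell access, a quick way to check for such hidden entries (old_directory is a placeholder):
$ ls -la old_directory   # dotfiles show up here even when the FTP client hides them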
I assume that's a response from an FTP server?
Usually, a message from an FTP server really means it. If it says the directory is not empty, there might be files you cannot see that exist in the directory, which may be one of:
Files created by your PHP/JSP/ASP/whatever scripts, which may run under a different user account and which you therefore may not be able to see/delete
Permission conflicts: is your hosting's web interface running under your FTP account? There might be conflicting permissions if you manage some files from the web interface and then later via FTP.
Hosting server/operating system files created unintentionally, e.g. from the hosting's web interface
If it comes to that, write a one-time throw-away script that deletes the files and that directory, then upload and execute it.
And just to be sure: some FTP servers don't support deleting a directory directly; you need to delete all the files first. Is that the case?
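A sketch of such a throw-away script; the path is a placeholder, and rm -rf is unforgiving, so double-check it before uploading and running:
#!/bin/sh
# One-time cleanup helper to upload and run on the host.
TARGET="/path/to/old_directory"   # placeholder: point this at the stubborn directory
rm -rf "$TARGET"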
