Catching rm -rf (Mac/Linux) in a script - protecting via password - not letting users delete files

I am building a script that eventually needs to install an application by copying folders/files to certain places on Mac and Linux. The user should not be able to delete those folders/files, or the script I pass to the user. Is there any way to achieve this?
For Mac - sudo chflags schg /path/to/file - can be set.
For Linux - I can set the immutable attribute via chattr +i /path/to/file.
The problem is that if the user knows the administrator password (the root password, I mean), they can change the attributes and then delete the files using rm -rf.
The question is: how do I catch them? How do I make sure that if the user runs rm -rf as root, it is caught and my files/folders do not get deleted?
Any pointers are greatly appreciated.
Thanks
EDITED:
Due to a clarifying note by the OP, the purpose here is to control network users who somehow got the root password, rather than subvert the will of the lawful owner of the machine.

You cannot do what you're trying to do, nor should you.
If the user has the root password, it means you trust her with the computer. If you want someone not to be able to do something, don't give them the root password.
The attributes you mention are good ways to prevent accidental deletion of files, and that is all you can expect to achieve.
Again, if you want them not to delete the files, don't give them root.
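For completeness, a minimal sketch of the accidental-deletion protection mentioned above, using the placeholder path from the question; note that root can clear these flags just as easily, which is exactly why they cannot stop someone who holds the root password:
sudo chflags schg /path/to/file      # macOS: set the system immutable flag
sudo chflags noschg /path/to/file    # macOS: root can remove it again
sudo chattr +i /path/to/file         # Linux: set the immutable attribute
sudo chattr -i /path/to/file         # Linux: root can remove it again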

Related

Adding a CentOS user by editing configuration files

I'm using CentOS 6 for my work. For educational purposes I want to add a user to CentOS by editing configuration files only. I know we can easily add a user with the useradd command and change their password with the passwd command, but I need to do it exactly the way mentioned above. To do this, I first need to understand which files I have to change.
By searching I found that the following files are responsible for handling users (example entries are sketched after the list):
/etc/passwd
/etc/group
/etc/shadow
/etc/gshadow
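For illustration, roughly what an entry looks like in each of those files for a hypothetical user student with UID/GID 1001 (the username, IDs, and hash are made-up placeholders, not taken from the post):
/etc/passwd (name:x:UID:GID:comment:home:shell):
student:x:1001:1001:Student User:/home/student:/bin/bash
/etc/group (name:x:GID:members):
student:x:1001:
/etc/shadow (name:hash:lastchange:min:max:warn:inactive:expire:):
student:$1$salt$hashedpassword:16000:0:99999:7:::
/etc/gshadow (name:password:admins:members):
student:!::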
What I did first was add a user with the useradd command and study the lines that command created in the files above. Then I tried to replicate them by manually editing the files with the vi editor. After replicating every line, I made a directory for my new user in /home. Then I rebooted the VM and tried to log in as the manually created user. I can log in without any problems, but the terminal shows bash-4.1$ instead of my username; when I use whoami, the terminal prints my username correctly.
My questions are:
Are there any other files I need to modify to add a user successfully?
By adding a user manually, what functionality does that user lose?
How do I create an MD5-hashed password for the manually created user?
I know this may seem a little odd to you, but I need to do it exactly this way. If the question is inappropriate, please let me know without downvoting.
Thanks
Those are the essentials. Obviously you'll also need to create a home directory for that user with proper ownership and permissions, as well as any additional user-specific resources; the default shell startup files normally come from /etc/skel, and missing them is the usual reason a prompt falls back to bash-4.1$.
You might also want to read up on Pluggable Authentication Modules (PAM), which provide authentication functionality on Linux beyond the passwd, group, and shadow files.
Also check out the getpwnam() library call.
=D Enjoy the POSIX!
See the Unix & Linux Stack Exchange question on password hash creation below.
REF: https://unix.stackexchange.com/questions/81240/manually-generate-password-for-etc-shadow
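As a minimal sketch of the remaining pieces, assuming the hypothetical user student with home directory /home/student (the names are placeholders, not from the question):
openssl passwd -1 'TheNewPassword'        # prints an MD5 ($1$...) hash for the /etc/shadow password field
mkdir /home/student                       # create the home directory
cp -a /etc/skel/. /home/student/          # seed it with the default startup files (.bashrc, .bash_profile)
chown -R student:student /home/student
chmod 700 /home/student
Paste the hash printed by openssl into the second field of the user's /etc/shadow line; copying /etc/skel is typically what brings back the normal [user@host]$ prompt instead of bash-4.1$.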

Directories owned by my user have directories in them owned by someone else

In my school directory (running Fedora), when I run ls -l I see that I have a ton of files and directories owned by me, but one specific directory is owned by someone else. I recall that a few months ago I tried copying that directory into my own, since that user had it at 744 permissions. For some reason that user now owns a directory inside my home directory with 700 permissions, so I cannot delete it. My home directory itself has 700 permissions.
Does anyone know why something like this could have happened and how I can prevent it from happening in the future? Also, how should I go about deleting these files in my home directory? If needed I can contact IT, but I want to see if there is anything I can do without contacting them.
(In the listing, yellow is my user and red is the foreign user.)
Two possible explanations, IMHO:
1. Check the permissions of your /home/YELLOW folder: if it has o+w or g+w, someone (the user listed as the owner of the directory) may have created it there themselves.
2. root did it. That doesn't make much sense, so if they did, it was probably by mistake (for example, while performing some backup-and-restore administration).
Normally, permission to delete things in Unix filesystems is taken from the parent folder, so you need write permission on a folder to create or remove files in it (unless the sticky bit is in action); directories are just a special type of file, so the same rule applies.
If the directory is empty, a simple rmdir p2Testing or rm -rf p2Testing would be enough. But if the directory has files and sub-directories, you won't have permission to modify or delete them (look at the drwx------), and only someone with more power will be able to do it for you (e.g. root, or the owner, if they still have +w in /home/YELLOW).
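A quick way to check which of the two cases applies, using the names visible in the post (YELLOW and p2Testing come from the screenshot, so adjust them to your own listing):
ls -ld /home/YELLOW                 # does the parent grant group/other write (g+w or o+w)?
ls -ld /home/YELLOW/p2Testing       # who owns the stray directory and what mode does it have?
rmdir /home/YELLOW/p2Testing        # works if it is empty, since deletion only needs write on the parent
If rmdir complains that the directory is not empty, the contents are protected by the drwx------ mode and you will need the owner or root (i.e. IT) to clear them out.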

WordPress unzip_file() results in mkdir_failed (permissions)

I am creating a WordPress framework that has an auto-update facility. When the system updates the framework, it downloads a .zip file (this works fine; it is stored in a temp folder) and afterwards tries to extract that zip file to a location within the theme. When unzipping, it throws an error complaining that it cannot create a directory ("mkdir_failed").
The parent of the target folder has permissions "775" for user "bitnami" and group "bitnami":
root@linux:/home/bitnami# ls -al /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
...
drwxrwxr-x 6 bitnami bitnami 4096 Oct 23 14:02 nexusframework
...
And I tried to put the "daemon" user in the "bitnami" group:
usermod -a -G bitnami daemon
Which indeed seems to be assigned correctly, as I see:
root@linux:/home/bitnami# id daemon
uid=1(daemon) gid=1(daemon) groups=1(daemon),1000(bitnami)
So; if the "daemon" user is in the "bitnami" group and the folder has 775 access rights, then why does it fail with "mkdir_failed"?
(Note: assigning "777" to the parent folder solves the problem, but that is not an option for security reasons.)
Thanks!
- Gert-Jan
Update:
After doing more investigation into Linux in general, I read that Linux automatically creates a 'private' group for each user (so a bitnami group for the bitnami user, etc.). I don't know whether the problem is caused by the fact that I was trying (and apparently succeeded?) to add another user to that same group.
Update:
See my answer below for how I resolved my issue.
OK, thanks for all the comments. I eventually decided not to continue my investigation but to head in another direction, as having to rely on the containing folder having "775" permissions would be unwise for the framework (many clients would have 755 instead, so getting this to work for a group is nice but would ultimately not solve my problem).
Instead, I investigated how WordPress itself downloads and unzips themes and decided to follow that route.
The key problem I was trying to tackle was to have the unzipped files owned not by the 'daemon' user but by the 'bitnami' user. The reason the process "impersonated" the daemon user was that I had manually told the code to use the 'direct' FS_METHOD (as it turns out, WP offers various ways of interacting with the filesystem, of which 'direct' is the easiest; see here). However, using the 'direct' FS_METHOD is the core reason for this problem, as it uses the credentials of the web server (the 'daemon' user in my case). By switching to a different FS_METHOD, the files are now unzipped into the folder as the correct 'bitnami' user (that user owns the containing folder and has the needed permissions; 775 or 755 wouldn't matter), and my problem is solved. Note that instead of writing directly to the filesystem, PHP will now use FTP (see here).
Does it work if you change the group of the folder to daemon?
chgrp -R daemon /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
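One more thing worth checking, sketched under the assumption that group membership is the issue: a running process only picks up new supplementary groups when it is restarted, so even after the usermod the already-running web server may still be operating without the bitnami group. You can test what a freshly started process as daemon is allowed to do (the path is the one from the question):
sudo -u daemon mkdir /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus/nexusframework/test-dir
sudo -u daemon rmdir /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus/nexusframework/test-dir
If this succeeds while WordPress still reports mkdir_failed, restarting the web server (so the daemon processes pick up the new group) is the next thing to try.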

Copy files from one user to another permissions issue

I have a user 'git' that owns a git repository. I'm trying to set up a post-update hook that copies the files in the repository to /var/www/site/. I'm still getting the hang of users and permissions in Linux; what is the best scheme for this situation?
You could either have git own /var/www/site/ and give it read, write, and execute permissions on that directory (a directory needs the execute bit to be entered, so think 750 or 755 rather than 660), or make /var/www/site/ world-writable (which is not a good idea, as any user could then copy, edit, etc. files in /var/www/site/). chown can help you change the owner, and chmod can help you change permissions.
You can also add git to the group that owns /var/www/site/ and make sure that group has read/write permissions on /var/www/site/ as well.
And if you ever need help with chown, chmod, or any other Linux command, man can help you out.
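A minimal sketch of the shared-group approach, assuming the web server's group is called www-data (the group name is an assumption; substitute whatever group your web server actually runs as):
sudo usermod -a -G www-data git          # let the git user act as part of the web server's group
sudo chgrp -R www-data /var/www/site     # hand the web root to that group
sudo chmod -R g+rwX /var/www/site        # group can read/write files and enter directories
sudo chmod g+s /var/www/site             # new files/dirs created inside inherit the www-data group
Inside the post-update hook itself, a common pattern is to deploy with something like GIT_WORK_TREE=/var/www/site git checkout -f rather than copying files by hand.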

Linux file permissions and Java problems (permission retention)

I run servers on my Linux server (Ubuntu) and there's a bit of a problem. It may seem simple to fix, but I don't think it is. The servers run under my username (server); however, other people access certain files as different users via FTP. Because the server runs under my username, whenever a plugin creates new files, the other users do not have permission to edit them.
I have tried putting the users into groups and then allowing group access to that folder (even for new files), but had no luck. Every time they need to edit the files, I have to chmod -R 777 the folder.
I thought about running the servers under their usernames, but that would produce complications. Is it actually possible to make new files retain the permissions of the parent (or a top-level) folder? None of the solutions I've found seem to work.
Not for users but for groups. You can:
chmod g+s parent_dir
chgrp shared_group parent_dir
If you create files inside it, those files will have the group of the folder (shared_group).
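Note that the setgid bit only controls which group new files belong to; their permission bits still depend on the creating process's umask. If the group also needs write access to every new file, a default ACL is one way to get that, sketched here with the same parent_dir and shared_group names (requires a filesystem with ACL support):
setfacl -d -m g:shared_group:rwX parent_dir     # new files/dirs created inside inherit group rwX
setfacl -R -m g:shared_group:rwX parent_dir     # optionally apply the same access to what already exists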
