Directories owned by my user have directories in them owned by someone else - linux

On my school account (running Fedora), when I ls -l my home directory I see a ton of files and directories owned by me, but one specific directory is owned by someone else. I recall that a few months ago I tried copying that directory into my own, since that user had given it 744 permissions. For some reason that user now owns the directory inside my home directory, with 700 permissions, so I cannot delete it. My home directory itself has 700 permissions.
Does anyone know why something like this could have happened, and how I can prevent it from happening in the future? Also, how should I go about deleting these files from my home directory? If needed I can contact IT, but I want to see if there is anything I can do without contacting them.
(In the screenshot: yellow is my user, red is the foreign user.)

Two possible explanations, IMHO:
1. Check the permissions of your /home/YELLOW folder: if it has o+w or g+w, someone (the user listed as the owner of the directory) may have created it there.
2. root did it. That doesn't make much sense for you, so if root did it, it was probably by mistake (for example, while performing some backup-and-restore administration).
Normally, permission to delete things in Unix filesystems comes from the parent folder: you need write permission on a folder to create or remove files in it (unless the sticky bit is in effect). Directories are just a special type of file, so the same rule applies.
If the directory is empty, a simple rmdir p2Testing or rm -rf p2Testing should be enough. But if the directory contains files and sub-directories, you won't have permission to modify or delete them (note the drwx------), and only someone with more power will be able to do it for you (e.g. root, or the owner, if they still have +w on /home/YELLOW).
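
A minimal sketch of the diagnosis and cleanup attempt, assuming the directory is named p2Testing as above (the ls output in the comments is illustrative):

ls -ld ~              # drwx------ yellow ... : you have write on your home
ls -ld ~/p2Testing    # drwx------ red    ... : you cannot look inside it
rmdir ~/p2Testing     # works only if empty: removing the entry needs write
                      # permission on ~, which you have
rm -rf ~/p2Testing    # fails if non-empty: deleting its contents needs write
                      # permission on p2Testing itself, which only its owner
                      # (or root) has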

Related

Catching rm -rf (mac/linux) in script - protecting via password - not letting delete files

I am looking at building a script which eventually needs to install an application, copying folders/files to certain places on Mac & Linux. Users should not be able to delete those folders/files, or the script I pass to them. Is there any way to achieve this?
For Mac, sudo chflags schg /path/to/file can be set.
For Linux, I can set the immutable attribute via chattr +i /path/to/file (the closest equivalent to schg).
The problem is, if the user knows the administrator password (the root password, I mean), they can change the permissions and then delete using rm -rf.
The question is: how do I catch them? How do I make sure that if a user runs rm -rf as root, it is caught and my files/folders do not get deleted?
Any pointers are greatly appreciated.
Thanks
EDITED:
Due to a clarifying note by the OP, the purpose here is to control network users who somehow got the root password, rather than subvert the will of the lawful owner of the machine.
You cannot do what you're trying to do, nor should you.
If the user has the root password, it means you trust her with the computer. If you want someone not to be able to do something, don't give them the root password.
The attributes you mention are good ways to prevent accidental deletion of files, and it is all you can expect to achieve.
Again, if you want them not to delete the files, don't give them root.
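
For the accidental-deletion protection the answer concedes is achievable, a minimal sketch; the file path is a placeholder, and note that root can always clear the flags again:

sudo chattr +i /opt/myapp/installer.sh       # Linux: immutable attribute
rm -f /opt/myapp/installer.sh                # fails: Operation not permitted
sudo chattr -i /opt/myapp/installer.sh       # ...but root can always clear it

sudo chflags schg /opt/myapp/installer.sh    # Mac: system-immutable flag
sudo chflags noschg /opt/myapp/installer.sh  # root can clear this too (may
                                             # require lowering securelevel)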

Set permission to create a folder that contains a website

I am creating a folder to hold my website. What permissions should I set when creating it?
mkdir -m 777 MyWebSite
will give Read, Write & eXecute access to everyone.
So what is the best set of permissions I should use when creating this folder?
Best is 755 for directories and 644 for files. Some directories (like uploads in WordPress) need 777, because any user can send files there, but that BIG permission should be set only for such a directory, not for everything. Normally read for files, and read plus execute for directories, is enough for almost any file and directory (only those upload directories need more).
For a web document root, the best permissions are 755 for directories/folders and 644 for files. The key is making sure the directories/folders as well as the files are owned by the user the web server runs as, which on Linux systems is usually www-data. I actually gave a fairly detailed explanation of why 777 permissions are not good for any reason over here; here is an edited version for your question.
When you set permissions to 777, it means that anyone with any level of access to your machine can read, write & execute the file. So if your site gets hacked, a hacker can then use the execute permission to launch scripts to get deeper into your system. Or someone else on the system, since they can read, write & execute the file, can simply delete your files without you ever knowing.
Setting directories to 755 and setting files to 644 is the best way to go as long as the ownership of the file is solid & correct. 644 permissions basically break down as follows:
The first 6 means the owner of the file can read & write to it.
The next 4 means members of the group connected to that file can only read it.
The next 4 means others, who are neither the owner nor a member of the group, can only read it.
As for 755 they are best for directories/folders because directories/folders need to have execute rights to allow you to view the contents inside of them. So it breaks down like this:
The first 7 means the owner of the file can read, write & execute it.
The next 5 means members of the group connected to that directory/folder can only read & execute it.
The next 5 means others, who are neither the owner nor a member of the group, can only read & execute it.
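
A short sketch applying that scheme; the /var/www/MyWebSite path and the www-data owner are assumptions for illustration:

sudo chown -R www-data:www-data /var/www/MyWebSite          # web server's user/group
sudo find /var/www/MyWebSite -type d -exec chmod 755 {} +   # dirs: rwxr-xr-x
sudo find /var/www/MyWebSite -type f -exec chmod 644 {} +   # files: rw-r--r--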

Copy files from one user to another permissions issue

I have a user 'git' that owns a git repository. I'm trying to set up a post-update hook that copies the files in the repository to /var/www/site/. I'm still getting the hang of users and permissions in Linux; what is the best scheme for this situation?
You could either have git own /var/www/site/ and give it 770 permissions on it (read, write, and execute; a directory needs the execute bit before its contents can be accessed, so 660 would not work), or make /var/www/site/ world-writable (which is not a good idea, as any user could then copy, edit, etc. files in /var/www/site/). chown can help you change the owner, and chmod can help you change file permissions.
You can also add git to the group who owns /var/www/site/, and make sure that the group has read/write permissions on /var/www/site/ as well.
And if you ever need help with chown, chmod, or any other Linux command, man can help you out.
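
A hedged sketch combining the two suggestions; the group name www-deploy and the branch are assumptions, not from the question:

# One-time setup, as root: a shared group between 'git' and the web root.
groupadd www-deploy
usermod -aG www-deploy git
chgrp -R www-deploy /var/www/site
chmod -R g+rwX /var/www/site       # X: execute on directories only

# Then hooks/post-update in the bare repository (executable, run as 'git')
# can check the working tree out straight into the web root:
#!/bin/sh
GIT_WORK_TREE=/var/www/site git checkout -f master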

Linux file permissions and Java problems (permission retention)

I run servers on my Linux server (Ubuntu) and there's a bit of a problem. It may seem simple to fix, but I don't think it is. The servers run under my username (server); however, others access certain files as different users via FTP. Because the server runs under my username, whenever a plugin creates new files, the others do not have permission to edit them.
I have tried putting the users into groups and then allowing group access to the folder (even for new files), but had no luck. Every time they need to edit the files, I have to chmod -R 777 it.
I thought about running the servers under their usernames, but that would produce complications. Is it actually possible to make new files retain the permissions of the parent (or a top-level folder)? None of the solutions I've found seem to work.
Not for users, but for groups. You can:
chmod g+s parent_dir
chgrp shared_group parent_dir
Files created inside it will then inherit the folder's group (shared_group).
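
A minimal sketch of that setup, with placeholder path and group names; note that setgid only fixes the group, so the directory also needs group-write permission:

sudo chgrp shared_group /srv/shared
sudo chmod g+rws /srv/shared      # s on a directory = setgid: children
                                  # inherit the directory's group
touch /srv/shared/new.txt
ls -l /srv/shared/new.txt         # -rw-r--r-- you shared_group ...
                                  # group WRITE still depends on the
                                  # creator's umask (see the next question)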

ubuntu: share a folder to be used by all user in group

I want to share a folder among all users of a group, dev, so that all files, regardless of owner, can be edited by anyone in the group.
I have created the shared folder and set the respective permissions to the folder.
When a user creates a new file in that folder, it belongs to owner:dev, but its permissions are rw-r--r--, so other users in the same group are not able to edit it.
The default group already becomes "dev"; how can I likewise set the default permissions for files created in that directory?
I don't want to use the "umask" technique, because users will upload files into that directory through FTP and other tools.
This really belongs on Server Fault, and I already mentioned there's almost an exact duplicate there, but anyway there's a nice little solution you can use: the FUSE bindfs module (there's a package in Ubuntu). You use it to mount one directory onto another mountpoint, and you can set things such as the default permissions of any files created there, their owner and group, and the permissions of files already in the directory (which is what you seem to want).
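
A hedged sketch of the bindfs approach; the paths are placeholders, and the exact option names should be checked against man bindfs:

sudo apt-get install bindfs
sudo mkdir -p /mnt/dev-shared
# Present /srv/shared at /mnt/dev-shared with every file owned by group
# 'dev' and group-writable, regardless of who created it.
sudo bindfs --force-group=dev --perms=g+rw /srv/shared /mnt/dev-shared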
I don't want to use the "umask" technique, because users will upload files into that directory through FTP and other tools.
That's the only way to do it, unless those "other tools" are themselves able to adjust permissions.
If you have root access, you can set the default umask for everyone to 002 in /etc/bashrc (assuming bash is the default shell for the users in question).
A hack (less preferable than umask) is to set up a cron job that runs every minute and does a chmod -R g+w <dir>.
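
Sketches of both suggestions, with a placeholder shared directory standing in for <dir>; note the system-wide bashrc path differs by distro:

echo "umask 002" >> /etc/bash.bashrc   # path on Ubuntu; Red Hat systems
                                       # use /etc/bashrc
# The cron fallback: a crontab entry (crontab -e) that re-opens group
# write on the shared directory every minute.
* * * * * chmod -R g+w /srv/shared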
