Whenever I do a "zfs diff" on certain zfs file systems, the output is cluttered by "modified" user files that get "changed" by running chmod over them (in a cron, to ensure some security aspects).
Question: is there an easy way that I missed to force (POSIX) permissions and ownership on file hierarchies without chmod/chown touching them when the permissions are already as I want them to be?
You could do something like
find dir/ -type f -perm /0111 -exec chmod a-x {} +
instead of an unconditional chmod to remove the permissions (all the x permissions, in this example): find only hands files to chmod when at least one execute bit is actually set.
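The same approach works for ownership; a minimal sketch, with someuser and somegroup as placeholder owner and group:

find dir/ ! -user someuser -exec chown someuser {} +
find dir/ ! -group somegroup -exec chgrp somegroup {} +

Only files whose owner or group differs get touched, so nothing else shows up as modified in zfs diff.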
Leaving aside the fact that security by cron sounds like a bad idea, the simple answer is "No": neither chmod nor chown has a flag to modify a file or directory only when its current state doesn't match the desired one.
You have two options:
write a patch for the tools
write a wrapper, as larsks suggested in the comments above (see the sketch below)
Depending on the size of your filesystem / directory structure, that may increase the runtime of your cron job quite dramatically, though.
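For reference, a minimal sketch of such a wrapper (assuming GNU stat; the directory, mode, owner and group are placeholders):

#!/bin/bash
# Only touch files whose mode or ownership actually differs from the desired state.
WANT_MODE=644
WANT_OWNER=someuser
WANT_GROUP=somegroup

find /some/dir -type f -print0 |
while IFS= read -r -d '' f; do
    read -r mode owner group <<< "$(stat -c '%a %U %G' "$f")"
    if [ "$mode" != "$WANT_MODE" ]; then
        chmod "$WANT_MODE" "$f"
    fi
    if [ "$owner" != "$WANT_OWNER" ] || [ "$group" != "$WANT_GROUP" ]; then
        chown "$WANT_OWNER:$WANT_GROUP" "$f"
    fi
done

Like the find/-perm approach in the other answer, this only writes to files whose metadata actually differs, so they are the only ones that appear in zfs diff.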
Related
I'm currently doing a course on Linux Essentials, and recently I came across the setuid, setgid and sticky bit permissions.
I tried to make a practical example and run the commands to apply them on a file and a directory that I created.
I noticed that the numeric command to remove them is 'chmod 0775', and I wondered: if all three are applied, what happens when I run that numeric command? I supposed it would remove them one at a time, but in the case of the file it removed all of them at once.
Then I tried it on the directory. The result was different there: only the last applied permission (the sticky bit) was removed. I ran the command once more and it didn't do anything; none of the remaining permissions (setuid, setgid) was removed.
Why is this happening?
Thanks!
These special bits behave slightly differently for files and directories...
For example, to remove the setgid bit from a directory called "Testy" you would type:
sudo chmod g-s Testy/
Note that typing the following WOULD NOT WORK:
sudo chmod 777 Testy
The tutorial linked below gives good worked examples and explanations. My advice would be to practice some of these examples a few times, and then it will all eventually make sense. The key thing to understand (in my opinion, anyway) is the octal system used to set the permission bits; once you understand that, it all falls into place.
Here is the Tutorial Link: Access Control / Sticky Bit Tutorial
A quick search in man chmod revealed that you need to prepend an extra 0, or use a leading =. For instance like this:
chmod 00775 target
or like this:
chmod =775 target
If you want to remove the setuid, setgid and sticky bits from all files and directories recursively, you can use this command:
chmod -R 00775 .
(-R on . already descends into hidden entries, so there is no need for .*, which would also match .. and escape the current directory.)
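To see the difference in practice, a quick sketch on a throwaway directory (assuming GNU coreutils chmod and stat; testdir is just a placeholder):

mkdir testdir
chmod 7775 testdir       # set setuid, setgid and sticky on the directory
stat -c '%a' testdir     # 7775
chmod 0775 testdir       # clears the sticky bit, but setuid/setgid survive
stat -c '%a' testdir     # 6775
chmod 00775 testdir      # the extra leading zero clears setuid/setgid as well
stat -c '%a' testdir     # 775

On a plain file, the first chmod 0775 would already have cleared all three bits; the preservation only applies to directories.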
I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility, an interactive file system debugger for ext2/ext3/ext4 that lets you examine the on-disk structures directly, including the inodes of deleted files.
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the command lsdel to list inodes corresponding with deleted files.
When files are removed in Linux they are only unlinked; their inodes (the on-disk records describing where the file's data actually lives) are not immediately removed.
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number with your particular one (and the device with yours).
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
Great post about this here.
So a few things:
1. You may have zero success if your partition is ext2; it works best with ext4.
2. Find the device backing your filesystem:
df /
3. Run debugfs on the device from #2; in my case:
sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the number with an inode from step #4)
Thanks for your comments & answers, guys. debugfs seems like an interesting answer to the initial requirements, but it is a bit heavyweight for the simple & light solution I was looking for: if I'm understanding correctly, it needs root access to the raw block device and only works on ext file systems. Unfortunately, that won't really work for my use case; I must be able to provide a solution for existing, "basic" systems and directories without special privileges.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to reporting the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
1. A simple find command piped into wc counts the original number of files in the target directory (recursively). The result can easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
2. We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
3. Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT))
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
4. We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
5. Return to step 2.
Unfortunately, this solution won't report anything if the same number of files has been created and deleted during an interval, but that's not a huge issue for my use case.
To work around this, I'd have to store the actual list of files instead of just the count, but I haven't been able to make that work reliably using shell variables. If anyone could figure that out, it'd help me immensely, as it would meet the initial requirements!
I'd also like to know if anyone has comments on either of the two approaches.
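One possible direction, as a rough and untested sketch (it assumes bash, file names without newlines, and listings that fit comfortably in memory), would be to keep the sorted file list itself in a variable and compare snapshots with comm:

# Take the initial snapshot in a variable (no temporary file needed)
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)
# ...later, take a new snapshot and print the paths that disappeared
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST

comm -23 prints the lines that appear in the old snapshot but not in the new one, i.e. the deleted paths; files that were both created and deleted between two snapshots are still missed.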
Try:
lsof -nP | grep -i deleted
Note that this only lists deleted files that are still held open by some process.
history >> history.txt
Then look through it for rm commands.
I have a directory and I'd like for any file added to that directory to automatically have chmod performed with a specific set of permissions.
Is there a way to do this?
Reacting to filesystem events (in Linux) can be done using inotify.
There are many tools built on inotify which allow you to call commands in reaction to file system events. One such tool is incron. You might like it since it can be configured in a way similar to the familiar cron daemon.
Files moved into a monitored directory generate an IN_MOVED_TO event.
So the incrontab file would contain an entry like
/path/to/watch IN_MOVED_TO /bin/chmod 0644 $@/$#
(Here $@ expands to the watched directory and $# to the name of the file that triggered the event.)
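If installing incron isn't an option, a rough equivalent can be sketched with inotifywait from inotify-tools (the path and mode here are placeholders):

#!/bin/bash
# Watch the directory and chmod every file that is created in it or moved into it.
inotifywait -m -e create -e moved_to --format '%w%f' /path/to/watch |
while IFS= read -r file; do
    [ -f "$file" ] && chmod 0644 "$file"
done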
You can create a cron that checks/chmods files in that directory.
Something like this will work:
find /path/to/directory -type f -print0 | xargs -0 chmod 0644
(Of course you have to edit the path and set the permissions you need)
The question is underspecified, and it is dangerous to give any answer as it stands.
Who (/what) creates files in aforementioned directory? What rights do you want to set and why do you think this is needed? Why whatever creates them cannot put expected rights on its own?
For instance, all these "find | chmod" pipelines, inotify watchers and other tools mentioned in the other comments are a huge security hole if this is a directory everyone can put files into and the chmod command runs with root privileges: it can be tricked into following a symlink and chmodding something like /etc/shadow.
This /can/ be implemented securely of course, but chances are the actual problem does not require any of this.
I am trying to figure out how to recursively change user and group on an entire directory while leaving the nobody user intact
chown -vR user:group /home/mydir
will change the ownership of every file under /home/mydir, when I would like to leave all files that belong to nobody:nobody unchanged.
(This makes sense when you are trying to move a subdomain to a new domain on a cPanel server and don't have the option to use the Modify an Account feature, since there are several other subdomains that need to keep their own user:group.) Thank you!
I don't think chown(1) alone will do, but with find you can do what you want.
find /your/directory \! -user nobody -exec echo chown user:group {} \;
Replace /your/directory and user:group with values of your choice. Run this, and when you're sure it does what you want, remove the echo from -exec so things actually get done.
It's good practice to first echo to the terminal what would be done, and only proceed (after corrections, or by removing the echo) once the output looks like what should actually happen.
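Once you're happy with the output, a slightly faster variant batches the files so chown isn't forked once per file:

find /your/directory \! -user nobody -exec chown user:group {} +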
I have a source repository that I run through doxygen every now and then, which generates HTML in my public_html directory. I find myself having to change the umask and hack the primary group in bash like this, which works:
echo "umask $UMASK; doxygen include_config.conf" | newgrp $GROUP
But it seems clunky, and I can't help wondering if there's some configuration setting or option switch for doxygen to set the UID/group and permissions directly on all the files/directories it generates. It's so frequently used for generating HTML for websites that almost everybody will need, for example, the output to be world-readable. I have searched the web, the config file and the man page to no avail.
Update: I was hoping to find some built-in feature, but it looks like there is none. After some iterations this wrapper seems to do the job:
#!/bin/bash
# Wrapper around doxygen that fixes up group, permissions and umask on the output.
OUTPUT_PATH=/path/to/output
CONFIG_PATH=/path/to/include_config.conf
GROUP=somegroup
PERM=750
UMASK=027

# Create the output directory if it doesn't exist yet
if [[ ! -e $OUTPUT_PATH ]]; then mkdir "$OUTPUT_PATH"; fi
# Set permissions and group on the output directory; the setgid bit makes
# everything created inside inherit the group
chmod $PERM "$OUTPUT_PATH"
chmod g+s "$OUTPUT_PATH"
chgrp $GROUP "$OUTPUT_PATH"
# Restrict the permissions of everything doxygen creates
umask $UMASK
doxygen "$CONFIG_PATH"
It's a bit more robust, portable and less clunky than the original script, while still working in one pass and without race conditions.
To my knowledge, there's no way to tell Doxygen to set the ownership details of the generated files. Considering that Doxygen runs on systems that don't have any notion of Linux-style filesystem permissions, I'd be surprised if that sort of thing was built into the application. It should be trivial, though, to write a simple script that builds the documentation and automatically adjusts the permissions:
#!/bin/bash
doxygen include_config.conf
chgrp -R $GROUP $PATH_TO_OUTPUT_FOLDER
chmod -R $PERMS $PATH_TO_OUTPUT_FOLDER   # $PERMS = desired mode, e.g. u+rwX,g+rX,o-rwx
Update:
In response to your comments (I admit it's off-topic a bit):
I recommend against using newgrp to do this. It's an obsolete command that hearkens back to the old UNIX days when you could only be in one group at a time. It's possible to run into some strange problems when using it on modern systems. If you add the following before the doxygen call, anything created in the directory will inherit the group of the parent folder (which is essentially what you want):
mkdir $PATH_TO_OUTPUT_FOLDER
chgrp $GROUP $PATH_TO_OUTPUT_FOLDER
chmod g+s $PATH_TO_OUTPUT_FOLDER
The chgrp after running Doxygen will no longer be needed. As a bonus, it doesn't alter the group ID of your current login session or of running processes and doesn't fork a sub-shell (newgrp will usually do one of those two).