Can doxygen set custom permissions on the files it creates? - linux

I have a source repository that I run through doxygen every now and then, which generates HTML in my public_html directory. I find myself having to change the umask and hack the primary group in bash like this, which works:
echo "umask $UMASK; doxygen include_config.conf" | newgrp $GROUP
But it seems clunky, and I can't help wondering if there's some configuration setting or option switch for doxygen to set the UID/group and permissions directly on all the files and directories it generates. Doxygen is so frequently used for generating HTML for websites that almost everybody will need, for example, the output to be world-readable. I have searched the web, the config file and the man page to no avail.
Update: I was hoping to find some builtin feature, but it looks like there is none. After some iterations this wrapper seems to do the job:
#!/bin/bash
OUTPUT_PATH=/path/to/output
CONFIG_PATH=/path/to/include_config.conf
GROUP=somegroup
PERM=750
UMASK=027

# Create the output directory if it does not exist yet
if [[ ! -e "$OUTPUT_PATH" ]]; then mkdir -p "$OUTPUT_PATH"; fi
# Restrict the directory itself and make new files inherit its group
chmod "$PERM" "$OUTPUT_PATH"
chmod g+s "$OUTPUT_PATH"
chgrp "$GROUP" "$OUTPUT_PATH"
# Generate with the desired umask so new files/directories come out right
umask "$UMASK"
doxygen "$CONFIG_PATH"
It's a bit more robust, portable and less clunky than the original script, while still working in one pass and without race conditions.

To my knowledge, there's no way to tell Doxygen to set the ownership details of the generated files. Considering that Doxygen runs on systems that don't have any notion of Linux-style filesystem permissions, I'd be surprised if that sort of thing was built into the application. It should be trivial, though, to write a simple script that builds the documentation and automatically adjusts the permissions:
#!/bin/bash
doxygen include_config.conf
chgrp -R "$GROUP" "$PATH_TO_OUTPUT_FOLDER"
chmod -R "$MODE" "$PATH_TO_OUTPUT_FOLDER"   # $MODE is a chmod mode such as 750, not a umask
Update:
In response to your comments (I admit it's a bit off-topic):
I recommend against using newgrp to do this. It's an obsolete command that hearkens back to the old UNIX days when you could only be in one group at a time. It's possible to run into some strange problems when using it on modern systems. If you add the following before the doxygen call, anything created in the directory will inherit the group of the parent folder (which is essentially what you want):
mkdir $PATH_TO_OUTPUT_FOLDER
chgrp $GROUP $PATH_TO_OUTPUT_FOLDER
chmod g+s $PATH_TO_OUTPUT_FOLDER
The chgrp after running Doxygen will no longer be needed. As a bonus, this doesn't alter the group ID of your current login session or of running processes, and it doesn't fork a sub-shell (newgrp usually does one or the other).

Related

chmod/chown always writing updates even if not required

Whenever I do a "zfs diff" on certain ZFS file systems, the output is cluttered with "modified" user files that get "changed" by running chmod over them (in a cron job, to enforce certain security settings).
Question: is there an easy way that I missed to force (POSIX) permissions and ownership on file hierarchies without chmod/chown touching them when the permissions are already as I want them to be?
You could do something like
find dir/ -type f -perm /0111 -exec chmod a-x {} +
instead of an unconditional chmod to remove the permissions (all the execute bits, in this example). Since find only selects files that actually have one of those bits set, files that are already correct are never touched.
Leaving aside the fact that security by cron sounds like a bad idea, the simple answer is "no": neither chmod nor chown has a flag to modify a file or directory only when the current state doesn't match the desired one.
You have two options:
write a patch for the tools
write a wrapper, as larsks suggested in the comments above
Depending on the size of your filesystem / directory structure, that may increase the runtime of your cron job quite dramatically, though.
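As a rough illustration of the wrapper option, a sketch like the following only calls chmod/chown on files whose mode or owner actually differ from the target (the path, mode and owner here are placeholders, and GNU stat is assumed):
#!/bin/bash
TARGET_DIR=/path/to/directory
WANT_MODE=640
WANT_OWNER=root:root
find "$TARGET_DIR" -type f -print0 | while IFS= read -r -d '' f; do
    # %a prints the octal mode, %U:%G the owning user and group (GNU stat)
    cur_mode=$(stat -c %a "$f")
    cur_owner=$(stat -c %U:%G "$f")
    [ "$cur_mode" = "$WANT_MODE" ]   || chmod "$WANT_MODE" "$f"
    [ "$cur_owner" = "$WANT_OWNER" ] || chown "$WANT_OWNER" "$f"
done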

Sharing dotfiles with the root user

Current method
Actually, I'm sharing some of my dotfiles with the root user using symbolic links:
ln -s ~user/.vimrc /root/
ln -s ~user/.zshenv /root/
ln -s ~user/.zlogin /root/
ln -s ~user/.zshrc /root/
Former method
Before, I was using the sudo command with the -E option, which preserves the environment. That way the root user, when in an interactive shell, uses the standard user's home directory and reads the corresponding dotfiles.
It works, but:
Some files may be created in the standard user's directory with root as owner.
Some commands do not allow (or warn about) using files in a directory whose owner is not the current user (obviously for security reasons), so executing those commands as root is problematic.
Better method?
The simplest method is to put shared settings in the system-wide configuration files (/etc/zshrc, /etc/vimrc).
But I want to keep all the settings in my home directory, where I can keep them synchronized with a Git remote repository. This way, I can deploy them easily on a new computer.
As my current method is tedious and the former was pleasant but problematic,
is there a better way to make root use my current configuration files?
What I usually do is to include a deployment script in the git repository. I then invoke that script using sudo. The script then runs with root credentials and updates the dotfiles, either in the root account or globally.
I keep the install script as light as possible with no dependencies beyond shell and the core utilities (so no rsync).
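A minimal sketch of what such a deployment script could look like; the file list and the default destination are assumptions, and it is meant to be run as sudo ./deploy.sh from the repository checkout:
#!/bin/sh
set -eu
# Directory the script lives in, i.e. the checked-out repository
SRC=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
DEST=${1:-/root}
# Copy the shared dotfiles into the target home directory
for f in .vimrc .zshenv .zlogin .zshrc; do
    cp -- "$SRC/$f" "$DEST/"
done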

Automatically assign permissions to any file copied into directory

I have a directory and I'd like for any file added to that directory to automatically have chmod performed with a specific set of permissions.
Is there a way to do this?
Reacting to filesystem events (in linux) can be done using inotify.
There are many tools built on inotify which allow you to call commands in reaction to file system events. One such tool is incron. You might like it since it can be configured in a way similar to the familiar cron daemon.
Files moved into a monitored directory generate an IN_MOVED_TO event.
So the incrontab file would contain an entry like
/path/to/watch IN_MOVED_TO /bin/chmod 0644 $@/$#
where $@ expands to the watched path and $# to the name of the file that triggered the event.
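Files that are copied (rather than moved) into the directory are typically reported as IN_CLOSE_WRITE instead, since the copy writes the new file and then closes it, so depending on how files arrive you may want a second entry along the same lines:
/path/to/watch IN_CLOSE_WRITE /bin/chmod 0644 $@/$#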
You can create a cron job that checks and chmods files in that directory.
Something like this will work:
find /path/to/directory -type f -print0 | xargs -0 chmod 0644
(Of course you have to edit the path and set the permissions you need)
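For illustration, the corresponding crontab entry could look something like this (the schedule, path and mode are placeholders; the ! -perm test limits the chmod to files that actually need it, and -r is a GNU xargs extension):
*/5 * * * * find /path/to/directory -type f ! -perm 0644 -print0 | xargs -0 -r chmod 0644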
The question is underspecified, and it is dangerous to give any answer as it stands.
Who (or what) creates files in the aforementioned directory? What permissions do you want to set, and why do you think this is needed? Why can't whatever creates the files set the expected permissions on its own?
For instance, all these "find | chmod" or inotify watchers and other tools mentioned in the other comments are a huge security hole if this is a directory everyone can put files into and such a chmod command is run with root privileges, as it can be tricked into following a symlink and chmoding something like /etc/shadow.
This /can/ be implemented securely of course, but chances are the actual problem does not require any of this.

Ideal way to use wget to download and install using temp directory?

I am trying to work out the proper process of installing with Wget, in this example I'll use Nginx.
# Download nginx to /tmp/ directory
wget http://nginx.org/download/nginx-1.3.6.tar.gz -r -P /tmp
# Extract nginx into /tmp/nginx directory
tar xzf nginx-1.3.6.tar.gz -C /tmp/nginx
# Configure it to be installed in opt
./configure --prefix=/opt/nginx
# Make it
make
# Make install
make install
# Clean up temp folder
rm -r /tmp/*
Is this the idealised process? Is there anything I can improve on?
First of all, you definitely seem to be reinventing the wheel: if the problem you want to solve is automated packaging / building of software on target systems, then there are myriad solutions available, in the form of various package management systems, port builders, etc.
As for your shell script, there are a couple of things you should consider fixing:
Strings like http://nginx.org/download/nginx-1.3.6.tar.gz or nginx-1.3.6.tar.gz are constants. Extract all constants into separate variables and use them to make maintaining this script a little easier, for example:
NAME=nginx
VERSION=1.3.6
FILENAME=$NAME-$VERSION.tar.gz
URL=http://nginx.org/download/$FILENAME
TMP_DIR=/tmp
INSTALL_PREFIX=/opt

wget "$URL" -P "$TMP_DIR"
mkdir -p "$TMP_DIR/$NAME"
tar xzf "$TMP_DIR/$FILENAME" -C "$TMP_DIR/$NAME"
You generally can't be 100% sure that wget exists on the target deployment system. If you want to maximize portability, you can try to detect popular networking utilities, such as wget, curl, fetch or even lynx, links, w3m, etc.
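A minimal sketch of such a fallback, checking only for wget and curl and reusing the variables defined above:
if command -v wget >/dev/null 2>&1; then
    wget "$URL" -P "$TMP_DIR"
elif command -v curl >/dev/null 2>&1; then
    curl -L -o "$TMP_DIR/$FILENAME" "$URL"
else
    echo "error: neither wget nor curl is available" >&2
    exit 1
fi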
Proper practice for using a temporary directory is a long, separate question, but generally you need to adhere to three things:
One should somehow find out the temporary directory location. Generally, it's wrong to assume that /tmp is always a temporary directory: it can be unmounted, it can be non-writable, it can be a tmpfs filesystem that is full, and so on. Unfortunately, there's no portable and universal way to detect the temporary directory. The very least one should do is check the contents of $TMPDIR, to make it possible for a user to point the script at the proper temporary directory. Another possibly bright idea is a set of heuristic checks to make sure it's possible to write to the desired location (checking at least $TMPDIR, $HOME/tmp, /tmp, /var/tmp), that there's a decent amount of space available, etc.
One should create the temporary directory in a safe manner. On Linux systems, mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX is usually enough. On BSD-based systems, much more manual work is needed, as the default mktemp implementation is not particularly race-resistant.
One should clean up the temporary directory after use. Cleaning should be done not only on a successful exit, but also in case of failure. This can be handled with a signal trap and a dedicated cleanup callback, for example:
# Cleanup: remove temporary files
cleanup()
{
    local rc=$?
    trap - EXIT
    # Generally, it's best to remove only the files that we
    # know we have created ourselves; removal using recursive
    # rm is not really safe.
    rm -f "$LOCAL_TMP/some-file-we-had-created"
    [ -d "$LOCAL_TMP" ] && rmdir "$LOCAL_TMP"
    exit $rc
}
trap cleanup HUP PIPE INT QUIT TERM EXIT
# Create a local temporary directory
LOCAL_TMP=$(mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX)
# Use $LOCAL_TMP here
If you really want to use recursive rm, then using * to glob files is a bad practice. If your directory has more than several thousand files, * would expand to too many arguments and overflow the shell's command-line buffer. I might even say that using any globbing without a good excuse is generally a bad practice. The rm line above should be rewritten at least as:
rm -f /tmp/nginx-1.3.6.tar.gz
rm -rf /tmp/nginx
Removing all subdirectories in /tmp (as in /tmp/*) is a very bad practice on a multi-user system, as you'll either get permission errors (you won't be able to remove other users' files) or you'll potentially heavily disrupt other people's work by removing actively used temporary files.
Some minor polishing:
POSIX-standard tar uses normal short UNIX options nowadays, i.e. tar -xvz, not tar xvz.
Modern GNU tar (and, AFAIR, BSD tar too) doesn't really need any of "uncompression" flags, such as -z, -j, -y, etc. It detects archive/compression format itself and tar -xf is sufficient to extract any of .tar / .tar.gz / .tar.bz2 tarballs.
That's the basic idea. You'll have to run the make install command as root (or the whole script if you want). Your rm -r /tmp/* should be rm -r /tmp/nginx because other commands might have stuff they're working on in the tmp directory.
It should also be noted that the chances that building from source like this will work with no modifications for a decently sized project are fairly low. Generally you will find you need to specify a path to a library explicitly, or some code doesn't quite compile correctly on your distribution.
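For example, it's not unusual to end up pointing the configure script at library sources by hand; for nginx that could look something like this (the paths are just placeholders):
./configure --prefix=/opt/nginx --with-pcre=/path/to/pcre-source --with-zlib=/path/to/zlib-source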

Setting default permissions for newly created files and sub-directories under a directory in Linux?

I have a bunch of long-running scripts and applications that are storing output results in a directory shared amongst a few users. I would like a way to make sure that every file and directory created under this shared directory automatically had u=rwx,g=rwx,o=r permissions.
I know that I could use umask 006 at the head of my various scripts, but I don't like that approach, as many users write their own scripts and may forget to set the umask themselves.
I really just want the filesystem to set newly created files and directories with a certain permission if it is in a certain folder. Is this at all possible?
Update: I think it can be done with POSIX ACLs, using the Default ACL functionality, but it's all a bit over my head at the moment. If anybody can explain how to use Default ACLs it would probably answer this question nicely.
To get the right ownership, you can set the setgid bit on the directory with
chmod g+rwxs dirname
This will ensure that files created in the directory are owned by the directory's group. You should then make sure everyone runs with umask 002 or 007 or something of that nature; this is why Debian and many other Linux systems are configured with per-user groups by default.
I don't know of a way to force the permissions you want if the user's umask is too strong.
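A small demonstration of the combination, with made-up group and directory names:
chgrp devs /srv/shared
chmod g+rwxs /srv/shared
umask 002
touch /srv/shared/newfile
ls -l /srv/shared/newfile    # group-owned by devs, mode rw-rw-r--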
Here's how to do it using default ACLs, at least under Linux.
First, you might need to enable ACL support on your filesystem. If you are using ext4 then it is already enabled. Other filesystems (e.g., ext3) need to be mounted with the acl option. In that case, add the option to your /etc/fstab. For example, if the directory is located on your root filesystem:
/dev/mapper/qz-root / ext3 errors=remount-ro,acl 0 1
Then remount it:
mount -oremount /
Now, use the following command to set the default ACL:
setfacl -dm u::rwx,g::rwx,o::r /shared/directory
All new files in /shared/directory should now get the desired permissions. Of course, it also depends on the application creating the file. For example, most files won't be executable by anyone from the start (depending on the mode argument to the open(2) or creat(2) call), just like when using umask. Some utilities like cp, tar, and rsync will try to preserve the permissions of the source file(s) which will mask out your default ACL if the source file was not group-writable.
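To check the result, inspect the directory's ACL and create a test file:
getfacl /shared/directory
touch /shared/directory/testfile
ls -l /shared/directory/testfile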
Hope this helps!
It's ugly, but you can use the setfacl command to achieve exactly what you want.
On a Solaris machine, I have a file that contains the acls for users and groups. Unfortunately, you have to list all of the users (at least I couldn't find a way to make this work otherwise):
user::rwx
user:user_a:rwx
user:user_b:rwx
...
group::rwx
mask:rwx
other:r-x
default:user:user_a:rwx
default:user:user_b:rwx
....
default:group::rwx
default:user::rwx
default:mask:rwx
default:other:r-x
Name the file acl.lst and fill in your real user names instead of user_X.
You can now set those acls on your directory by issuing the following command:
setfacl -f acl.lst /your/dir/here
In your shell script (or .bashrc) you may use something like:
umask 022
umask is a command that sets the mask controlling how file permissions are assigned to newly created files.
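For example, with the usual creation modes of 666 for files and 777 for directories, umask 022 yields:
umask 022
touch newfile    # created with mode 644 (rw-r--r--)
mkdir newdir     # created with mode 755 (rwxr-xr-x)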
I don't think this will do entirely what you want, but I just wanted to throw it out there since I hadn't seen it in the other answers.
I know you can create directories with permissions in a one-liner using the -m option:
mkdir -m755 mydir
and you can also use the install command:
sudo install -C -m 755 -o owner -g group /src_dir/src_file /dst_file
