Using autotools ./configure in a shared source tree - file-permissions

I am working on building a large and complex autotools-based (./configure) source tree, namely OpenModelica (https://github.com/OpenModelica/OpenModelica/blob/master/README.md), on a shared supercomputer. Because it is a large build, we would like to log in and work on the same source tree while we work through the compilation issues, even though we have different user accounts (the tree's group ownership, set with 'chgrp', is a group that we are both members of).
When running ./configure, we hit various problems; for example, ./configure tries and fails to change the ownership of the file 'config.guess'. Various other files are also created with ownership or permission settings that seem to prevent shared use of the source tree.
I feel like there should be some simple 'chmod' or 'chown' arrangement for this source tree that would overcome this problem, but I haven't been able to figure it out. Any suggestions?
To clarify, we also want to share the build output files, not just the source code. We want to avoid duplicating the files (limited storage quota) or repeating the build (slow, tedious). Also, we do not have the liberty to create additional accounts on this system, or to share a single account, at least not without making a 'special request'.
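For concreteness, something along these lines is roughly what I imagine the answer looks like; the path /shared/OpenModelica and the group name omdev below are just placeholders:
# Hand the whole tree to the shared group and make new files inherit it
chgrp -R omdev /shared/OpenModelica
chmod -R g+rwX /shared/OpenModelica                      # group read/write; execute only where already set
find /shared/OpenModelica -type d -exec chmod g+s {} +   # setgid directories: new files keep the group
umask 0002                                               # in each user's shell, so new files are group-writable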

Related

Solution for shared home directory across separate machines - change $HOME variable?

I have an account on a set of machines which have separate processors, memory, and storage but share the same home directory. So, for instance, if I ssh into either machine, they will both source the same bashrc file.
The sysadmin does not install all of the software I wish to use, so I have compiled some from source and store it in bin, lib, etc. directories in the home directory, and I change my PATH and LD_LIBRARY_PATH variables to include these. Each machine, at least up until recently, had a different operating system (or at least a different version) installed, and I was told that code compiled on one machine would not necessarily give the same result on the other. Therefore my (very hacky) solution was the following:
Create two directories in $HOME, ~/server1home and ~/server2home, each with their own set of bin, lib, etc. directories containing separately compiled libraries.
Edit my .bashrc to check which server I am on and set the path variables to look in the correct directories for binaries and libraries for the server.
Lately, we moved buildings and the servers were rebooted, and I believe they both run the same OS now. Most of my setup was broken by the reboot, so I have to remake it. In principle, I don't need anything different on each machine; they could be identical apart from the fact that there are more processors and memory available to run code on. They don't have the same hardware, as far as I'm aware, so I still don't know if they can safely run the same binaries. Is such a setup safe for code that needs to be numerically precise?
Alternatively, I would redo my hack differently this time. I had a lot of dotfiles that still ended up going into $HOME, rather than my serverXhome directories and the situation was always a little messy. I want to know if it's possible to redefine $HOME on login, based on hostname and then have nothing but the two serverXhome directories inside the shared $HOME, with everything duplicated inside each of these new home directories. Is this possible to set up without administrative privileges? I imagine I could make a .profile script that runs on login and changes $HOME to point at the right directory and then sources the new .bashrc within that directory to set all the rest of the environment variables. Is this the correct way of going about this or are there some pitfalls to be wary of?
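Concretely, I imagine the .profile idea looking something like this hypothetical fragment (server1/server2 stand in for the real hostnames):
# Hypothetical ~/.profile: pick a per-host home under the shared one, then load its dotfiles
case "$(hostname -s)" in
  server1) export HOME="$HOME/server1home" ;;
  server2) export HOME="$HOME/server2home" ;;
esac
cd "$HOME" && [ -f .bashrc ] && . ./.bashrc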
TL;DR: I have the same home directory for two separate machines. Do binary libraries and executables compiled on one machine run safely on the other? Otherwise, is there a strategy to redefine $HOME on each machine to point to a subdirectory of the shared home directory and have separate binaries for each?
PS: I'm not sure if this is more relevant in superuser stackexchange or not. Please let me know if I'd have better luck posting there.
If the two machines have the same processor architecture, in general compiled binaries should work on both.
There are a number of factors that come into play, and the result will depend hugely on the type of programs you want to run, but in general, if the same shared libraries are installed, your programs will work identically.
On different distributions, or different versions of a given distribution, it is likely that the set of installed libraries, or their versions, will differ, which means that your programs will work on the machine on which they were built, but probably not on the other one.
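A quick way to check (not part of the original answer; ~/bin/mytool is a placeholder) is to compare what each machine would actually load for the same binary:
# Run on both servers and compare: 'not found' entries or differing versions explain breakage
ldd ~/bin/mytool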
If you can control how they are built, you can rebuild your applications with static linkage instead of dynamic linkage, which means that they embed all the libraries they need at build time, resulting in a much bigger executable but much improved compatibility.
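For example, a hedged sketch with gcc (mytool.c is a placeholder), where static linkage is just an extra flag:
# Link everything into the executable; no shared libraries are needed at run time
gcc -static -o mytool mytool.c -lm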
If the above doesn't work and you need to use a different set of programs for each machine, I would recommend leaving the $HOME environment variable alone, and only change your $PATH depending on the machine you are on.
You can have a short snippet in your .bashrc like so:
export PATH=$HOME/$(hostname)/bin:$PATH
export LD_LIBRARY_PATH=$HOME/$(hostname)/lib:$LD_LIBRARY_PATH
Then all you need is a folder bearing the machine's hostname for each machine you can connect to. If several machines share the same operating system and architecture, making symbolic links will save you some space.
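For instance (server1/server2 are placeholder hostnames), if two machines can safely share binaries, one per-host directory can simply point at the other:
# server2 reuses everything built for server1
ln -s "$HOME/server1" "$HOME/server2"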

Cross-compiler can't see shared library

I have successfully cross-compiled a C project for a microcontroller from my Linux machine. The compiler needed some shared libraries, which I have carefully placed in /opt/foo-folder. With the same care, I have set the permissions on foo-folder so that it can be fully accessed.
Despite all this, I cannot cross-compile any C file without sudo access. I assume this could be a file ownership problem, yet I have set the current user (who is also an admin) as the owner of foo-folder.
It is not stopping me from doing what I want, but it is very annoying.
Any advice or suggestion is welcome. And please keep in mind that I am a rather average Linux user.
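The ownership and permission checks described above boil down to something like this (the library name libfoo.so is a placeholder); note that every directory component on the path needs the execute bit, not just foo-folder itself:
# Show owner, group and mode for each component of the path
namei -l /opt/foo-folder/libfoo.so
ls -ld /opt /opt/foo-folder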

Developmental testing of programs using Linux's POSIX capabilities

I'm developing a project where the executables use Linux's POSIX capabilities rather than being setuid root. So far I've had to keep one root shell open so that each time I recompile I can redo the setcap command to give the needed capability to the executable file so that I can test the results. That's getting tedious, plus if I ever hope that anyone else would want to contribute to the project's development I'll have to come up with a better way of doing it.
So far I've come up with two ways of dealing with this:
1) Have a single make target, to be run as root, that creates a special setuid program which will be used by the makefiles to give the capability to the executables. The program will be compiled from a template modified via sed so that it will only run if invoked by the non-root user the developer is working as, and will only modify files owned by the developer (and which sit in directories owned by the developer that aren't world-writable).
The problem with this is that I'm using GNU autotools to generate my make files, and I can't figure out how to get the makefiles to run a program on a linked executable after it's been linked. I could create a setcap-all target which has all the executables as its dependencies, with a rule that runs the setuid program on them, but then you can't simply do make executable-1 if that's all you want to build.
2) Have a single make target to be run as root to create a setuid daemon which will use inotify to monitor the src directory and grant the capability to any new executables (and which has security consideration similar to the setuid program from #1).
My problem with this is that I can't figure out how to get the build system to automatically and transparently start up the daemon, plus my intuition says that This Is Not The Way Things Are Done in a proper build system.
Are there any better ways of doing this?
Maybe I'm a bit confused about the question, but it seems you're trying to use the build system to solve an installation problem.
Whether you're packaging your project using dpkg, rpm or anything else, there should be a rule that enforces the use of setcap, which will set the capabilities of the installed binary using filesystem extended attributes (xattrs).
# Post-install rule example
setcap cap_net_raw=+pe /usr/bin/installed-binary
However, if you're installing a system daemon, you can count on the init script already having all the capabilities, so it's just a matter of letting your process drop the capabilities it doesn't need.
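As a rough illustration (not from the original answer; the daemon path and capability name are placeholders), the dropping can also be done from a wrapper with capsh before your own code starts:
# Launch the daemon with cap_sys_admin removed from its bounding set
capsh --drop=cap_sys_admin -- -c '/usr/sbin/mydaemon --foreground'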

How can I sandbox filesystem activity, particularly writes?

Gentoo has a feature in portage that prevents and logs writes outside of the build and packaging directories.
Checkinstall is able to monitor writes, and package up all the generated files after completion.
Autotools have the DESTDIR macro, which lets you direct most of the filesystem activity to an alternate location.
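For reference, the DESTDIR mechanism looks like this (the staging path is arbitrary):
# Configure for the final prefix, but stage the installed files under /tmp/stage
./configure --prefix=/usr
make
make DESTDIR=/tmp/stage install    # files land under /tmp/stage/usr/... instead of /usr/...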
How can I do this myself with the safety of the Gentoo sandboxing method?
Can I use SELinux, rlimit, or some other resource-limiting API?
What APIs are available to do this from C or Python?
Update0
The mechanism used will not require root privileges or any involved/persistent system modification. This rules out creating users and using chroot().
Please link to the documentation for APIs that you mention, for some reason they're exceptionally difficult to find.
Update1
This is to prevent accidents. I'm not worried about malicious code, only the poorly written variety.
The way Debian handles this sort of problem is to not run the installation code as root in the first place. Package build scripts are run as a normal user, and install scripts are run using fakeroot. This LD_PRELOAD library intercepts permission-related calls so that it looks as if the installer were actually running as root, so the resulting file ownership and permissions come out right (i.e., if you run /usr/bin/install from within the fakeroot environment, further stat calls from within the environment show root ownership), but in fact the installer runs as an ordinary user.
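A small illustration of that workflow (the staging directory is a placeholder, and make install is assumed to honour DESTDIR):
# Everything runs as an ordinary user; root ownership is only simulated inside the fakeroot session
fakeroot sh -c 'make DESTDIR=/tmp/pkgroot install && ls -l /tmp/pkgroot/usr/bin'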
Builds are also, in some cases (primarily for development), done in chroots using e.g. pbuilder. This is likely easier on a binary distribution, however, as each build using pbuilder reinstalls all dependencies beyond the base system, acting as a test that all necessary dependencies are specified (this is the primary reason for using a chroot, not protection against accidental installs).
One approach is to virtualize a process, similar to how wine does it, and reinterpret file paths. That's rather heavy duty to implement though.
A more elegant approach is to use the chroot() system call, which sets a subtree of the filesystem as a process's root directory. Create a virtual subtree including /bin, /tmp, /usr and /etc as you want the process to see them, call chroot() with the virtual tree, then exec the target executable. I can't recall whether symbolic links within the tree can reference files outside it, but I don't think so. But certainly everything needed could be copied into the sandbox, and when it is done, you can check for changes against the originals.
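A minimal sketch of that approach (paths are placeholders; note that chroot itself requires root, which is what Update0 rules out):
# Build a throwaway root, copy in a shell and the libraries it needs, then confine the process
mkdir -p /tmp/sandbox/bin /tmp/sandbox/tmp
cp /bin/sh /tmp/sandbox/bin/
ldd /bin/sh | awk '{print $(NF-1)}' | grep '^/' | xargs -I{} cp --parents {} /tmp/sandbox/
sudo chroot /tmp/sandbox /bin/sh
# afterwards, diff /tmp/sandbox against a pristine copy to see what was written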
Maybe get the sandbox safety with regular user permissions? So the process running the show has specific access to specific directories.
chroot would be an option, but I can't figure out how to track attempts to write outside the root.
Another idea would be along the lines of intercepting system calls. I don't know much about this but strace is a start, try running a program through it and check if you see something you like.
edit:
Is using kernel modules an option? Because you could replace the write system call with your own, so you could prevent whatever you needed and also log it.
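Picking up the strace suggestion above, a minimal sketch of what that looks like in practice (the log path is arbitrary):
# Follow child processes (-f) and log every syscall that takes a file name
strace -f -e trace=file -o /tmp/build-trace.log make install
grep -v ENOENT /tmp/build-trace.log | less    # drop failed path lookups, inspect what was actually touched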
It sounds a bit like what you are describing is containers. Once you've got the container infrastructure set up, it's pretty cheap to create containers, and they're quite secure.
There are two methods to do this. One is to use LD_PRELOAD to hook library calls that result in syscalls, such as those in libc, as well as calls to dlsym/dlopen. This will not allow you to hook syscalls directly.
The second method, which allows hooking syscalls, is to run your executable under ptrace, which provides options to stop and examine syscalls when they occur. This can be set up programmatically to sandbox calls to restricted areas of the filesystem, among other things.
So LD_PRELOAD cannot intercept syscalls, only library calls?
Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
Write Yourself an Strace in 70 Lines of Code

Is there an API to set a NTFS ACL only on a particular folder without flowing permissions down?

In my environment, I have several projects that involve running NTFS ACL audit reports and various ACL cleanup activities on a number of file servers. There are two main reasons why I cannot perform these activities locally on the servers:
1) I do not have local access to the servers as they are actually owned and administered by another company.
2) They are SNAP NAS servers which run a modified Linux OS (called GuardianOS) so even if I could get local access, I'm not sure of the availability of tools to perform the operations I need.
With that out of the way, I ended up rolling my own ACL audit reporting tool that would recurse down the filesystem starting at a specified top-level path and would spit out an HTML report on all the groups/users it encountered on the ACLs as well as showing the changes in permissions as it descended the tree. While developing this tool, I found out that the network overhead was the worst part of doing these operations and by multi-threading the process, I could achieve substantially greater performance.
However, I'm still stuck finding a good tool to perform the ACL modifications and cleanup. Your standard out-of-the-box tools (cacls, xcacls, Explorer) seem to be single-threaded and suffer a significant performance penalty when going across the network. I've looked at rolling my own ACL-setting program that is multithreaded, but the only API I'm familiar with is the .NET FileSystemAccessRule stuff, and the problem is that if I set the permissions on a folder, it automatically wants to "flow" the permissions down. This causes a problem because I want to do the "flowing" myself using multi-threading.
I know NTFS "allows" inherited permissions to be inconsistent because I've seen it where a folder/file gets moved on the same volume between two parent folders with different inherited permissions and it keeps the old permissions as "inherited".
The Questions
1) Is there a way to set an ACL that applies to the current folder and all children (your standard "Applies to files, folders, and subfolders" ACL) but not have it automatically flow down to the child objects? Basically, I want to be able to tell Windows that "Yes, this ACL should be applied to the child objects but for now, just set it directly on this object".
Just to be crystal clear, I know about the ACL options for applying to "this folder only" but then I lose inheritance which is a requirement so that option is not valid for my use case.
2) Anyone know of any good algorithms or methodologies for performing ACL modifications in a multithreaded manner? My gut feeling is that any recursive traversal of the filesystem should work in theory especially if you're just defining a new ACL on a top-level folder and just want to "clean up" all the subfolders. You'd stamp the new ACL on the top-level and then recurse down removing any explicit ACEs and then "flowing" the inherited permissions down.
(FYI, this question is partially duplicated from ServerFault since it's really both a sysadmin and a programming problem. On the other question, I was asking if anyone knows of any tools that can do fast ACL setting over the network.)
Found the answer in a MS KB article:
File permissions that are set on files and folders using Active Directory Services Interface (ADSI) and the ADSI resource kit utility, ADsSecurity.DLL, do not automatically propagate down the subtree to the existing folders and files.
The reason that you cannot use ADSI to set ACEs to propagate down to existing files and folders is because ADsSecurity.dll uses the low-level SetFileSecurity function to set the security descriptor on a folder. There is no flag that can be set by using SetFileSecurity to automatically propagate the ACEs down to existing files and folders. The SE_DACL_AUTO_INHERIT_REQ control flag will only set the SE_DACL_AUTO_INHERITED flag in the security descriptor that is associated with the folder.
So I've got to use the low-level SetFileSecurity Win32 API function (which is marked obsolete in its MSDN entry) to set the ACL and that should keep it from automatically flowing down.
Of course, I'd rather tear my eyeballs out with a spoon than deal with trying to P/Invoke some legacy Win32 API with all its warts, so I may end up just using an old NT4 tool called FILEACL, which is like CACLS but has an option to use the SetFileSecurity API so changes don't automatically propagate down.
