I have successfully cross-compiled a C project for a microcontroller from my Linux machine. The compiler needs some shared libraries, which I have carefully placed in /opt/foo-folder. With the same care, I have set the permissions on foo-folder so that it can be fully accessed.
Despite all this, I cannot cross-compile any C file without sudo. I assumed this could be a file-ownership problem, yet I have made the current user (who is also an administrator) the owner of foo-folder.
It is not stopping me from doing what I want, but it is very annoying.
Any advice or suggestion is welcome. And please keep in mind that I am a rather average Linux user.
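To show what I checked: every component of the path needs the execute (traverse) bit for my user, and namei -l prints the owner, group and mode of each component (a scratch directory stands in for /opt/foo-folder here, since the real paths differ per system):

```shell
# Scratch directory standing in for /opt/foo-folder
mkdir -p /tmp/foo-demo/lib
chmod 755 /tmp/foo-demo /tmp/foo-demo/lib
namei -l /tmp/foo-demo/lib   # one line per path component, with owner and mode
```

If everything looks right and the compiler still demands sudo, running the failing compile under strace -f -e trace=file and searching the output for EACCES or EPERM should show exactly which file access is being refused.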
I am working on building a large and complex autotools-based (./configure) source tree, namely OpenModelica (https://github.com/OpenModelica/OpenModelica/blob/master/README.md), on a shared supercomputer. Because it is a large build, we would like to log in and use the same source tree while we work through the compilation issues, even though we have different user accounts (the tree's group ownership is a group that we are both members of).
When running ./configure, we have various problems, for example ./configure tries and fails to change the ownership of the file 'config.guess'. And there are various other files that are created with ownership settings that seem to prevent shared use of the source code tree.
I feel like there should be some simple 'chmod' or 'chown' setting of ownership for this source tree that would overcome this problem, but I haven't been able to figure it out. Any suggestions?
To clarify, we also want to share the build output files, not just the source code. We want to avoid duplicating the files (limited storage quota) or repeating the build (slow, tedious). Also, we do not have the liberty to create additional accounts on this system, or to share a single account, at least not without making a 'special request'.
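For what it's worth, the usual recipe for a group-shared tree combines group ownership, the setgid bit on directories, and a permissive umask. A minimal sketch, using a scratch directory and the current user's primary group as stand-ins for the real tree and the shared group:

```shell
GROUP=$(id -gn)      # stand-in for the shared group both users belong to
TREE=$(mktemp -d)    # stand-in for the shared source tree
chgrp -R "$GROUP" "$TREE"
chmod -R g+rwX "$TREE"                       # group read/write, traverse dirs
find "$TREE" -type d -exec chmod g+s {} +    # new files inherit the group
umask 002                                    # new files are group-writable
touch "$TREE/newfile"
ls -l "$TREE/newfile"                        # rw for both user and group
```

Both users would put umask 002 in their shell startup files: the setgid bit takes care of the group of newly created files, but the umask is what keeps them group-writable.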
I have an account on a set of machines which have separate processors, memory, and storage but share the same home directory. So, for instance, if I ssh into either machine, they will both source the same bashrc file.
The sysadmin does not install all of the software I wish to use, so I have compiled some from source, stored it in bin, lib, etc. directories in the home directory, and changed my PATH and LD_LIBRARY_PATH variables to include these. Each machine, at least up until recently, had a different operating system (or at least version) installed, and I was told that code compiled on one machine would not necessarily give the same result on the other. Therefore my (very hacky) solution was the following:
- Create two directories in $HOME, ~/server1home and ~/server2home, each with its own set of bin, lib, etc. directories containing separately compiled libraries.
- Edit my .bashrc to check which server I am on and set the path variables to look in the correct directories for that server's binaries and libraries.
Recently we moved buildings and the servers were rebooted; I believe they both run the same OS now. Most of my setup was broken by the reboot, so I have to remake it. In principle, I don't need anything different on each machine; they could be identical apart from one having more processors and memory to run code on. They don't have the same hardware, as far as I'm aware, so I still don't know whether they can safely run the same binaries. Is such a setup safe for code that needs to be numerically precise?
Alternatively, I would redo my hack differently this time. I had a lot of dotfiles that still ended up going into $HOME, rather than my serverXhome directories and the situation was always a little messy. I want to know if it's possible to redefine $HOME on login, based on hostname and then have nothing but the two serverXhome directories inside the shared $HOME, with everything duplicated inside each of these new home directories. Is this possible to set up without administrative privileges? I imagine I could make a .profile script that runs on login and changes $HOME to point at the right directory and then sources the new .bashrc within that directory to set all the rest of the environment variables. Is this the correct way of going about this or are there some pitfalls to be wary of?
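A minimal sketch of that idea for ~/.profile, assuming the hostnames are server1 and server2 (adjust to the real ones); note that anything which reads the home directory from the passwd entry rather than $HOME will keep seeing the original directory:

```shell
# Pick a per-machine home under the shared one, based on hostname
case "$(hostname)" in
  server1) export HOME="$HOME/server1home" ;;
  server2) export HOME="$HOME/server2home" ;;
esac
cd "$HOME"
# Load that machine's own bashrc, if present
if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc"; fi
```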
TL;DR: I have the same home directory for two separate machines. Do binary libraries and executables compiled on one machine run safely on the other? Otherwise, is there a strategy to redefine $HOME on each machine to point to a subdirectory of the shared home directory and have separate binaries for each?
PS: I'm not sure if this is more relevant in superuser stackexchange or not. Please let me know if I'd have better luck posting there.
If the two machines have the same processor architecture, in general compiled binaries should work on both.
A number of factors come into play, and the result will depend hugely on the type of programs that you want to run, but in general, if the same shared libraries are installed, your programs will work identically.
On different distributions, or different versions of a given distribution, it is likely that the set of installed libraries, or that the version of them will differ, which means that your programs will work on the machine on which they are built, but probably not on another one.
If you can control how they are built, you can rebuild your applications to have static linkage instead of dynamic, which means that they will embed all the libraries they need when built, resulting in a much bigger executable, but providing a much improved compatibility.
If the above doesn't work and you need to use a different set of programs for each machine, I would recommend leaving the $HOME environment variable alone, and only change your $PATH depending on the machine you are on.
You can have a short snippet in your .bashrc like so:
export PATH=$HOME/$(hostname)/bin:$PATH
export LD_LIBRARY_PATH=$HOME/$(hostname)/lib:$LD_LIBRARY_PATH
Then all you need is a folder bearing the machine's hostname for each machine you can connect to. If several machines share the same operating system and architecture, making symbolic links will save you some space.
I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a Linux guest VPS on my main physical server (which also runs Linux) using VirtualBox (or an equivalent), have them do the work there, figure out what they did, and manually implement the changes myself.
Does this seem secure? Are there better options? Thank you
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that, once inside a chroot "jail", a user cannot see "outside" the jail: you have changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is that YOU decide what the developer can and can't see).
We can use a directory for chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system in a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file, for example with dd if=/dev/zero of=jailFile bs=1M count=5120, then run mkfs.ext4 jailFile. Finally, mount it and include any files you wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot a directory which is very simple.
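The steps above can be sketched as follows (scaled down to 64 MB so it is quick to try; the mount and chroot steps need root, so they are shown commented out):

```shell
dd if=/dev/zero of=jailFile bs=1M count=64 status=none
mkfs.ext4 -q -F jailFile       # -F: it is a regular file, not a block device
# The rest requires root:
# mkdir -p /mnt/jail
# mount -o loop jailFile /mnt/jail
# (populate /mnt/jail with /bin, /lib and friends, or run debootstrap into it)
# chroot /mnt/jail /bin/bash
```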
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSHServer.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You could add the individual as a non-root user, and then work towards granting specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is just one route. If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.
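That workflow can be sketched in a few commands (the temporary directory stands in for the real Apache configuration directory, and the contractor's edit is simulated with sed):

```shell
CONF=$(mktemp -d) && cd "$CONF"
git init -q
git config user.email you@example.com
git config user.name "You"
echo "ServerName example.com" > httpd.conf
git add httpd.conf && git commit -qm "known-good config"
sed -i 's/example\.com/example.org/' httpd.conf   # the contractor's change
git diff                                          # review before deploying
```

In the real setup the contractor would push their changes to a branch of the hosted repository, and you would review the diff there before pulling anything into production.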
I'm developing a project where the executables use Linux's POSIX capabilities rather than being setuid root. So far I've had to keep one root shell open so that each time I recompile I can redo the setcap command to give the needed capability to the executable file so that I can test the results. That's getting tedious, plus if I ever hope that anyone else would want to contribute to the project's development I'll have to come up with a better way of doing it.
So far I've come up with two ways of dealing with this:
1) Have a single make target, to be run as root, that creates a special setuid program which the makefiles will use to give the capability to the executables. The program will be compiled from a template modified via sed so that it will only run if used by the non-root user the developer is working as, and will only modify files owned by the developer (which are sitting in directories owned by the developer that aren't world-writable).
The problem with this is that I'm using GNU autotools to generate my make files, and I can't figure out how to get the makefiles to run a program on a linked executable after it's been linked. I could create a setcap-all target which has all the executables as its dependencies, with a rule that runs the setuid program on them, but then you can't simply do make executable-1 if that's all you want to build.
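Roughly, the best I can come up with in Makefile.am terms is an all-local hook (grant-caps being the hypothetical setuid helper described above), which still does not fire on a bare make executable-1:

```make
# Makefile.am fragment (sketch)
bin_PROGRAMS = executable-1 executable-2

all-local: $(bin_PROGRAMS)
	./grant-caps $(bin_PROGRAMS)
```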
2) Have a single make target to be run as root to create a setuid daemon which will use inotify to monitor the src directory and grant the capability to any new executables (and which has security consideration similar to the setuid program from #1).
My problem with this is that I can't figure out how to get the build system to automatically and transparently start the daemon, plus my intuition says that This Is Not The Way Things Are Done in a proper build system.
Are there any better ways of doing this?
Maybe I'm a bit confused about the question, but it seems you're trying to use the build-system to solve an installation problem.
Whether you're packaging your project using dpkg, rpm or anything else, there should be a rule to enforce usage of setcap, which will set the capabilities of the installed binary using the Filesystem Extended Attributes (xattrs).
# Post-install rule example
setcap cap_net_raw=+pe /usr/bin/installed-binary
However, if you're installing a system daemon, you can count on the init script already having all the capabilities, so it's a matter of letting your process drop the unneeded ones.
Gentoo has a feature in portage, that prevents and logs writes outside of the build and packaging directories.
Checkinstall is able to monitor writes, and package up all the generated files after completion.
Autotools-generated makefiles honor the DESTDIR variable, which usually lets you redirect most of the filesystem activity to an alternate location.
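The DESTDIR convention is easy to demonstrate with a toy install rule; the entire install lands in a staging tree instead of the live filesystem (all paths here are examples):

```shell
cd "$(mktemp -d)"
# A one-rule Makefile whose install target honors DESTDIR
printf 'prefix = /usr/local\ninstall:\n\tinstall -D hello.sh $(DESTDIR)$(prefix)/bin/hello.sh\n' > Makefile
printf '#!/bin/sh\necho hello\n' > hello.sh
make install DESTDIR="$PWD/stage"     # nothing touches the real /usr/local
ls stage/usr/local/bin/hello.sh
```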
- How can I do this myself with the safety of the Gentoo sandboxing method?
- Can I use SELinux, rlimit, or some other resource-limiting API?
- What APIs are available to do this from C or Python?
Update0
The mechanism used will not require root privileges or any involved/persistent system modification. This rules out creating users and using chroot().
Please link to the documentation for APIs that you mention, for some reason they're exceptionally difficult to find.
Update1
This is to prevent accidents. I'm not worried about malicious code, only the poorly written variety.
The way Debian handles this sort of problem is to not run the installation code as root in the first place. Package build scripts are run as a normal user, and install scripts are run using fakeroot. This LD_PRELOAD library redirects permission-checking calls to make it look like the installer is actually running as root, so the resulting file ownership and permissions come out right (i.e., if you run /usr/bin/install from within the fakeroot environment, further stats from within the environment show proper root ownership), but in fact the installer is run as an ordinary user.
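The effect is easy to see interactively (this assumes the fakeroot package is installed):

```shell
cd "$(mktemp -d)"
# Inside the environment, chown appears to succeed and stat reports root
fakeroot sh -c 'touch f; chown root:root f; stat -c %U:%G f'   # root:root
# Outside the environment, the file is still owned by the ordinary user
stat -c %U:%G f
```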
Builds are also, in some cases (primarily for development), done in chroots, using e.g. pbuilder. This is admittedly easier on a binary distribution: each build using pbuilder reinstalls all dependencies beyond the base system, which acts as a test that all necessary dependencies are specified (this, rather than protection against accidental installs, is the primary reason for using a chroot).
One approach is to virtualize a process, similar to how wine does it, and reinterpret file paths. That's rather heavy duty to implement though.
A more elegant approach is to use the chroot() system call, which sets a subtree of the filesystem as a process's root directory. Create a virtual subtree, including /bin, /tmp, /usr, /etc as you want the process to see them, call chroot() with the virtual tree, then exec the target executable. Symbolic links within the tree cannot reference files outside it, since they are resolved relative to the new root. But certainly everything needed could be copied into the sandbox, and then, when it is done, checked for changes against the originals.
Maybe you can get the sandbox safety with regular user permissions, so that the process running the show has specific access only to specific directories.
chroot would be an option, but I can't figure out how to track attempted writes outside the new root.
Another idea would be along the lines of intercepting system calls. I don't know much about this, but strace is a start: try running a program under it and check whether you see something you like.
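As a tiny illustration of the strace idea (assuming strace is installed and ptrace is permitted in your environment), log a command's file-related syscalls and then grep the log for writes you did not expect:

```shell
cd "$(mktemp -d)"
strace -f -e trace=file -o trace.log touch demo-file
grep demo-file trace.log      # the openat/creat call that created the file
```

For a build, you would trace make instead of touch and filter the log for paths outside the build directory.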
edit:
Is using kernel modules an option? You could replace the write system call with your own, so you could prevent whatever you needed and also log it.
It sounds a bit like what you are describing is containers. Once you've got the container infrastructure set up, it's pretty cheap to create containers, and they're quite secure.
There are two methods to do this. One is to use LD_PRELOAD to hook the library calls that result in syscalls, such as those in libc, using dlsym/dlopen to look up and forward to the real implementations. This will not allow you to hook syscalls directly.
The second method, which allows hooking syscalls, is to run your executable under ptrace, which provides options to stop and examine syscalls when they occur. This can be set up programmatically to sandbox calls to restricted areas of the filesystem, among other things.
LD_PRELOAD cannot intercept syscalls, only libcalls?
Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
Write Yourself an Strace in 70 Lines of Code