gdb appears to ignore executable capabilities - linux

I am debugging a program that makes use of libnetfilter_queue. The documentation states that a userspace queue-handling application needs the CAP_NET_ADMIN capability to function. I have done this using the setcap utility as follows:
$ sudo setcap cap_net_raw,cap_net_admin=eip ./a.out
I have verified that the capabilities are applied correctly as a) the program works and b) getcap returns the following output:
$ getcap ./a.out
./a.out = cap_net_admin,cap_net_raw+eip
However, when I attempt to debug this program using gdb (e.g. $ gdb ./a.out) from the command line, it fails on account of not having the correct permissions set. Otherwise the debugging functionality of gdb works perfectly and debugging proceeds as normal.
I have even attempted to apply these capabilities to the gdb binary itself, to no avail. I did this because it seemed (as documented in the manpages) that the "i" (inheritable) flag might allow the debuggee to inherit the capability from the debugger.
Is there something trivial I am missing or can this really not be done?

I ran into the same problem and at first I thought, as above, that maybe gdb ignores the executable's capabilities for security reasons. However, after reading the source code and even using Eclipse to debug gdb itself while it was debugging my ext2fs-prog (which opens /dev/sda1), I realized that:
gdb is not special compared to any other program. (Just as in The Matrix, even the agents obey the same physical laws, gravity and so on, except that they are all door-keepers.)
gdb is not the parent process of the debugged executable; it is the grandparent.
The true parent process of the debugged executable is the shell, i.e. /bin/bash in my case.
So the solution is very simple: apart from adding cap_net_admin,cap_net_raw+eip to gdb, you also have to apply it to your shell, i.e. setcap cap_net_admin,cap_net_raw+eip /bin/bash
The reason you also have to do this to gdb is that gdb is the parent process of /bin/bash, which in turn creates the debugged process.
The actual command line executed inside gdb looks like the following:
/bin/bash -c "exec /my/executable/program/path"
and this is what gdb passes when it vforks the child.
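Putting the pieces together, the full sequence looks roughly like this (the gdb path below is an assumption; adjust both paths to your system):
$ sudo setcap cap_net_admin,cap_net_raw+eip /usr/bin/gdb   # assumed gdb location
$ sudo setcap cap_net_admin,cap_net_raw+eip /bin/bash
$ gdb ./a.out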

For those who have the same problem, you can bypass it by running gdb with sudo.
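For example, with the same a.out as in the question:
$ sudo gdb ./a.out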

A while ago I did run into the same problem. My guess is that running the debugged program with the additional capabilities is a security issue.
Your program has more privileges than the user who runs it. With a debugger, a user can manipulate the program's execution. So if the program ran under the debugger with the extra privileges, the user could use those privileges for purposes other than those the program intended. This would be a serious security hole, because the user does not have those privileges in the first place.

For those running GDB through an IDE, sudo-ing GDB (as in @Stéphane J.'s answer) may not be possible. In this case, you can run:
sudo gdbserver localhost:12345 /path/to/application
and then attach your IDE's GDB instance to that (local) GDBServer.
In the case of Eclipse CDT, this means making a new 'C/C++ Remote Application' debug configuration, then under the Debugger > Connection tab, entering TCP / localhost / 12345 (or whatever port you chose above). This lets you debug within Eclipse, whilst your application has privileged access.

I used @NickHuang's solution until, with one of the system updates, it broke systemd services (too many capabilities on bash for systemd to start it, or some such). I switched to leaving bash alone and instead passing a command to gdb so it invokes the executable directly. The command is
set startup-with-shell off
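You can type it at the gdb prompt before run, pass it with -ex, or put it in ~/.gdbinit so it always applies; a rough sketch, reusing the a.out from the question:
$ gdb -ex 'set startup-with-shell off' ./a.out
$ echo 'set startup-with-shell off' >> ~/.gdbinit   # make it permanent for your user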

OK, so I struggled a bit with this so I thought I'd combine answers and summarise.
The easy solution is just to sudo gdb as suggested, but be a bit careful. What you're doing here is running the debugged program as root. This may well cause it to behave differently than when you run it from the command line as a normal user. Could be a bit confusing. Not that I would EVER fall into this trap... Oopsies.
This will be fine if you're running the debugged program as root with sudo OR if the debugged program has the setuid bit set. But if the debugged program is running with POSIX capabilities (setcap / getcap) then you need to mirror these more granular permissions in bash and gdb as Nick Huang suggested rather than just brute forcing permissions with 'sudo'.
Doing anything else may lead you to a bad place of extreme learning.

Related

Running gvfs after building

I am trying to run a local build of gvfs. I have followed the Newcomers document to set up a working build environment, built gvfs from sources and am now trying to figure out how to run it.
The docs have instructions on running applications or the GNOME shell, which say I need to kill the current instance, then launch the newly-built binary with jhbuild run, as in:
$ killall gnome-weather
$ jhbuild run gnome-weather
or, in the case of the shell,
$ jhbuild run gnome-shell --replace
For gvfs, I see that it spawns a bunch of processes (all children of PID 1, running under my account), the first of them (lowest PID) being gvfsd. So I tried the following:
$ killall gvfsd
$ jhbuild run gvfs
Which gives me the error message:
jhbuild run: Unable to execute the command 'gvfs': [Errno 2] No such file or directory
If instead I try
$ jhbuild run gvfsd
I get the same message. The same happens when I try either of the above with --replace.
Since gvfs is a daemon rather than an application, I searched around a bit and came across this post, which suggests launching daemons with
jhbuild run dbus-launch --exit-with-session name-of-daemon
No joy either... no matter whether I use gvfs or gvfsd for the name, I get the error message
Couldn't exec gvfs: No such file or directory
(reporting the name I specified in the command).
Is this the correct way to launch gvfs at all? If not, what is? If it is, how can I find out what's going wrong?
EDIT: Apparently, the code I intend to modify is part of the gvfs-mtp-volume-monitor binary – but essentially the same goes here. How do I launch my own version of the binary rather than the one that came with my OS distro?
jhbuild run can be used for gvfs in the same manner.
For gvfsd do the following:
jhbuild run ~/jhbuild/install/libexec/gvfsd -r
The -r switch tells gvfsd to replace any running version. gvfsd will also start gvfsd-fuse if it was built and you didn't disable it via a command-line switch.
You will also need to replace any volume monitors (and other processes you need), such as:
killall gvfs-mtp-volume-monitor
jhbuild run ~/jhbuild/install/libexec/gvfs-mtp-volume-monitor
Care must be taken with anything that is invoked over dbus:
Namespaces may change between versions. If that happened between the version shipped with your OS and the current one, the latter will not work unless you tweak your dbus config to reflect that.
If dbus is used to spawn processes, it will fall back to the binaries shipped with your OS. Again you would need to modify your dbus config (specifically .service entries) to point to your binaries.
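As a purely hypothetical illustration (the service name, install prefix and file location below are assumptions; check your own build and distro), a session D-Bus .service entry redirected at your build might look like:
# hypothetical example - adjust Name and Exec to your build
[D-BUS Service]
Name=org.gtk.vfs.Daemon
Exec=/home/user/jhbuild/install/libexec/gvfsd
placed somewhere your session bus searches, e.g. ~/.local/share/dbus-1/services/.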

Starting a program in a chroot environment returns immediately

I am working in a virtual environment, trying to start Open VM Tools in a chroot environment.
I tested with bash and it seems to work fine.
I used ./configure --options --prefix=/home/chroot_env to install the program, then, using ldd on vmtoolsd, I copied the corresponding libraries to the /lib directory.
Now when I start chroot /home/chroot_env /bin/vmtoolsd, nothing happens; the chroot returns immediately. Launching the same binary in the normal environment does work.
Does anyone have an idea why it isn't working? The correct libraries are there, and it works with bash.
EDIT: strace showed that vmtoolsd is trying to access /dev/console; I added mount --bind /dev/ /home/chroot_env/dev/ but it is still failing.
EDIT2: another strace showed it was looking for another plugin that is loaded dynamically; I added it and it worked. Conclusion: strace is great for debugging such issues!
When you run a program and nothing happens, you can always run it with strace in order to see which syscalls are made. This is an easy way to obtain the list of the files (regular or not) that are opened. In your case, check that your program doesn't try to access a file that is not in the chroot.
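For example (the chroot path is the one from the question; -f follows child processes and -e trace=open,openat limits output to file opens, so missing files show up as ENOENT):
$ strace -f -e trace=open,openat chroot /home/chroot_env /bin/vmtoolsd 2>&1 | grep ENOENT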

Different environment when running sudo as root?

I'm trying to install the Xeon Phi coprocessor. The specific behavior is probably related to the tools involved - my question is of a more general nature.
When I execute a command as root, I get a segmentation fault. When I execute it as root but (in my opinion unnecessarily) use sudo, it works:
i72:~ # whoami
root
i72:~ # micctrl -s
Segmentation fault
i72:~ # sudo micctrl -s
[no segfault]
What differences are there in the environments micctrl is being run in?
(Edit:) I think we ruled out environment variables as an option below.
The system is a SLES 11.2.
Thank you!
sudo removes LD_LIBRARY_PATH, LD_PRELOAD from the environment (I suspect it does it for root as well as ordinary users).
This may cause different libraries to be loaded for the program.
sudo can be configured as to which variables it resets or clears - see http://brandonhutchinson.com/wiki/Sudo_and_environment_variables
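A quick sketch for comparing the two environments side by side (what actually gets stripped depends on your sudoers env_reset/env_keep settings):
i72:~ # diff <(env | sort) <(sudo env | sort)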
Turns out that sudo just hides the "Segmentation fault" message. It still happens, but doesn't get displayed on the terminal (presumably because that message is printed by the shell for its own children, and with sudo in between the shell's direct child is sudo, not micctrl). We found out because micctrl never gave us any output, even when it should have.
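One way to confirm a crash that the terminal hides is to check the exit status and the kernel log (assuming the kernel logs user-space segfaults, as it typically does by default):
i72:~ # sudo micctrl -s; echo "exit status: $?"
i72:~ # dmesg | grep -i segfault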
Edit: Also, if someone should run into the problem with micctrl: In our case, the Phi was not properly recognized by the system. lspci found it, but it was not listed in /sys/class/mic.

Is it possible to attach to an already running gdb process?

Good morning, I started a gdb debug session several hours ago. Is it possible to use gdb to attach to a process already being debugged by gdb?
I tried to attach as root but I get the following error message:
[root@localhost lirh5g_deb]# gdb ./MatchUpAccurate.exe 12327
ptrace: Operation not permitted.
/home/frank/DQT/MatchUpTest/lirh5g_deb/12327: No such file or directory.
We are using CentOS Linux version 5.5. Thank you.
Unfortunately, not directly. Your only option, if you didn't use screen/tmux, is to search for a tty hijacker (it's possible to "steal" ttys - this is an ugly solution though) and grab the tty which has your existing gdb session.

problems during linux kernel init

What are the possible culprits for crashes during kernel init?
I am running a kernel that has an initramfs; the inittab is very basic: rcS (as sysinit) and getty (respawn). While booting I don't get any error message, but init gives me this message:
S0 respawning too fast: disabled for 5 minutes, where S0 is actually the respawn::getty line (it seems getty keeps crashing). Also, none of the messages generated by rcS are seen on the console (I assume the rcS commands also crash).
If I force the kernel to go to /bin/sh (instead of /init) I can call rcS manually and I get no errors; the same happens for getty (if I call getty with the same parameters as in inittab, it works fine).
I am wondering what the differences are between the way init spawns processes and the way /bin/sh does.
Some OSes log init respawns to wtmp; you might want to check there. Turning up your syslog verbosity might also help.
When you kick off getty via /bin/sh, does it stay running? AFAIK, the trick with init respawn is that the PID it generates is monitored and if it goes down it kicks off another one.
The stock /bin/sh is not built statically, nor is getty. You need to look at the shared library dependencies of /bin/sh and getty and check that all the libraries are present inside the initramfs.
You can use ldd or 'readelf -a' to see the shared library dependencies.
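For instance (the getty path is an assumption; it may live elsewhere, especially on busybox-based systems):
$ ldd /bin/sh
$ readelf -d /sbin/getty | grep NEEDED   # assumed getty location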
Maybe nothing is setting up /dev/tty1, /dev/tty2, etc, but stuff is running ok on /dev/console (which is not the same thing as /dev/tty1). If you're depending on a /dev directory being in your initramfs or root filesystem, check those.
Probably the main difference between init=/bin/sh and letting init spawn stuff is the /dev/console vs. /dev/ttyx. I can't think of anything else that would be relevant. Keep in mind that the initramfs does run first, I think.
And BTW, you're obviously past the kernel init stage if init(8) or /bin/sh can run.
