I'm trying to install the Xeon Phi coprocessor. The specific behavior is probably related to the tools involved, but my question is of a more general nature.
When I execute a command as root, I get a segmentation fault. When I execute it as root but (in my opinion unnecessarily) via sudo, it works:
i72:~ # whoami
root
i72:~ # micctrl -s
Segmentation fault
i72:~ # sudo micctrl -s
[no segfault]
What differences are there in the environments micctrl is being run in?
(Edit: I think we ruled out environment variables as a cause; see below.)
The system is SLES 11.2.
Thank you!
sudo removes LD_LIBRARY_PATH and LD_PRELOAD from the environment (I suspect it does so for root as well as for ordinary users).
This may cause different libraries to be loaded for the program.
sudo can be configured as to which variables it resets or clears; see http://brandonhutchinson.com/wiki/Sudo_and_environment_variables
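An easy way to test that hypothesis, by the way, is to diff the two environments directly (a minimal sketch; the file names are arbitrary):
$ env | sort > /tmp/env.direct
$ sudo env | sort > /tmp/env.sudo
$ diff /tmp/env.direct /tmp/env.sudo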
It turns out that sudo merely hides the "Segmentation fault" message. The crash still happens, but the message isn't displayed on the terminal: it is the interactive shell that prints it for its own children, and under sudo the crashing process isn't a direct child of the shell. We found out because micctrl never gave us any output, even when it should have.
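A quick way to confirm such a hidden crash is to check the exit status; a process killed by SIGSEGV (signal 11) conventionally yields 128 + 11 = 139, though the exact behavior depends on the sudo version:
$ sudo micctrl -s; echo $?
139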
Edit: Also, in case someone runs into this problem with micctrl: in our case the Phi was not properly recognized by the system. lspci found it, but it was not listed in /sys/class/mic.
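To check both levels at once, something along these lines should do (the exact lspci description string may differ between cards):
$ lspci | grep -i 'co-processor'
$ ls /sys/class/mic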
Related
I'm running Arch Linux and am attempting to run DaVinci Resolve. Initially startup reported nothing; it just timed out and closed. Then I found a recommendation to run it with /opt/resolve/bin/resolve, which got me an error saying:
libGLU.so.1: cannot open shared object file: No such file or directory
This has sent me on a wild goose chase trying to install libGLU.so.1 on my system. I heard somewhere it is part of Mesa, so I ran sudo pacman -S mesa, and I've tried to find an AUR package that might have it, but no luck. Even trying variations like yay libGLU and yay libGLU-mesa has turned up nothing so far.
Additionally, find / -name 'libGLU*' returned nothing even when run with sudo, meaning the library isn't already on my system in the wrong directory.
This might unfortunately be an instance where I download the file and place it where it needs to go, but that's probably not in the best interest of my system's long-term health.
I'm probably fairly novice compared to most others on Linux, but I think I've got a lot of the basics down. I would love any insight you may have on this issue!
While an outdated forum post said that /usr/lib/libGLU.so.1 is owned by the mesa package, it is currently owned by glu.
pacman -S glu ought to give you the needed library.
For future reference, you can reverse-search from filename to package using pkgfile, which works even if you don't have the respective files/packages locally.
https://wiki.archlinux.org/index.php/pkgfile
$ sudo pkgfile --update
$ pkgfile libGLU.so.1
extra/glu
Alternatively there's the built-in pacman -F, but it's generally slower than pkgfile.
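For completeness, the pacman -F route looks like this; note that the file database has to be synced separately from the regular package database:
$ sudo pacman -Fy
$ pacman -F libGLU.so.1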
Currently I'm experimenting with the Cell/BE CPU under Linux. What I'm trying to do is run simulations in the near future, e.g. of the weather or of black holes.
The problem is that Linux only discovers the main CPU of the Cell (the PPE); the SPEs (seven should be available to Linux) are "sleeping". They just don't work out of the box.
What works is the PPE, which the OS recognizes as a single-core, two-threaded CPU. Also, the SPEs are shown at every boot (as small penguins with a red "PPE" in them), but afterwards they show up nowhere.
Is it possible to "free" these specialised cores for use by the Linux OS? If so, how?
As no one seems to be interested in or able to answer this question, I'll provide the details myself.
In fact, a workaround exists:
First, create an entry point for the SPUFS:
$ sudo mkdir /spu
That directory is the mount point for the filesystem. So that you won't have to mount it manually after every reboot, add this line to /etc/fstab:
spufs /spu spufs defaults 0 0
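If you'd rather test right away without rebooting, mounting by hand should also work (a sketch mirroring the fstab entry above):
$ sudo mount -t spufs spufs /spu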
Now reboot and test to make sure the SPUFS is mounted (in a terminal):
spu-top
You should see the 7 SPEs running with 0% load average.
Now Google for the following package to get the runtime library and headers you need for SPE development:
libspe2-2.3.0.135.tar.gz
You should find it on the first hit. Just unpack, build, and install it:
./configure
make
sudo make install
You can ignore the build warnings (or fix them if you have obsessive compulsive disorder).
You can use pkg-config to find the location of the runtime and headers, though they are under /usr/local if I recall correctly.
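For example, assuming the package installs a libspe2.pc file (I haven't verified the module name):
$ pkg-config --cflags --libs libspe2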
You will of course need the gcc-spe compiler and the rest of the PPU and SPU toolchains, but those you can install with apt-get, as they are in the repos.
Source: comment by Exillis via redribbongnulinux.000webhostapp.com
I am working in a virtual environment, trying to start open-vm-tools in a chroot environment.
I tested the chroot with bash and it seems to work fine.
I used ./configure --options --prefix=/home/chroot_env to install the program; then, using ldd on vmtoolsd, I copied the corresponding libraries into the chroot's /lib directory.
Now when I start chroot /home/chroot_env /bin/vmtoolsd, nothing happens; the chroot returns immediately. Launching the same binary in the normal environment does work.
Does someone have an idea why it isn't working? The correct libraries are there, and the chroot works with bash.
EDIT: strace showed that vmtoolsd is trying to access /dev/console. I added mount --bind /dev/ /home/chroot_env/dev/, but it is still failing.
EDIT 2: another strace run showed it was looking for another plugin loaded dynamically; I added it and it worked. Conclusion: strace is great for debugging such issues!
When you run a program and nothing happens, you can always run it with strace in order to see which syscalls are made. This is an easy way to obtain the list of the files (regular or not) that are opened. In your case, check that your program doesn't try to access a file that is not in the chroot.
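For example, something along these lines prints every file-related syscall, including those of child processes (on older kernels open may appear instead of openat):
$ sudo strace -f -e trace=open,openat,stat chroot /home/chroot_env /bin/vmtoolsd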
On CentOS 6.5 I installed zssh with yum install zssh, but when I execute zssh, it gives an error: out of pty's.
What does this mean, and how can I solve it?
You can see the list of used ptys with
ls /dev/pts
The maximum number of ptys is given by
cat /proc/sys/kernel/pty/max
That value can be configured in
/etc/sysctl.conf
(see man pty)
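For example, to raise the limit persistently (4096 is an arbitrary value picked for illustration):
$ echo 'kernel.pty.max = 4096' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p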
Note that some versions of the kernel were buggy in this area.
Ptys, or pseudo-terminals, are the 'channels' through which a process interacts with the user's console (keyboard and screen).
There seems to be some weird library-mismatch bug that crops up in some binary distributions; see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769366. It has not been tracked down, but a simple recompile seems to be a workaround.
I am debugging a program that makes use of libnetfilter_queue. The documentation states that a userspace queue-handling application needs the CAP_NET_ADMIN capability to function. I have done this using the setcap utility as follows:
$ sudo setcap cap_net_raw,cap_net_admin=eip ./a.out
I have verified that the capabilities are applied correctly as a) the program works and b) getcap returns the following output:
$ getcap ./a.out
./a.out = cap_net_admin,cap_net_raw+eip
However, when I attempt to debug this program using gdb (e.g. $ gdb ./a.out) from the command line, it fails because it doesn't have the required permissions. The debugging functionality of gdb otherwise works perfectly and debugs as normal.
I have even attempted to apply these capabilities to the gdb binary itself, to no avail. I did this because it seemed (as documented in the man pages) that the "i" (inheritable) flag might allow the debuggee to inherit the capability from the debugger.
Is there something trivial I am missing or can this really not be done?
I ran into the same problem, and at first I thought, like the answer above, that maybe gdb ignores the executable's capabilities for security reasons. However, after reading the source code, and even using Eclipse to debug gdb itself while it was debugging my ext2fs-prog (which opens /dev/sda1), I realized that:
gdb is not special; it is a program like any other. (Just as in The Matrix: even the agents obey the same physical laws, gravity etc., except that they are all door-keepers.)
gdb is not the parent process of the debugged executable; it is the grandparent.
The true parent process of the debugged executable is the shell, i.e. /bin/bash in my case.
So the solution is very simple: apart from adding cap_net_admin,cap_net_raw+eip to gdb, you also have to apply it to your shell, i.e. setcap cap_net_admin,cap_net_raw+eip /bin/bash.
The reason you also have to do this to gdb is that gdb is the parent process of /bin/bash, which in turn creates the debugged process.
The actual command line inside gdb is like the following:
/bin/bash exec /my/executable/program/path
And this is the parameter passed to vfork inside gdb.
For those who have the same problem: you can bypass it by executing gdb with sudo.
A while ago I ran into the same problem. My guess is that running the debugged program with the additional capabilities is considered a security issue.
Your program has more privileges than the user who runs it. With a debugger, a user can manipulate the program's execution. So if the program ran under the debugger with the extra privileges, the user could use those privileges for purposes other than those the program intended. That would be a serious security hole, because the user does not have those privileges in the first place.
For those running GDB through an IDE, sudo-ing GDB (as in @Stéphane J.'s answer) may not be possible. In this case, you can run:
sudo gdbserver localhost:12345 /path/to/application
and then attach your IDE's GDB instance to that (local) GDBServer.
In the case of Eclipse CDT, this means making a new 'C/C++ Remote Application' debug configuration, then under the Debugger > Connection tab, entering TCP / localhost / 12345 (or whatever port you chose above). This lets you debug within Eclipse, whilst your application has privileged access.
I used @NickHuang's solution until, with one of the system updates, it broke systemd services (too many capabilities on bash for systemd to start it, or some such). I switched to leaving bash alone and instead passing a command to gdb to have it invoke the executable directly. The command is:
set startup-with-shell off
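You can also pass it on the command line, or put it in ~/.gdbinit so it's always on:
$ gdb -ex 'set startup-with-shell off' ./a.out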
OK, so I struggled a bit with this so I thought I'd combine answers and summarise.
The easy solution is just to sudo gdb as suggested, but be a bit careful: what you're doing there is running the debugged program as root. This may well cause it to behave differently than when you run it from the command line as a normal user, which could be a bit confusing. Not that I would EVER fall into this trap... Oopsies.
That approach is fine if you normally run the debugged program as root with sudo, OR if the debugged program has the setuid bit set. But if the debugged program runs with POSIX capabilities (setcap/getcap), then you need to mirror those more granular permissions in bash and gdb, as Nick Huang suggested, rather than just brute-forcing permissions with sudo.
Doing anything else may lead you to a bad place of extreme learning.