How to check the vsyscall mode - linux

I am struggling to find out how to check how the [vsyscall] table is configured (to native or emulate). The setting should be set in a variable called vsyscall_mode. Can anyone shed any light on how to check this setting?
By re-running cat /proc/self/maps I have observed that the memory-mapped area for [vsyscall] does not change, whereas the [vdso] one does. Does this mean that the setting for vsyscall is set to native?
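For reference, the [vsyscall] line in my maps output always looks something like this:
$ grep vsyscall /proc/self/maps
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]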

vsyscall mode is set in the kernel configuration, so you can choose between native and emulation.
for fish-shell:
cat /usr/src/linux-headers-(uname -r)/.config | grep VSYSCALL
for bash:
cat /usr/src/linux-headers-$(uname -r)/.config | grep VSYSCALL
output on debian 8 (as example):
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_X86_VSYSCALL_EMULATION=y

The current kernel config is also usually available in the /proc/config.gz file.
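For example, a minimal check (assuming an x86-64 kernel built with CONFIG_IKCONFIG_PROC; the boot-time vsyscall= parameter, if present, overrides the compiled-in default):
zcat /proc/config.gz | grep VSYSCALL
grep -o 'vsyscall=[a-z]*' /proc/cmdline    # prints nothing if no override was passed at boot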

Related

linux kallsyms R symbol not showing

I want to find the kernel address of the system call table.
I usually do this by grepping for sys_call,
but while one system shows the address,
the other doesn't show the entry.
root@ubuntu:~# cat /proc/kallsyms | grep sys_call
ffffffff8122aa90 t proc_sys_call_handler
ffffffff81726432 t ret_from_sys_call
ffffffff81726644 T int_ret_from_sys_call
ffffffff81728146 t sysexit_from_sys_call
ffffffff81728386 t sysretl_from_sys_call
ffffffff8172858e t ia32_ret_from_sys_call
ffffffff81801400 R sys_call_table
ffffffff81809cc0 R ia32_sys_call_table
root@ubuntu:~#
On the other system there is no sys_call_table at all. Why isn't the R-type symbol showing?
/ $ cat /proc/kallsyms | grep sys_call
ffffffff8119c230 t proc_sys_call_handler
ffffffff817a1a57 t ret_from_sys_call
ffffffff817a1c50 T int_ret_from_sys_call
ffffffff817a2cb8 t sysexit_from_sys_call
ffffffff817a2ed8 t sysretl_from_sys_call
ffffffff817a30be t ia32_ret_from_sys_call
/ $
/ $
In what cases could this happen?
Some advice would be nice.
Thank you.
You should look at the kernel version in both cases; check it with uname -r.
sys_call_table was exported in early kernels (2.4.x) via an "EXPORT_SYMBOL(sys_call_table);" line in linux/kernel/ksyms.c; that export was later removed and the table was made static, as I understand it.
It has since been exported again in some of the more recent kernels (some version > 3.3.x). I would recommend digging into LXR to check the details.
You also need to check whether your current kernel is compiled with the option CONFIG_KALLSYMS_ALL=y
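A quick way to check both points (kernel version and the KALLSYMS options) on each system, assuming your distro ships its config under /boot:
uname -r
grep KALLSYMS /boot/config-$(uname -r)    # look for CONFIG_KALLSYMS=y and CONFIG_KALLSYMS_ALL=y
# or, if the config is built into the kernel:
zcat /proc/config.gz | grep KALLSYMS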

Beaglebone Linux: issues appending a line to a file

I am working to enable spi on my beaglebone black (Angstrom distribution), using instructions here.
I am at the point where I need to add BB-SPI1-01 to /sys/devices/bone_capemgr.*/slots to enable the drivers.
Issuing the command echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots or echo BB-SPI1-01 >> /sys/devices/bone_capemgr.*/slots, however, yields the error echo: Write error: file exists
Trying to add the line with nano also fails. I'm able to open the file and edit it, but when I save, it gives me Error writing slots: no such file or directory
I've set permissions on the file to 777.
Does anybody know why I cannot edit the file? If it's not possible, is there a workaround?
I, too, have battled with this dilemma while trying to port ILI9340C display stuff to the Beaglebone Black. The way /sys/devices/bone_capemgr.* works is that for anything you echo into its slots file, it goes and searches for a Device Tree overlay for that device, a new thing in Linux kernel 3.0 and higher. For anyone who does not know (it took me forever to find this), Device Trees are basically a driver that tells Linux how to deal with a device, but instead of containing any code they are simply a configuration file, per se, that tells Linux what to put where in order to talk to a device, and what to expect in return. That being said, BB-SPIx-01 refers to a compiled Device Tree overlay (a .dtbo built from a .dts) in /lib/firmware/ which points to the SPI device and tells spidev what to do with it.
BB-SPI1-01 happens to be connected to the HDMI port already for some audio thing (I think) and, therefore, unless you disable HDMI entirely, SPI1 is always tied up by the HDMI framer. This explains why writing BB-SPI1-01 to /sys/devices/bone_capemgr.*/slots fails. This is a special file, and when you write to it, a kernel process reads your input and proceeds to attempt to make a 'device' file elsewhere, and since BB-SPI1-01 is already enabled, that file already exists, and so the kernel process that handles those things returns an error and pipes it through whatever process initiated it, in this case, you, the user, typing echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots.
On the bright side, SPI0 is left unused. Therefore, in order to use it, all you have to do is enable it in userland. To do that, (and you have figured this out already, but for everyone else) type echo BB-SPI0-01 > /sys/devices/bone_capemgr.*/slots at the command line, and then just to be sure that spidev is running, type modprobe spidev as root. Now, to verify, type ls /dev | grep spi and see what comes up. /dev/spidevX.Y is your SPI bus, for me that would be /dev/spidev1.0.
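Condensed into commands (run as root; treat this as a sketch, since the capemgr path and the spidev numbering can vary between kernel versions):
echo BB-SPI0-01 > /sys/devices/bone_capemgr.*/slots
modprobe spidev
ls /dev | grep spi    # expect something like spidev1.0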
I'm sorry that was really long winded, but I'm culminating my research thus far into one spot in the hopes that it will help someone.
If you have any questions, please feel free to ask!
For those who are curious, while I haven't found the exact answer, I did find some more information.
The SPI1 interface on the Beaglebone Black can't be enabled unless the HDMI interface has been turned off, which I have not done. I'm instead using the SPI0 interface now. Interestingly, the same command works if BB-SPI0-01 is used instead of BB-SPI1-01. Therefore the error in question is probably not coming from the base command, but rather from the system's response to the command (it can't allocate the requested resources due to conflicts with HDMI).
While I haven't tested SPI1 with hdmi turned off, I can only assume that my errors would go away.
Might it be because you're trying to access more than one file at a time with echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots ?
Try selecting a single path to the slots file and see if that works.
Based on PyroAVR's answer, here is the concrete solution. You need to disable HDMI, that's easily done by editing the following file: /boot/uEnv.txt
You can uncomment the line which causes HDMI to be disabled by running the following command as root:
sed -i.bck '/cape_disable=capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN$/ s/^#//' /boot/uEnv.txt
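After running it, /boot/uEnv.txt should contain the now-uncommented line below (the exact wording can differ between images, so check your file rather than copying this verbatim), and a reboot is needed for it to take effect:
cape_disable=capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN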
As Mixaz mentioned in a comment, the real errors are found in the dmesg output; "no such file or directory" is a red herring, and even strace doesn't give any clues as to the real problem. In my case I found:
[26858.517893] bone_capemgr bone_capemgr: slot #5: override
[26858.517937] bone_capemgr bone_capemgr: Using override eeprom data at slot 5
[26858.517986] bone_capemgr bone_capemgr: slot #5: 'Override Board Name,00A0,Override Manuf,jc_gpio_test'
[26924.230357] bone_capemgr bone_capemgr: part_number 'jc_gpio_test', version 'N/A'
From that I guessed that it didn't like "0000" as a version number, changed it to "00A0" and recompiled, and then it worked.
Here's the Makefile I wrote to help automate the process, in case it helps.
# compile the overlay, copy it to /lib/firmware, and load it via the cape manager
%.install: %-00A0.dtbo
	cp -f $< /lib/firmware
	echo $* > /sys/devices/platform/bone_capemgr/slots
%-00A0.dtbo: %.dts
	dtc -O dtb -o $@ -b 0 -@ $<
Use it as make jc_gpio_test.install, assuming your .dts file name is jc_gpio_test.dts.
It turns out my guess was probably wrong: the change that more likely fixed it was adding the -00A0 part to the .dtbo file name. Apparently the "dash-version-number" suffix is required by the slot loader.

How can I get perf to find symbols in my program

When using perf report, I don't see any symbols for my program, instead I get output like this:
$ perf record /path/to/racket ints.rkt 10000
$ perf report --stdio
# Overhead Command Shared Object Symbol
# ........ ........ ................. ......
#
70.06% ints.rkt [unknown] [.] 0x5f99b8
26.28% ints.rkt [kernel.kallsyms] [k] 0xffffffff8103d0ca
3.66% ints.rkt perf-32046.map [.] 0x7f1d9be46650
Which is fairly uninformative.
The relevant program is built with debugging symbols, and the sysprof tool shows the appropriate symbols, as does Zoom, which I think is using perf under the hood.
Note that this is on x86-64, so the binary is compiled with -fomit-frame-pointer, but that's the case when running under the other tools as well.
This post is already over a year old, but since it came out at the top of my Google search results when I had the same problem, I thought I'd answer it here. After some more searching around, I found the answer given in this related StackOverflow question very helpful. On my Ubuntu Raring system, I then ended up doing the following:
Compile my C++ sources with -g (fairly obvious, you need debug symbols)
Run perf as
perf record -g dwarf -F 97 /path/to/my/program
This way perf is able to handle the DWARF 2 debug format, which is the standard format gcc uses on Linux. The -F 97 parameter reduces the sampling rate to 97 Hz. The default sampling rate was apparently too large for my system and resulted in messages like this:
Warning:
Processed 172390 events and lost 126 chunks!
Check IO/CPU overload!
and the perf report call afterwards would fail with a segmentation fault. With the reduced sampling rate everything worked out fine.
Once the perf.data file has been generated without any errors in the previous step, you can run perf report etc. I personally like the FlameGraph tools to generate SVG visualizations.
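If you also want flame graphs, a minimal pipeline using Brendan Gregg's FlameGraph scripts (assuming stackcollapse-perf.pl and flamegraph.pl are on your PATH) looks roughly like this:
perf record -g dwarf -F 97 /path/to/my/program    # record with DWARF call graphs as above
perf script | stackcollapse-perf.pl | flamegraph.pl > flamegraph.svg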
Other people reported that running
echo 0 > /proc/sys/kernel/kptr_restrict
as root can help as well, if kernel symbols are required.
In my case the solution was to delete the ELF files which contained cached symbols from previous builds and were messing things up.
They are in the ~/.debug/ folder.
You can always use the nm command.
Here is some sample output:
Ethans-MacBook-Pro:~ phyrrus9$ nm a.out
0000000100000000 T __mh_execute_header
0000000100000f30 T _main
U _printf
0000000100000f00 T _sigint
U _signal
U dyld_stub_binder
I had this problem too: I couldn't see any userspace symbols, but I saw some kernel symbols. I thought this was a symbol loading issue. After trying all the possible solutions I could find, I still couldn't get it to work.
Then I faintly remembered that
ulimit -u unlimited
is needed. I tried it, and it magically worked.
I found from this wiki that this command is needed when you use too many file descriptors.
https://perf.wiki.kernel.org/index.php/Tutorial#Troubleshooting_and_Tips
My final command was
perf record -F 999 -g ./my_program
I didn't need --call-graph.
Make sure that you compile the program using the -g option with gcc (cc), so that debugging information is produced in the operating system's native format.
Try to do the following and check if there are debug symbols present in the symbol table.
$ objdump -t your-elf
$ readelf -a your-elf
$ nm -a your-elf
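For example, a binary built with -g should still contain .debug_* sections and not be stripped; a quick sanity check:
$ readelf -S your-elf | grep -i debug    # expect sections like .debug_info, .debug_line
$ file your-elf                          # the output should end with "not stripped"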
How about your dev host machine? Is it also running an x86_64 OS?
If not, please make sure perf is cross-compiled, because perf depends on objdump and other tools in the toolchain.
I got the same problem with perf after overriding the name of my program via prctl(PR_SET_NAME)
As I can see your case is pretty similar:
70.06% ints.rkt [unknown]
The command you executed (racket) is different from the one perf has seen.
You can check the value of kptr_restrict by looking at /proc/kallsyms: if the addresses of the symbols in the output are all 0x000000, you can fix it with echo 0 > /proc/sys/kernel/kptr_restrict. After this, you should get the expected result from perf report.

Compressing the core files during core generation

Is there way to compress the core files during core dump generation?
If the storage space is limited in the system, is there a way of conserving it in case of need for core dump generation with immediate compression?
Ideally the method would work on older versions of linux such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
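For example, as root (a sketch; note that a later answer below found that some kernels pass the > and the file name as literal arguments to gzip, in which case a small wrapper script is needed instead):
echo '|/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz' > /proc/sys/kernel/core_pattern
cat /proc/sys/kernel/core_pattern    # verify the pattern was accepted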
For embedded Linux systems, the following approach works well to generate compressed core files in 2 steps.
step 1: create a script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - > "/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D to end the input)
As suggested by the other answer, the Linux kernel /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core dump to a program. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the pattern; however, that didn't work for me. I expect the reason is that on my system the kernel doesn't treat the > character as a redirection, but probably passes it as a parameter to gzip.
In order to avoid this problem, as others suggested, you can create your script in some location; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user (alternatively, you can obviously change the entire path):
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
Now this script will take 5 input parameters, concatenate them, and append them to the core path. The full paths must be specified in ~/crashes/core.sh, and the location of the script itself can also be changed. Now let's tell the kernel to use our executable with parameters when generating the file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path adjusted to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing .cpp file:
int main() {
    int *a = nullptr;
    int b = *a;  // dereferencing a null pointer -> SIGSEGV (core dumped)
}
After compiling and running there are 2 options, either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c shows the size limit for core files (set it to unlimited if needed)
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote. To rule out the other causes, I suggest first checking with a basic dump path; the following should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
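A quick end-to-end check of that fallback path might look like this (assuming the crashing program above was saved as crash.cpp, and running everything in the same shell so the ulimit applies):
g++ -std=c++11 -o crashme crash.cpp    # crash.cpp is the example crashing program above
ulimit -c unlimited
./crashme                              # expect "Segmentation fault (core dumped)"
ls -l /tmp/core.dump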
I know there is already an answer to this question, but it wasn't obvious to me why it didn't work "out of the box", so I wanted to summarize my findings; I hope it helps someone.

Location of Linux Kernel Module

Is there any utility that shows the location of a module I have loaded?
If you want to know the base memory address for a module in the kernel's virtual address space, it can be found as the last field in /proc/modules; search for the module in question:
$ grep '^ext3' /proc/modules
ext3 125513 1 - Live 0xf88ce000
If you want to know the file path it was loaded from, the original path is not actually stored anywhere, but you can ask modprobe to search for the module again and display the path using modprobe -l:
$ /sbin/modprobe -l ext3
/lib/modules/2.6.18-194.el5PAE/kernel/fs/ext3/ext3.ko
Assuming you haven't changed anything in the module search path in the intervening time, this should give you the original load path.
EDIT:
As of 2015, the information above is no longer correct (and not only because ext4 doesn't exist as a kernel module). Get information about the module, including the path of its image, with:
modinfo floppy
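The path is in the filename: field of the output; on my machine it looks something like this (the exact path depends on the kernel version):
$ modinfo floppy | grep filename
filename:       /lib/modules/<kernel-version>/kernel/drivers/block/floppy.ko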
No. This information is not retained when the module is loaded.
The information above isn't correct as of 2015.
modinfo will now give you information about the module. For example:
modinfo floppy
