Unable to compile a C program that includes <linux/leds.h> - linux

I am trying to compile a C file containing my LED wrapper functions, which includes <linux/leds.h>, by pointing gcc at the kernel-space headers:
gcc -I /usr/src/linux-headers-3.13.0-44-generic/include/ example.c
Compiling this floods the console with errors from the many header files that leds.h depends on. Can anyone please help me compile this C file, which uses kernel-space header files, in user space?
Thanks in advance. :)

This won't work.
First of all, don't use kernel-mode headers in user-mode programs, except for the sanitized ones provided for userspace (the ones installed under /usr/include/linux by the kernel's headers_install step). Kernel-mode headers depend on the kernel build system to work.
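To make the distinction concrete, here is a minimal sketch (the file name check_headers.c is mine, not from the question) that uses only a sanitized userspace header from /usr/include/linux, so it builds with a plain gcc check_headers.c and no kernel include paths:

#include <stdio.h>
#include <linux/version.h>   /* sanitized userspace header from /usr/include/linux */

int main(void)
{
    /* LINUX_VERSION_CODE packs the kernel version the headers were generated from. */
    printf("built against Linux headers %d.%d.%d\n",
           LINUX_VERSION_CODE >> 16,
           (LINUX_VERSION_CODE >> 8) & 0xff,
           LINUX_VERSION_CODE & 0xff);
    return 0;
}

Anything that pulls in <linux/leds.h>, by contrast, is using kernel-internal headers.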
Just out of curiosity, I tried including the raw kernel headers anyway, although I already knew why it won't work (tl;dr: I use the Ubuntu-patched 3.13.0-24 kernel):
$ cd /usr/src/linux-headers-3.13.0-24/
$ echo '#include <linux/leds.h>' | gcc -E -x c -o - - -Iinclude
The preprocessor claims that <asm/linkage.h> is missing, and, correct me if I'm wrong, that header is generated by the kernel build system.
If you want, you can solve this by creating a kernel module that uses <linux/leds.h> et al., exporting a userspace API from the module (usually done through /proc or /sys), and using that API to implement your usermode code's logic.
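A very rough sketch of that approach for a 3.x-era kernel (the module name led_wrapper and the /proc entry are placeholders I made up, and the proc interface differs on newer kernels):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <linux/leds.h>      /* fine here: this is kernel code */

/* Hypothetical read handler: hand some LED state back to userspace. */
static ssize_t led_wrapper_read(struct file *file, char __user *buf,
                                size_t count, loff_t *ppos)
{
    static const char msg[] = "led status placeholder\n";
    return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations led_wrapper_fops = {
    .owner = THIS_MODULE,
    .read  = led_wrapper_read,
};

static int __init led_wrapper_init(void)
{
    proc_create("led_wrapper", 0444, NULL, &led_wrapper_fops);
    return 0;
}

static void __exit led_wrapper_exit(void)
{
    remove_proc_entry("led_wrapper", NULL);
}

module_init(led_wrapper_init);
module_exit(led_wrapper_exit);
MODULE_LICENSE("GPL");

Your userspace program then just open()s and read()s /proc/led_wrapper, without touching kernel headers at all.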
Hope this helps!

Thanks KemyLand, you were right that we cannot use kernel-space header files in user space. But your approach didn't work for me at first: it asked for asm/linkage.h, so I added its path explicitly, then compilation stopped on another header file, and I did the same again. In the end I was stuck on errors inside header files I had never touched. I finally got the solution: the functions interfacing between kernel space and the hardware device have to live in a kernel module. I wrote a Makefile for it containing
obj-m := file_name.o
and compiled it with
make -C /usr/src/linux-headers-3.13.0-44-generic/ M=`pwd` modules
That generated four files: file_name.mod.o, file_name.o, file_name.ko and file_name.mod.c. Then I loaded the module as root with insmod file_name.ko; lsmod shows it once loaded, and rmmod file_name removes it again.

Related

Call Matlab from Intel Fortran (Linux)

I am trying to integrate a Matlab program I wrote into some Fortran code. I tried to follow the example Mathworks provides. But I can't get it to compile because I can't find the header files it requests.
Does anyone know of an example of someone getting it to work on Linux using an Intel compiler? I think the compiler might be part of the problem, because Matlab only supports GNU Fortran on Linux.
I realize this is a simple question; I just don't understand how to do anything in compiling more complicated than including multiple files with defined paths.
Disclaimer: I'm currently using OS X so I can only provide output from OS X but everything should transfer easily over to Linux due to the Unix base. I also don't have the Intel Fortran compiler on OS X (only the C/C++ compiler).
Note: You will need to substitute the paths I use for the correct paths on your system depending on your MATLAB installation directory.
This issue isn't specific to the Intel Compiler, I also receive errors with the GCC Fortran compiler.
$ gfortran fengdemo.F
fengdemo.F:1:0:
#include "fintrf.h"
^
Fatal Error: fintrf.h: No such file or directory
compilation terminated.
You can use the Unix locate command to find files.
$ locate fintrf.h
/Applications/Matlab R2014a.app/extern/include/fintrf.h
In the directory where fengdemo.F is, we can then pass the correct include directory using the -I option:
-I../../include/
However, this produces linking errors as we haven't specified where the libraries for fintrf.h can be found. We can do this with the -L option (you will need to replace maci64 with the correct option for Linux - I can't remember it off the top of my head but you should be able to see it in the bin directory)
-L../../../bin/maci64/
Now we need to tell it which libraries to use with -leng -lmx, so the completed command is
$ ifort fengdemo.F -I../../include/ -L../../../bin/maci64/ -leng -lmx
and it should compile correctly.
We aren't finished yet though as it won't execute. We need to set up our PATH and DYLD_LIBRARY_PATH environment variables correctly. Specifically we need to add the bin and bin/maci64 directories of our MATLAB installation to PATH
$ export PATH=$PATH:/Applications/Matlab\ R2014a.app/bin/maci64:/Applications/Matlab\ R2014a.app/bin
and the bin/maci64/ and sys/os/maci64/ to DYLD_LIBRARY_PATH
$ export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/Applications/Matlab\ R2014a.app/bin/maci64/:/Applications/Matlab\ R2014a.app/sys/os/maci64/
Note: On Linux DYLD_LIBRARY_PATH should be LD_LIBRARY_PATH. Thanks to Vladimir F for correcting me.
Now you can execute the program using
$ ./a.out

Objdump -S does not show the source code listing of Linux kernel module

I am trying to debug a crash in one of my kernel modules. I am trying to get a source code listing along with the objdump output, but it is not shown. Is there something I am missing?
mips-linux-objdump -S <filename.o> > temp
Most likely either a) all debugging information was stripped off the kernel module object file at some point during the build or b) even if the debugging information is there, objdump can't locate the source code files, in which case you might try to cd to where the source files are before running objdump.
You need to compile your kernel module with the debug information to have the interleaved source code in the dumped output. Recompile your kernel module with -g -ggdb for CFLAGS.

how can I check Linux kernel compiler optimisation level

I'm trying to verify which optimisation level (-O?) my Linux kernel is built with. How can I do that?
The only thing I can find is CONFIG_CC_OPTIMIZE_FOR_SIZE=y in the kernel config file. Does it imply -Os? Does it override anything (having multiple optimisations in one gcc line makes the last -O the winner)? I have found some parts of the kernel built with -O2, but too few lines for all of the kernel.
Where is such optimisation centrally set?
Note: I'm using CentOS 5.5.
Run with make V=1 and you can see the command lines in all their glory.
If your kernel config contains CONFIG_CC_OPTIMIZE_FOR_SIZE, you may assume it was compiled using -Os. See the kernel Makefile, e.g. at http://lxr.linux.no/linux+v3.12/Makefile#L573, for the place where this gets set; it also shows that -O2 is used if CONFIG_CC_OPTIMIZE_FOR_SIZE is not set.
As blueshift already said, building with make V=1 forces make to display the full compiler output including optimization flags.

What does this make command do?

make -C $(KERNEL_DIR) SUBDIRS=`pwd` modules
Can someone elaborate this?
The -C means "change to this directory before starting make." SUBDIRS is a variable used by the Linux kernel make system. SUBDIRS=`pwd` means that the kernel makefiles should look in the current directory, because pwd means "print working directory." And modules means to make modules.
It probably uses the kernel make system to build an out-of-tree module. Out-of-tree means a kernel module that is not shipped with the kernel.
In fact, I seem to recall seeing this command used as part of the Nvidia binary module installer.

linux 'cannot execute binary file' on every executable I compile, chmod 777 doesn't help

I am running Red Hat Linux 7.3 (old, I know), and for the past few months I've been learning assembly programming, writing small programs and compiling with nasm. For months, things have been going fine, and now for some unknown reason, I cannot execute any programs that I compile.
nasm file.s //used to work just fine, then I'd execute ./file
Now, when I run ./file, first I get "permission denied", which never used to happen before. Then, once I chmod 777 file, I get "cannot execute binary file".
I have NO IDEA why this is happening, but it is extremely frustrating since NOTHING I compile will run anymore.
Logging in as root doesn't change anything.
All suggestions are welcome, THANK YOU!!
nasm does not produce an executable, but just an object file (like gcc -c would). You still need to run the linker on it.
N.B.: “0777 is almost always wrong.”
Run the file command on your binaries and make sure they're identified correctly as executables.
Also try the ldd command. It will very likely fail for the exact same reason, but it's worth a shot.
This can happen if the file system you operate on is mounted with the noexec option. You could check that by doing mount | grep noexec and see if your current working directory suffers from that.
"Cannot execute binary file" is the strerror(3) message for the error code ENOEXEC. That has a very specific meaning: (quoting the manpage for execve(2))
[ENOEXEC] The new process file has the appropriate access
permission, but has an unrecognized format
(e.g., an invalid magic number in its header).
So what that means is, your nasm invocation is not producing an executable, but rather something else. As John Kugelman suggests, the file command will tell you what it is (user502515 is very likely to be right that it's an unlinked object file, but I have never used nasm myself so I don't know).
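If you want to see the error code for yourself, here is a small sketch that calls execv() directly on such a file and prints the resulting errno (the path not_an_executable.o is a placeholder for whatever your nasm run produced, and it needs its execute bit set so the failure is about the format rather than permissions):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    char *const argv[] = { "not_an_executable.o", NULL };

    /* execv() only returns if it failed; for an unlinked object file the
       kernel refuses to load the image and errno ends up as ENOEXEC. */
    execv("./not_an_executable.o", argv);
    printf("execv failed: errno=%d (%s)\n", errno, strerror(errno));
    return 1;
}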
BTW, you'll do yourself a favor if you learn GAS/"AT&T" assembly syntax now, rather than when you need to rewrite your assembly code for an architecture that doesn't do Intel bizarro-world syntax. And I do hope you're using assembly only for inner-loop subroutines that actually need to be hand-optimized.
This just happened to me. After running
file <executable name>
it output <file name> ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
And the problem was that I was trying to run a 64-bit app on a 32-bit machine!
You may also try looking in /var/log for some change to the system from around the time this started happening.
