The Jungo WinDriver needs a linux symbolic link, what does it mean? - linux

Its manual says:
To run GUI WinDriver applications (e.g., DriverWizard [5]; Debug Monitor [7.2]) you must also
have version 5.0 of the libstdc++ library — libstdc++.so.5. If you do not have this file, install it from the relevant RPM in your Linux distribution (e.g., compat-libstdc++).
Before proceeding with the installation, you must also make sure that you have a linux symbolic link. If you do not, create one by typing: /usr/src$ ln -s <target kernel>/ linux
For example, for the Linux 2.4 kernel type :
/usr/src$ ln -s linux-2.4/ linux
What does this symbolic link mean? What do <target kernel> and linux represent?
If I install WinDriver on Ubuntu 13.10, how should I specify these two parameters?

When installing WinDriver on a Linux machine, you must make sure that you are compiling WinDriver with the same header files that were used to build your kernel. #uname -a will tell you your kernel version number.
You should verify that the directory /usr/src/linux (normally a symbolic link) is pointing to the correct kernel header sources and that the header files use exactly the same version numbers as your running kernel.
Here <target kernel> refers to the location of the kernel header sources, normally a directory named after the Linux kernel version number, and linux is the name of the symbolic link that points to it.
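For example, a quick check of your current setup (a sketch):
$ uname -r                 # version of the running kernel
$ ls -l /usr/src/linux     # where the symlink currently points, if it exists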
To fix this:
Become super user: $ su
Change directory to /usr/src/: # cd /usr/src/
Delete the previous link you created (if any): # rm linux
And create a new symbolic link: # ln -s linux-2.4/ linux
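On Ubuntu 13.10 there is no linux-2.4 style source tree; the matching kernel headers live under /usr/src/linux-headers-$(uname -r) once the headers package is installed. A sketch of what the link would look like there (assuming the stock Ubuntu header packages):
$ sudo apt-get install linux-headers-$(uname -r)    # headers matching the running kernel
$ cd /usr/src
$ sudo rm -f linux                                  # remove any stale link
$ sudo ln -s linux-headers-$(uname -r) linux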
I recommend following the Linux installation procedure from the WinDriver manual at:
http://www.jungo.com/st/support/documentation/windriver/11.5.0/wdpci_manual.mhtml/wd_install_process.html#wd_install_linux
Regards,
Nadav, Jungo support manager

Related

Use Linux setcap command to set capabilities during Yocto build

I'm using Yocto 1.8 to build a linux system.
I need to use the command "setcap" to set files capabilities during build, which is introduced via libcap package recipe: http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-support/libcap/libcap_2.25.bb?h=master
The problem is that the recipe provides the libcap package, which contains only the library, plus a subpackage called libcap-bin which contains the binaries I need. But I couldn't build or use a libcap-bin-native package inside my recipe as a dependency (using the DEPENDS variable), so every time I call the "setcap" binary, Yocto uses the host binary (Ubuntu 14.04 64-bit) rather than one from the build sysroot (since it isn't there).
I need to know how to include the native binaries built from libcap-bin package in my native sysroot buildsystem to be used during recipe execution.
Example recipe to use setcap command:
DESCRIPTION = "Apply CAPs on files"
SECTION = "bin"
LICENSE = "CLOSED"
do_install() {
    install -d ${D}${bindir}
    touch ${D}${bindir}/testacl
}
DEPENDS = "libcap libcap-native"
#New task will be added to each recipe to apply attributes inside ipks
fakeroot do_setcaps() {
    setcap 'cap_sys_admin,cap_sys_rawio+ep' ${WORKDIR}/packages-split/${PN}${bindir}/testacl
}
#Adding the new task just before do_package_write_ipk task
addtask setcaps before do_package_write_ipk after do_packagedata
This recipe works fine, except that it uses the setcap command from my host system (Ubuntu 14.04 64-bit), which is located at /sbin/setcap.
The dependency package libcap-native only includes the library files inside my native sysroot, but not the binaries.
If I used this inside my recipe:
DEPENDS = "libcap-bin"
I got this error:
ERROR: Nothing PROVIDES 'libcap-bin'
I also saw this thread talking about the same topic:
Linux capabilities with yocto
But that uses Yocto > 2.3, and I'm using Yocto 1.8 and can't update it right now.
Any help?
PS: I already updated my Yocto build system to preserve ACLs and extended attributes during IPK creation, and it's working: they are preserved inside the IPK, inside the rootfs, and on the target after flashing.
I found the solution.
I had to add this to the libcap recipe
PACKAGECONFIG_class-native = "attr"
The generated binaries (setcap and getcap) depend on libattr, and this has to be configured manually for the native variant.
I found that it's already configured for the target package
PACKAGECONFIG ??= "attr ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam', '', d)}"
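If you prefer not to edit the core recipe, the same line can go in a bbappend in your own layer. A minimal sketch (the layer path below is hypothetical):
# meta-mylayer/recipes-support/libcap/libcap_%.bbappend  (hypothetical path)
PACKAGECONFIG_class-native = "attr"
After that, rebuilding the native package (bitbake libcap-native) should place setcap into the native sysroot.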
Sorry for disturbing.
I can't comment yet, so I'll comment here.
The command setcap should be provided by libcap-native. And please double check whether it exists in tmp/work/x86_64-linux/libcap-native/2.25-r0/image/:
$ find tmp/work/x86_64-linux/libcap-native/2.25-r0/sysroot-destdir/ -name setcap
tmp/work/x86_64-linux/libcap-native/2.25-r0/sysroot-destdir/buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap
You can find setcap here after removing the prefix:
$ ls /buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap
/buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap

building /lib/modules/$(uname -r)/build while compiling a kernel

I am cross-compiling a 3.4.0 kernel for an embedded device. I would then like to install the compat-wireless drivers, which require the /lib/modules/3.4/build directory and its sub-files. Could anyone explain how I can create that directory, so that when I do INSTALL_MOD_PATH=newmodules make modules_install it also populates the /lib/modules/$(uname -r)/build directory? I would appreciate a clear explanation.
I am using a Debian distro. I know I can install kernel headers with apt-get install linux-headers-$(uname -r), but I doubt that would be a good idea since the kernel sources might not be identical.
Typically /lib/modules/$(uname -r)/build is a soft link to the directory where you performed the build. So the way to do this is to simply do a
make modules_install INSTALL_MOD_PATH=/some/root/
in the build directory of the kernel, where /some/root is where you want your cross-compiled pieces to end up. This will create a link to your kernel build path in /some/root/lib/modules/$(uname -r) ... verify that.
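For example, a quick check (a sketch; the directory is named after the cross-compiled kernel's version string, e.g. 3.4.0, not the host's uname -r):
$ ls /some/root/lib/modules/
$ readlink /some/root/lib/modules/3.4.0/build    # should print your kernel build directory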
Now when you build the compat_wireless drivers specify the kernel build directory in the Makefile as /some/root using the KLIB_BUILD variable (read the Makefile)
make modules KLIB_BUILD=/some/root/lib/modules/$(uname -r)/build
this should do the trick for you.
EDIT A
In answer to your comment below:
Keep "newmodules" outside the kernel directory it's a bad idea to put it in the kernel directory. so mkdir newmodules somewhere like /home/foo or /tmp or something. This is one of the reasons your build link is screwed up
ALSO .../build is a soft link /to/kernel/build/location it will only copy over as a soft-link. You also need to copy over the actual kernel source / kernel build directory to your microSD, using the same relative location. For example,
Let's say your kernel source is in:
/usr/src/linux-3.5.0/
Your kernel build directory is:
/usr/src/linux-3.5.0-build/
Your newmodules (after following 1.) is in:
/tmp/newmodules/
So under /tmp/newmodules/ you see the modules installed in a tree like:
lib/modules/$(uname -r)/
when you do an ls -al in this directory, you'll see that build is a soft link to:
build -> /usr/src/linux-3.5.0-build/
Now let's say your microSD is mounted under /mnt/microSD
then you need to do the following
mkdir -p /mnt/microSD/usr/src
cp -a /usr/src/linux-3.5.0 /usr/src/linux-3.5.0-build /mnt/microSD/usr/src
cp -a /tmp/newmodules/lib /mnt/microSD/lib
Now you have all the content you need to bring over to your embedded environment. I take it you are doing the compat_wireless build on your target system rather than cross compiling it?
NOTE
If your kernel build directory is the same as the kernel source directory, then just copy over the kernel source and ignore linux-3.5.0-build in the copy instructions above.
This is old, but some people will need this information.
I spent many hours figuring out where the build folder comes from, and why it is just a link when I compile my own kernel. I finally figured it out:
The Linux kernel build usually just links the build and source folders back to the source folder.
But!
Arch Linux (and probably some other distros too) has a manual script for deleting those links and adding (filtered) files to the build folder.
https://git.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/linux
I've extracted that script to work standalone (in a kernel source tree) here: https://gist.github.com/furkanmustafa/9e73feb64b0b18942047fd7b7e2fd53e

chroot into other arch's environment

Following the Linux From Scratch book I have managed to build a toolchain for an ARM on an ARM. This takes me up to chapter 6 of the book, and on the ARM board itself I could carry on further with no problems.
My question is whether I can use the prepared environment to continue building the software from chapter 6 on my x86_64 Fedora 16 laptop.
I thought that since I have all the binaries set up, I could just copy them to the laptop, chroot inside, and work as if I were on the ARM board, but the command from the book gives no result:
`# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory`
The binary is there, but it doesn't belong to this system:
# ldd /tools/bin/env
        not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, such as setting the proper environment variables for CC, LD, and READELF, to continue building for ARM using these tools on an x86_64 host.
Thank you.
Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How is this working? Firstly, I have the binfmt-misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter /usr/bin/qemu-arm-static.
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from an Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
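As a sanity check before chrooting (a sketch, assuming the Debian/Ubuntu binfmt tooling is in place), you can confirm the ARM handler is registered:
$ cat /proc/sys/fs/binfmt_misc/qemu-arm     # should report "enabled" and the interpreter path
$ update-binfmts --display qemu-arm         # from the binfmt-support package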
Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g., qemu-system-arm), but that's quite a different thing.
No, you cannot, at least not using a plain chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with their host architecture, and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. qemu, for example, offers two emulation modes for ARM: qemu-system-arm that emulates a whole ARM-based system and qemu-arm that uses ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.
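For instance, a rough sketch of running a single ARM binary from the LFS tree under user-mode emulation; the -L option points qemu-arm at the directory containing the target's dynamic loader (/tools/lib/ld-linux.so.3 in this case):
$ qemu-arm -L "$LFS" "$LFS"/tools/bin/env --version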

What is -lnuma and what program uses it for compilation?

I am compiling a message passing program using openmpi with mpicxx on a Linux desktop. My makefile does the following:
mpicxx -c readinp.cpp
mpicxx -o exp_fit driver.cpp readinp.o
at which point I get the following error:
/usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: cannot find -lnuma
My questions are:
What is -lnuma? What is using it? How should I go about linking to it?
Thanks Jonathan Dursi!
On Ubuntu, the package name is libnuma-dev.
apt-get install libnuma-dev
The linker can't find the NUMA (Non-Uniform Memory Access) library. The -l option tells the linker to link against that library, but your system either doesn't have it installed or the linker's search path is incomplete/wrong.
Try querying your package manager (apt or rpm) for a package named libnuma.
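For example (a sketch; exact package names vary by distribution):
$ apt-cache search libnuma      # available packages on Debian/Ubuntu
$ zypper search libnuma         # available packages on openSUSE
$ rpm -qa | grep -i numa        # what is already installed on RPM-based distros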
OpenMPI, and I think MPICH2, use libnuma (`a simple programming interface to the NUMA (Non Uniform Memory Access) policy supported by the Linux kernel') for memory affinity -- to ensure that the memory for a particular MPI task stays close to the core that the task is running on, rather than being kept on another socket entirely. This is important for performance on multicore nodes.
You may need to use YaST to install libnuma-devel if your linker can't find the library.
I got the same error working on a remote server, which had the NUMA library installed. In particular, the file /usr/lib64/libnuma.so.1 existed. It appears that the linker only looked for the file under the name libnuma.so. Creating the symlink
ln -s /usr/lib64/libnuma.so.1 /usr/lib64/libnuma.so
as described here might have worked, but in my case I did not have permission to create files in /usr/lib64. I got around this by creating the symlink in some other location where I have write permission:
ln -s /usr/lib64/libnuma.so.1 /some/path/libnuma.so
and then adding this path to the compilation flags. In your case this would be:
mpicxx -L/some/path -o exp_fit driver.cpp readinp.o
In my case of a larger build process (compiling fftw), I added the path to the LDFLAGS environment variable,
export LDFLAGS="${LDFLAGS} -L/some/path"
which fixed the issue.

How to debug my Cross compiled Linux Kernel?

I've cross-compiled a Linux kernel (for ARM on i686, using Cross-LFS).
Now I'm trying to boot this Kernel using QEMU.
$ qemu-system-arm -m 128 -kernel /mnt/clfs-dec4/boot/clfskernel-2.6.38.2 --nographic -M versatilepb
Then it shows this line and waits indefinitely:
Uncompressing Linux... done, booting the kernel.
So, I want to debug the kernel, so that I can study what exactly is happening.
I'm new to kernel builds. Can someone please help me debug my custom-built kernel, as it is not showing anything after that statement? Is there any possibility that the kernel is broken? (I don't think so, because it did not give any errors while compiling.)
My aim is to generate a custom-built, very minimal Linux OS. Any suggestions regarding toolchains etc. that would be easy and flexible for my requirements (drivers etc.) are welcome.
Thank you.
You can use GDB to debug your kernel under QEMU; for that, use QEMU's -s -S options. If you want a simple and reliable toolchain, you can use ELDK from DENX (http://www.denx.de/wiki/DULG/ELDK).
You can install it like this (it's not the latest version, but you get the idea):
wget http://ftp.denx.de/pub/eldk/4.2/arm-linux-x86/iso/arm-2008-11-24.iso
sudo mkdir -p /mnt/cdrom (if necessary)
sudo mount -o loop arm-2008-11-24.iso /mnt/cdrom
/mnt/cdrom/install -d $HOME/EMBEDDED_TOOLS/ELDK/
The command above should install the toolchain under $HOME/EMBEDDED_TOOLS/ELDK (modify it if you need to).
echo "export PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin" >> $HOME/.bashrc
You can then see the version of your ARM toolchain like this:
arm-linux-gcc -v
You can test a hello_world.c program like this:
arm-linux-gcc hello_world.c -o hello_world
Then type file hello_world to see the target architecture of the binary; it should be something like this:
hello_world: ELF 32-bit LSB executable, ARM, version 1 (SYSV)
Now, if you want to compile a production kernel, you need to optimize it (I suggest using BusyBox), and if you just want one for testing now, try these steps:
Create a script to set up your toolchain, set_toolchain.sh:
#! /usr/bin/sh
PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin
ARCH=arm
CROSS_COMPILE=arm-linux-gnueabi-
export PATH ARCH CROSS_COMPILE
And run your script (source ./set_toolchain.sh)
Download a Linux kernel and unpack it (let's assume 2.6.x; it's an old kernel, but there is a good chance it will build without compilation errors).
Inside your unpacked kernel tree:
cd ~/linux-2.6.29
make versatile_defconfig
(The versatile_defconfig file lives under arch/arm/configs/, but the make target is run from the top of the kernel tree.) Here we use the Versatile board; you may need to run make menuconfig and set the OABI option to ARM EABI. That option is under the "Kernel features" menu.
After all this steps, you can compile you kernel:
make
If you want verbose compilation, use make V=1.
After this you got your kernel under arch/arm/boot/zImage.
Hope this helps.
Regards.
I would suggest building your kernel with the debugging options in the "Kernel hacking" section of your kernel configuration enabled.
Then you may use kdb, or kgdb, which is easier to use but requires another machine running gdb.
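A sketch of the kind of options meant here (exact names can vary slightly between kernel versions):
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_INFO=y             # keep debug symbols so gdb can resolve them
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y    # kgdb over a serial console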
You can also connect QEMU and GDB. QEMU has the -s and -S options, which run a GDB server and allow you to connect to it over TCP at localhost:1234. Then you can load your kernel image (the uncompressed vmlinux) in GDB and see how far your kernel boots.
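A rough sketch of such a session (assuming an ARM-capable gdb such as gdb-multiarch and the uncompressed vmlinux from your build tree):
$ qemu-system-arm -M versatilepb -m 128 -kernel clfskernel-2.6.38.2 --nographic -s -S
(in a second terminal)
$ gdb-multiarch vmlinux
(gdb) target remote localhost:1234
(gdb) break start_kernel
(gdb) continue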
