Yocto layer for ADIS16475 - linux

Context
Lately I've had trouble including an ADIS driver in my Yocto build. I asked the question on the ADI forum, but found the solution before anyone answered it. With the forum not being very user friendly and the answer being quite long, I've decided to answer my own question here and link to it from the forum.
The question
I'm building my OS with Yocto and I'd like to add a driver for the ADIS16507 IMU as part of this build (any driver will do, but ease of use is a plus).
I'm building a prototype on a Raspberry Pi 4. My application uses an ADIS IMU (the ADIS16507). Since the board for the final product is not fixed, I'm building my OS with Yocto.
I'm currently trying to add a driver for the ADIS16507. Ideally I'd like to use the kernel's adis16475 driver (which supports this part), but I could do with something else provided it works nicely.
First off, I have checked that I'm able to build the whole ADI Linux kernel for the Raspberry Pi as specified in the documentation. With that kernel the driver works nicely.
However, the ADI Linux kernel is too heavy for what I intend to do, hence my attempt to build a lighter OS with Yocto (also, as mentioned, I'll likely need to port my work to different platforms).
I have a few ideas on how to do it; however, none of them worked easily and I'm not sure which one to pursue.
I assumed the simplest way would be to compile the adis16475 driver on its own and add it to the Yocto build. However, I'm struggling a little with Yocto and haven't been able to compile it successfully.
Alternatively, I did find the libiio layer for Yocto, but it seems to require quite a lot of code on top of it to be able to read the IMU values.
I am aware of the Python bindings for libiio and of pyadi-iio, which seem easy to use (although I did not try adding them with Yocto), but computational speed is an issue for me and I'd rather use C/C++ than Python.
Finally, I also found the meta-adi layer. I think it is not relevant in my case, but the documentation I found is rather sparse and I'm not certain what this layer is supposed to do.
Any pointers on which method I should use (maybe I've missed an obvious, easy one) are welcome.

So here is the solution I found. We will use the adis16475 driver, which has been part of the mainline Linux kernel since version 5.10. Then we'll get the corresponding device tree overlay from ADI and set the relevant variables to have it compiled with the other device tree overlays.
Getting the correct versions
For this to work we need Linux kernel 5.10 or later, since earlier versions do not include the adis16475.c source file. However, only the latest Yocto release, Hardknott, uses kernel 5.10, so the simplest thing to do is to switch to the Hardknott branch of Yocto. Keep in mind that if you use other layers, you might also have to change their release for compatibility reasons.
Configuring the kernel to compile the ADIS16475 driver as built-in
Here we're going to dip into kernel compilation, so there are a few things to know.
Short and approximate theory
If you don't know much about kernel compilation, don't worry, neither do I. But here is a quick, approximate explanation so you can understand what we're doing: kernel makefiles hold lists of the files they have to compile. For drivers there are two important lists: obj-y, which holds the files that will be compiled and built into the kernel, and obj-m, which holds the files that will be compiled as loadable modules. To make these lists configurable, they are populated in makefiles with statements such as:
obj-$(CONFIG_MYDRIVER) += my_driver.o
The idea is that you can set CONFIG_MYDRIVER to 'y' or 'm' in an external file. If you set it to anything else, the line in the makefile is ignored and your file won't be compiled. So basically all we have to do is set the right variables to the right values for Yocto to compile our drivers.
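For our driver, this is exactly what the 5.10 kernel sources contain: drivers/iio/imu/Makefile has the line
obj-$(CONFIG_ADIS16475) += adis16475.o
so setting CONFIG_ADIS16475=y builds the driver into the kernel, while setting it to 'm' would build it as a loadable module.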
Custom kernel configuration
Basically, what we want to do is create our own configuration fragment file. To do this you will need to set up your own Yocto layer (I'll assume that if you're reading this post you know how to do that). In your layer, add the following folders and files:
meta-my-layer/
└── recipes-kernel/
    └── linux/
        ├── linux-raspberrypi_5.10.bbappend
        └── linux-raspberrypi/
            └── adis16475.cfg
(Note that we are appending to linux-raspberrypi_5.10 because it's the recipe in meta-raspberrypi that is responsible for kernel compilation. If you're compiling for another machine, look at which files are in its recipes-kernel/linux. Your BSP might even use the ones in poky/meta/recipes-kernel/linux.)
Now all that is left to do is add adis16475.cfg to our sources by adding the following lines to linux-raspberrypi_5.10.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://adis16475.cfg"
And actually set up your configuration by adding the following to adis16475.cfg:
CONFIG_SPI_MASTER=y
CONFIG_IIO=y
CONFIG_ADIS16475=y
CONFIG_IIO_KFIFO_BUF=y
CONFIG_SPI_BCM2835=y
To the best of my knowledge and experience, this is the minimal set of options you have to enable for it to work.
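If you want to double-check that the fragment was applied, you can look at the .config bitbake actually used for the kernel. The exact work path depends on your machine and build setup, so the following is just a sketch:
bitbake virtual/kernel
find tmp/work -path '*linux-raspberrypi*' -name .config
grep ADIS16475 path/found/by/find/.config
Each option from the fragment should show up set to y.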
Once this is done you will have your driver built in, but you won't see any effect on the Pi yet. That's because we're still lacking the correct overlay.
Building adis16475-overlay.dts
For this part I was largely inspired by this post, so if you're having trouble you might want to check it out.
The issue here is that, to my knowledge, the overlay corresponding to the driver is not shipped with the kernel, at least not in the 5.10 version. So we will have to be a little crafty. First off, get the overlay from ADI and copy it into recipes-kernel/linux/linux-raspberrypi.
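For reference, here is a rough sketch of what such a Raspberry Pi overlay typically looks like. This is not the actual file from ADI: the compatible string, SPI bus, chip select and interrupt pin below are illustrative assumptions, so use the real overlay and adapt it to your wiring:
/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2835";

    fragment@0 {
        target = <&spi0>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            adis16475@0 {
                compatible = "adi,adis16507-2"; /* assumed part number */
                reg = <0>;                      /* chip select 0 */
                spi-max-frequency = <2000000>;
                interrupt-parent = <&gpio>;
                interrupts = <25 1>;            /* GPIO 25, rising edge (assumed) */
            };
        };
    };
};
Real overlays for SPI devices usually also contain a fragment disabling the default spidev node on the same chip select.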
Once this is done, we will make it visible to Yocto by modifying recipes-kernel/linux/linux-raspberrypi_5.10.bbappend to look like this:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://adis16475.cfg \
file://adis16475-overlay.dts;subdir=git/arch/${ARCH}/boot/dts/overlays"
The last line we added tells Yocto where to get our .dts file and where to put it (with the other overlays).
Finally, we have to add our overlay to the list of overlays and device trees to be compiled. To do so we will use the machine configuration file raspberrypi4-64.conf. More details about it here.
First, in your layer's conf folder, create machine/raspberrypi4-64-extra.conf. This file will hold all the additional configuration we need for this driver. In it, first add:
# Create the device trees and overlay list with necessary drivers
KERNEL_DEVICETREE = " \
${RPI_KERNEL_DEVICETREE} \
${RPI_KERNEL_DEVICETREE_OVERLAYS} \
overlays/adis16475.dtbo \
"
This sets the list of device trees to compile: the device trees and overlays necessary for the Pi (the first two variables), plus our adis16475 overlay.
Then we'll take advantage of this file to enable a few necessary Raspberry Pi specific configuration options. Add:
# Enable RPI specific options in config.txt
ENABLE_SPI_BUS = "1"
RPI_EXTRA_CONFIG = '\ndtoverlay=adis16475\n'
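After the build, these settings should end up in the config.txt of the boot partition as something like the following (the dtparam line is what ENABLE_SPI_BUS generates):
dtparam=spi=on
dtoverlay=adis16475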
Finally, you need to tell Yocto to use this extra config. The most versatile way is to add the following lines to your build's conf/local.conf:
# Enable the extra machine configuration for the ADIS16475
require conf/machine/${MACHINE}-extra.conf
Once this is done, provided you added your layer to your bblayers.conf file and your machine is set to raspberrypi4-64 in your local.conf file, everything should work nicely.
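Once the image is flashed, a quick way to check that everything worked is to look on the Pi for the IIO device registered by the driver (iio:device0 is an assumption, the index depends on what else gets probed):
dmesg | grep -i adis
ls /sys/bus/iio/devices/
cat /sys/bus/iio/devices/iio:device0/name
The last command should print the name of your IMU.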
Notes on the machine type
Here I worked with the machine set to raspberrypi4-64 because it was the one I had. There is no reason for this procedure not to work with any other machine from the meta-raspberrypi layer. Just remember to replace raspberrypi4-64 with the name of your machine everywhere, and you should be fine.
Hope this will help some other people lost in the Yoctoverse
Cheers

Related

configuring linux kernel with (hard-loaded, built in) currently loaded modules

I want to build a recent Linux kernel (e.g. 4.13.4 at the end of September 2017) from source on my Debian/Sid/x86-64, with all (or most) currently loaded modules configured as built into the new kernel.
(I believe that I have read something like this somewhere, but can't remember where and can't find it)
It would be something like make configfromloadedmodules (of course configfromloadedmodules is not the actual makefile target, but some other target that I did not easily find).
That is, for most (ideally all) currently loaded modules (as given by lsmod), it would answer Y (not m) for each of them at make config time and give me a good enough .config; but I don't want a bloated kernel with all drivers built in, even those that I don't use and which are not currently loaded.
Does that exist, or was what I read some wish, or some non-standard feature of experimental kernels?
This would avoid any initrd thing and give me a kernel suited for my hardware and habits.
The current kernel is a standard Debian one, 4.12.0-2-amd64, so I have its /boot/config-4.12.0-2-amd64 (I want to automate the replacement of CONFIG_xxx=m with CONFIG_xxx=y there, according to the currently loaded modules, e.g. as given by lsmod).
See also this answer; I still believe that device trees are not essential to Linux, but they are a useful convenience.
A near variant of my question is how to easily configure a kernel suited to my computer, hardware and set-up, without initrd, without any modules (e.g. with CONFIG_MODULES=n) and without (or with very few) useless drivers, which would work as nicely as my current Debian kernel.
I believe you should read about "make localmodconfig" and "make localyesconfig", and use one as per your requirement.
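The usual workflow, run from the kernel source tree, is roughly the following (the config file name depends on your running kernel):
cp /boot/config-$(uname -r) .config       # start from the distro config
lsmod > /tmp/lsmod.txt                    # snapshot the currently loaded modules
make LSMOD=/tmp/lsmod.txt localyesconfig  # answer Y for loaded modules, disable the rest
make localmodconfig works the same way but keeps the modules as m; if LSMOD is not given, both read the lsmod output of the running system.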

Android NDK: Providing library variants for the same abi

I'm looking for the best way to develop and package different variants of a library with different compile settings but for the same ABI and then selecting the best fit at runtime. In more concrete terms, I'd like a NEON and non-NEON armeabi-v7a build.
The native library has a public C interface that third parties link to. They seem to need to link to one of the variants to prevent link errors, but I'd like to load the alternative variant at runtime if it's a better fit for the device, and have the runtime loader do the correct relocations.
From what I see so far it seems I need to give both variants the same file name, so I need to put them in different folders. Subfolders under the ABI folder don't seem to get copied by the package installation process, so that approach doesn't work. The best suggestion I've seen so far is to manually copy one variant from the res folder to a known device path and to call System.load() with a full path. Reference: https://groups.google.com/forum/#!topic/android-ndk/zu_dmcmUlMo
Is this still the best/recommended approach?
How will this interact with the binary translation done on non-arm devices? (Although I can supply an x86 build, some third parties may leave it out of their apk).
I'm assuming cpufeatures on a device using binary translation will not report the CPU family as ARM, so my proposed solution would be to build a standard armeabi-v7a library in the normal way (which I guess will get binary translated), and ship a NEON-supporting library in res/raw. Then, at runtime, if cpufeatures reports an ARM CPU with NEON support, copy out that library and call System.load() with the full path. Can anyone see any problems with that approach?
If you explicitly want to have two different builds of a lib, then yes, it's probably the best compromise.
First off - do note that many libraries that can use NEON can be built with those parts runtime-enabled so that you can have a normal ARMv7 build which doesn't strictly require NEON but can enable those codepaths at runtime if detected - e.g. libav/FFmpeg do that, and the same goes for many other similar libraries. This allows you to have one single ARMv7 binary that fully utilizes NEON where applicable, while still works on the few ARMv7 devices without NEON.
If you're trying to use compiler autovectorization, or if this is a library where the NEON routines aren't easily confined to restricted parts that are enabled at runtime (or hoping to gain extra performance by building the whole library with NEON enabled), your approach sounds sane.
Keep in mind that you want to have at least one native library that is packaged "normally" (which you seem to have, but which has been an issue in e.g. https://stackoverflow.com/a/29329413/3115956). On installation, the installer picks the best match of the bundled architectures and only extracts the libs from that one, and runs the process in that mode. On devices with multiple ABIs (32 and 64 bit), this is essential since if the process is started in a different mode it's too late to switch mode once you try to load a library in a different form.
On an x86 device that emulates ARM binaries, at least the cpufeatures library will return ARM if the process is running in ARM mode. If you use system properties to find the primary and secondary ABIs, you won't know which of them the current process is using though.
EDIT: x86 devices with binary translation actually seem to be able to load an armeabi library even if the same process already has loaded some bundled x86 libraries as well. So apparently this translation is done on a per library basis, not like 32 vs 64 bit, where a certain mode is chosen for the process at startup, which excludes loading any libraries of the other variant.
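For reference, the NEON check discussed above, using the NDK's cpufeatures library (found under sources/android/cpufeatures in the NDK), looks roughly like this sketch:
#include <cpu-features.h>

static int device_has_neon(void) {
    /* cpufeatures reports the family of the mode the process is running in,
       so an x86 device translating ARM code still reports the ARM family. */
    return android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM
        && (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0;
}
Based on its result, you would then copy the NEON build out of res/raw and load it via System.load() with a full path, as described above.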

Open embedded core: Adding packages to a build

So, using OpenEmbedded-Core and the IGEPv2 meta layer, I just finished building virtual/kernel. Now, if I want to add to this:
How would I go about adding software packages? Is it OK to change the bblayers/local.conf files and then start a new recipe (would this just build on what I have already got)?
What if I now wanted to build the Angstrom distro? I saw that it requires a different set-up from the oe-core layout; is there a way to reuse what I've already got here?
Your question isn't entirely clear. You don't "add software packages" to virtual/kernel; you add software packages to an image recipe (core-image-base, for example). Building the virtual/kernel "target" only builds the kernel and any dependencies that are required to build the kernel. You will ultimately want to build boot images, and there are several example image recipes to choose from, such as 'core-image-base' or 'core-image-sato'. These build root file system images and usually bootloader images, depending on the machine you are building for.
It is definitely OK to modify bblayers.conf and/or local.conf. Beware, though, that anything you put in local.conf will basically be visible to every recipe bitbake processes. bblayers.conf is the place where you add your own layer(s) for customization, so the only thing you should be modifying there is adding a layer directory to the stack.
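For example, a quick way to pull an extra package into every image you build is a line like this in local.conf (htop here is just a stand-in for whatever package you need):
IMAGE_INSTALL_append = " htop"
For anything permanent, prefer doing this in an image recipe in your own layer instead.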
You might want to check out 'hob', which is documented on the yoctoproject.org website. It is a user interface that makes it a bit easier to add packages to an image. Plenty of documentation is available for all of this, though reading it will keep you busy for a couple of days! :)
Also, feel free to post your questions on the yocto project mailing list, which is also detailed on the yoctoproject.org website.
Have fun!

where can I find a good enough default config to compile the Linux kernel?

I want to compile a custom Linux kernel for myself. Because I don't know every kernel option, I am looking for a good default config to start from.
You should check out KernelCheck. It's basically an app that will help you figure out what kernel options are worth tweaking for your particular system. It will also automatically download, compile and install the selected kernel for you.
If you have a good running kernel that has built-in support for stored configuration (i.e. /proc/config.gz exists), then you can copy that kernel's configuration and use it as a starting point:
zcat /proc/config.gz > /path/to/kernel/source/.config
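After copying the config into a newer source tree, run make oldconfig to be prompted only for the options that are new, or, on newer trees, make olddefconfig to accept their defaults non-interactively:
zcat /proc/config.gz > .config   # from inside the kernel source tree
make olddefconfig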
Look in the repository of the Linux distribution you are using; they often have a generic or a huge kernel .config, and those are the best choices.
From the Linux kernel source tree, type make help and look for the lines that include defconfig. Default configurations for various platforms are typically included here. Once you find the one that matches your platform, type, for example make defconfig to get your default .config file. For example, to build on a Nexus S, I use make herring_defconfig. (herring is, I guess, the code name for the Nexus S hardware, which can incidentally be found if you look at /proc/cpuinfo on the Nexus S)
If you are not building for x86 (for example, like in my case, if you are building for Android) be sure to remember to export ARCH (for example, export ARCH=arm before running make). Otherwise, make help will not show the correct defconfig targets for the architecture you want to build.
I haven't done this in a while but aren't there sane defaults in the config dialogs?
This question is old, but I feel compelled to add http://kernel-seeds.org/ as a very viable starting point for compiling a fitting custom kernel; there are seeds for the vanilla sources, even if you are not on Gentoo.

Can autotools create multi-platform makefiles

I have a plugin project I've been developing for a few years where the plugin works with numerous combinations of [primary application version, 3rd party library version, 32-bit vs. 64-bit]. Is there a (clean) way to use autotools to create a single makefile that builds all versions of the plugin?
As far as I can tell from skimming through the autotools documentation, the closest approximation to what I'd like is to have N independent copies of the project, each with its own makefile. This seems a little suboptimal for testing and development as (a) I'd need to continually propagate code changes across all the different copies and (b) there is a lot of wasted space in duplicating the project so many times. Is there a better way?
EDIT:
I've been rolling my own solution for a while where I have a fancy makefile and some perl scripts to hunt down various 3rd party library versions, etc. As such, I'm open to other non-autotools solutions. For other build tools, I'd want them to be very easy for end users to install. The tools also need to be smart enough to hunt down various 3rd party libraries and headers without a huge amount of trouble. I'm mostly looking for a linux solution, but one that also works for Windows and/or the Mac would be a bonus.
If your question is:
Can I use the autotools on some machine A to create a single universal makefile that will work on all other machines?
then the answer is "No". The autotools do not even make a pretense at trying to do that. They are designed to contain portable code that will determine how to create a workable makefile on the target machine.
If your question is:
Can I use the autotools to configure software that needs to run on different machines, with different versions of the primary software which my plugin works with, plus various 3rd party libraries, not to mention 32-bit vs 64-bit issues?
then the answer is "Yes". The autotools are designed to be able to do that. Further, they work on Unix, Linux, MacOS X, BSD.
I have a program, SQLCMD (which pre-dates the Microsoft program of the same name by a decade and more), which works with the IBM Informix databases. It detects which version of the client software (called IBM Informix ESQL/C, part of the IBM Informix ClientSDK or CSDK) is installed, and whether it is 32-bit or 64-bit, and adapts its functionality to what is available in the supporting product. It supports versions that have been released over a period of about 17 years. It is autoconfigured -- I had to write some autoconf macros for the Informix functionality, and for a couple of other gizmos (high resolution timing, presence of /dev/stdin etc). But it is doable.
On the other hand, I don't try and release a single makefile that fits all customer machines and environments; there are just too many possibilities for that to be sensible. But autotools takes care of the details for me (and my users). All they do is:
./configure
That's easier than working out how to edit the makefile. (Oh, for the first 10 years, the program was configured by hand. It was hard for people to do, even though I had pretty good defaults set up. That was why I moved to auto-configuration: it makes it much easier for people to install.)
Mr Fooz commented:
I want something in between. Customers will use multiple versions and bitnesses of the same base application on the same machine in my case. I'm not worried about cross-compilation such as building Windows binaries on Linux.
Do you need a separate build of your plugin for the 32-bit and 64-bit versions? (I'd assume yes - but you could surprise me.) So you need to provide a mechanism for the user to say
./configure --use-tppkg=/opt/tp/pkg32-1.0.3
(where tppkg is a code for your third-party package, and the location is specifiable by the user.) However, keep in mind usability: the fewer such options the user has to provide, the better; against that, do not hard code things that should be optional, such as install locations. By all means look in default locations - that's good. And default to the bittiness of the stuff you find. Maybe if you find both 32-bit and 64-bit versions, then you should build both -- that would require careful construction, though. You can always echo "Checking for TP-Package ..." and indicate what you found and where you found it. Then the installer can change the options. Make sure you document in './configure --help' what the options are; this is standard autotools practice.
Do not do anything interactive though; the configure script should run, reporting what it does. The Perl Configure script (note the capital letter - it is a wholly separate automatic configuration system) is one of the few intensively interactive configuration systems left (and that is probably mainly because of its heritage; if starting anew, it would most likely be non-interactive). Such systems are more of a nuisance to configure than the non-interactive ones.
Cross-compilation is tough. I've never needed to do it, thank goodness.
Mr Fooz also commented:
Thanks for the extra comments. I'm looking for something like:
./configure --use-tppkg=/opt/tp/pkg32-1.0.3 --use-tppkg=/opt/tp/pkg64-1.1.2
where it would create both the 32-bit and 64-bit targets in one makefile for the current platform.
Well, I'm sure it could be done; I'm not so sure that it is worth doing by comparison with two separate configuration runs with a complete rebuild in between. You'd probably want to use:
./configure --use-tppkg32=/opt/tp/pkg32-1.0.3 --use-tppkg64=/opt/tp/pkg64-1.1.2
This indicates the two separate directories. You'd have to decide how you're going to do the build, but presumably you'd have two sub-directories, such as 'obj-32' and 'obj-64' for storing the separate sets of object files. You'd also arrange your makefile along the lines of:
FLAGS_32 = ...32-bit compiler options...
FLAGS_64 = ...64-bit compiler options...
TPPKG32DIR = #TPPKG32DIR#
TPPKG64DIR = #TPPKG64DIR#
OBJ32DIR = obj-32
OBJ64DIR = obj-64
BUILD_32 = #BUILD_32#
BUILD_64 = #BUILD_64#
TPPKGDIR =
OBJDIR =
FLAGS =
all: ${BUILD_32} ${BUILD_64}

build_32:
	${MAKE} TPPKGDIR=${TPPKG32DIR} OBJDIR=${OBJ32DIR} FLAGS=${FLAGS_32} build

build_64:
	${MAKE} TPPKGDIR=${TPPKG64DIR} OBJDIR=${OBJ64DIR} FLAGS=${FLAGS_64} build

build: ${OBJDIR}/plugin.so
This assumes that the plugin would be a shared object. The idea here is that the autotool would detect the 32-bit or 64-bit installs for the Third Party Package, and then make substitutions. The BUILD_32 macro would be set to build_32 if the 32-bit package was required and left empty otherwise; the BUILD_64 macro would be handled similarly.
When the user runs 'make all', it will build the build_32 target first and the build_64 target next. To build the build_32 target, it will re-run make and configure the flags for a 32-bit build. Similarly, to build the build_64 target, it will re-run make and configure the flags for a 64-bit build. It is important that all the flags affected by 32-bit vs 64-bit builds are set on the recursive invocation of make, and that the rules for building objects and libraries are written carefully - for example, the rule for compiling source to object must be careful to place the object file in the correct object directory - using GCC, for example, you would specify (in a .c.o rule):
${CC} ${CFLAGS} -o ${OBJDIR}/$*.o -c $*.c
The macro CFLAGS would include the ${FLAGS} value which deals with the bits (for example, FLAGS_32 = -m32 and FLAGS_64 = -m64, so when building the 32-bit version, FLAGS = -m32 would be included in the CFLAGS macro).
The residual issue in the autotools is working out how to determine the 32-bit and 64-bit flags. If the worst comes to the worst, you'll have to write macros for that yourself. However, I'd expect (without having researched it) that you can do it using standard facilities from the autotools suite.
Unless you create yourself a carefully (even ruthlessly) symmetric makefile, it won't work reliably.
As far as I know, you can't do that. However, are you stuck with autotools? Are neither CMake nor SCons an option?
We tried it and it doesn't work! So we now use SCons.
Some articles on this topic: 1 and 2
Edit:
A small example of why I love SCons:
env.ParseConfig('pkg-config --cflags --libs glib-2.0')
With this line of code you add GLib to the compile environment (env). And don't forget the User Guide, which is just great for learning SCons (you really don't have to know Python!). For the end user you could try SCons with PyInstaller or something like that.
And in comparison to make, you use Python, a complete programming language! With this in mind you can do just about everything (more or less).
Have you ever considered using a single project with multiple build directories?
If your automake project is implemented in a proper way (i.e., NOT like gcc), the following is possible:
mkdir build1 build2 build3
cd build1
../configure $(YOUR_OPTIONS)
cd ../build2
../configure $(YOUR_OPTIONS2)
[...]
You are able to pass different configuration parameters, like include directories and compilers (cross compilers, for instance).
You can then even build them all with a single shell command:
make -C build1 && make -C build2 && make -C build3
