How to strip down my Yocto Linux?

I would like to strip down my Yocto Linux before putting it into flash. Unneeded software, man pages, POCO sample code, and other documentation just waste resources, especially on the i.MX28 EVK with only 128MB of flash.
My local.conf file looks like the following:
$ gedit conf/local.conf &
...
IMAGE_INSTALL_append = " poco nginx canutils vsftpd curl fcgi spawn-fcgi net-snmp util-linux ubiattach-klibc ubimkvol-klibc ubiformat-klibc minicom net-tools zeroconf avahi-autoipd mtd-utils u-boot-fw-utils ethtool"
...
I bitbake the image "core-image-base".
Is there a way that I can delete all of the unneeded files?
Can somebody help me figure out how to strip down my Yocto Linux?

When you look into the recipe for core-image-base and the included core-image class (core-image-base.bb & core-image.bbclass), you will notice that there are only packagegroup-core-boot and packagegroup-base-extended in that image.
The description for those:
By default we install packagegroup-core-boot and packagegroup-base-extended packages;
this gives us working (console only) rootfs.
This suggests that those package groups are not supposed to be removed, so there is not much software you can remove the 'Yocto way'. What you can do is write patches that remove files manually (a sketch follows below), or look at how to build a tiny system with Yocto (Link to Development Manual).
You can activate this distribution by changing the DISTRO variable in your local.conf:
DISTRO = "poky-tiny"
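Coming back to the manual-removal route mentioned above: here is a minimal sketch of a rootfs post-processing hook for a custom image recipe. The function name remove_unneeded_files is made up for illustration; ROOTFS_POSTPROCESS_COMMAND and IMAGE_ROOTFS are standard Yocto variables.

remove_unneeded_files() {
    # Drop documentation trees from the finished root filesystem
    rm -rf ${IMAGE_ROOTFS}${datadir}/man ${IMAGE_ROOTFS}${datadir}/doc
}
ROOTFS_POSTPROCESS_COMMAND += "remove_unneeded_files; "

Note that function definitions belong in a recipe or class, not in local.conf.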

This is an example of a minimal console image:
recipes-core/images/core-image-small.bb
DESCRIPTION = "Minimal console image."
IMAGE_INSTALL = "\
    base-files \
    base-passwd \
    busybox \
    sysvinit \
    initscripts \
    ${ROOTFS_PKGMANAGE_BOOTSTRAP} \
    ${CORE_IMAGE_EXTRA_INSTALL} \
"
IMAGE_LINGUAS = " "
LICENSE = "MIT"
inherit core-image
IMAGE_ROOTFS_SIZE ?= "8192"
This recipe produces an image of about 6.4MB. If you use poky-tiny by adding DISTRO = "poky-tiny" to your conf/local.conf, the image will be around 4MB.
To build this, you will need to add
INSANE_SKIP_glibc-locale = "installed-vs-shipped"
You could also use PACKAGE_CLASSES ?= "package_ipk", as ipk is the lightest package format, and remove the package-management feature from your production root file system altogether.
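In local.conf that could look like the following (a sketch; package-management is only present if your image enables it via IMAGE_FEATURES or EXTRA_IMAGE_FEATURES):

PACKAGE_CLASSES ?= "package_ipk"
# Ensure the runtime package manager stays out of the production image
IMAGE_FEATURES_remove = "package-management"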
If you choose to have packagegroup-core-boot in your image, you could use BusyBox's mdev device manager instead of udev by specifying in your conf/local.conf:
VIRTUAL-RUNTIME_dev_manager = "mdev"
If you are running your root filesystem on a block device, use ext2 instead of ext3 or ext4, i.e. without the journal.
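For example, in local.conf (a sketch; EXTRA_IMAGECMD passes extra options to the filesystem-creation command, and the right IMAGE_FSTYPES value depends on your BSP and flash layout):

IMAGE_FSTYPES = "ext2"
# Or keep ext4 but create it without a journal:
# IMAGE_FSTYPES = "ext4"
# EXTRA_IMAGECMD_ext4 = "-O ^has_journal"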
Configure BusyBox with only the essential applets by providing your own configuration file in a bbappend.
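A sketch of such a bbappend (the layer path is hypothetical; the OE-core BusyBox recipe reads its configuration from a defconfig file, so a defconfig placed in your layer shadows the default one):

# meta-yourlayer/recipes-core/busybox/busybox_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
# Put your trimmed configuration at:
#   meta-yourlayer/recipes-core/busybox/busybox/defconfig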
Review the glibc configuration, which can be changed via the DISTRO_FEATURES_LIBC distribution configuration variable. You can find an example in the poky-tiny distribution.
Consider switching to a lighter C library: use uclibc or musl instead of the standard glibc (see http://www.etalabs.net/compare_libcs.html for a comparison).
To use musl, set this in local.conf:
TCLIBC = "musl"
and add meta-musl to conf/bblayers.conf.

Related

How to add libgif as a package in yocto build

I am building Yocto Linux for an embedded Linux platform.
The build is successful and the root file system is generated.
However, the libgif.so library is missing in the root filesystem.
I want libgif to be compiled and copied into my generated root filesystem (in /usr/lib/).
I tried adding giflib in local.conf:
DISTRO_FEATURES_append = " giflib "
I expected giflib to be compiled and copied into /usr/lib in the root filesystem, but it isn't.
If I add EXTRA_IMAGEDEPENDS += " giflib " and just build giflib with "bitbake giflib", then giflib is compiled and generated at this path:
Build/tmp/work/aarch64-poky-linux/giflib/5.1.4-r0/build/lib/.libs/giflib.so
That's not how it works, sorry.
Just building the package will not automagically install it into any rootfs.
You have to tell bitbake that you want giflib to be included in your image. This is done by adding it to IMAGE_INSTALL of your custom image.
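A minimal example of that, either in your custom image recipe or in local.conf for a quick test:

IMAGE_INSTALL_append = " giflib"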
Adding it to DISTRO_FEATURES will probably have no effect at all, because that is only parsed for specific keywords.
This describes how to add something to a custom image: Custom images in Yocto. You can technically also go the local.conf route mentioned in the paragraphs above the linked one, but that only hinders proper reproducibility. I explain it a bit more extensively here.

lib32-ncurses not installing into rootfs

I am trying to add 32-bit ncurses into my root file system
I am using the Intel Yocto BSP, sumo branch.
Here is my local.conf:
require conf/multilib.conf
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
IMAGE_INSTALL_append = " dpkg gnutls lib32-glibc lib32-libgcc lib32-libstdc++ lib32-gnutls lib32-freetype lib32-libx11 lib32-ncurses lib32-dpkg python3-six"
The ncurses folder is present in tmp:
build/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0
The image folder is created and has the libraries:
/tmp/work/x86-pokymllib32-linux/lib32-ncurses/6.0+20171125-r0/image/lib
libncurses.so.5 libncurses.so.5.9 libncursesw.so.5 libncursesw.so.5.9 libtinfo.so.5 libtinfo.so.5.9
But these files are not present in the root file system.
How can I debug this, and what should be my next step to get it into the root file system? Which log files should I look at?
Thanks for your time.
I found the answer after posting the query on the yocto mailing list.
$ oe-pkgdata-util find-path */libncurses.so*
ncurses-libncurses: /lib64/libncurses.so.5
ncurses-libncurses: /lib64/libncurses.so.5.9
ncurses-dbg: /lib64/.debug/libncurses.so.5.9
lib32-ncurses-dbg: /lib/.debug/libncurses.so.5.9
ncurses-dev: /usr/lib64/libncurses.so
lib32-ncurses-dev: /usr/lib/libncurses.so
lib32-ncurses-libncurses: /lib/libncurses.so.5.9
lib32-ncurses-libncurses: /lib/libncurses.so.5
So including lib32-ncurses-libncurses in local.conf solves the problem:
IMAGE_INSTALL_append = " lib32-ncurses-libncurses"
I see libncurses.so in packages-split/lib32-ncurses-dev; what should I do to add it to the rootfs?
The default recipe won't install the development package into the rootfs unless explicitly instructed to do so. You can add this to your local.conf for quick testing:
IMAGE_INSTALL_append = " lib32-ncurses-dev"
You should now see the contents of packages-split/lib32-ncurses-dev inside your ncurses image folder, and subsequently in the image rootfs.
There is a similar approach for dbg packages as well.
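For instance, following the same pattern (the -dbg package carries the debug symbols shown in the oe-pkgdata-util output above):

IMAGE_INSTALL_append = " lib32-ncurses-dbg"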

How to enable tc command when building a kernel using Yocto recipes

I want to enable the tc command that comes with iproute2 on my Linux system. My image is built using Yocto and bitbake.
So I copied the iproute2 recipe and the whole directory from the following link to try it out:
https://git.yoctoproject.org/cgit.cgi/poky/plain/meta/recipes-connectivity/iproute2
And included it in my Yocto build. That picked up the recipe and built it all well. But the tc command is still not available on the built system.
Question:
What am I missing, and how do I enable tc in a Linux image built using Yocto recipes?
You shouldn't need to copy the whole recipe; poky should be in your sources directory, so just reference the recipe in your image. You need both iproute2 and iproute2-tc.
IMAGE_INSTALL += "iproute2 \
iproute2-tc"
Additionally, you may need to enable some kernel options that tc makes use of, depending on your needs (a config-fragment sketch follows the list):
CONFIG_NET_SCHED
CONFIG_NET_SCH_CBQ
CONFIG_NET_SCH_HTB
CONFIG_NET_SCH_HFSC
CONFIG_NET_SCH_ATM
CONFIG_NET_SCH_PRIO
CONFIG_NET_SCH_MULTIQ
CONFIG_NET_SCH_RED
CONFIG_NET_SCH_SFQ
CONFIG_NET_SCH_TEQL
CONFIG_NET_SCH_TBF
CONFIG_NET_SCH_GRED
CONFIG_NET_SCH_DSMARK
CONFIG_NET_SCH_NETEM
CONFIG_NET_SCH_INGRESS
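One common way to enable these is a kernel configuration fragment in a bbappend (a sketch assuming a linux-yocto-based kernel recipe; the layer and file names are hypothetical):

# meta-yourlayer/recipes-kernel/linux/linux-yocto_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://net-sched.cfg"

# net-sched.cfg
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_NETEM=m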

Use Linux setcap command to set capabilities during Yocto build

I'm using Yocto 1.8 to build a Linux system.
I need to use the command "setcap" to set file capabilities during the build. It is introduced via the libcap package recipe: http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-support/libcap/libcap_2.25.bb?h=master
The problem is that the recipe provides the libcap package, which is only the library, and another subpackage called libcap-bin which contains the binaries I need to use. But I couldn't build or use the libcap-bin-native package inside my recipe as a dependency (using the DEPENDS variable), so every time I call the "setcap" binary, Yocto uses the host binary (Ubuntu 14.04 64-bit), not the build system's one (as it's not there).
I need to know how to include the native binaries built from the libcap-bin package in my native sysroot so they can be used during recipe execution.
Example recipe to use setcap command:
DESCRIPTION = "Apply CAPs on files"
SECTION = "bin"
LICENSE = "CLOSED"
do_install() {
install -d ${D}${bindir}
touch ${D}${bindir}/testacl
}
DEPENDS = "libcap libcap-native"
#New task will be added to each recipe to apply attributes inside ipks
fakeroot do_setcaps() {
setcap 'cap_sys_admin,cap_sys_rawio+ep' ${WORKDIR}/packages-split/${PN}${bindir}/testacl
}
#Adding the new task just before do_package_write_ipk task
addtask setcaps before do_package_write_ipk after do_packagedata
This recipe works fine, except that it uses the setcap command from my host system (Ubuntu 14.04 64-bit), which is located at /sbin/setcap.
The dependency package libcap-native only includes the library files in my native sysroot, not the binaries.
If I used this inside my recipe:
DEPENDS = "libcap-bin"
I got this error:
ERROR: Nothing PROVIDES 'libcap-bin'
I also saw this thread talking about the same topic:
Linux capabilities with yocto
But he uses Yocto > 2.3, and I'm using Yocto 1.8 and can't update it right now.
Any help?
PS: I already updated my Yocto build system to preserve ACLs and extended attributes during IPK creation; they are preserved inside the IPK, in the rootfs, and on the target after flashing.
I found the solution.
I had to add this to the libcap recipe:
PACKAGECONFIG_class-native = "attr"
The generated binaries (setcap & getcap) depend on libattr, and this has to be configured manually for the native variant.
I found that it is already configured for the target package:
PACKAGECONFIG ??= "attr ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam', '', d)}"
Sorry for disturbing.
I can't comment yet, so I'll answer here.
The command setcap should be provided by libcap-native. Please double-check whether it exists in tmp/work/x86_64-linux/libcap-native/2.25-r0/image/:
$ find tmp/work/x86_64-linux/libcap-native/2.25-r0/sysroot-destdir/ -name setcap
tmp/work/x86_64-linux/libcap-native/2.25-r0/sysroot-destdir/buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap
You can find setcap here after removing the prefix:
$ ls /buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap
/buildarea3/kkang/cgp9/builds/qemumips64-Apr24/tmp/sysroots/x86_64-linux/usr/sbin/setcap

The Jungo WinDriver needs a linux symbolic link, what does it mean?

Its manual says:
To run GUI WinDriver applications (e.g., DriverWizard [5]; Debug Monitor [7.2]) you must also
have version 5.0 of the libstdc++ library — libstdc++.so.5. If you do not have this file, install it from the relevant RPM in your Linux distribution (e.g., compat-libstdc++).
Before proceeding with the installation, you must also make sure that you have a linux symbolic link. If you do not, create one by typing: /usr/src$ ln -s <target kernel> linux
For example, for the Linux 2.4 kernel, type:
/usr/src$ ln -s linux-2.4/ linux
What does this symbolic link mean? What do <target kernel> and linux represent?
If I install WinDriver on Ubuntu 13.10, how should I specify these two parameters?
When installing WinDriver on a Linux machine, you must make sure that you are compiling WinDriver with the same header files that were used to build your kernel. uname -a will tell you your kernel version number.
You should verify that the directory /usr/src/linux (normally a symbolic link) is pointing to the correct kernel header sources and that the header files use exactly the same version numbers as your running kernel.
Here <target kernel> refers to the directory containing the kernel headers (named after the kernel version number), and linux is the name of the symbolic link itself.
To fix this:
Become super user: $ su
Change directory to /usr/src/: # cd /usr/src/
Delete the previous link you created (if any): # rm linux
Create a new symbolic link: # ln -s linux-2.4/ linux
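For the Ubuntu 13.10 case from the question, the running kernel's headers normally live under /usr/src/linux-headers-$(uname -r) once the matching linux-headers package is installed, so a roughly equivalent command would be:

# Point /usr/src/linux at the headers for the running kernel
# (assumes the matching linux-headers package is installed)
sudo ln -s /usr/src/linux-headers-$(uname -r) /usr/src/linux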
I recommend following the Linux installation procedure from the WinDriver manual at:
http://www.jungo.com/st/support/documentation/windriver/11.5.0/wdpci_manual.mhtml/wd_install_process.html#wd_install_linux
Regards,
Nadav, Jungo support manager
