I need to create a wic image with secure boot enabled. I changed local.conf to IMAGE_FSTYPES = "wic" and ran bitbake core-image-sato-sdk to get the image.
The generated image is able to boot, but not with secure boot. I investigated and found that my meta-secureboot layer has a secureboot.class, which provides efi_populate_append and efi_iso_populate_append.
These two functions contain the lines that generate bzImage.signed for secure boot. Therefore there is no way I can simply switch to a wic image, because they hang off efi_populate and efi_iso_populate, which are only called for the iso and hddimg image types. Does anyone have an idea how to get these hddimg/iso functions called for wic?
Steps done:
Created core-image-sato-sdk.bbappend, adding:
inherit image_types_wic
do_image_wic[recrdeptask] += "do_efi_iso_populate"
Example from secureboot.class:
efi_populate_append() {
    # Sign bzImage and deploy it as bzImage.signed
    sb-keymgmt.py -c sign -kn ${DEPLOY_DIR_IMAGE}/yocto.key -cn ${DEPLOY_DIR_IMAGE}/yocto.crt -usf ${DEPLOY_DIR_IMAGE}/bzImage -sf ${DEPLOY_DIR_IMAGE}/bzImage.signed
    install -m 0644 ${DEPLOY_DIR_IMAGE}/bzImage.signed ${DEST}/bzImage.signed
}

efi_iso_populate_append() {
    iso_dir=$1
    efi_populate $iso_dir
    # Build an EFI directory to create efi.img
    mkdir -p ${EFIIMGDIR}/${EFIDIR}
    cp $iso_dir/${EFIDIR}/* ${EFIIMGDIR}${EFIDIR}
}
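One possible direction (a sketch, untested; the hook-up below is my assumption, not something meta-secureboot provides): have the signing step deploy bzImage.signed to ${DEPLOY_DIR_IMAGE}, then let wic's bootimg-partition plugin copy it into the boot partition via IMAGE_BOOT_FILES instead of relying on efi_populate:

# core-image-sato-sdk.bbappend (sketch) -- assumes bzImage.signed is
# already in ${DEPLOY_DIR_IMAGE} before the wic image is assembled
do_image_wic[depends] += "virtual/kernel:do_deploy"
# Picked up by a .wks boot partition that uses --source bootimg-partition
IMAGE_BOOT_FILES += "bzImage.signed"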
I want to build my own Yocto image for a Raspberry Pi (CM3). I use the meta-raspberrypi layer (dunfell) and poky dunfell-23.0.0.
To install the Microchip wilc3000 module I have to modify the kernel following this guide. In doing so, I change the kernel configuration (Kconfig) to add the mchp driver to the menu and later select it.
I have generated a patch for the kernel using this guide (patch-based workflow). After generating the patch, I modified and generated a new kernel config (defconfig). All the changes are applied in my own layer with this recipe (linux-raspberrypi_%.bbappend):
FILESEXTRAPATHS_prepend := "${THISDIR}/patchs:"
SRC_URI += "file://0001-Add-wilc3000-driver.patch \
file://defconfig_my \
"
PACKAGE_ARCH = "${MACHINE_ARCH}"
# PR="r2"
INTREE_DEFCONFIG_pn-linux-ti = "defconfig_my"
kmoddir = "/lib/modules/${KERNEL_VERSION}/kernel/drivers/net/wireless/mchp"
# do_configure_append() {
# cat ${WORKDIR}/*.cfg >> ${B}/.config
# }
do_install_append() {
install -d ${D}${kmoddir}
install -m 0755 ${WORKDIR}/wilc-spi.ko ${D}${kmoddir}
}
FILES_${PN}_append += " \
${kmoddir}/wilc-spi.ko \
"
The patchs folder contains the patch for the kernel and the newly generated kernel configuration.
When I generate the image:
bitbake -v core-image-base
The build fails in the do_install task when it tries to copy wilc-spi.ko, which has not been generated.
What is the way to compile and deploy the kernel with my own configuration? If I download and compile the kernel in a separate folder, it successfully generates wilc-spi.ko, but inside the Yocto build folder there is no trace of the file being generated.
Please help me add this driver to the kernel. Thanks a lot.
As @qschulz pointed out, the solution was to rename defconfig_my to defconfig and remove all the extra code. Finally, the recipe looks like this:
FILESEXTRAPATHS_prepend := "${THISDIR}/patchs:"
SRC_URI += "file://0001-Add-wilc3000-driver.patch \
file://defconfig \
"
PACKAGE_ARCH = "${MACHINE_ARCH}"
PR="r3"
FILES_${PN}_append += " \
${kmoddir}/wilc-spi.ko \
"
KERNEL_MODULE_AUTOLOAD += "wilc-spi.ko"
And add in the layer.conf the instruction to include the module in the image:
MACHINE_EXTRA_RDEPENDS += " kernel-module-wilc-spi "
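As an aside, the usual way to carry kernel options in Yocto without maintaining a full defconfig is a config fragment. A minimal sketch, assuming the recipe supports the kernel-yocto fragment machinery; wilc.cfg is a hypothetical file name and the symbol must match whatever Kconfig option the patch actually introduces:

# linux-raspberrypi_%.bbappend (sketch)
SRC_URI += "file://wilc.cfg"

# patchs/wilc.cfg -- hypothetical fragment; the symbol name must match
# the Kconfig option added by 0001-Add-wilc3000-driver.patch
CONFIG_WILC_SPI=m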
I'm trying to install an external binary on NixOS, the declarative way. In the Nixpkgs manual I found this way of getting an external binary into NixOS:
{ pkgs ? import <nixpkgs> {} }:
pkgs.stdenv.mkDerivation {
name = "goss";
src = pkgs.fetchurl {
url = "https://github.com/aelsabbahy/goss/releases/download/v0.3.13/goss-linux-amd64";
sha256 = "1q0kfdbifffszikcl0warzmqvsbx4bg19l9a3vv6yww2jvzj4dgb";
};
phases = ["installPhase"];
installPhase = ''
'';
}
But I'm wondering: what should I add inside installPhase to make this binary get installed into the system?
This seems to be an open source Go application, so it's preferable to use Nixpkgs' Go support instead, which may be more straightforward than patching a binary.
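A minimal sketch of that route, assuming the project builds with Go modules (the hashes are placeholders that Nix will ask you to correct on the first build; goss may even already be packaged in nixpkgs):

{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "goss";
  version = "0.3.13";
  src = fetchFromGitHub {
    owner = "aelsabbahy";
    repo = "goss";
    rev = "v${version}";
    sha256 = lib.fakeSha256; # placeholder; replace with the real hash
  };
  vendorSha256 = lib.fakeSha256; # placeholder
}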
That said, installPhase is responsible for creating the $out path; typically mkdir -p $out/bin followed by cp, make install, or similar commands.
So that's not actually installing it into the system; after all, Nix derivations are not supposed to have side effects. "Installing" it into the system is the responsibility of NixOS's derivations, as configured by you.
You could say that 'installation' is the combination of modifying the NixOS configuration + switching to the new NixOS. I tend to think about the modification to the configuration only; the build and switch feel like implementation details, even though nixos-rebuild is usually a manual operation.
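For instance, assuming the derivation from the question is saved as goss.nix (an arbitrary file name), the NixOS side could look like this sketch:

# configuration.nix (sketch): puts the package's bin/ on $PATH system-wide
{ pkgs, ... }:
{
  environment.systemPackages = [
    (import ./goss.nix { inherit pkgs; })
  ];
}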
Example installPhase:
installPhase = ''
install -D $src $out/bin/goss
chmod a+x $out/bin/goss
'';
Normally chmod would be done to a local file by the build phase, but we don't really need that phase here.
I have no idea why this was so hard to figure out. Having robust configuration systems is fine, but at the end of the day sometimes you just need to be able to download and expose a single flipping file on the $PATH.
The result of fetchurl is "the unaltered contents of the URL within the Nix store", which is what's being used for src. So in installPhase, $src points to the downloaded data, and you just have to copy/install/link that into $out/…
pkgs.stdenv.mkDerivation {
name = "hello_static";
src = pkgs.fetchurl {
name = "hello_static";
url = "https://raw.githubusercontent.com/ruanyf/simple-bash-scripts/6e837f949010e0f5e9305e629da946de12cc63e8/scripts/hello-world.sh";
sha256 = "sha256:somE27ajbm0TtWv9tyeqTWDW3gbIs6xvlcFS9QS1ZJ0=";
};
phases = [ "installPhase" ];
installPhase = ''
install -D $src $out/bin/hello_static
'';
};
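To test such a derivation standalone, assuming it's saved in a file (say hello.nix, a hypothetical name) that takes pkgs like the first snippet:

$ nix-build hello.nix
$ ./result/bin/hello_static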
I'm building a custom image that uses the meta-intel layer (I'm targeting Intel boards, such as the Minnowboard Turbot, for instance), and I want to tweak the options for booting.
First problem
As far as I understand, meta-intel uses systemd-boot (via rmc-boot) as EFI_PROVIDER.
So I should be able to override the specific BOOT_TIMEOUT parameter by setting:
SYSTEMD_BOOT_TIMEOUT := "0"
in my custom image, as far as I can see in this file
Unfortunately, that doesn't work (the boot timeout is still 4 seconds). How come?
Second problem
I would also like to append options to the boot.conf file (in /boot/loader/entries, loaded by /boot/loader/loader.conf), such as quiet or vt.global_cursor_default=0.
I see in the Intel machine conf that there is an APPEND configuration, but overriding it or appending to it in my custom image doesn't work (it's still not written in the boot.conf file):
APPEND += "quiet vt.global_cursor_default=0"
I've checked that the configuration is read correctly, and that is the case:
$ bitbake my-custom-image -e | grep ^APPEND= -A1 -B1
# " quiet rootwait console=ttyS0,115200 console=tty0${#bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", " ro", "", d)}"
APPEND=" quiet vt.global_cursor_default=0 rootwait console=ttyS0,115200 console=tty0"
#
But no matter what I do, the command line doesn't change on the built image.
What am I missing? There should be a relatively easy way to achieve what I'm after, I guess, but so far I have not managed to find it.
Thanks a lot !
I have been looking at the kernel command line parameters for Intel platforms in Yocto with meta-intel.
I have noticed differences between the wic and hddimg Yocto images.
The hddimg seems to use the RMC boot entry definition, whereas the wic image uses the boot entry defined in the wks kickstart.
My machine conf has the following :
WKS_FILE ?= "${@bb.utils.contains_any("EFI_PROVIDER", "systemd-boot rmc-boot", "systemd-bootdisk.wks", "mkefidisk.wks", d)}"
In turn, systemd-bootdisk.wks has the following "boot" entry:
bootloader --ptable gpt --timeout=5 --append="rootwait rootfstype=ext4 console=ttyS0,115200 console=tty0"
The RMC definition for my MinnowBoard MAX has two entries, a boot and an install:
Minnow Max B3 boot
Minnow Max B3 install
I am using the pyro release of Yocto. Perhaps the RMC boot definitions have not yet been integrated into the wic images.
I am looking for a common place to add kernel command line parameters. Any ideas?
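One approach (a sketch, not verified on pyro): keep both the timeout and the command line in your own kickstart, place it in a wic/ directory of your layer, and point WKS_FILE at it. The file name and the appended options below are examples; the partition lines are adapted from systemd-bootdisk.wks:

# custom-bootdisk.wks (sketch)
part /boot --source bootimg-efi --sourceparams="loader=systemd-boot" --label msdos --active --align 1024
part / --source rootfs --fstype=ext4 --label platform --align 1024
bootloader --ptable gpt --timeout=0 --append="rootwait quiet vt.global_cursor_default=0 console=ttyS0,115200 console=tty0"

# local.conf or machine conf
WKS_FILE = "custom-bootdisk.wks"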
I am having trouble putting my initramfs.cpio into my kernel image with Yocto.
I have two .bb files: one builds an initramfs, and the other builds a fitImage.
I successfully built the fitImage bundled with my initramfs image.
But it always fails to build a fitImage that also carries the initramfs cpio in its /usr directory.
(I mean, I want to see a file named initramfs.cpio in /usr when I boot my fitImage to the console.)
====================================================================
Here is my error message:
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: QA Issue: linux-mine: Files/directories were installed but not shipped in any package:
/usr
/usr/initramfs-mine-qemu.cpio
Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
linux-mine: 2 installed and not shipped files. [installed-vs-shipped]
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: Fatal QA errors found, failing task.
ERROR: linux-mine-1_4.9.27+gitAUTOINC+d87116e608-r0 do_package: Function failed: do_package
ERROR: Logfile of failure stored in: /home/paul/projects/Test/yocto/build/tmp/work/mine-poky-linux-gnueabi/linux-mine/1_4.9.27+gitAUTOINC+d87116e608-r0/temp/log.do_package.26149
ERROR: Task (/home/paul/projects/Test/yocto/yocto-2.2/poky/../meta-mine/recipes-kernel/linux/linux-mine_4.9.bb:do_package) failed with exit code '1'
====================================================================
Here is my kernel image .bb file:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PV}:"
LINUX_VERSION ?= "4.9.27"
SRCREV = "d87116e608e94ad684b5e94d46c892e33b9e2d78"
SRC_URI = "git://local/kernel;protocol=ssh;branch=master"
#FILES_${PN} += "/usr /usr/initramfs-mine-${MACHINE_ARCH}.cpio"
#FILES_${PN}-${PV} += "/usr /usr/initramfs-mine-${MACHINE_ARCH}.cpio"
#IMAGE_INSTALL = "initramfs-mine"
do_install_append () {
echo "WangPaul : S=[${S}]"
echo "WangPaul : B=[${B}]"
echo "WangPaul : D=[${D}]"
install -d ${D}/usr/
install -m 0444 ${B}/usr/initramfs-mine-${MACHINE_ARCH}.cpio ${D}/usr/
}
====================================================================
Here is my initramfs .bb file:
LICENSE = "GPLv2"
PACKAGE_INSTALL = "initramfs-live-boot ${VIRTUAL-RUNTIME_base-utils} udev ${ROOTFS_BOOTSTRAP_INSTALL}"
IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
inherit core-image
====================================================================
I have found similar questions:
Ship extra files in kernel module recipe and
An example of using FILES_${PN}
The approaches in those discussions do not work...
Any information would be appreciate !!
Thanks !!
The error is a QA issue: it means the files are compiled and installed but not added to any package (and hence not to the rootfs). Add the line below to your kernel-image .bb; it will solve the issue.
FILES_${PN} += "${exec_prefix}/*"
Note: you may have used the wrong format in your kernel .bb file.
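Putting the question's install step and this FILES line together, the relevant part of linux-mine_4.9.bb would look like this (a sketch assembled only from the snippets above):

do_install_append () {
    install -d ${D}${exec_prefix}
    install -m 0444 ${B}/usr/initramfs-mine-${MACHINE_ARCH}.cpio ${D}${exec_prefix}/
}

# Ship everything installed under /usr so the installed-vs-shipped
# QA check passes
FILES_${PN} += "${exec_prefix}/*"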
I'm trying to get a simple trace file from QEMU.
I followed the instructions in docs/tracing.txt
with this command: qemu-system-x86_64 -m 2G -trace events=/tmp/events ../qemu/test.img
I'd like to get just a simple trace file.
I've got a trace-<pid> file; however, it doesn't have anything in it.
Build with the 'simple' trace backend:
./configure --enable-trace-backends=simple
make
Create a file with the events you want to trace:
echo bdrv_aio_readv > /tmp/events
echo bdrv_aio_writev >> /tmp/events
Run the virtual machine to produce a trace file:
qemu -trace events=/tmp/events ... # your normal QEMU invocation
Pretty-print the binary trace file:
./scripts/simpletrace.py trace-events trace-* # substitute QEMU's pid for *
I followed these instructions.
Could somebody please give me some advice on this situation?
Thanks!
I got the same problem by following the same document:
https://fossies.org/linux/qemu/docs/tracing.txt
I got nothing because bdrv_aio_readv and bdrv_aio_writev were not enabled by default, at least not in the version I compiled. You need to open trace-events in the source directory and look for lines without "disabled"; e.g. I used:
echo "load_file" > /tmp/events
Then start QEMU, and after the guest has started, I run:
./scripts/simpletrace.py trace-events trace-Pid
I got:
load_file 1474.156 pid=5249 name=kvmvapic.bin path=qemu-2.8.0-rc0/pc-bios/kvmvapic.bin
load_file 22437.571 pid=5249 name=vgabios-stdvga.bin path=qemu-2.8.0-rc0/pc-bios/vgabios-stdvga.bin
load_file 10034.465 pid=5249 name=efi-e1000.rom
You can also add -monitor stdio to the QEMU command line; after it has started, you can run the following command in the QEMU CLI:
(qemu) info trace-events
load_file : state 1
vm_state_notify : state 1
balloon_event : state 0
cpu_out : state 0
cpu_in : state 0
state 1 means the event is enabled.
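Events listed with state 0 can also be toggled at runtime from the same monitor (provided the event was compiled in), e.g.:

(qemu) trace-event balloon_event on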
Modify the trace-events file in the source tree
As of v2.9.0 you also have to remove the disable keyword from the lines you want to enable there, e.g.:
-disable exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
+exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
and recompile.
Here is a minimal fully automated runnable example that boots Linux and produces traces: https://github.com/cirosantilli/linux-kernel-module-cheat
For example, I used the traces to count how many boot instructions Linux has: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/c7bbc6029af7f4fab0a23a380d1607df0b2a3701/count-boot-instructions.md
I have a lightly patched QEMU as a submodule, the key commit is: https://github.com/cirosantilli/qemu/commit/e583d175e4cdfb12b4812a259e45c679743b32ad