I am building core-image-minimal on the warrior branch. My device has an Atom processor, so I changed nehalem to atom in the tune-corei7.inc file. My machine is set to intel-corei7-64. While generating core-image-minimal, I am facing the following error:
NOTE: Installing complementary packages ...
NOTE: Running ['oe-pkgdata-util', '-p', '/home/panther2/warrior/build_panther1/tmp/pkgdata/panther1', 'glob', '/tmp/installed-pkgs03hhi936', '']
NOTE: Running intercept scripts:
NOTE: > Executing update_gio_module_cache intercept ...
NOTE: Exit code 1. Output:
+ [ True = False -a qemuwrapper-cross != nativesdk-qemuwrapper-cross ]
+ qemu-x86_64 -r 3.2.0 -cpu atom,check=false -E LD_LIBRARY_PATH=/home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/lib:/home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/lib -L /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/libexec/gio-querymodules /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/lib/gio/modules/
unable to find CPU model 'atom'
ERROR: The postinstall intercept hook 'update_gio_module_cache' failed, details in /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/temp/log.do_rootfs
ERROR:
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
Any help here?
Thanks in advance!
Edit: attaching the "tune-corei7.inc" file:
# Settings for the GCC(1) cpu-type "atom":
#
# Intel atom CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1,
# SSE4.2 and POPCNT instruction set support.
#
# This tune is recommended for Intel atom and Silvermont (e.g. Bay Trail) CPUs
# (and beyond).
#
DEFAULTTUNE ?= "corei7-64"
# Include the previous tune to pull in PACKAGE_EXTRA_ARCHS
require conf/machine/include/tune-atom.inc
# Extra tune features
TUNEVALID[corei7] = "Enable corei7 specific processor optimizations"
TUNE_CCARGS .= "${@bb.utils.contains('TUNE_FEATURES', 'corei7', ' -march=atom -mtune=generic -mfpmath=sse -msse4.2', '', d)}"
# Extra tune selections
AVAILTUNES += "corei7-32"
TUNE_FEATURES_tune-corei7-32 = "${TUNE_FEATURES_tune-x86} corei7"
BASE_LIB_tune-corei7-32 = "lib"
TUNE_PKGARCH_tune-corei7-32 = "corei7-32"
PACKAGE_EXTRA_ARCHS_tune-corei7-32 = "${PACKAGE_EXTRA_ARCHS_tune-atom-32} corei7-32"
QEMU_EXTRAOPTIONS_corei7-32 = " -cpu nehalem,check=false"
AVAILTUNES += "corei7-64"
TUNE_FEATURES_tune-corei7-64 = "${TUNE_FEATURES_tune-x86-64} corei7"
BASE_LIB_tune-corei7-64 = "lib64"
TUNE_PKGARCH_tune-corei7-64 = "corei7-64"
PACKAGE_EXTRA_ARCHS_tune-corei7-64 = "${PACKAGE_EXTRA_ARCHS_tune-atom-64} corei7-64"
QEMU_EXTRAOPTIONS_corei7-64 = " -cpu nehalem,check=false"
AVAILTUNES += "corei7-64-x32"
TUNE_FEATURES_tune-corei7-64-x32 = "${TUNE_FEATURES_tune-x86-64-x32} corei7"
BASE_LIB_tune-corei7-64-x32 = "libx32"
TUNE_PKGARCH_tune-corei7-64-x32 = "corei7-64-x32"
PACKAGE_EXTRA_ARCHS_tune-corei7-64-x32 = "${PACKAGE_EXTRA_ARCHS_tune-atom-64-x32} corei7-64-x32"
QEMU_EXTRAOPTIONS_corei7-64-x32 = " -cpu nehalem,check=false"
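Note for anyone hitting the same failure: -march=atom in TUNE_CCARGS is a GCC option, while the -cpu value in QEMU_EXTRAOPTIONS is a QEMU CPU model name, and QEMU has no model called 'atom' (hence the "unable to find CPU model 'atom'" message in the log). QEMU_EXTRAOPTIONS is only used by the qemu-x86_64 user-mode emulator that runs the postinstall intercepts, so it can stay on a model QEMU knows (the original nehalem, as in the file above) while the image is still compiled with -march=atom. A quick way to check which model names the emulator accepts (a sketch; use whichever qemu-x86_64 binary the intercepts run, or the one on your host):
qemu-x86_64 -cpu help                  # lists the available x86 CPU model names
qemu-x86_64 -cpu help | grep -i atom   # check whether any Atom-class model exists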
I am building a simple .NET Core solution which contains two projects. Here's the .sln file:
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.32930.78
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DataAccess", "DataAccess\DataAccess.csproj", "{A2215C05-2906-47D8-A51E-986167EED172}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Common", "Common\Common.csproj", "{5D3617CC-A0CC-437F-96CC-D64B6F23A668}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Debug|x64 = Debug|x64
Release|Any CPU = Release|Any CPU
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{A2215C05-2906-47D8-A51E-986167EED172}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Debug|Any CPU.Build.0 = Debug|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Debug|x64.ActiveCfg = Debug|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Debug|x64.Build.0 = Debug|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Release|Any CPU.ActiveCfg = Release|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Release|Any CPU.Build.0 = Release|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Release|x64.ActiveCfg = Release|Any CPU
{A2215C05-2906-47D8-A51E-986167EED172}.Release|x64.Build.0 = Release|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Debug|Any CPU.Build.0 = Debug|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Debug|x64.ActiveCfg = Debug|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Debug|x64.Build.0 = Debug|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Release|Any CPU.ActiveCfg = Release|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Release|Any CPU.Build.0 = Release|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Release|x64.ActiveCfg = Release|Any CPU
{5D3617CC-A0CC-437F-96CC-D64B6F23A668}.Release|x64.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {5EECAB02-1A61-4FAD-9E00-5B1292E418E6}
EndGlobalSection
EndGlobal
When I run dotnet restore/build from the solution directory on my Windows machine, everything goes as expected and the output is the following:
dotnet build
MSBuild version 17.3.2+561848881 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
Common -> C:\git\Common\bin\Debug\netstandard2.1\Common.dll
Common -> C:\git\Common\bin\Debug\netstandard2.0\Common.dll
DataAccess -> C:\git\DataAccess\bin\Debug\netstandard2.0\DataAccess.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:03.08
When I try to run the same process from the Docker container, it fails with the following message:
PS /App> pwd
Path
----
/App
PS /App> dotnet build
MSBuild version 17.3.2+561848881 for .NET
/usr/share/dotnet/sdk/6.0.405/NuGet.targets(369,5): error MSB3202: The project file "/App/Common/Common.csproj" was not found. [/App/My.sln]
Build FAILED.
/usr/share/dotnet/sdk/6.0.405/NuGet.targets(369,5): error MSB3202: The project file "/App/Common/Common.csproj" was not found. [/App/My.sln]
0 Warning(s)
1 Error(s)
Time Elapsed 00:00:00.24
I am running the process on
PS /> cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
EDIT 1
Here is my Dockerfile content
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /App
COPY . ./
RUN dotnet nuget add source 'https://pkgs.dev.azure.com/myorg/_packaging/2bba9c97-9acb-40b5-aa6c-17e17617e3aa/nuget/v3/index.json' -u 'tst' -p 'XXXXXXXXX' --store-password-in-clear-text
RUN wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& rm packages-microsoft-prod.deb \
&& apt-get update && apt-get install -y powershell
EDIT 2
If I run the build in the container from an individual project's folder rather than from the solution directory, it works as expected:
PS /App/Common> pwd
Path
----
/App/Common
PS /App/Common> dotnet build
MSBuild version 17.3.2+561848881 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
Common -> /App/common/bin/Debug/netstandard2.1/Common.dll
Common -> /App/common/bin/Debug/netstandard2.0/Common.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:00.67
EDIT 3
I can also confirm that the project file exists inside the container:
PS /> pwd
Path
----
/
PS /> ls /App/common/Common.csproj
/App/common/Common.csproj
The issue was the case sensitivity of the path. The solution file records the project as "Common\Common.csproj", but inside the container the directory is /App/common (lower case, as the listings above show). On Windows (C:\git\Common) the mismatch is harmless because the filesystem is case-insensitive, but on Linux the two paths are different, so MSBuild cannot find the project when building from the solution.
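A minimal way to get the container build working, assuming the Common folder casing is the only mismatch (a sketch, not a general fix):
# Rename the on-disk directory so it matches the case recorded in My.sln,
# then build from the solution directory again.
mv /App/common /App/Common
dotnet build /App/My.sln
The durable fix is to make the folder casing in the repository match what the .sln records (or vice versa), so that a case-sensitive checkout lays the files out under the expected names.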
I want to set up a Linux kernel with PREEMPT_RT using Yocto.
Following meta/recipes-rt/README, I added the following to build/conf/local.conf and ran bitbake core-image-sato, but bitbake fails.
MACHINE ?= "genericx86-64"
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-rt"
COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
COMPATIBLE_MACHINE_quilt-native = "genericx86-64"
Yocto outputs the following error:
NOTE: Bitbake server didn't start within 5 seconds, waiting for 90
Loading cache: 100% |#######################################################################################################################################################################| Time: 0:00:13
Loaded 1330 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.46.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "x86_64-poky-linux"
MACHINE = "genericx86-64"
DISTRO = "poky"
DISTRO_VERSION = "3.1.20"
TUNE_FEATURES = "m64 core2"
TARGET_FPU = ""
meta
meta-poky
meta-yocto-bsp = "dunfell:90a6f6a110ab14890e2f6a1616e74ee259fc0f8f"
Initialising tasks: 100% |##################################################################################################################################################################| Time: 0:00:48
Sstate summary: Wanted 14 Found 0 Missed 14 Current 1203 (0% match, 98% complete)
NOTE: Executing Tasks
ERROR: linux-yocto-rt-5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0 do_kernel_metadata: Could not locate BSP definition for genericx86-64/preempt-rt and no defconfig was provided
ERROR: linux-yocto-rt-5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0 do_kernel_metadata: Execution of '/media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/run.do_kernel_metadata.1138429' failed with exit code 1
ERROR: Logfile of failure stored in: /media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/log.do_kernel_metadata.1138429
Log data follows:
| DEBUG: Executing python function extend_recipe_sysroot
| NOTE: Direct dependencies are ['/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb:do_populate_sysroot']
| NOTE: Installed into sysroot: []
| NOTE: Skipping as already exists in sysroot: ['kern-tools-native', 'quilt-native']
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_kernel_metadata
| NOTE: do_kernel_metadata: for summary/debug, set KCONF_AUDIT_LEVEL > 0
| ERROR: Could not locate BSP definition for genericx86-64/preempt-rt and no defconfig was provided
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/run.do_kernel_metadata.1138429' failed with exit code 1
ERROR: Task (/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb:do_kernel_metadata) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3173 tasks of which 3172 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb:do_kernel_metadata
Summary: There were 2 ERROR messages shown, returning a non-zero exit code.
My hardware CPU is an x86-64 core. The error hints that genericx86-64/preempt-rt does not exist. What should I do to build core-image-sato with PREEMPT_RT? Please leave a comment if you are familiar with this problem.
I have tried checking out various Yocto branches (dunfell, langdale, kirkstone), but it makes no difference; I would prefer to stay on dunfell.
Following is my build/conf/bblayers.conf:
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/media/fff/disk1T/yocto/demo3/poky/meta \
/media/fff/disk1T/yocto/demo3/poky/meta-poky \
/media/fff/disk1T/yocto/demo3/poky/meta-yocto-bsp \
"
I am noticing that all my rules request memory twice: once with a lower maximum than what I requested (mem_mb) and once with what I actually requested (mem_gb). If I run the rules as localrules they do run faster. How can I make sure the default settings do not interfere?
resources: mem_mb=100, disk_mb=8620, tmpdir=/tmp/pop071.54835, partition=h24, qos=normal, mem_gb=100, time=120:00:00
The rules are as follows:
rule bwa_mem2_mem:
input:
R1 = "data/results/qc/{species}.{population}.{individual}_1.fq.gz",
R2 = "data/results/qc/{species}.{population}.{individual}_2.fq.gz",
R1_unp = "data/results/qc/{species}.{population}.{individual}_1_unp.fq.gz",
R2_unp = "data/results/qc/{species}.{population}.{individual}_2_unp.fq.gz",
idx= "data/results/genome/genome",
ref = "data/results/genome/genome.fa"
output:
bam = "data/results/mapped_reads/{species}.{population}.{individual}.bam",
log:
bwa ="logs/bwa_mem2/{species}.{population}.{individual}.log",
sam ="logs/samtools_view/{species}.{population}.{individual}.log",
benchmark:
"benchmark/bwa_mem2_mem/{species}.{population}.{individual}.tsv",
resources:
time = parameters["bwa_mem2"]["time"],
mem_gb = parameters["bwa_mem2"]["mem_gb"],
params:
extra = parameters["bwa_mem2"]["extra"],
tag = compose_rg_tag,
threads:
parameters["bwa_mem2"]["threads"],
shell:
"bwa-mem2 mem -t {threads} -R '{params.tag}' {params.extra} {input.idx} {input.R1} {input.R2} | "
"samtools sort -l 9 -o {output.bam} --reference {input.ref} --output-fmt CRAM -# {threads} /dev/stdin 2> {log.sam}"
and the config is:
cluster:
mkdir -p logs/{rule} && # change the log file to logs/slurm/{rule}
sbatch
--partition={resources.partition}
--time={resources.time}
--qos={resources.qos}
--cpus-per-task={threads}
--mem={resources.mem_gb}
--job-name=smk-{rule}-{wildcards}
--output=logs/{rule}/{rule}-{wildcards}-%j.out
--parsable # Required to pass job IDs to scancel
default-resources:
- partition=h24
- qos=normal
- mem_gb=100
- time="04:00:00"
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True # Required to run with local conda enviroment
cluster-status: status-sacct.sh # Required to monitor the status of the submitted jobs
cluster-cancel: scancel # Required to cancel the jobs with Ctrl + C
cluster-cancel-nargs: 50
Cheers,
Angel
Right now there are two separate memory resource requirements:
mem_mb
mem_gb
From the perspective of Snakemake these are different resources, so both will be passed to the cluster. A quick fix is to use the same unit everywhere; e.g. if the default really needs only 100 MB, change the default resources to:
default-resources:
- partition=h24
- qos=normal
- mem_mb=100
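If you standardize on mem_mb, the other two places that mention memory should agree as well, otherwise two different keys are still generated. A sketch of the remaining changes (the conversion from the parameters file is hypothetical):
    resources:
        mem_mb = parameters["bwa_mem2"]["mem_gb"] * 1024,
and in the cluster command of the profile:
    --mem={resources.mem_mb}
so that a single memory request flows from the rule (or the defaults) through to sbatch.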
I'm working with Yocto "Gatesgarth" on a custom board based on i.MX6ULL.
I'm facing some problems in generating the extensible SDK (eSDK).
The normal SDK is generated correctly.
Some details below.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment Variable:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is :
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk_ext(d)
0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
0716: bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting
SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
0717:
0718: d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
0719: if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
*** 0720: buildtools_fn = get_current_buildtools(d)
0721: else:
0722: buildtools_fn = None
0723: d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
0724: d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
0552: import glob
0553: btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
0554: btfiles.sort(key=os.path.getctime)
0555: print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
*** 0556: return os.path.basename(btfiles[-1])
0557:
0558:def get_sdk_required_utilities(buildtools_fn, d):
0559: """Find required utilities that aren't provided by the buildtools"""
0560: sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
At line 553 the btfiles list should be populated, but it comes back empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set for this to work correctly.
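For reference, the glob from get_current_buildtools() can be reproduced by hand; the pattern and SDK_DEPLOY value are taken from the traceback and the MY-DEBUG line above:
ls /yocto/build-mX6ull/tmp/deploy/sdk/*-buildtools-nativesdk-standalone-*.sh
In my case this matches nothing, which is exactly why btfiles[-1] fails with IndexError.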
I had a similar issue where I couldn't populate the eSDK; it came down to the GLIBC version.
Update your GLIBC version: in my case I had to update the GLIBC version to 2.33 in the "yocto-uninative.inc" file, and it worked for me.
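To make that concrete: in Gatesgarth the file is meta/conf/distro/include/yocto-uninative.inc, and the GLIBC version mentioned above lives in the UNINATIVE_MAXGLIBCVERSION variable, e.g. (illustrative, as used by newer poky releases):
UNINATIVE_MAXGLIBCVERSION = "2.33"
The same file also pins UNINATIVE_URL and per-arch UNINATIVE_CHECKSUM values, so bumping only the version number is usually not enough; backporting the whole yocto-uninative.inc from a newer poky release is the more common approach.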
While porting the layer for my SoM from the pyro branch to dunfell, I've encountered some problems related to the COMPATIBLE_MACHINE list in my recipes (BitBake says there is no recipe compatible with my machine).
To investigate further, I tried to read the actual value of the MACHINEOVERRIDES variable using the bitbake -e command. However, I don't think the value it reports is complete, because in the bitbake -e output I can see other variable expansions that are used to form the value of the general OVERRIDES variable.
This is the output of the bitbake -e | grep OVERRIDES command run from my build environment:
# $DISTROOVERRIDES [3 operations]
DISTROOVERRIDES="fslc"
# $FILESOVERRIDES [2 operations]
# "${TRANSLATED_TARGET_ARCH}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}"
# [doc] "A subset of OVERRIDES used by the OpenEmbedded build system for creating FILESPATH."
# "${TRANSLATED_TARGET_ARCH}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}"
FILESOVERRIDES="arm:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc"
# $MACHINEOVERRIDES [14 operations]
# "PRISTINE_MACHINEOVERRIDES"
MACHINEOVERRIDES="isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul"
# $MACHINEOVERRIDES_EXTENDER_FILTER_OUT
MACHINEOVERRIDES_EXTENDER_FILTER_OUT=" imx mx6 mx6q mx6dl mx6sx mx6sl mx6sll mx6ul mx6ull mx7 mx7d mx7ulp mx8 mx8qm mx8mm mx8mn mx8mq mx8qxp "
# $MACHINEOVERRIDES_EXTENDER_FILTER_OUT_use-mainline-bsp
MACHINEOVERRIDES_EXTENDER_FILTER_OUT_use-mainline-bsp=" imx mx6 mx6q mx6dl mx6sx mx6sl mx6sll mx6ul mx6ull mx7 mx7d mx7ulp mx8 mx8qm mx8mm mx8mn mx8mq mx8qxp "
# $MACHINEOVERRIDES_EXTENDER_mx25
MACHINEOVERRIDES_EXTENDER_mx25="use-mainline-bsp"
# $MACHINEOVERRIDES_EXTENDER_mx6dl
MACHINEOVERRIDES_EXTENDER_mx6dl="imxfbdev:imxpxp:imxipu:imxvpu:imxgpu:imxgpu2d:imxgpu3d:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6q
MACHINEOVERRIDES_EXTENDER_mx6q="imxfbdev:imxipu:imxvpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx6sl
MACHINEOVERRIDES_EXTENDER_mx6sl="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6sll
MACHINEOVERRIDES_EXTENDER_mx6sll="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6sx
MACHINEOVERRIDES_EXTENDER_mx6sx="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx6ul
MACHINEOVERRIDES_EXTENDER_mx6ul="imxfbdev:imxpxp"
# $MACHINEOVERRIDES_EXTENDER_mx6ull
MACHINEOVERRIDES_EXTENDER_mx6ull="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx7d
MACHINEOVERRIDES_EXTENDER_mx7d="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx7ulp
MACHINEOVERRIDES_EXTENDER_mx7ulp="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mm
MACHINEOVERRIDES_EXTENDER_mx8mm="imxdrm:imxvpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mn
MACHINEOVERRIDES_EXTENDER_mx8mn="imxdrm:imxgpu:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mq
MACHINEOVERRIDES_EXTENDER_mx8mq="imxdrm:imxvpu:imxgpu:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8qm
MACHINEOVERRIDES_EXTENDER_mx8qm="imxdrm:imxdpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8qxp
MACHINEOVERRIDES_EXTENDER_mx8qxp="imxdrm:imxdpu:imxgpu:imxgpu2d:imxgpu3d"
# $OVERRIDES [2 operations]
# "${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}${LIBCOVERRIDE}:forcevariable"
# [doc] "BitBake uses OVERRIDES to control what variables are overridden after BitBake parses recipes and configuration files."
# "${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}${LIBCOVERRIDE}:forcevariable"
OVERRIDES="linux-gnueabi:arm:pn-defaultpkgname:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc:class-target:libc-glibc:forcevariable"
# $PRISTINE_MACHINEOVERRIDES [13 operations]
# rename from MACHINEOVERRIDES machine-overrides-extender.bbclass:49 [machine_overrides_extender_handler]
PRISTINE_MACHINEOVERRIDES="mx6:mx6ul:isiot:armv7ve:imx:use-mainline-bsp:isiot-geamx6ul"
# $SRC_URI_OVERRIDES_PACKAGE_ARCH
overrides = d.getVar('OVERRIDES').split(':')
msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
compat_machines = (d.getVar('MACHINEOVERRIDES') or "").split(":")
# unless the package sets SRC_URI_OVERRIDES_PACKAGE_ARCH=0
override = d.getVar('SRC_URI_OVERRIDES_PACKAGE_ARCH')
overrides = (":" + (d.getVar("FILESOVERRIDES") or "")).split(":")
overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + multilib
localdata.setVar("OVERRIDES", overrides)
overrides = d.getVar("OVERRIDES").split(":")
machine_overrides = (d.getVar('PRISTINE_MACHINEOVERRIDES') or '').split(':')
machine_overrides_filter_out += (d.getVar('MACHINEOVERRIDES_EXTENDER_FILTER_OUT_%s' % override) or '').split()
extender = d.getVar('MACHINEOVERRIDES_EXTENDER_%s' % override)
# so we can reprocess OVERRIDES if/as/when needed.
d.renameVar("MACHINEOVERRIDES", "PRISTINE_MACHINEOVERRIDES")
d.setVar("MACHINEOVERRIDES", "${#machine_overrides_extender(d)}")
localdata.setVar('OVERRIDES', pkg)
localdata.setVar('OVERRIDES', localdata.getVar('OVERRIDES') + ':' + pkg)
overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + item
localdata.setVar("OVERRIDES", overrides)
localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
Is there a way to get the value of the MACHINEOVERRIDES and OVERRIDES variables after those operations occur?
Thanks.
bitbake -e already shows you the variables after all of those operations have been applied:
MACHINEOVERRIDES="isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul" and
OVERRIDES="linux-gnueabi:arm:pn-defaultpkgname:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc:class-target:libc-glibc:forcevariable"
For each variable, bitbake -e first shows how many operations act on it (e.g. # $MACHINEOVERRIDES [14 operations]) and where they come from, and after that the expanded value.
You could use bitbake -e | less to browse the output instead of grepping it.
You can simply use:
bitbake -e <package_name> | grep ^MACHINEOVERRIDES=
bitbake -e <package_name> | grep ^OVERRIDES=
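As a follow-up on the original COMPATIBLE_MACHINE problem: base.bbclass matches a recipe's COMPATIBLE_MACHINE regular expression against each entry of MACHINEOVERRIDES (that is the compat_machines = (d.getVar('MACHINEOVERRIDES') or "").split(":") line visible in the grep output above). So, for the values printed above, a recipe is considered compatible as soon as its regex matches one of those entries, for example (purely illustrative):
COMPATIBLE_MACHINE = "(isiot-geamx6ul|use-mainline-bsp)"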