Failed build of Yocto Gatesgarth "extensible SDK" (eSDK) - populate_sdk_ext fails - Linux

I'm working with Yocto "Gatesgarth" on a custom board based on i.MX6ULL.
I'm facing some problems generating the extensible SDK (eSDK); the standard SDK is generated correctly.
Some details below.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment variables:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is:
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk_ext(d)
0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
0716: bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting
SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
0717:
0718: d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
0719: if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
*** 0720: buildtools_fn = get_current_buildtools(d)
0721: else:
0722: buildtools_fn = None
0723: d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
0724: d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
0552: import glob
0553: btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
0554: btfiles.sort(key=os.path.getctime)
0555: print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
*** 0556: return os.path.basename(btfiles[-1])
0557:
0558:def get_sdk_required_utilities(buildtools_fn, d):
0559: """Find required utilities that aren't provided by the buildtools"""
0560: sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
At line 553 the btfiles list should be populated, but it comes back empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set for this to work correctly.
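For reference, the glob from line 553 can be checked by hand against the deploy directory printed in the log above; in this build it matches nothing, which is why btfiles ends up empty:
ls /yocto/build-mX6ull/tmp/deploy/sdk/*-buildtools-nativesdk-standalone-*.sh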

Hope you are doing well.
I had a similar issue where I couldn't populate the eSDK; it was all down to the GLIBC version.
Kindly update your GLIBC version.
In my case I had to update the GLIBC version to 2.33 in the "yocto-uninative.inc" file. It worked for me!
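For anyone wondering what that edit looks like: yocto-uninative.inc lives under meta/conf/distro/include/ in poky and pins the uninative tarball. A minimal sketch of the change, assuming the release number, URL and checksum are copied from a newer poky branch rather than invented:
UNINATIVE_MAXGLIBCVERSION = "2.33"
UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/<newer-release>/"
UNINATIVE_CHECKSUM[x86_64] ?= "<sha256 published with that uninative release>"
With that in place, re-running bitbake test-image-mx6ull -c populate_sdk_ext should let the buildtools installer land in tmp/deploy/sdk, which is what the failing glob looks for.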

Related

Yocto bitbake core-image-sato with preempt-rt failed

I want to set up a Linux kernel with PREEMPT_RT using Yocto.
Following meta/recipes-rt/README, I added the following to build/conf/local.conf and ran bitbake core-image-sato, but bitbake fails.
MACHINE ?= "genericx86-64"
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-rt"
COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
COMPATIBLE_MACHINE_quilt-native = "genericx86-64"
Yocto output the following error:
NOTE: Bitbake server didn't start within 5 seconds, waiting for 90
Loading cache: 100% |#######################################################################################################################################################################| Time: 0:00:13
Loaded 1330 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.46.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "x86_64-poky-linux"
MACHINE = "genericx86-64"
DISTRO = "poky"
DISTRO_VERSION = "3.1.20"
TUNE_FEATURES = "m64 core2"
TARGET_FPU = ""
meta
meta-poky
meta-yocto-bsp = "dunfell:90a6f6a110ab14890e2f6a1616e74ee259fc0f8f"
Initialising tasks: 100% |##################################################################################################################################################################| Time: 0:00:48
Sstate summary: Wanted 14 Found 0 Missed 14 Current 1203 (0% match, 98% complete)
NOTE: Executing Tasks
ERROR: linux-yocto-rt-5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0 do_kernel_metadata: Could not locate BSP definition for genericx86-64/preempt-rt and no defconfig was provided
ERROR: linux-yocto-rt-5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0 do_kernel_metadata: Execution of '/media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/run.do_kernel_metadata.1138429' failed with exit code 1
ERROR: Logfile of failure stored in: /media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/log.do_kernel_metadata.1138429
Log data follows:
| DEBUG: Executing python function extend_recipe_sysroot
| NOTE: Direct dependencies are ['/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb:do_populate_sysroot']
| NOTE: Installed into sysroot: []
| NOTE: Skipping as already exists in sysroot: ['kern-tools-native', 'quilt-native']
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_kernel_metadata
| NOTE: do_kernel_metadata: for summary/debug, set KCONF_AUDIT_LEVEL > 0
| ERROR: Could not locate BSP definition for genericx86-64/preempt-rt and no defconfig was provided
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/media/fff/disk1T/yocto/demo3/poky/build/tmp/work/genericx86_64-poky-linux/linux-yocto-rt/5.4.213+gitAUTOINC+2f18e629f7_03cd66d981-r0/temp/run.do_kernel_metadata.1138429' failed with exit code 1
ERROR: Task (/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb:do_kernel_metadata) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3173 tasks of which 3172 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/media/fff/disk1T/yocto/demo3/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb:do_kernel_metadata
Summary: There were 2 ERROR messages shown, returning a non-zero exit code.
My hardware is an x86-64 CPU. The error hints that genericx86-64/preempt-rt doesn't exist. What should I change to build core-image-sato with PREEMPT_RT? Please leave a comment and help me if you are familiar with this problem.
I have tried checking out various Yocto branches (dunfell, langdale, kirkstone) but it makes no difference; I intend to use dunfell.
Following is my build/conf/bblayers.conf:
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/media/fff/disk1T/yocto/demo3/poky/meta \
/media/fff/disk1T/yocto/demo3/poky/meta-poky \
/media/fff/disk1T/yocto/demo3/poky/meta-yocto-bsp \
"

undefined symbol: __atomic_exchange_8

I'm trying to run Google Assistant on my Raspberry Pi, following the steps at https://developers.google.com/assistant/sdk/guides/service/python/embed/run-sample
Everything works fine until activating the Google Assistant with the command:
googlesamples-assistant-pushtotalk --project-id my-dev-project --device-model-id my-model
I'm getting the following ImportError:
Traceback (most recent call last):
File "/home/pi/env/bin/googlesamples-assistant-pushtotalk", line 5, in <module>
from googlesamples.assistant.grpc.pushtotalk import main
File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/pushtotalk.py", line 28, in <module>
import grpc
File "/home/pi/env/lib/python3.9/site-packages/grpc/__init__.py", line 22, in <module>
from grpc import _compression
File "/home/pi/env/lib/python3.9/site-packages/grpc/_compression.py", line 15, in <module>
from grpc._cython import cygrpc
ImportError: /home/pi/env/lib/python3.9/site-packages/grpc/_cython/cygrpc.cpython-39-arm-linux-gnueabihf.so: undefined symbol: __atomic_exchange_8
Any ideas on how to fix this?
I just ended up here since I ran into the same problem (on a different project), also involving Python 3.9 and cygrpc, on an RPi 4 with a recent Raspbian Lite (32-bit).
While I don't have a solution, here are my guesses:
__atomic_exchange_8 was formerly defined in /lib/arm-linux-gnueabihf/libgcc_s.so.1, but now it seems to be defined in libatomic:
$ grep __atomic_exchange_8 /lib/arm-linux-gnueabihf/libatomic.so.1
grep: /lib/arm-linux-gnueabihf/libatomic.so.1: binary file matches
EDIT:
Solved it. I was looking at the patch that tried to solve this problem two years ago:
https://github.com/grpc/grpc/pull/20514/commits/b912fc7d8d401bb65b3147ee77d03beaa3d46038
I figured their test check_linker_need_libatomic() might be broken and patched it again to always return True, and the problem got fixed.
I had tried earlier to fix it by adding CFLAGS='-latomic' CPPFLAGS='-latomic', but that didn't help.
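The probe that check_linker_need_libatomic() performs can be reproduced by hand to see whether the linker really needs -latomic; this is just a sketch of that test (compiler name mirrors the CXX default in setup.py):
cat > atomic_test.cc <<'EOF'
#include <atomic>
#include <cstdint>
int main() { return std::atomic<int64_t>{}; }
EOF
c++ atomic_test.cc -o atomic_test            # if this link fails on armhf...
c++ atomic_test.cc -o atomic_test -latomic   # ...but this succeeds, libatomic is needed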
here's my tiny workaround (not fix!) for today's grpc git HEAD:
root@mypi:/home/pi/CODE/grpc# git diff
diff --git a/setup.py b/setup.py
index 1a72c5c668..60b7705cd2 100644
--- a/setup.py
+++ b/setup.py
@@ -197,6 +197,7 @@ ENABLE_DOCUMENTATION_BUILD = _env_bool_value(
def check_linker_need_libatomic():
"""Test if linker on system needs libatomic."""
+ return True
code_test = (b'#include <atomic>\n' +
b'int main() { return std::atomic<int64_t>{}; }')
cxx = os.environ.get('CXX', 'c++')
diff --git a/tools/distrib/python/grpcio_tools/setup.py b/tools/distrib/python/grpcio_tools/setup.py
index 6b842f56b9..8d5f581ac7 100644
--- a/tools/distrib/python/grpcio_tools/setup.py
+++ b/tools/distrib/python/grpcio_tools/setup.py
@@ -85,6 +85,7 @@ BUILD_WITH_STATIC_LIBSTDCXX = _env_bool_value(
def check_linker_need_libatomic():
"""Test if linker on system needs libatomic."""
+ return True
code_test = (b'#include <atomic>\n' +
b'int main() { return std::atomic<int64_t>{}; }')
cxx = os.environ.get('CXX', 'c++')
root@mypi:/home/pi/CODE/grpc#
EDIT:
as a quick test, cygrpc.cpython-39-arm-linux-gnueabihf.so needs to depend on libatomic:
pi@mypi:~/CODE/grpc $ ldd /usr/local/lib/python3.9/dist-packages/grpc/_cython/cygrpc.cpython-39-arm-linux-gnueabihf.so
linux-vdso.so.1 (0xbeef7000)
/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so => /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0xb698b000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb695f000)
libatomic.so.1 => /lib/arm-linux-gnueabihf/libatomic.so.1 (0xb6946000)
libstdc++.so.6 => /lib/arm-linux-gnueabihf/libstdc++.so.6 (0xb67be000)
libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xb674f000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb65fb000)
/lib/ld-linux-armhf.so.3 (0xb6fcc000)
libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb65ce000)
This works for me on RPI0 + Bullseye + Python3.9:
pip3 uninstall -y grpcio grpcio-tools
sudo apt install -y python3-grpcio python3-grpc-tools
EDIT: Update to gRPC v1.44.0. The issue has been fixed there; see the explanation below in the old answer.
There was a problem with the order of the parameters used by the compiler to compile some test code, whose result is used to determine whether libatomic needs to be linked or not.
The issue will be fixed in the next release of gRPC. If they keep the same schedule as previous releases it should be v1.44.0, which should come out some time next month.
In the meantime you can git cherry-pick the proper fix and build gRPC yourself.
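A rough outline of that route (the branch name and build steps are assumptions based on gRPC's usual source-build flow; the commit hash of the fix isn't reproduced here):
git clone --recurse-submodules -b v1.43.x https://github.com/grpc/grpc
cd grpc
git cherry-pick <commit of the libatomic check fix>
pip install -r requirements.txt
GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .   # builds grpcio from this tree instead of a prebuilt wheel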

Bazel Error After Upgrading Nodejs Rules - ERROR: defs.bzl has been removed from build_bazel_rules_nodejs

After upgrading build_bazel_rules_nodejs from 0.42.2 to 1.0.1 I get this error:
ERROR: /home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl:19:5: Traceback (most recent call
last):
File "/home/flolu/Desktop/minimal-bazel-monorepo/services/server/src/BUILD", line 76
nodejs_image(name = "server", <2 more arguments>)
File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/io_bazel_rules_docker/nodejs/image.bzl", line 112, in nodejs_image
nodejs_binary(name = binary, <2 more arguments>)
File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl", line 19, in nodejs_binary
fail(<1 more arguments>)
ERROR: defs.bzl has been removed from build_bazel_rules_nodejs
Please update your load statements to use index.bzl instead.
See https://github.com/bazelbuild/rules_nodejs/wiki#migrating-off-build_bazel_rules_nodejsdefsbzl for help.
ERROR: error loading package 'services/server/src': Package 'services/server/src' contains errors
INFO: Elapsed time: 0.119s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
FAILED: Build did NOT complete successfully (1 packages loaded)
Line 76 in the error refers to this part of the BUILD file:
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "server",
data = [":lib"],
entry_point = ":index.ts",
)
But there is no defs.bzl in that BUILD file, so I am confused by the error.
In detail, I have upgraded from
http_archive(
name = "build_bazel_rules_nodejs",
sha256 = "16fc00ab0d1e538e88f084272316c0693a2e9007d64f45529b82f6230aedb073",
urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/0.42.2/rules_nodejs-0.42.2.tar.gz"],
)
to
http_archive(
name = "build_bazel_rules_nodejs",
sha256 = "e1a0d6eb40ec89f61a13a028e7113aa3630247253bcb1406281b627e44395145",
urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/1.0.1/rules_nodejs-1.0.1.tar.gz"],
)
You can recreate the error by cloning this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/48add7ddcad4d25e361e1c7f7f257cf916a797b2 and running
bazel test //services/server/src:test
There are some breaking changes between those versions of build_bazel_rules_nodejs. Namely, this import path:
load("@build_bazel_rules_nodejs//:defs.bzl", <whatever>)
needs to become this:
load("@build_bazel_rules_nodejs//:index.bzl", <whatever>)
You also need to update your io_bazel_rules_docker to at least v0.13.0; from the release notes, that is the version compatible with rules_nodejs 1.0.1. https://github.com/bazelbuild/rules_docker/releases/
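For completeness, the corresponding WORKSPACE entry would look roughly like this (the sha256 and archive URL are placeholders; take the real values from the v0.13.0 release page):
http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "<sha256 from the v0.13.0 release notes>",
    urls = ["<archive URL from the v0.13.0 release page>"],
)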

No value for argument in function call

I am very new to Python and am working through the Dagster hello tutorial.
I have set up the following from the tutorial:
import csv
from dagster import execute_pipeline, execute_solid, pipeline, solid

@solid
def hello_cereal(context):
    # Assuming the dataset is in the same directory as this file
    dataset_path = 'cereal.csv'
    with open(dataset_path, 'r') as fd:
        # Read the rows in using the standard csv library
        cereals = [row for row in csv.DictReader(fd)]
    context.log.info(
        'Found {n_cereals} cereals'.format(n_cereals=len(cereals))
    )
    return cereals

@pipeline
def hello_cereal_pipeline():
    hello_cereal()
However, pylint shows a "no value for parameter" message.
What have I missed?
When I try to execute the pipeline I get the following
D:\python\dag>dagster pipeline execute -f hello_cereal.py -n
hello_cereal_pipeline 2019-11-25 14:47:09 - dagster - DEBUG -
hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 -
PIPELINE_START - Started execution of pipeline
"hello_cereal_pipeline". 2019-11-25 14:47:09 - dagster - DEBUG -
hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 -
ENGINE_EVENT - Executing steps in process (pid: 11684)
event_specific_data = {"metadata_entries": [["pid", null, ["11684"]],
["step_keys", null, ["{'hello_cereal.compute'}"]]]} 2019-11-25
14:47:09 - dagster - DEBUG - hello_cereal_pipeline -
96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_START - Started execution
of step "hello_cereal.compute".
solid = "hello_cereal"
solid_definition = "hello_cereal"
step_key = "hello_cereal.compute" 2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline -
96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_FAILURE - Execution of
step "hello_cereal.compute" failed.
cls_name = "FileNotFoundError"
solid = "hello_cereal"
solid_definition = "hello_cereal"
step_key = "hello_cereal.compute"
File
"c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\errors.py",
line 114, in user_code_error_boundary
yield File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\engine\engine_inprocess.py",
line 621, in _user_event_sequence_for_step_compute_fn
for event in gen: File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py",
line 75, in _execute_core_compute
for step_output in _yield_compute_results(compute_context, inputs, compute_fn): File
"c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py",
line 52, in _yield_compute_results
for event in user_event_sequence: File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\definitions\decorators.py",
line 418, in compute
result = fn(context, **kwargs) File "hello_cereal.py", line 10, in hello_cereal
with open(dataset_path, 'r') as fd:
2019-11-25 14:47:10 - dagster - DEBUG - hello_cereal_pipeline -
96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Finished steps
in process (pid: 11684) in 183ms event_specific_data =
{"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null,
["{'hello_cereal.compute'}"]]]} 2019-11-25 14:47:10 - dagster - ERROR
- hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_FAILURE - Execution of pipeline "hello_cereal_pipeline"
failed.
[Update]
From Rahul's comment I realised I had not copied the whole example.
When I corrected that, I got a FileNotFoundError.
To answer the original question about why you are receiving a "no value for parameter" pylint message:
This is because the pipeline function calls don't pass any arguments while the @solid functions have parameters defined. This is intentional on Dagster's part and can be ignored by adding the following line either at the beginning of the module or to the right of the line that triggers the pylint message. Note that putting the comment at the beginning of the module tells pylint to ignore every instance of the warning in the module, whereas putting it in-line tells pylint to ignore only that instance of the warning.
# pylint: disable=no-value-for-parameter
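Concretely, the two placements described above look like this (a sketch based on the tutorial code above):
# at the top of hello_cereal.py: silences the warning module-wide
# pylint: disable=no-value-for-parameter

@pipeline
def hello_cereal_pipeline():
    # the in-line form silences only this call:
    hello_cereal()  # pylint: disable=no-value-for-parameter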
Lastly, you could also put a similar ignore statement in a .pylintrc file, but I'd advise against that, since it would apply project-wide and you could miss true issues.
Hope this helps a bit!
Please check whether the dataset (csv file) you are using is in the same directory as your code file. That is likely why you are getting the FileNotFoundError.
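If the csv really does sit next to the script but you launch dagster from a different working directory, building the path from __file__ avoids the problem; a small sketch (not part of the tutorial):
import os
dataset_path = os.path.join(os.path.dirname(__file__), 'cereal.csv')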

Building content shell as final and sharing

Hi, everything good? I'm working on a personal project that aims to use content_shell as a minimal browser, just for viewing sites on terminals plus some other options I'm adding. For this I am using the content shell that I built on Ubuntu 16.04 with the following GN flags:
is_debug = false
is_java_debug = false
is_official_build = true
target_cpu = "x86"
symbol_level = 0
remove_webcore_debug_symbols = true
enable_nacl = false
is_component_build = true
It generated all of these files:
AHEM____.TTF
angledata
args.gn
brotli
build.ninja
build.ninja.d
character_data_generator
clang_newlib_x64
content_shell
content_shell.log
content_shell.pak
fonts.conf
font_service.service
GardinerModBug.ttf
GardinerModCat.ttf
gen
genmacro
genmodule
genperf
genstring
genversion
glibc_x64
icudtl.dat
irt_x64
libaccessibility.so
libaccessibility.so.TOC
libanimation.so
libanimation.so.TOC
libaura_extra.so
libaura_extra.so.TOC
libaura.so
libaura.so.TOC
libbase_i18n.so
libbase_i18n.so.TOC
libbase.so
libbase.so.TOC
libbindings.so
libbindings.so.TOC
libblink_android_mojo_bindings_shared.so
libblink_android_mojo_bindings_shared.so.TOC
libblink_common.so
libblink_common.so.TOC
libblink_controller.so
libblink_controller.so.TOC
libblink_core.so
libblink_core.so.TOC
libblink_deprecated_test_plugin.so
libblink_modules.so
libblink_modules.so.TOC
libblink_mojo_bindings_shared.so
libblink_mojo_bindings_shared.so.TOC
libblink_offscreen_canvas_mojo_bindings_shared.so
libblink_offscreen_canvas_mojo_bindings_shared.so.TOC
libblink_platform.so
libblink_platform.so.TOC
libblink_test_plugin.so
libbluetooth.so
libbluetooth.so.TOC
libboringssl.so
libboringssl.so.TOC
libcapture_base.so
libcapture_base.so.TOC
libcapture_lib.so
libcapture_lib.so.TOC
libcc_animation.so
libcc_animation.so.TOC
libcc_base.so
libcc_base.so.TOC
libcc_blink.so
libcc_blink.so.TOC
libcc_debug.so
libcc_debug.so.TOC
libcc_ipc.so
libcc_ipc.so.TOC
libcc_paint.so
libcc_paint.so.TOC
libcc.so
libcc.so.TOC
libcdm_manager.so
libcdm_manager.so.TOC
libchromium_sqlite3.so
libchromium_sqlite3.so.TOC
libcodec.so
libcodec.so.TOC
libcolor_space.so
libcolor_space.so.TOC
libcompositor.so
libcompositor.so.TOC
libcontent_common_mojo_bindings_shared.so
libcontent_common_mojo_bindings_shared.so.TOC
libcontent_public_common_mojo_bindings_shared.so
libcontent_public_common_mojo_bindings_shared.so.TOC
libcontent.so
libcontent.so.TOC
libcrcrypto.so
libcrcrypto.so.TOC
libc++.so
libc++.so.TOC
libdbus.so
libdbus.so.TOC
libdevice_base.so
libdevice_base.so.TOC
libdevice_event_log.so
libdevice_event_log.so.TOC
libdevice_gamepad.so
libdevice_gamepad.so.TOC
libdevices.so
libdevices.so.TOC
libdevice_vr_mojo_bindings_blink.so
libdevice_vr_mojo_bindings_blink.so.TOC
libdevice_vr_mojo_bindings_shared.so
libdevice_vr_mojo_bindings_shared.so.TOC
libdevice_vr_mojo_bindings.so
libdevice_vr_mojo_bindings.so.TOC
libdiscardable_memory_client.so
libdiscardable_memory_client.so.TOC
libdiscardable_memory_common.so
libdiscardable_memory_common.so.TOC
libdiscardable_memory_service.so
libdiscardable_memory_service.so.TOC
libdisplay.so
libdisplay.so.TOC
libdisplay_types.so
libdisplay_types.so.TOC
libdisplay_util.so
libdisplay_util.so.TOC
libEGL.so
libEGL.so.TOC
libembedder.so
libembedder.so.TOC
libevents_base.so
libevents_base.so.TOC
libevents_devices_x11.so
libevents_devices_x11.so.TOC
libevents_ozone_layout.so
libevents_ozone_layout.so.TOC
libevents.so
libevents.so.TOC
libevents_x.so
libevents_x.so.TOC
libffmpeg.so
libffmpeg.so.TOC
libfingerprint.so
libfingerprint.so.TOC
libfreetype.so.6
libfreetype.so.6.TOC
libgeolocation.so
libgeolocation.so.TOC
libgeometry_skia.so
libgeometry_skia.so.TOC
libgeometry.so
libgeometry.so.TOC
libgesture_detection.so
libgesture_detection.so.TOC
libgfx_ipc_color.so
libgfx_ipc_color.so.TOC
libgfx_ipc_geometry.so
libgfx_ipc_geometry.so.TOC
libgfx_ipc_skia.so
libgfx_ipc_skia.so.TOC
libgfx_ipc.so
libgfx_ipc.so.TOC
libgfx.so
libgfx.so.TOC
libgfx_switches.so
libgfx_switches.so.TOC
libgfx_x11.so
libgfx_x11.so.TOC
libgin.so
libgin.so.TOC
libgles2_c_lib.so
libgles2_c_lib.so.TOC
libgles2_implementation.so
libgles2_implementation.so.TOC
libgles2_utils.so
libgles2_utils.so.TOC
libGLESv2.so
libGLESv2.so.TOC
libgl_init.so
libgl_init.so.TOC
libgl_in_process_context.so
libgl_in_process_context.so.TOC
libgl_wrapper.so
libgl_wrapper.so.TOC
libgpu.so
libgpu.so.TOC
libhost.so
libhost.so.TOC
libicui18n.so
libicui18n.so.TOC
libicuuc.so
libicuuc.so.TOC
libinterfaces_shared.so
libinterfaces_shared.so.TOC
libipc_mojom_shared.so
libipc_mojom_shared.so.TOC
libipc_mojom.so
libipc_mojom.so.TOC
libipc.so
libipc.so.TOC
libjs.so
libjs.so.TOC
libkeycodes_x11.so
libkeycodes_x11.so.TOC
libkeyed_service_content.so
libkeyed_service_content.so.TOC
libkeyed_service_core.so
libkeyed_service_core.so.TOC
libmedia_blink.so
libmedia_blink.so.TOC
libmedia_gpu.so
libmedia_gpu.so.TOC
libmedia_mojo_services.so
libmedia_mojo_services.so.TOC
libmedia.so
libmedia.so.TOC
libmetrics_cpp.so
libmetrics_cpp.so.TOC
libmidi.so
libmidi.so.TOC
libmirclient.so.9
libmirclient.so.9.TOC
libmojo_common_lib.so
libmojo_common_lib.so.TOC
libmojo_ime_lib.so
libmojo_ime_lib.so.TOC
libmojo_public_system_cpp.so
libmojo_public_system_cpp.so.TOC
libmojo_public_system.so
libmojo_public_system.so.TOC
libmojo_system_impl.so
libmojo_system_impl.so.TOC
libnative_theme.so
libnative_theme.so.TOC
libnet.so
libnet.so.TOC
libnet_with_v8.so
libnet_with_v8.so.TOC
libosmesa.so
libplatform.so
libplatform.so.TOC
libppapi_host.so
libppapi_host.so.TOC
libppapi_proxy.so
libppapi_proxy.so.TOC
libppapi_shared.so
libppapi_shared.so.TOC
libprefs.so
libprefs.so.TOC
libprinting.so
libprinting.so.TOC
libprotobuf_lite.so
libprotobuf_lite.so.TOC
librange.so
librange.so.TOC
libresource_coordinator_cpp.so
libresource_coordinator_cpp.so.TOC
libresource_coordinator_public_interfaces_internal_shared.so
libresource_coordinator_public_interfaces_internal_shared.so.TOC
libsandbox_services.so
libsandbox_services.so.TOC
libseccomp_bpf.so
libseccomp_bpf.so.TOC
libsensors.so
libsensors.so.TOC
libservice_manager_cpp.so
libservice_manager_cpp.so.TOC
libservice_manager_cpp_types.so
libservice_manager_cpp_types.so.TOC
libservice_manager_mojom_blink.so
libservice_manager_mojom_blink.so.TOC
libservice_manager_mojom_constants_blink.so
libservice_manager_mojom_constants_blink.so.TOC
libservice_manager_mojom_constants_shared.so
libservice_manager_mojom_constants_shared.so.TOC
libservice_manager_mojom_constants.so
libservice_manager_mojom_constants.so.TOC
libservice_manager_mojom_shared.so
libservice_manager_mojom_shared.so.TOC
libservice_manager_mojom.so
libservice_manager_mojom.so.TOC
libservice.so
libservice.so.TOC
libshared_memory_support.so
libshared_memory_support.so.TOC
libshell_dialogs.so
libshell_dialogs.so.TOC
libskia.so
libskia.so.TOC
libsnapshot.so
libsnapshot.so.TOC
libsql.so
libsql.so.TOC
libstartup_tracing.so
libstartup_tracing.so.TOC
libstorage_browser.so
libstorage_browser.so.TOC
libstorage_common.so
libstorage_common.so.TOC
libstub_window.so
libstub_window.so.TOC
libsuid_sandbox_client.so
libsuid_sandbox_client.so.TOC
libsurface.so
libsurface.so.TOC
libtest_runner.so
libtest_runner.so.TOC
libtracing.so
libtracing.so.TOC
libui_base_ime.so
libui_base_ime.so.TOC
libui_base.so
libui_base.so.TOC
libui_base_x.so
libui_base_x.so.TOC
libui_data_pack.so
libui_data_pack.so.TOC
libui_touch_selection.so
libui_touch_selection.so.TOC
libui_views_mus_lib.so
libui_views_mus_lib.so.TOC
liburl_ipc.so
liburl_ipc.so.TOC
liburl.so
liburl.so.TOC
libuser_prefs.so
libuser_prefs.so.TOC
libv8_libbase.so
libv8_libbase.so.TOC
libv8_libplatform.so
libv8_libplatform.so.TOC
libv8.so
libv8.so.TOC
libviews.so
libviews.so.TOC
libviz_common.so
libviz_common.so.TOC
libviz_resource_format.so
libviz_resource_format.so.TOC
libVkLayer_core_validation.so
libVkLayer_core_validation.so.TOC
libVkLayer_object_tracker.so
libVkLayer_object_tracker.so.TOC
libVkLayer_parameter_validation.so
libVkLayer_parameter_validation.so.TOC
libVkLayer_swapchain.so
libVkLayer_swapchain.so.TOC
libVkLayer_threading.so
libVkLayer_threading.so.TOC
libVkLayer_unique_objects.so
libVkLayer_unique_objects.so.TOC
libweb_dialogs.so
libweb_dialogs.so.TOC
libwebview.so
libwebview.so.TOC
libwm_public.so
libwm_public.so.TOC
libwm.so
libwm.so.TOC
libwtf.so
libwtf.so.TOC
libx11_events_platform.so
libx11_events_platform.so.TOC
libx11_window.so
libx11_window.so.TOC
locales
mksnapshot
mus_app_resources_100.pak
mus_app_resources_200.pak
mus_app_resources_strings.pak
nacl_bootstrap_x64
natives_blob.bin
newlib_pnacl
newlib_pnacl_nonsfi
obj
protoc
proto_zero_plugin
pyproto
re2c
resources
shell_resources.pak
snapshot_blob.bin
swiftshader
test_ime_driver.service
toolchain.ninja
transport_security_state_generator
ui
ui_resources_100_percent.pak
ui.service
ui_test.pak
v8_build_config.json
v8_context_snapshot.bin
v8_context_snapshot_generator
views_mus_resources.pak
yasm
How do I make all of these files have the same structure as official Chromium? I removed the folders I don't need and the remaining files add up to about 1.7 GB, and I want to share my project with others. I have already researched and looked around here in the group for something on this, but I haven't found it yet. Which flag or command should I use to reduce the number of files? I know it won't be identical, but Electron uses content_shell and its file structure is similar to Chromium's:
Electron file structure
Do I have to build it again? If so, which flags should I use?
Thanks everyone for your attention :-)
You need to set is_component_build = false when invoking gn to generate the Ninja build files.
This is because when is_component_build is true, the build generates a shared object for each component target to improve compile time (faster incremental builds), whereas when it is false the build produces static libs for each component and bundles them into a single large binary, which speeds the application up at runtime (among other reasons).
Even if you need to build it on Linux, it might be helpful to have a look at Checking out and Building Chromium for Windows # Faster builds and this answer.
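Put together with the flags from the question, an args.gn for a shareable, non-component build would look roughly like this (a sketch; keep or drop the other flags to taste):
is_debug = false
is_official_build = true
is_component_build = false
symbol_level = 0
remove_webcore_debug_symbols = true
enable_nacl = false
target_cpu = "x86"
Then re-run gn gen and ninja -C <outdir> content_shell; the result should be a content_shell binary with far fewer shared libraries next to it.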
