Yocto: Execute a shell script inside my source repository environment (inside the downloads folder?)

In Yocto I need to execute a bash script that depends on the source repository environment. My current approach is to execute the script directly in the downloads folder.
Pseudocode (hope you know what I mean):
do_patch_append () {
    os.system(' \
        pushd ${DL_DIR}/svn/*/svn/myapp/Src; \
        dos2unix ./VersionBuild/MakeVersionList.sh; \
        ./VersionBuild/MakeVersionList.sh ${S}/Src/VersionList.h; \
        popd \
    ')
}
Before I start fiddling with the details: is this a valid approach? And is do_patch the right place?
Reason: The script MakeVersionList.sh generates a header file containing revision info from subversion. It executes "svn info" on several folders and parses its output into the header file VersionList.h, which will later be accessed in ${S} during compilation.
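For context, what such a script does can be approximated in a few lines of Python. This is a hedged sketch only: the real MakeVersionList.sh, its list of folders, and the exact header layout are not shown in the question, so the `REV_` macro names and folder tuples below are hypothetical:

```python
import re
import subprocess

def parse_svn_revision(svn_info_output: str) -> str:
    """Extract the Revision field from `svn info` text output."""
    match = re.search(r'^Revision:\s*(\d+)\s*$', svn_info_output, re.MULTILINE)
    return match.group(1) if match else "unknown"

def make_version_list(folders, header_path):
    """Write one #define per (name, path) pair, roughly what MakeVersionList.sh might do."""
    lines = ["/* Auto-generated -- do not edit */"]
    for name, path in folders:
        # The real script runs `svn info <path>` and parses its output.
        out = subprocess.run(["svn", "info", path],
                            capture_output=True, text=True).stdout
        lines.append(f'#define REV_{name} "{parse_svn_revision(out)}"')
    with open(header_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

The key point is that the generation needs a working copy (or at least the repository metadata), which is why it matters where in the build the script runs.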
Thanks a lot.

Thanks for the helpful advice. Here's how I finally did it. To minimize dependencies on server infrastructure (the SVN server), I put the version generation into do_fetch. I also had to do a manual copy in do_unpack, because adding the generated file to SRC_URI (like ${DL_DIR}/svn/...) made the fetcher try to fetch it.
#
# Build bitstone application
#
SUMMARY = "Bitstone mega wallet"
SECTION = "Rockefeller"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
#Register for root file system aggregation
APP_NAME = "bst-wallet"
FILES_${PN} += "${bindir}/${APP_NAME}"
PV = "${SRCPV}"
SRCREV = "${AUTOREV}"
SVN_SRV = "flints-svn"
SRC_URI = "svn://${SVN_SRV}/svn/${APP_NAME};module=Src;protocol=http;user=fred;pswd=pebbles "
S = "${WORKDIR}/Src"
GEN_FILE = "VersionList.h"
do_fetch_append () {
    os.system(' \
        cd ./svn/{0}/svn/{1}/Src; \
        TZ=Pacific/Yap ./scripts/MakeVersionList.sh {2}; \
    '.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE")))
}
do_unpack_append () {
    os.system(' \
        cp {3}/svn/{0}/svn/{1}/Src/{2} {4} \
    '.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE"),
             d.getVar("DL_DIR"), d.getVar("S")))
}
do_install_append () {
    install -d ${D}${bindir}
    install -m 0755 ${APP_NAME} ${D}${bindir}/
}


How to properly decrypt a PGP-encrypted file in an AWS Lambda function in Python?

I'm trying to create an AWS Lambda in python that:
downloads a compressed and encrypted file from an S3 bucket
decrypts the file using python-gnupg
stores the decrypted compressed contents in another S3 bucket
This is using python 3.8 and python-gnupg package in a Lambda layer.
I've verified the PGP key is correct, that it is being loaded into the keyring just fine, and that the encrypted file is being downloaded correctly.
However, when I attempt to run gnupg.decrypt_file I get output that looks like it's been successful, but
the decrypt status shows not ok and the decrypted file does not exist.
How can I get PGP decryption working in Lambda?
Here is the relevant code extracted from the lambda function:
import gnupg
from pathlib import Path
# ...
gpg = gnupg.GPG(gnupghome='/tmp')
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# this works as expected
status = gpg.import_keys(MY_KEY_DATA)
# ...
print('Performing Decryption of', encrypted_path)
print(encrypted_path, "exists :", Path(encrypted_path).exists())
with open(encrypted_path, 'rb') as f:
    status = gpg.decrypt_file(f, output=decrypted_path, always_trust=True)
print('decrypt ok =', status.ok)
print('decrypt status =', status.status)
print('decrypt stderr =', status.stderr)
print(decrypted_path, "exists :", Path(decrypted_path).exists())
Expectation was to get output similar to the following in CloudWatch:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = True
2022-11-08T10:24:44.018-05:00 decrypt status = [SOME OUTPUT FROM GPG BINARY]
2022-11-08T10:24:44.018-05:00 decrypt stderr = ""
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : True
Instead what I get is:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.txt exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = False
2022-11-08T10:24:44.018-05:00 decrypt status = good passphrase
2022-11-08T10:24:44.018-05:00 decrypt stderr = [GNUPG:] ENC_TO XXXXXX 1 0
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.txt exists : False
It appears as though the decryption process starts to work, but something kills it, or perhaps the gpg binary is expecting some TTY input and halts?
I've tried running gpg decryption locally using the CLI and it works as expected, although locally I am using GnuPG version 2.3.1; I'm not sure what version exists on Lambda.
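As a quick check of the TTY hypothesis, GnuPG can be forced fully non-interactive with `--batch` and `--no-tty`, plus `--pinentry-mode loopback` (GnuPG 2.1+) to keep passphrase handling off the pinentry. The helper below is only a sketch for assembling such a command line; the function name and paths are made up for illustration:

```python
def build_noninteractive_gpg_cmd(gpg_bin, output_path, input_path, passphrase=None):
    """Assemble a gpg invocation that can never stop to ask a TTY."""
    cmd = [
        gpg_bin,
        '--batch',                      # no interactive questions at all
        '--no-tty',                     # never touch the controlling terminal
        '--pinentry-mode', 'loopback',  # GnuPG 2.1+: bypass pinentry
        '--yes',
    ]
    if passphrase is not None:
        cmd += ['--passphrase-fd', '0']  # feed passphrase on stdin instead
    cmd += ['--output', output_path, '--decrypt', input_path]
    return cmd

cmd = build_noninteractive_gpg_cmd('/usr/bin/gpg', '/tmp/out.zip', '/tmp/in.zip')
```

Running the assembled command through subprocess (rather than through a TTY-expecting wrapper) makes halts like the one above easier to diagnose.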
After a lot of digging I managed to get this working.
I'm not 100% sure if the cause is the older GnuPG binary installed on the Lambda image by default, but to be sure I decided to build a GnuPG 2.3.1 layer for Lambda, which I confirmed was working as expected in a Docker container.
I used https://github.com/skeeto/lean-static-gpg/blob/master/build.sh as a foundation for compiling the binary in
Docker, but updated it to include compression, which was required for this use case.
Here is the updated build.sh script I used, optimized for building for Lambda:
#!/bin/sh
set -e
MUSL_VERSION=1.2.2
GNUPG_VERSION=2.3.1
LIBASSUAN_VERSION=2.5.5
LIBGCRYPT_VERSION=1.9.2
LIBGPGERROR_VERSION=1.42
LIBKSBA_VERSION=1.5.1
NPTH_VERSION=1.6
PINENTRY_VERSION=1.1.1
BZIP_VERSION=1.0.6-g10
ZLIB_VERSION=1.2.12
DESTDIR=""
PREFIX="/opt"
WORK="$PWD/work"
PATH="$PWD/work/deps/bin:$PATH"
NJOBS=$(nproc)
clean() {
    rm -rf "$WORK"
}
distclean() {
    clean
    rm -rf download
}
download() {
    gnupgweb=https://gnupg.org/ftp/gcrypt
    mkdir -p download
    (
        cd download/
        xargs -n1 curl -O <<EOF
https://www.musl-libc.org/releases/musl-$MUSL_VERSION.tar.gz
$gnupgweb/gnupg/gnupg-$GNUPG_VERSION.tar.bz2
$gnupgweb/libassuan/libassuan-$LIBASSUAN_VERSION.tar.bz2
$gnupgweb/libgcrypt/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
$gnupgweb/libgpg-error/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
$gnupgweb/libksba/libksba-$LIBKSBA_VERSION.tar.bz2
$gnupgweb/npth/npth-$NPTH_VERSION.tar.bz2
$gnupgweb/pinentry/pinentry-$PINENTRY_VERSION.tar.bz2
$gnupgweb/bzip2/bzip2-$BZIP_VERSION.tar.gz
$gnupgweb/zlib/zlib-$ZLIB_VERSION.tar.gz
EOF
    )
}
clean
if [ ! -d download/ ]; then
    download
fi
mkdir -p "$DESTDIR$PREFIX" "$WORK/deps"
tar -C "$WORK" -xzf download/musl-$MUSL_VERSION.tar.gz
(
mkdir -p "$WORK/musl"
cd "$WORK/musl"
../musl-$MUSL_VERSION/configure \
--prefix="$WORK/deps" \
--enable-wrapper=gcc \
--syslibdir="$WORK/deps/lib"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/zlib-$ZLIB_VERSION.tar.gz
(
mkdir -p "$WORK/zlib"
cd "$WORK/zlib"
../zlib-$ZLIB_VERSION/configure \
--prefix="$WORK/deps"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/bzip2-$BZIP_VERSION.tar.gz
(
export CFLAGS="-fPIC"
cd "$WORK/bzip2-$BZIP_VERSION"
make install PREFIX="$WORK/deps"
make clean
)
tar -C "$WORK" -xjf download/npth-$NPTH_VERSION.tar.bz2
(
mkdir -p "$WORK/npth"
cd "$WORK/npth"
../npth-$NPTH_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
(
mkdir -p "$WORK/libgpg-error"
cd "$WORK/libgpg-error"
../libgpg-error-$LIBGPGERROR_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-nls \
--disable-doc \
--disable-languages
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libassuan-$LIBASSUAN_VERSION.tar.bz2
(
mkdir -p "$WORK/libassuan"
cd "$WORK/libassuan"
../libassuan-$LIBASSUAN_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
(
mkdir -p "$WORK/libgcrypt"
cd "$WORK/libgcrypt"
../libgcrypt-$LIBGCRYPT_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-doc \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libksba-$LIBKSBA_VERSION.tar.bz2
(
mkdir -p "$WORK/libksba"
cd "$WORK/libksba"
../libksba-$LIBKSBA_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/gnupg-$GNUPG_VERSION.tar.bz2
(
mkdir -p "$WORK/gnupg"
cd "$WORK/gnupg"
../gnupg-$GNUPG_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libgcrypt-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--with-ksba-prefix="$WORK/deps" \
--with-npth-prefix="$WORK/deps" \
--with-agent-pgm="$PREFIX/bin/gpg-agent" \
--with-pinentry-pgm="$PREFIX/bin/pinentry" \
--enable-zip \
--enable-bzip2 \
--disable-card-support \
--disable-ccid-driver \
--disable-dirmngr \
--disable-gnutls \
--disable-gpg-blowfish \
--disable-gpg-cast5 \
--disable-gpg-idea \
--disable-gpg-md5 \
--disable-gpg-rmd160 \
--disable-gpgtar \
--disable-ldap \
--disable-libdns \
--disable-nls \
--disable-ntbtls \
--disable-photo-viewers \
--disable-scdaemon \
--disable-sqlite \
--disable-wks-tools
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
rm "$DESTDIR$PREFIX/bin/gpgscm"
)
tar -C "$WORK" -xjf download/pinentry-$PINENTRY_VERSION.tar.bz2
(
mkdir -p "$WORK/pinentry"
cd "$WORK/pinentry"
../pinentry-$PINENTRY_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--disable-ncurses \
--disable-libsecret \
--enable-pinentry-tty \
--disable-pinentry-curses \
--disable-pinentry-emacs \
--disable-inside-emacs \
--disable-pinentry-gtk2 \
--disable-pinentry-gnome3 \
--disable-pinentry-qt \
--disable-pinentry-tqt \
--disable-pinentry-fltk
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
)
rm -rf "$DESTDIR$PREFIX/sbin"
rm -rf "$DESTDIR$PREFIX/share/doc"
rm -rf "$DESTDIR$PREFIX/share/info"
# cleanup
distclean
Below is the Dockerfile used to build the layer:
FROM public.ecr.aws/lambda/python:3.8
# the output volume to extract the build contents
VOLUME ["/opt/bin"]
RUN yum -y groupinstall 'Development Tools'
RUN yum -y install tar gzip zlib bzip2 file hostname
WORKDIR /opt
# copy the build script
COPY static-gnupg-build.sh .
# run the build script
RUN bash ./static-gnupg-build.sh
# when run output the version
ENTRYPOINT [ "/opt/bin/gpg", "--version" ]
Once the code was compiled in the image, I copied it to my local directory, zipped it, and published the layer:
docker cp MY_DOCKER_ID:/opt/bin ./gnupg
cd ./gnupg && zip -r gnupg-layer.zip bin
To publish the layer:
aws lambda publish-layer-version \
--layer-name gnupg \
--zip-file fileb://gnupg-layer.zip \
--compatible-architectures python3.8
I decided not to use the python-gnupg package, to have more control over the exact GnuPG binary flags, so I added my own binary wrapper function:
import subprocess

def gpg_run(flags: list, subprocess_kwargs: dict = None):
    gpg_bin_args = [
        '/opt/bin/gpg',
        '--no-tty',          # never prompt on a TTY
        '--yes',             # don't prompt for input
        '--always-trust',    # always trust
        '--status-fd', '1',  # return status to stdout
        '--homedir', '/tmp'
    ]
    gpg_bin_args.extend(flags)
    print('running cmd', ' '.join(gpg_bin_args))
    result = subprocess.run(gpg_bin_args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            **(subprocess_kwargs or {}))
    return result.returncode, \
        result.stdout.decode('utf-8').split('\n'), \
        result.stderr.decode('utf-8').split('\n')
And then added an import-keys and a decrypt function:
def gpg_import_keys(input):
    return gpg_run(flags=['--import'], subprocess_kwargs={'input': input})

def gpg_decrypt(input, output):
    return gpg_run(flags=['--output', output, '--decrypt', input])
And updated the relevant Lambda code with:
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# TODO: importing the keys only needs to run once per instance;
# ideally this would be moved to a singleton
code, stdout, stderr = gpg_import_keys(bytes(MY_KEY_DATA, 'utf-8'))
if code > 0:
    raise Exception(f'gpg_import_keys failed with code {code}: {stdout} {stderr}')
print('import_keys stdout =', stdout)
print('import_keys stderr =', stderr)
# Perform decryption.
print('Performing Decryption of', encrypted_path)
code, stdout, stderr = gpg_decrypt(encrypted_path, output=decrypted_path)
if code > 0:
    raise Exception(f'gpg_decrypt failed with code {code}: {stderr}')
print('decrypt stdout =', stdout)
print('decrypt stderr =', stderr)
print('Status: OK')
print(decrypted_path, "exists :", Path(decrypted_path).exists())
And now the CloudWatch log output is as expected and I've confirmed the decrypted file is correct!
...
2022-11-17T09:25:22.732-06:00 running cmd:['/opt/bin/gpg', '--no-tty', '--batch', '--yes', '--always-trust', '--status-fd', '1', '--homedir', '/tmp', '--import']
2022-11-17T09:25:22.769-06:00 import_keys ok = True
2022-11-17T09:25:22.769-06:00 import_keys stdout = ['[GNUPG:] IMPORT_OK 0 XXX', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] IMPORT_OK 16 XXX', '[GNUPG:] IMPORT_RES 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0', '']
2022-11-17T09:25:22.769-06:00 import_keys stderr = ['']
2022-11-17T09:25:22.769-06:00 Performing Decryption of /tmp/test.txt.gpg
2022-11-17T09:25:22.769-06:00 running cmd: /opt/bin/gpg --no-tty --yes --always-trust --status-fd 1 --homedir /tmp --output /tmp/decrypted.zip --decrypt /tmp/encrypted.zip
2022-11-17T09:25:22.850-06:00 decrypt stdout = ['[GNUPG:] ENC_TO XXX 1 0', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] DECRYPTION_KEY XXX -', '[GNUPG:] BEGIN_DECRYPTION', '[GNUPG:] DECRYPTION_INFO 0 9 2', '[GNUPG:] PLAINTEXT 62 1667796554 encrypted.zip', '[GNUPG:] PLAINTEXT_LENGTH 428', '[GNUPG:] DECRYPTION_OKAY', '[GNUPG:] GOODMDC', '[GNUPG:] END_DECRYPTION', '']
2022-11-17T09:25:22.850-06:00 decrypt stderr = ['gpg: encrypted with rsa2048 key, ID XXX, created 2022-11-07', ' "XXX"', '']
2022-11-17T09:25:22.850-06:00 /tmp/decrypted.zip exists: True
...

Yocto build could not invoke dnf command && no match for argument: recipe-name

I tried to build my custom recipe for a project which contains programs for sensors. All the task and library building is handled by CMakeLists.txt; it also has Python dependencies. I added all the required dependencies to my recipe and built it using the "bitbake soofierecp" command, which completed successfully. However, when attempting to install it via local.conf using IMAGE_INSTALL += "soofierecp", I get the error listed below:
Error is:
ERROR: soofie-img-1.0-r0 do_rootfs: Could not invoke dnf. Command '/home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/recipe-sysroot-native/usr/bin/dnf -v --rpmverbosity=info -y -c /home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/rootfs/etc/dnf/dnf.conf --setopt=reposdir=/home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/rootfs/etc/yum.repos.d --installroot=/home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/rootfs --setopt=logdir=/home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/temp --repofrompath=oe-repo,/home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/oe-rootfs-repo --nogpgcheck install i2c-tools packagegroup-base-extended packagegroup-core-boot packagegroup-core-ssh-dropbear psplash python3 python3-dev python3-pip python3-requests run-postinsts soofierecp usbutils locale-base-en-us locale-base-en-gb' returned 1:
DNF version: 4.12.0
cachedir: /home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/rootfs/var/cache/dnf
Added oe-repo repo from /home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/oe-rootfs-repo
User-Agent: falling back to 'libdnf': could not detect OS or basearch
repo: using cache for: oe-repo
oe-repo: using metadata from Thu 26 May 2022 05:31:19 AM UTC.
Last metadata expiration check: 0:00:01 ago on Thu 26 May 2022 05:31:20 AM UTC.
No match for argument: soofierecp
Error: Unable to find a match: soofierecp
ERROR: Logfile of failure stored in: /home/anvation/nitish/source/build/tmp/work/qemux86_64-poky-linux/soofie-img/1.0-r0/temp/log.do_rootfs.1379711
ERROR: Task (/home/anvation/nitish/workspace/meta-soofie/recipes-example/images/soofie-img.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 4873 tasks of which 4872 didn't need to be rerun and 1 failed.
(I have added the meta-layer to the bblayers.conf file.)
Recipe is:
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
S = "${WORKDIR}"
PR = "r0"
DEPENDS = "openssl python3-six-native python3-pytest-runner-native"
SRC_URI = "\
file://src/ \
file://third_party/pybind11/ \
file://third_party/CLI11/ \
file://third_party/c-periphery/ \
file://CMakeLists.txt \
"
do_configure() {
    cmake ../
}
RDEPENDS_${PN} = "python-requests python-six"
inherit pkgconfig cmake python3native
EXTRA_OEMAKE = ""
ALLOW_EMPTY_${PN}="1"
FILES_${PN} = "${bindir}/*"
DEPENDS += "udev"
Please help me reach a solution; I really appreciate any help you can provide.
I got the above code working with the suggestions and help. Thank you so much!
The updated recipe is:
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
S = "${WORKDIR}"
PR = "r0"
SRC_URI = "\
file://src/ \
file://third_party/pybind11/ \
file://third_party/CLI11/ \
file://third_party/c-periphery/ \
file://CMakeLists.txt \
"
inherit cmake pkgconfig python3native
DEPENDS += "openssl python3-six-native python3-pytest-runner-native"
EXTRA_OECMAKE = ""
do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${WORKDIR}/build/datagen ${D}${bindir}
}
DEPENDS += "udev"
FILES_${PN} += "${bindir}/*"
ALLOW_EMPTY_${PN} = "1"

Yocto do_package: Didn't find service unit specified in SYSTEMD_SERVICE_

Description
I want to install a service into my image, but it is failing with following errors
ERROR: mypackage-git-r0 do_package: Didn't find service unit 'mypackage.service', specified in SYSTEMD_SERVICE_mypackage.
ERROR: Logfile of failure stored in: <build-location>/poky/build/tmp/work/cortexa53-poky-linux/mypackage/git-r0/temp/log.do_package.7924
ERROR: Task (<layer-location>/meta-mypackage-oe/recipes-mypackage/mypackage/mypackage_git.bb:do_package) failed with exit code '1'
Recipe
The Python sources are cloned from git; now I want to create a service that runs at boot. Here is the recipe:
SUMMARY = " "
DESCRIPTION = " "
HOMEPAGE = " todo "
LICENSE = "CLOSED"
SRC_URI += "<URL>"
SRC_URI += "file://mypackage.service"
SRCREV = "<srcrev>"
S = "${WORKDIR}/git"
inherit setuptools3 systemd
RDEPENDS_${PN} = " \
${PYTHON_PN}-pyserial \
${PYTHON_PN}-pyusb \
${PYTHON_PN}-terminal \
"
SYSTEMD_PACKAGES = "${PN}"
do_install_append () {
    install -d ${D}${system_unitdir}
    install -m 0755 ${WORKDIR}/mypackage.service ${D}{system_unitdir}
}
SYSTEMD_SERVICE_${PN} = "mypackage.service"
FILES_${PN} += "${system_unitdir}/mypackage.service"
Recipe structure
recipes-mypackage/mypackage/
├── mypackage
│   └── mypackage.service
└── mypackage_git.bb
1 directory, 2 files
Service file
NOTE: mypackage has a feature to run as a daemon using the -d option
[Unit]
Description=mypackage service
[Service]
Type=simple
ExecStart=/usr/bin/mypackage -d
[Install]
WantedBy=multi-user.target
Build configurations
Image recipe inherits core-image-base and contains
IMAGE_FEATURES += "package-management"
PACKAGE_CLASSES ?= "rpm deb package_deb"
DISTRO_FEATURES_append = " systemd"
VIRTUAL-RUNTIME_init_manager += "systemd"
inherit extrausers
local.conf contents
MACHINE = "raspberrypi3-64"
ENABLE_UART = "1"
RPI_USE_U_BOOT = "1"
GPU_FREQ = "250"
I might have messed up a lot of things in the recipe, so I need some pointers to clean up the recipe as well as resolve the issue.
Thanks.
Replace system_unitdir by systemd_system_unitdir.
SYSTEMD_PACKAGES already contains ${PN}, so you can drop that line. The same goes for FILES_${PN} += "${systemd_system_unitdir}/mypackage.service": if systemd.bbclass finds your unit, it will be added to the appropriate FILES_ variable automatically.
c.f. https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n4
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n109
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n148
And for completeness, thanks to @Jussi Kukkonen for the comment: there is a missing $ sign before {systemd_system_unitdir}.
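Putting those fixes together, the corrected install task would look like the sketch below (0644 is the conventional mode for unit files, which differs from the 0755 in the question):

```
do_install_append () {
    install -d ${D}${systemd_system_unitdir}
    install -m 0644 ${WORKDIR}/mypackage.service ${D}${systemd_system_unitdir}
}

SYSTEMD_SERVICE_${PN} = "mypackage.service"
```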

Node.js node binary core dumped (Illegal Instruction)

I am working in a BitBake environment, using Node.js version 10.15.3.
dest cpu == ppc64 linux
My problem is that the node binary core dumps and I am not able to identify the root cause. I am trying to compile Node.js for the destination cpu (ppc64).
I am not sure, but I guess there are runtime requirements which are not satisfied on the target machine.
Below is my recipe:
DESCRIPTION = "nodeJS Evented I/O for V8 JavaScript"
HOMEPAGE = "http://nodejs.org"
LICENSE = "MIT & BSD & Artistic-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=9ceeba79eb2ea1067b7b3ed16fff8bab"
DEPENDS = "openssl zlib icu"
DEPENDS_append_class-target = " nodejs-native"
inherit pkgconfig
COMPATIBLE_MACHINE_armv4 = "(!.*armv4).*"
COMPATIBLE_MACHINE_armv5 = "(!.*armv5).*"
COMPATIBLE_MACHINE_mips64 = "(!.*mips64).*"
SRC_URI = "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
file://0001-Disable-running-gyp-files-for-bundled-deps.patch \
file://0003-Crypto-reduce-memory-usage-of-SignFinal.patch \
file://0004-Make-compatibility-with-gcc-4.8.patch \
file://0005-Link-atomic-library.patch \
file://0006-Use-target-ldflags.patch \
"
SRC_URI_append_class-target = " \
file://0002-Using-native-torque.patch \
"
SRC_URI[md5sum] = "d76210a6ae1ea73d10254947684836fb"
SRC_URI[sha256sum] = "4e22d926f054150002055474e452ed6cbb85860aa7dc5422213a2002ed9791d5"
S = "${WORKDIR}/node-v${PV}"
# v8 errors out if you have set CCACHE
CCACHE = ""
def map_nodejs_arch(a, d):
    import re
    if re.match('i.86$', a): return 'ia32'
    elif re.match('x86_64$', a): return 'x64'
    elif re.match('aarch64$', a): return 'arm64'
    elif re.match('(powerpc64|ppc64le)$', a): return 'ppc64'
    elif re.match('powerpc$', a): return 'ppc'
    return a
ARCHFLAGS_arm = "${@bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', '--with-arm-float-abi=hard', '--with-arm-float-abi=softfp', d)} \
                 ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-arm-fpu=neon', \
                    bb.utils.contains('TUNE_FEATURES', 'vfpv3d16', '--with-arm-fpu=vfpv3-d16', \
                    bb.utils.contains('TUNE_FEATURES', 'vfpv3', '--with-arm-fpu=vfpv3', \
                    '--with-arm-fpu=vfp', d), d), d)}"
GYP_DEFINES_append_mipsel = " mips_arch_variant='r1' "
ARCHFLAGS ?= ""
# Node is way too cool to use proper autotools, so we install two wrappers to forcefully inject proper arch cflags to workaround gypi
do_configure () {
    rm -rf ${S}/deps/openssl
    export LD="${CXX}"
    GYP_DEFINES="${GYP_DEFINES}" export GYP_DEFINES
    # $TARGET_ARCH settings don't match --dest-cpu settings
    ./configure --prefix=${prefix} --with-intl=system-icu --without-snapshot --shared-openssl --shared-zlib \
        --dest-cpu="${@map_nodejs_arch(d.getVar('TARGET_ARCH'), d)}" \
        --dest-os=linux \
        ${ARCHFLAGS}
}
do_compile () {
    export LD="${CXX}"
    oe_runmake BUILDTYPE=Release
}
do_install () {
    oe_runmake install DESTDIR=${D}
}
do_install_append_class-native() {
    # use node from PATH instead of an absolute path to the sysroot
    # node-v0.10.25/tools/install.py is using:
    #   shebang = os.path.join(node_prefix, 'bin/node')
    #   update_shebang(link_path, shebang)
    # and node_prefix can be a very long path to bindir in the native sysroot;
    # when it exceeds the 128 character shebang limit it's stripped to an incorrect path
    # and npm fails to execute, like in this case with 133 characters shown in log.do_install:
    # updating shebang of /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/work/x86_64-linux/nodejs-native/0.10.15-r0/image/home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/npm to /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/node
    # /usr/bin/npm is a symlink to /usr/lib/node_modules/npm/bin/npm-cli.js
    # use sed on npm-cli.js because otherwise the symlink is replaced with a normal file and
    # npm-cli.js continues to use the old shebang
    sed "1s^.*^#\!/usr/bin/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
    # Install the native torque to provide it within the sysroot for the target compilation
    install -d ${D}${bindir}
    install -m 0755 ${S}/out/Release/torque ${D}${bindir}/torque
}
do_install_append_class-target() {
    sed "1s^.*^#\!${bindir}/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
}
PACKAGES =+ "${PN}-npm"
FILES_${PN}-npm = "${exec_prefix}/lib/node_modules ${bindir}/npm ${bindir}/npx"
RDEPENDS_${PN}-npm = "bash python-shell python-datetime python-subprocess python-textutils \
python-compiler python-misc python-multiprocessing"
PACKAGES =+ "${PN}-systemtap"
FILES_${PN}-systemtap = "${datadir}/systemtap"
BBCLASSEXTEND = "native"
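The arch-mapping helper from the recipe can be exercised on its own to confirm what --dest-cpu value the configure step receives. This standalone sketch duplicates the function outside BitBake (`d` is unused, exactly as in the recipe):

```python
import re

def map_nodejs_arch(a, d=None):
    # Same mapping as the recipe: translate Yocto TARGET_ARCH values
    # into the names node's configure --dest-cpu expects.
    if re.match('i.86$', a): return 'ia32'
    elif re.match('x86_64$', a): return 'x64'
    elif re.match('aarch64$', a): return 'arm64'
    elif re.match('(powerpc64|ppc64le)$', a): return 'ppc64'
    elif re.match('powerpc$', a): return 'ppc'
    return a

print(map_nodejs_arch('powerpc64'))  # ppc64
```

Note that both powerpc64 and ppc64le collapse to the same 'ppc64' value here, so the endianness distinction discussed below is invisible to configure.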
I am able to attach gdb to the node binary; below is a snapshot. It core dumps at this point.
Thread 1 "node" hit Breakpoint 10, v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0)
at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
132 /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc: No such file or directory.
(gdb) bt
#0 v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0) at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
#1 0x000003c7b3f04134 in ?? ()
(gdb) c
Continuing.
Node.js is not supported on the PPC64 little-endian architecture. Only the big-endian PPC platform was supported, and only up to version 7.9.

How do I write a yocto/bitbake recipe to replace the default vsftpd.conf file with my own file?

I want to replace the default vsftpd.conf file with my own file!
My bitbake file looks following:
bbexample_1.0.bb
DESCRIPTION = "Configuration and extra files for TX28"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""
S = "${WORKDIR}"
SRC_URI += " \
file://ld.so.conf \
file://nginx/nginx.conf \
file://init.d/myscript.sh"
inherit allarch
do_install () {
    install -d ${D}${sysconfdir}
    install -d ${D}${sysconfdir}/nginx
    install -d ${D}${sysconfdir}/init.d
    rm -f ${D}${sysconfdir}/ld.so.conf
    install -m 0755 ${WORKDIR}/ld.so.conf ${D}${sysconfdir}
    install -m 0755 ${WORKDIR}/nginx/nginx.conf ${D}${sysconfdir}/nginx/
    install -m 0755 ${WORKDIR}/init.d/myscript.sh ${D}${sysconfdir}/init.d/
}
bbexample_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PV}:"
SRC_URI += " \
file://vsftpd.conf"
do_install_append () {
    install -m 0755 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
But, the file could not be replaced!
What is wrong?
What you need to do is use a bbappend in your own layer.
The vsftpd recipe is located in meta-openembedded/meta-networking/recipes-daemons.
Thus you need to create a file called vsftpd_%.bbappend (the % makes it valid for every version).
This file must be located in <your-layer>/meta-networking/recipes-daemons. You also need to put your custom vsftpd.conf in a <your-layer>/meta-networking/recipes-daemons/files folder, to match the FILESEXTRAPATHS entry.
Its content should be:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
do_install_append(){
    install -m 644 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
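For orientation, one possible layout matching the paths above (a sketch; a files/ directory is used here because FILESEXTRAPATHS_prepend points at ${THISDIR}/files):

```
<your-layer>/
└── meta-networking/
    └── recipes-daemons/
        ├── vsftpd_%.bbappend
        └── files/
            └── vsftpd.conf
```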
Example from meta-openembedded here
You should also add the file you installed to your recipe's packaging:
FILES_${PN} += "${sysconfdir}/vsftpd.conf"
