How do I properly decrypt a PGP-encrypted file in an AWS Lambda function in Python?

I'm trying to create an AWS Lambda function in Python that:
- downloads a compressed and encrypted file from an S3 bucket
- decrypts the file using python-gnupg
- stores the decrypted, still-compressed contents in another S3 bucket
This uses Python 3.8 and the python-gnupg package in a Lambda layer.
I've verified the PGP key is correct, that it is being loaded into the keyring just fine, and that the encrypted file is being downloaded correctly.
However, when I attempt to run gnupg.decrypt_file, the output looks like it succeeded, but
the decrypt status shows not ok and the decrypted file does not exist.
How can I get PGP decryption working in Lambda?
Here is the relevant code, extracted from the Lambda function:
import gnupg
from pathlib import Path
# ...
gpg = gnupg.GPG(gnupghome='/tmp')
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# this works as expected
status = gpg.import_keys(MY_KEY_DATA)
# ...
print('Performing Decryption of', encrypted_path)
print(encrypted_path, "exists :", Path(encrypted_path).exists())
with open(encrypted_path, 'rb') as f:
    status = gpg.decrypt_file(f, output=decrypted_path, always_trust=True)
print('decrypt ok =', status.ok)
print('decrypt status =', status.status)
print('decrypt stderr =', status.stderr)
print(decrypted_path, "exists :", Path(decrypted_path).exists())
I expected to get output similar to the following in CloudWatch:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = True
2022-11-08T10:24:44.018-05:00 decrypt status = [SOME OUTPUT FROM GPG BINARY]
2022-11-08T10:24:44.018-05:00 decrypt stderr = ""
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : True
Instead, what I get is:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = False
2022-11-08T10:24:44.018-05:00 decrypt status = good passphrase
2022-11-08T10:24:44.018-05:00 decrypt stderr = [GNUPG:] ENC_TO XXXXXX 1 0
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : False
It appears as though the decryption process starts to work, but something kills it, or perhaps the gpg binary is expecting some TTY input and halts.
I've tried running gpg decryption locally using the CLI and it works as expected, although locally I am using GnuPG 2.3.1 and I'm not sure which version exists on Lambda.
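Before building anything, it can help to confirm which gpg binary and version the Lambda runtime actually sees. A minimal sketch (the `first_output_line` helper is my own; whether a `gpg` binary exists on the base image, and at what path, is exactly what is being checked):

```python
import subprocess

def first_output_line(cmd):
    """Run `cmd` and return the first line of its output (stderr as fallback)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    out = (result.stdout or result.stderr).splitlines()
    return out[0] if out else ''

# Inside the Lambda handler, for example:
# print(first_output_line(['gpg', '--version']))
```

If this logs nothing or fails, the runtime has no usable gpg on its PATH and a layer is needed.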

After a lot of digging I managed to get this working.
I'm not 100% sure whether the cause was the older GnuPG binary installed on the Lambda image by default, but to rule it out I decided to build a GnuPG 2.3.1 layer for Lambda, which I confirmed was working as expected in a Docker container.
I used https://github.com/skeeto/lean-static-gpg/blob/master/build.sh as a foundation for compiling the binary in
Docker, but updated it to include compression support, which this use case required.
Here is the updated build.sh script I used, adapted for building for Lambda:
#!/bin/sh
set -e
MUSL_VERSION=1.2.2
GNUPG_VERSION=2.3.1
LIBASSUAN_VERSION=2.5.5
LIBGCRYPT_VERSION=1.9.2
LIBGPGERROR_VERSION=1.42
LIBKSBA_VERSION=1.5.1
NPTH_VERSION=1.6
PINENTRY_VERSION=1.1.1
BZIP_VERSION=1.0.6-g10
ZLIB_VERSION=1.2.12
DESTDIR=""
PREFIX="/opt"
WORK="$PWD/work"
PATH="$PWD/work/deps/bin:$PATH"
NJOBS=$(nproc)
clean() {
rm -rf "$WORK"
}
distclean() {
clean
rm -rf download
}
download() {
gnupgweb=https://gnupg.org/ftp/gcrypt
mkdir -p download
(
cd download/
xargs -n1 curl -O <<EOF
https://www.musl-libc.org/releases/musl-$MUSL_VERSION.tar.gz
$gnupgweb/gnupg/gnupg-$GNUPG_VERSION.tar.bz2
$gnupgweb/libassuan/libassuan-$LIBASSUAN_VERSION.tar.bz2
$gnupgweb/libgcrypt/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
$gnupgweb/libgpg-error/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
$gnupgweb/libksba/libksba-$LIBKSBA_VERSION.tar.bz2
$gnupgweb/npth/npth-$NPTH_VERSION.tar.bz2
$gnupgweb/pinentry/pinentry-$PINENTRY_VERSION.tar.bz2
$gnupgweb/bzip2/bzip2-$BZIP_VERSION.tar.gz
$gnupgweb/zlib/zlib-$ZLIB_VERSION.tar.gz
EOF
)
}
clean
if [ ! -d download/ ]; then
download
fi
mkdir -p "$DESTDIR$PREFIX" "$WORK/deps"
tar -C "$WORK" -xzf download/musl-$MUSL_VERSION.tar.gz
(
mkdir -p "$WORK/musl"
cd "$WORK/musl"
../musl-$MUSL_VERSION/configure \
--prefix="$WORK/deps" \
--enable-wrapper=gcc \
--syslibdir="$WORK/deps/lib"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/zlib-$ZLIB_VERSION.tar.gz
(
mkdir -p "$WORK/zlib"
cd "$WORK/zlib"
../zlib-$ZLIB_VERSION/configure \
--prefix="$WORK/deps"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/bzip2-$BZIP_VERSION.tar.gz
(
export CFLAGS="-fPIC"
cd "$WORK/bzip2-$BZIP_VERSION"
make install PREFIX="$WORK/deps"
make clean
)
tar -C "$WORK" -xjf download/npth-$NPTH_VERSION.tar.bz2
(
mkdir -p "$WORK/npth"
cd "$WORK/npth"
../npth-$NPTH_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
(
mkdir -p "$WORK/libgpg-error"
cd "$WORK/libgpg-error"
../libgpg-error-$LIBGPGERROR_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-nls \
--disable-doc \
--disable-languages
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libassuan-$LIBASSUAN_VERSION.tar.bz2
(
mkdir -p "$WORK/libassuan"
cd "$WORK/libassuan"
../libassuan-$LIBASSUAN_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
(
mkdir -p "$WORK/libgcrypt"
cd "$WORK/libgcrypt"
../libgcrypt-$LIBGCRYPT_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-doc \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libksba-$LIBKSBA_VERSION.tar.bz2
(
mkdir -p "$WORK/libksba"
cd "$WORK/libksba"
../libksba-$LIBKSBA_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/gnupg-$GNUPG_VERSION.tar.bz2
(
mkdir -p "$WORK/gnupg"
cd "$WORK/gnupg"
../gnupg-$GNUPG_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libgcrypt-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--with-ksba-prefix="$WORK/deps" \
--with-npth-prefix="$WORK/deps" \
--with-agent-pgm="$PREFIX/bin/gpg-agent" \
--with-pinentry-pgm="$PREFIX/bin/pinentry" \
--enable-zip \
--enable-bzip2 \
--disable-card-support \
--disable-ccid-driver \
--disable-dirmngr \
--disable-gnutls \
--disable-gpg-blowfish \
--disable-gpg-cast5 \
--disable-gpg-idea \
--disable-gpg-md5 \
--disable-gpg-rmd160 \
--disable-gpgtar \
--disable-ldap \
--disable-libdns \
--disable-nls \
--disable-ntbtls \
--disable-photo-viewers \
--disable-scdaemon \
--disable-sqlite \
--disable-wks-tools
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
rm "$DESTDIR$PREFIX/bin/gpgscm"
)
tar -C "$WORK" -xjf download/pinentry-$PINENTRY_VERSION.tar.bz2
(
mkdir -p "$WORK/pinentry"
cd "$WORK/pinentry"
../pinentry-$PINENTRY_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--disable-ncurses \
--disable-libsecret \
--enable-pinentry-tty \
--disable-pinentry-curses \
--disable-pinentry-emacs \
--disable-inside-emacs \
--disable-pinentry-gtk2 \
--disable-pinentry-gnome3 \
--disable-pinentry-qt \
--disable-pinentry-tqt \
--disable-pinentry-fltk
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
)
rm -rf "$DESTDIR$PREFIX/sbin"
rm -rf "$DESTDIR$PREFIX/share/doc"
rm -rf "$DESTDIR$PREFIX/share/info"
# cleanup
distclean
Below is the Dockerfile used to build the layer:
FROM public.ecr.aws/lambda/python:3.8
# the output volume to extract the build contents
VOLUME ["/opt/bin"]
RUN yum -y groupinstall 'Development Tools'
RUN yum -y install tar gzip zlib bzip2 file hostname
WORKDIR /opt
# copy the build script
COPY static-gnupg-build.sh .
# run the build script
RUN bash ./static-gnupg-build.sh
# when run, print the installed gpg version
ENTRYPOINT [ "/opt/bin/gpg", "--version" ]
Once the code was compiled in the image, I copied it to my local directory and zipped it:
docker cp MY_DOCKER_ID:/opt/bin ./gnupg
cd ./gnupg && zip -r gnupg-layer.zip bin
To publish the layer:
aws lambda publish-layer-version \
    --layer-name gnupg \
    --zip-file fileb://gnupg-layer.zip \
    --compatible-runtimes python3.8
I decided not to use the python-gnupg package, to have more control over the exact GnuPG binary flags, so I added my own
binary wrapper function:
import subprocess

def gpg_run(flags: list, subprocess_kwargs: dict = None):
    gpg_bin_args = [
        '/opt/bin/gpg',
        '--no-tty',          # never expect a TTY
        '--batch',           # non-interactive mode
        '--yes',             # assume "yes" at any prompt
        '--always-trust',    # skip trust-model checks
        '--status-fd', '1',  # write status lines to stdout
        '--homedir', '/tmp'
    ]
    gpg_bin_args.extend(flags)
    print('running cmd:', ' '.join(gpg_bin_args))
    result = subprocess.run(gpg_bin_args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            **(subprocess_kwargs or {}))
    return result.returncode, \
        result.stdout.decode('utf-8').split('\n'), \
        result.stderr.decode('utf-8').split('\n')
And then added an import-keys and a decrypt function:
def gpg_import_keys(input):
    return gpg_run(flags=['--import'], subprocess_kwargs={'input': input})

def gpg_decrypt(input, output):
    return gpg_run(flags=['--output', output, '--decrypt', input])
And updated the relevant Lambda code with:
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
#...
# TODO: import the keys only needs to run once per instance
# ideally would be moved to a singleton
code, stdout, stderr = gpg_import_keys(bytes(MY_KEY_DATA, 'utf-8'))
if code > 0:
raise Exception(f'gpg_import_keys failed with code {code}: {stdout} {stderr}')
print('import_keys stdout =', stdout)
print('import_keys stderr =', stderr)
# Perform decryption.
print('Performing Decryption of', encrypted_path)
code, stdout, stderr = gpg_decrypt(encrypted_path, output=decrypted_path)
if code > 0:
raise Exception(f'gpg_decrypt failed with code {code}: {stderr}')
print('decrypt stdout =', stdout)
print('decrypt stderr =', stderr)
print('Status: OK')
print(decrypted_path, "exists :", Path(decrypted_path).exists())
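The TODO above, importing keys only once per warm instance, can be sketched with a module-level guard. `ensure_keys_imported` is a name of my own; `import_fn` stands in for the gpg_import_keys wrapper defined earlier:

```python
# Module level, so the flag survives across invocations of a warm instance.
_KEYS_IMPORTED = False

def ensure_keys_imported(key_data: bytes, import_fn):
    """Run `import_fn(key_data)` only on the first call per instance.

    `import_fn` is expected to return (returncode, stdout_lines, stderr_lines),
    matching the gpg_import_keys wrapper's shape.
    """
    global _KEYS_IMPORTED
    if _KEYS_IMPORTED:
        return
    code, stdout, stderr = import_fn(key_data)
    if code > 0:
        raise Exception(f'key import failed with code {code}: {stderr}')
    _KEYS_IMPORTED = True
```

The handler would then call ensure_keys_imported(bytes(MY_KEY_DATA, 'utf-8'), gpg_import_keys) instead of importing unconditionally.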
And now the CloudWatch log output is as expected, and I've confirmed the decrypted file is correct!
...
2022-11-17T09:25:22.732-06:00 running cmd:['/opt/bin/gpg', '--no-tty', '--batch', '--yes', '--always-trust', '--status-fd', '1', '--homedir', '/tmp', '--import']
2022-11-17T09:25:22.769-06:00 import_keys ok = True
2022-11-17T09:25:22.769-06:00 import_keys stdout = ['[GNUPG:] IMPORT_OK 0 XXX', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] IMPORT_OK 16 XXX', '[GNUPG:] IMPORT_RES 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0', '']
2022-11-17T09:25:22.769-06:00 import_keys stderr = ['']
2022-11-17T09:25:22.769-06:00 Performing Decryption of /tmp/test.txt.gpg
2022-11-17T09:25:22.769-06:00 running cmd: /opt/bin/gpg --no-tty --yes --always-trust --status-fd 1 --homedir /tmp --output /tmp/decrypted.zip --decrypt /tmp/encrypted.zip
2022-11-17T09:25:22.850-06:00 decrypt stdout = ['[GNUPG:] ENC_TO XXX 1 0', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] DECRYPTION_KEY XXX -', '[GNUPG:] BEGIN_DECRYPTION', '[GNUPG:] DECRYPTION_INFO 0 9 2', '[GNUPG:] PLAINTEXT 62 1667796554 encrypted.zip', '[GNUPG:] PLAINTEXT_LENGTH 428', '[GNUPG:] DECRYPTION_OKAY', '[GNUPG:] GOODMDC', '[GNUPG:] END_DECRYPTION', '']
2022-11-17T09:25:22.850-06:00 decrypt stderr = ['gpg: encrypted with rsa2048 key, ID XXX, created 2022-11-07', ' "XXX"', '']
2022-11-17T09:25:22.850-06:00 /tmp/decrypted.zip exists: True
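Since --status-fd 1 routes gpg's machine-readable status lines to stdout, success can also be checked programmatically rather than by eyeballing the logs. A small sketch (the helper name is my own):

```python
def decryption_succeeded(stdout_lines):
    """True if gpg's --status-fd output contains a DECRYPTION_OKAY line."""
    return any(line.startswith('[GNUPG:] DECRYPTION_OKAY')
               for line in stdout_lines)
```

This could be applied to the stdout list returned by the gpg_decrypt wrapper above as an extra sanity check before uploading the result.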
...

Related

Preseed late_command

I am trying to add a late_command to my preseed.cfg file, but I get the following error.
command:
d-i preseed/late_command string \
no_proxy=0,1,2,3,4,5,6,7,8,9 preseed_fetch /scripts/preseed_late_commands.sh /tmp/preseed_late_commands.sh ; \
/target/usr/bin/dos2unix /tmp/preseed_late_commands.sh || true; \
log-output -t preseed_late_commands.sh sh /tmp/preseed_late_commands.sh
The error that I get:
Execution of preseeded command "d-i preseed/late_command string \
no_proxy=0,1,2,3,4,5,6,7,8,9 preseed_fetch /scripts/preseed_late_commands.sh /tmp/preseed_late_commands.sh ; \
/target/usr/bin/dos2unix /tmp/preseed_late_commands.sh || true; \
log-output -t preseed_late_commands.sh sh /tmp/preseed_late_commands.sh" failed with exit code 2.
Info:
os: Debian 11 (Bullseye)
preseed.cfg directory: http/d-i/bullseye/preseed.cfg
preseed_late_commands.sh directory: http/d-i/bullseye/scripts/preseed_late_commands.sh
What should I do to get this command to run?
How and where can I see a live log (console output) or find a log file?

Yocto: Execute shell script inside my source repository environment (inside downloads folder?)

In Yocto, I need to execute a bash script that depends on the source repository environment. My current approach is to execute the script directly in the downloads folder.
Pseudocode (I hope you know what I mean):
do_patch_append () {
os.system(' \
pushd ${DL_DIR}/svn/*/svn/myapp/Src; \
dos2unix ./VersionBuild/MakeVersionList.sh; \
./VersionBuild/MakeVersionList.sh ${S}/Src/VersionList.h; \
popd \
')
}
Before I start fiddling with the details: is this a valid approach, and is do_patch the right place?
Reason: The script MakeVersionList.sh generates a header file containing revision info from subversion. It executes "svn info" on several folders and parses its output into the header file VersionList.h, which will later be accessed in ${S} during compilation.
Thanks a lot.
Thanks for the helpful advice. Here's how I finally did it. To minimize dependencies on server infrastructure (the SVN server), I put the version generation into do_fetch. I also had to do a manual do_unpack, because adding the generated file to SRC_URI (like ${DL_DIR}/svn/...) made the fetcher try to fetch it.
#
# Build bitstone application
#
SUMMARY = "Bitstone mega wallet"
SECTION = "Rockefeller"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
#Register for root file system aggregation
APP_NAME = "bst-wallet"
FILES_${PN} += "${bindir}/${APP_NAME}"
PV = "${SRCPV}"
SRCREV = "${AUTOREV}"
SVN_SRV = "flints-svn"
SRC_URI = "svn://${SVN_SRV}/svn/${APP_NAME};module=Src;protocol=http;user=fred;pswd=pebbles "
S = "${WORKDIR}/Src"
GEN_FILE = "VersionList.h"
do_fetch_append () {
os.system(' \
cd ./svn/{0}/svn/{1}/Src; \
TZ=Pacific/Yap ./scripts/MakeVersionList.sh {2}; \
'.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE"))
)
}
do_unpack_append () {
os.system(' \
cp {3}/svn/{0}/svn/{1}/Src/{2} {4} \
'.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE"), \
d.getVar("DL_DIR"), d.getVar("S")) \
)
}
do_install_append () {
install -d ${D}${bindir}
install -m 0755 ${APP_NAME} ${D}${bindir}/
}
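As an aside, the os.system calls in the tasks above silently ignore failures of the invoked script. A sketch of a stricter alternative using subprocess (the `run_in_dir` helper is my own; the directory argument stands in for the d.getVar(...)-derived paths used above):

```python
import subprocess

def run_in_dir(cmd, directory):
    """Run `cmd` (an argument list) inside `directory`, raising on failure.

    check=True makes a non-zero exit abort the bitbake task instead of
    being silently swallowed as it would be with os.system.
    """
    return subprocess.run(cmd, cwd=directory, check=True,
                          capture_output=True, text=True)
```

Inside do_fetch_append one would then call, e.g., run_in_dir(['./scripts/MakeVersionList.sh', gen_file], src_dir) with the paths built from d.getVar as before.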

Inserting lines in a multiline command using a for loop in a bash script

I have a bash script which executes a multi-line command multiple times, changing some values on each iteration. Here is my code below:
for (( peer=1; peer<=$nodesNum;peer++ ))
do
echo "Starting peer $peer"
nodeos -p eosio -d /eosio_data/node$peer --config-dir /eosio_data/node$peer --http-server-address=127.0.0.1:$http \
--p2p-listen-endpoint=127.0.0.1:$p2p --access-control-allow-origin=* \
-p "user$peer" --http-validate-host=false --signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3 \
--max-transaction-time=1000 --genesis-json /eosio_data/genesis.json --wasm-runtime=wabt --max-clients=2000 -e \
--plugin eosio::chain_plugin --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin \
--plugin eosio::chain_api_plugin \
--p2p-peer-address localhost:8888 \
&>eosio_data/logs/nodeos_stderr$p2p.log & \
sleep 1
http=$((http+1))
p2p=$((p2p+1))
done
I need to add a --p2p-peer-address localhost:$((9010 + $peer)) option multiple times, once for each peer, as part of the multi-line command. I'm new to bash scripting and I couldn't find a similar example.
It's not entirely clear what you need, but I think it's something like the following. An array of --p2p-peer-address options is created, then incorporated
into the larger set of common options. Each call to nodeos then has some peer-specific options in addition to the common options.
# for example
HTTP_BASE=8080
P2P_BASE=12345
# Set of --p2p-peer-address options shared by all calls.
for ((peer=1; peer <= $nodesNum; peer++)); do
peer_args+=(
--p2p-peer-address
localhost:$((9010+$peer))
)
done
# These are the same for all calls
fixed_args=(
-p eosio
"--access-control-allow-origin=*"
--http-validate-host=false
--signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
--max-transaction-time=1000
--genesis-json /eosio_data/genesis.json
--wasm-runtime=wabt
--max-clients=2000
-e
--plugin eosio::chain_plugin
--plugin eosio::producer_plugin
--plugin eosio::producer_api_plugin
--plugin eosio::chain_api_plugin
--p2p-peer-address localhost:8888
"${peer_args[#]}"
)
for ((peer=1; peer<=$nodesNum; peer++)); do
# Call-specific arguments, followed by common arguments.
# $peer is incorporated into each call-specific argument value.
nodeos -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$((HTTP_BASE + $peer)) \
--p2p-listen-endpoint=127.0.0.1:$((P2P_BASE + $peer)) \
-p "user$peer" \
"${fixed_args[#]}" \
&> eosio_data/logs/nodeos_stderr$((P2P_BASE + $peer)).log &
sleep 1
done
To avoid having each instance of nodeos try to connect to itself, build peer_args anew on each iteration of the loop. Remove "${peer_args[#]}" from the definition of fixed_args, then adjust the main loop like so:
for ((peer=1; peer <= $nodesNum; peer++)); do
peer_args=()
for ((neighbor=1; neighbor <= $nodesNum; neighbor++)); do
(( neighbor == peer )) && continue
peer_args+=( --p2p-peer-address localhost:$((9010+neighbor)) )
done
nodeos -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$((HTTP_BASE + $peer)) \
--p2p-listen-endpoint=127.0.0.1:$((P2P_BASE + $peer)) \
-p "user$peer" \
"${fixed_args[#]}" \
"${peer_args[#]}" \
&> eosio_data/logs/nodeos_stderr$((P2P_BASE + $peer)).log &
sleep 1
done
I think you were very close. The expression you authored, --p2p-peer-address localhost:$((9010 + $peer)), can be inserted into your nodeos call as follows:
for (( peer=1; peer<=$nodesNum;peer++ ))
do
echo "Starting peer $peer"
nodeos -p eosio -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$http \
--p2p-listen-endpoint=127.0.0.1:$p2p \
--access-control-allow-origin=* \
-p "user$peer" \
--http-validate-host=false \
--signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3 \
--max-transaction-time=1000 --genesis-json /eosio_data/genesis.json --wasm-runtime=wabt --max-clients=2000 -e \
--plugin eosio::chain_plugin --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin \
--plugin eosio::chain_api_plugin \
--p2p-peer-address localhost:$((9010 + $peer)) &>eosio_data/logs/nodeos_stderr$p2p.log &
sleep 1
http=$((http+1))
p2p=$((p2p+1))
done

How do I write a yocto/bitbake recipe to replace the default vsftpd.conf file with my own file?

I want to replace the default vsftpd.conf file with my own file!
My bitbake file looks following:
bbexample_1.0.bb
DESCRIPTION = "Configuration and extra files for TX28"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""
S = "${WORKDIR}"
SRC_URI += " \
file://ld.so.conf \
file://nginx/nginx.conf \
file://init.d/myscript.sh"
inherit allarch
do_install () {
install -d ${D}${sysconfdir}
install -d ${D}${sysconfdir}/nginx
install -d ${D}${sysconfdir}/init.d
rm -f ${D}${sysconfdir}/ld.so.conf
install -m 0755 ${WORKDIR}/ld.so.conf ${D}${sysconfdir}
install -m 0755 ${WORKDIR}/nginx/nginx.conf ${D}${sysconfdir}/nginx/
install -m 0755 ${WORKDIR}/init.d/myscript.sh ${D}${sysconfdir}/init.d/
}
bbexample_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PV}:"
SRC_URI += " \
file://vsftpd.conf"
do_install_append () {
install -m 0755 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
But, the file could not be replaced!
What is wrong?
What you need to do is use a bbappend in your own layer.
The vsftpd recipe is located in meta-openembedded/meta-networking/recipes-daemons.
Thus you need to create a file called vsftpd_%.bbappend (the % makes it valid for every version).
This file must be located in <your-layer>/meta-networking/recipes-daemons. You also need to put your custom vsftpd.conf in the <your-layer>/meta-networking/recipes-daemons/vsftpd folder.
Its content should be:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
do_install_append(){
install -m 644 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
See meta-openembedded for an example.
You should also add the file you installed to your recipe's packaging:
FILES_${PN} += "${sysconfdir}/vsftpd.conf"

Error: undefined reference to NetCDF functions

Background
I'm working on compiling MCIP (the Meteorology-Chemistry Interface Processor) on a CentOS 5.9 system.
I'm using GCC 4.9 for the build.
Setting
Here is some configuration setting in ~/.bashrc:
export DIR=/disk2/hyf/lib ## All libs are installed under this path
# NetCDF setting
export PATH="$DIR/netcdf/bin:$PATH"
export NETCDF="$DIR/netcdf"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF/lib
# IOAPI
export BIN=Linux2_x86_64gfort
export BASEDIR=/disk2/hyf/backup/software/ioapi
export PATH=$DIR/ioapi-3.1/bin:$PATH
export LD_LIBRARY_PATH=$DIR/ioapi-3.1/lib:$LD_LIBRARY_PATH
# Set M3LIB for the model
export M3LIB=/disk2/hyf/cmaq/CMAQv5.1/lib
I also created soft links for the CMAQ model like this:
ln -s $NETCDF ${M3LIB}/x86_64/gcc/netcdf
ln -s $IOAPI ${M3LIB}/x86_64/gcc/ioapi
Makefile
Here is some subroutine in the Makefile:
# Requirements: set M3LIB before running this script
.SUFFIXES:
.SUFFIXES: .o .f90 .F90
MODEL = mcip.exe
#...gfortran
FC = gfortran
NETCDF = $(M3LIB)/netcdf
IOAPI_ROOT = $(M3LIB)/ioapi
FFLAGS = -O3 -gdwarf-2 -gstrict-dwarf -I$(NETCDF)/include -I$(IOAPI_ROOT)/include \
-ffpe-trap='invalid','zero','overflow','underflow'
##FFLAGS = -g -O0 \
## -ffpe-trap='invalid','zero','overflow','underflow' \
## -I$(NETCDF)/include -I$(IOAPI_ROOT)/include
LIBS = -L$(IOAPI_ROOT)/lib -lioapi \
-L$(NETCDF)/lib -lnetcdf -lgomp
DEFS =
MODULES =\
const_mod.o \
const_pbl_mod.o \
coord_mod.o \
date_time_mod.o \
date_pack_mod.o \
files_mod.o \
groutcom_mod.o \
luvars_mod.o \
mcipparm_mod.o \
mcoutcom_mod.o \
mdoutcom_mod.o \
metinfo_mod.o \
metvars_mod.o \
vgrd_mod.o \
wrf_netcdf_mod.o \
xvars_mod.o \
sat2mcip_mod.o
OBJS =\
mcip.o \
alloc_ctm.o \
alloc_met.o \
alloc_x.o \
bcldprc_ak.o \
blddesc.o \
chkwpshdr.o \
chkwrfhdr.o \
close_files.o \
collapx.o \
comheader.o \
cori.o \
dealloc_ctm.o \
dealloc_met.o \
dealloc_x.o \
detangle_soil_px.o \
e_aerk.o \
dynflds.o \
getgist.o \
getluse.o \
getmet.o \
getpblht.o \
getsdt.o \
getversion.o \
graceful_stop.o \
gridout.o \
init_io.o \
init_met.o \
init_x.o \
julian.o \
layht.o \
ll2xy_lam.o \
.......
ERROR
The output after I run make looks like:
make[1]: Entering directory `/disk2/hyf/cmaq/CMAQv5.1/scripts/mcip/src'
gfortran -g -O0 -gdwarf-2 -gstrict-dwarf \
-I/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/netcdf/include \
-I/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/ioapi/include -c const_mod.f90
......
chkwpshdr.o: In function `chkwpshdr_':
/disk2/hyf/cmaq/CMAQv5.1/scripts/mcip/src/chkwpshdr.f90:109: \
undefined reference to `__netcdf_MOD_nf90_get_att_one_fourbyteint'
(a lot of these code showing the same mistake 'undefined reference')
/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/ioapi/lib/libioapi.a(open3.o): In
function `open3_':
open3.F:(.text+0x1531): undefined reference to `ncclos_'
.........
I think the compiler may have some conflict between the .F and .f90 files in some cases, but I don't know why. GCC is already successfully installed and is on $PATH.
I've faced the same problem, and I solved it by adding -lnetcdff and -lnetcdf (in this order) to the LIBS option in the MCIP Makefile. Make sure the NETCDF variable points to the correct path where netCDF is installed on your system.
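Concretely, with the variable names from the Makefile shown earlier, the fix amounts to changing the LIBS line so the Fortran interface library (-lnetcdff) is linked before the C library (-lnetcdf). A sketch, assuming the same variables as above:

```make
LIBS = -L$(IOAPI_ROOT)/lib -lioapi \
       -L$(NETCDF)/lib -lnetcdff -lnetcdf -lgomp
```

The order matters because the linker resolves symbols left to right: the Fortran module symbols (like __netcdf_MOD_nf90_get_att_...) live in libnetcdff, which itself depends on libnetcdf.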
