I'm trying to create an AWS Lambda in python that:
downloads a compressed and encrypted file from an S3 bucket
decrypts the file using python-gnupg
stores the decrypted compressed contents in another S3 bucket
This uses Python 3.8 and the python-gnupg package in a Lambda layer.
I've verified the PGP key is correct, that it is being loaded into the keyring just fine, and that the encrypted file is being downloaded correctly.
However, when I run gnupg.decrypt_file the output looks like it succeeded, but the decrypt status shows not ok and the decrypted file does not exist.
How can I get PGP decryption working in Lambda?
Here is the relevant code extracted from the lambda function:
import gnupg
from pathlib import Path
# ...
gpg = gnupg.GPG(gnupghome='/tmp')
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# this works as expected
status = gpg.import_keys(MY_KEY_DATA)
# ...
print('Performing Decryption of', encrypted_path)
print(encrypted_path, "exists :", Path(encrypted_path).exists())
with open(encrypted_path, 'rb') as f:
    status = gpg.decrypt_file(f, output=decrypted_path, always_trust=True)
print('decrypt ok =', status.ok)
print('decrypt status =', status.status)
print('decrypt stderr =', status.stderr)
print(decrypted_path, "exists :", Path(decrypted_path).exists())
Expectation was to get output similar to the following in CloudWatch:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = True
2022-11-08T10:24:44.018-05:00 decrypt status = [SOME OUTPUT FROM GPG BINARY]
2022-11-08T10:24:44.018-05:00 decrypt stderr = ""
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : True
Instead what I get is:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = False
2022-11-08T10:24:44.018-05:00 decrypt status = good passphrase
2022-11-08T10:24:44.018-05:00 decrypt stderr = [GNUPG:] ENC_TO XXXXXX 1 0
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : False
It appears the decryption process starts, but something kills it, or perhaps the gpg binary expects some TTY input and halts?
I've tried running gpg decryption locally using the CLI and it works as expected, although locally I am using GnuPG version 2.3.1 and I'm not sure what version ships in the Lambda runtime.
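One way to answer the "what version exists on Lambda" question is to log it from inside the handler. A minimal sketch (the binary_version helper is mine, not part of python-gnupg):

```python
import shutil
import subprocess

def binary_version(binary, flag="--version"):
    """Return the first line of `binary <flag>` output, or None on failure."""
    if shutil.which(binary) is None:
        return None
    try:
        result = subprocess.run([binary, flag], capture_output=True,
                                text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    out = result.stdout or result.stderr
    return out.splitlines()[0] if out else None

# Run inside the handler; the output lands in CloudWatch
print("gpg on PATH:", binary_version("gpg") or "not found")
```

Running this once in the Lambda before any decryption attempt confirms which gpg (if any) the layer and base image actually expose.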
After a lot of digging I managed to get this working.
I'm not 100% sure the cause is the older GnuPG binary installed in the Lambda image by default, but to rule it out I decided to build a GnuPG 2.3.1 layer for Lambda, which I confirmed was working as expected in a Docker container.
I used https://github.com/skeeto/lean-static-gpg/blob/master/build.sh as a foundation for compiling the binary in Docker, but updated it to include compression support, which was required for this use case.
Here is the updated build.sh script, optimized for building for Lambda:
#!/bin/sh
set -e
MUSL_VERSION=1.2.2
GNUPG_VERSION=2.3.1
LIBASSUAN_VERSION=2.5.5
LIBGCRYPT_VERSION=1.9.2
LIBGPGERROR_VERSION=1.42
LIBKSBA_VERSION=1.5.1
NPTH_VERSION=1.6
PINENTRY_VERSION=1.1.1
BZIP_VERSION=1.0.6-g10
ZLIB_VERSION=1.2.12
DESTDIR=""
PREFIX="/opt"
WORK="$PWD/work"
PATH="$PWD/work/deps/bin:$PATH"
NJOBS=$(nproc)
clean() {
rm -rf "$WORK"
}
distclean() {
clean
rm -rf download
}
download() {
gnupgweb=https://gnupg.org/ftp/gcrypt
mkdir -p download
(
cd download/
xargs -n1 curl -O <<EOF
https://www.musl-libc.org/releases/musl-$MUSL_VERSION.tar.gz
$gnupgweb/gnupg/gnupg-$GNUPG_VERSION.tar.bz2
$gnupgweb/libassuan/libassuan-$LIBASSUAN_VERSION.tar.bz2
$gnupgweb/libgcrypt/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
$gnupgweb/libgpg-error/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
$gnupgweb/libksba/libksba-$LIBKSBA_VERSION.tar.bz2
$gnupgweb/npth/npth-$NPTH_VERSION.tar.bz2
$gnupgweb/pinentry/pinentry-$PINENTRY_VERSION.tar.bz2
$gnupgweb/bzip2/bzip2-$BZIP_VERSION.tar.gz
$gnupgweb/zlib/zlib-$ZLIB_VERSION.tar.gz
EOF
)
}
clean
if [ ! -d download/ ]; then
download
fi
mkdir -p "$DESTDIR$PREFIX" "$WORK/deps"
tar -C "$WORK" -xzf download/musl-$MUSL_VERSION.tar.gz
(
mkdir -p "$WORK/musl"
cd "$WORK/musl"
../musl-$MUSL_VERSION/configure \
--prefix="$WORK/deps" \
--enable-wrapper=gcc \
--syslibdir="$WORK/deps/lib"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/zlib-$ZLIB_VERSION.tar.gz
(
mkdir -p "$WORK/zlib"
cd "$WORK/zlib"
../zlib-$ZLIB_VERSION/configure \
--prefix="$WORK/deps"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/bzip2-$BZIP_VERSION.tar.gz
(
export CFLAGS="-fPIC"
cd "$WORK/bzip2-$BZIP_VERSION"
make install PREFIX="$WORK/deps"
make clean
)
tar -C "$WORK" -xjf download/npth-$NPTH_VERSION.tar.bz2
(
mkdir -p "$WORK/npth"
cd "$WORK/npth"
../npth-$NPTH_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
(
mkdir -p "$WORK/libgpg-error"
cd "$WORK/libgpg-error"
../libgpg-error-$LIBGPGERROR_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-nls \
--disable-doc \
--disable-languages
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libassuan-$LIBASSUAN_VERSION.tar.bz2
(
mkdir -p "$WORK/libassuan"
cd "$WORK/libassuan"
../libassuan-$LIBASSUAN_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
(
mkdir -p "$WORK/libgcrypt"
cd "$WORK/libgcrypt"
../libgcrypt-$LIBGCRYPT_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-doc \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libksba-$LIBKSBA_VERSION.tar.bz2
(
mkdir -p "$WORK/libksba"
cd "$WORK/libksba"
../libksba-$LIBKSBA_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/gnupg-$GNUPG_VERSION.tar.bz2
(
mkdir -p "$WORK/gnupg"
cd "$WORK/gnupg"
../gnupg-$GNUPG_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libgcrypt-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--with-ksba-prefix="$WORK/deps" \
--with-npth-prefix="$WORK/deps" \
--with-agent-pgm="$PREFIX/bin/gpg-agent" \
--with-pinentry-pgm="$PREFIX/bin/pinentry" \
--enable-zip \
--enable-bzip2 \
--disable-card-support \
--disable-ccid-driver \
--disable-dirmngr \
--disable-gnutls \
--disable-gpg-blowfish \
--disable-gpg-cast5 \
--disable-gpg-idea \
--disable-gpg-md5 \
--disable-gpg-rmd160 \
--disable-gpgtar \
--disable-ldap \
--disable-libdns \
--disable-nls \
--disable-ntbtls \
--disable-photo-viewers \
--disable-scdaemon \
--disable-sqlite \
--disable-wks-tools
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
rm "$DESTDIR$PREFIX/bin/gpgscm"
)
tar -C "$WORK" -xjf download/pinentry-$PINENTRY_VERSION.tar.bz2
(
mkdir -p "$WORK/pinentry"
cd "$WORK/pinentry"
../pinentry-$PINENTRY_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--disable-ncurses \
--disable-libsecret \
--enable-pinentry-tty \
--disable-pinentry-curses \
--disable-pinentry-emacs \
--disable-inside-emacs \
--disable-pinentry-gtk2 \
--disable-pinentry-gnome3 \
--disable-pinentry-qt \
--disable-pinentry-tqt \
--disable-pinentry-fltk
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
)
rm -rf "$DESTDIR$PREFIX/sbin"
rm -rf "$DESTDIR$PREFIX/share/doc"
rm -rf "$DESTDIR$PREFIX/share/info"
# cleanup
distclean
Below is the Dockerfile used to build the layer:
FROM public.ecr.aws/lambda/python:3.8
# the output volume to extract the build contents
VOLUME ["/opt/bin"]
RUN yum -y groupinstall 'Development Tools'
RUN yum -y install tar gzip zlib bzip2 file hostname
WORKDIR /opt
# copy the build script
COPY static-gnupg-build.sh .
# run the build script
RUN bash ./static-gnupg-build.sh
# when run output the version
ENTRYPOINT [ "/opt/bin/gpg", "--version" ]
Once the code is compiled in the image I copied it to my local directory, zipped it, and published the layer:
docker cp MY_DOCKER_ID:/opt/bin ./gnupg
cd ./gnupg && zip -r gnupg-layer.zip bin
To publish the layer:
aws lambda publish-layer-version \
--layer-name gnupg \
--zip-file fileb://gnupg-layer.zip \
--compatible-runtimes python3.8
I decided not to use the python-gnupg package, to have more control over the exact GnuPG binary flags, so I added my own binary wrapper function:
import subprocess

def gpg_run(flags: list, subprocess_kwargs: dict = None):
    gpg_bin_args = [
        '/opt/bin/gpg',
        '--no-tty',          # never prompt on a TTY
        '--batch',           # fully non-interactive
        '--yes',             # assume "yes" on questions
        '--always-trust',    # skip trust-model checks
        '--status-fd', '1',  # write machine-readable status to stdout
        '--homedir', '/tmp'
    ]
    gpg_bin_args.extend(flags)
    print('running cmd', ' '.join(gpg_bin_args))
    result = subprocess.run(gpg_bin_args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            **(subprocess_kwargs or {}))
    return result.returncode, \
        result.stdout.decode('utf-8').split('\n'), \
        result.stderr.decode('utf-8').split('\n')
And then added an import-key and a decrypt function:
def gpg_import_keys(input):
    return gpg_run(flags=['--import'], subprocess_kwargs={'input': input})

def gpg_decrypt(input, output):
    return gpg_run(flags=['--output', output, '--decrypt', input])
And updated the relevant Lambda code with:
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# TODO: importing the keys only needs to run once per instance;
# ideally this would be moved to a singleton
code, stdout, stderr = gpg_import_keys(bytes(MY_KEY_DATA, 'utf-8'))
if code > 0:
    raise Exception(f'gpg_import_keys failed with code {code}: {stdout} {stderr}')
print('import_keys stdout =', stdout)
print('import_keys stderr =', stderr)

# Perform decryption.
print('Performing Decryption of', encrypted_path)
code, stdout, stderr = gpg_decrypt(encrypted_path, output=decrypted_path)
if code > 0:
    raise Exception(f'gpg_decrypt failed with code {code}: {stderr}')
print('decrypt stdout =', stdout)
print('decrypt stderr =', stderr)
print('Status: OK')
print(decrypted_path, "exists :", Path(decrypted_path).exists())
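The "run once per instance" TODO above can be handled with a module-level guard, since Lambda reuses the Python process across warm invocations. A generic sketch (ensure_once is a hypothetical helper, not part of any library):

```python
_done = set()

def ensure_once(name, fn, *args, **kwargs):
    """Run fn at most once per process, i.e. once per warm Lambda container."""
    if name not in _done:
        fn(*args, **kwargs)
        _done.add(name)

# Usage, assuming the gpg_import_keys wrapper defined earlier:
# ensure_once('import_keys', gpg_import_keys, bytes(MY_KEY_DATA, 'utf-8'))
```

On a cold start the import runs normally; subsequent invocations in the same container skip it, which also avoids re-writing the /tmp keyring on every request.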
And now the CloudWatch log output is as expected, and I've confirmed the decrypted file is correct!
...
2022-11-17T09:25:22.732-06:00 running cmd:['/opt/bin/gpg', '--no-tty', '--batch', '--yes', '--always-trust', '--status-fd', '1', '--homedir', '/tmp', '--import']
2022-11-17T09:25:22.769-06:00 import_keys ok = True
2022-11-17T09:25:22.769-06:00 import_keys stdout = ['[GNUPG:] IMPORT_OK 0 XXX', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] IMPORT_OK 16 XXX', '[GNUPG:] IMPORT_RES 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0', '']
2022-11-17T09:25:22.769-06:00 import_keys stderr = ['']
2022-11-17T09:25:22.769-06:00 Performing Decryption of /tmp/test.txt.gpg
2022-11-17T09:25:22.769-06:00 running cmd: /opt/bin/gpg --no-tty --yes --always-trust --status-fd 1 --homedir /tmp --output /tmp/decrypted.zip --decrypt /tmp/encrypted.zip
2022-11-17T09:25:22.850-06:00 decrypt stdout = ['[GNUPG:] ENC_TO XXX 1 0', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] DECRYPTION_KEY XXX -', '[GNUPG:] BEGIN_DECRYPTION', '[GNUPG:] DECRYPTION_INFO 0 9 2', '[GNUPG:] PLAINTEXT 62 1667796554 encrypted.zip', '[GNUPG:] PLAINTEXT_LENGTH 428', '[GNUPG:] DECRYPTION_OKAY', '[GNUPG:] GOODMDC', '[GNUPG:] END_DECRYPTION', '']
2022-11-17T09:25:22.850-06:00 decrypt stderr = ['gpg: encrypted with rsa2048 key, ID XXX, created 2022-11-07', ' "XXX"', '']
2022-11-17T09:25:22.850-06:00 /tmp/decrypted.zip exists: True
...
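Because --status-fd 1 sends machine-readable [GNUPG:] status lines to stdout, success can be asserted on a status keyword instead of scraping the human-readable stderr. A small sketch built on the status lines shown in the log above:

```python
def has_status(stdout_lines, keyword):
    """True if any GnuPG --status-fd line carries the given status keyword."""
    return any(
        line.startswith("[GNUPG:]") and keyword in line.split()
        for line in stdout_lines
    )

# Sample lines mirroring the CloudWatch output above
lines = ["[GNUPG:] BEGIN_DECRYPTION", "[GNUPG:] DECRYPTION_OKAY", ""]
print(has_status(lines, "DECRYPTION_OKAY"))  # True for this sample
```

Checking DECRYPTION_OKAY (or IMPORT_OK for key imports) is more robust than relying on the exit code alone, since gpg can exit 0 with warnings.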
I am working in a BitBake environment, using Node.js version 10.15.3.
The destination CPU is ppc64 (Linux).
My problem is that the node binary core dumps and I am not able to identify the root cause. I am trying to compile Node.js for the destination CPU (ppc64).
I am not sure, but I suspect there are runtime requirements that are not satisfied on the target machine.
Below is my recipe:
DESCRIPTION = "nodeJS Evented I/O for V8 JavaScript"
HOMEPAGE = "http://nodejs.org"
LICENSE = "MIT & BSD & Artistic-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=9ceeba79eb2ea1067b7b3ed16fff8bab"
DEPENDS = "openssl zlib icu"
DEPENDS_append_class-target = " nodejs-native"
inherit pkgconfig
COMPATIBLE_MACHINE_armv4 = "(!.*armv4).*"
COMPATIBLE_MACHINE_armv5 = "(!.*armv5).*"
COMPATIBLE_MACHINE_mips64 = "(!.*mips64).*"
SRC_URI = "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
file://0001-Disable-running-gyp-files-for-bundled-deps.patch \
file://0003-Crypto-reduce-memory-usage-of-SignFinal.patch \
file://0004-Make-compatibility-with-gcc-4.8.patch \
file://0005-Link-atomic-library.patch \
file://0006-Use-target-ldflags.patch \
"
SRC_URI_append_class-target = " \
file://0002-Using-native-torque.patch \
"
SRC_URI[md5sum] = "d76210a6ae1ea73d10254947684836fb"
SRC_URI[sha256sum] = "4e22d926f054150002055474e452ed6cbb85860aa7dc5422213a2002ed9791d5"
S = "${WORKDIR}/node-v${PV}"
# v8 errors out if you have set CCACHE
CCACHE = ""
def map_nodejs_arch(a, d):
import re
if re.match('i.86$', a): return 'ia32'
elif re.match('x86_64$', a): return 'x64'
elif re.match('aarch64$', a): return 'arm64'
elif re.match('(powerpc64|ppc64le)$', a): return 'ppc64'
elif re.match('powerpc$', a): return 'ppc'
return a
ARCHFLAGS_arm = "${#bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', '--with-arm-float-abi=hard', '--with-arm-float-abi=softfp', d)} \
${#bb.utils.contains('TUNE_FEATURES', 'neon', '--with-arm-fpu=neon', \
bb.utils.contains('TUNE_FEATURES', 'vfpv3d16', '--with-arm-fpu=vfpv3-d16', \
bb.utils.contains('TUNE_FEATURES', 'vfpv3', '--with-arm-fpu=vfpv3', \
'--with-arm-fpu=vfp', d), d), d)}"
GYP_DEFINES_append_mipsel = " mips_arch_variant='r1' "
ARCHFLAGS ?= ""
# Node is way too cool to use proper autotools, so we install two wrappers to forcefully inject proper arch cflags to workaround gypi
do_configure () {
rm -rf ${S}/deps/openssl
export LD="${CXX}"
GYP_DEFINES="${GYP_DEFINES}" export GYP_DEFINES
# $TARGET_ARCH settings don't match --dest-cpu settings
./configure --prefix=${prefix} --with-intl=system-icu --without-snapshot --shared-openssl --shared-zlib \
--dest-cpu="${#map_nodejs_arch(d.getVar('TARGET_ARCH'), d)}" \
--dest-os=linux \
${ARCHFLAGS}
}
do_compile () {
export LD="${CXX}"
oe_runmake BUILDTYPE=Release
}
do_install () {
oe_runmake install DESTDIR=${D}
}
do_install_append_class-native() {
# use node from PATH instead of absolute path to sysroot
# node-v0.10.25/tools/install.py is using:
# shebang = os.path.join(node_prefix, 'bin/node')
# update_shebang(link_path, shebang)
# and node_prefix can be very long path to bindir in native sysroot and
# when it exceeds 128 character shebang limit it's stripped to incorrect path
# and npm fails to execute like in this case with 133 characters show in log.do_install:
# updating shebang of /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/work/x86_64-linux/nodejs-native/0.10.15-r0/image/home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/npm to /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/node
# /usr/bin/npm is symlink to /usr/lib/node_modules/npm/bin/npm-cli.js
# use sed on npm-cli.js because otherwise symlink is replaced with normal file and
# npm-cli.js continues to use old shebang
sed "1s^.*^#\!/usr/bin/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
# Install the native torque to provide it within sysroot for the target compilation
install -d ${D}${bindir}
install -m 0755 ${S}/out/Release/torque ${D}${bindir}/torque
}
do_install_append_class-target() {
sed "1s^.*^#\!${bindir}/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
}
PACKAGES =+ "${PN}-npm"
FILES_${PN}-npm = "${exec_prefix}/lib/node_modules ${bindir}/npm ${bindir}/npx"
RDEPENDS_${PN}-npm = "bash python-shell python-datetime python-subprocess python-textutils \
python-compiler python-misc python-multiprocessing"
PACKAGES =+ "${PN}-systemtap"
FILES_${PN}-systemtap = "${datadir}/systemtap"
BBCLASSEXTEND = "native"
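The shebang truncation issue described in the comments of do_install_append_class-native() can be sketched in plain Python; this is what the sed one-liner does (the function name and paths are illustrative):

```python
def rewrite_shebang(path, shebang="#!/usr/bin/env node"):
    """Replace the first line of a script with a short, portable shebang.
    Linux silently truncates shebang lines longer than roughly 128 bytes,
    which is why a long native-sysroot path breaks npm-cli.js."""
    with open(path, "r") as f:
        lines = f.readlines()
    if lines and lines[0].startswith("#!"):
        lines[0] = shebang + "\n"
    with open(path, "w") as f:
        f.writelines(lines)
```

Editing the file in place, as sed -i does, keeps the symlink from /usr/bin/npm intact, whereas writing a new file would replace the symlink with a regular file.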
I am able to attach gdb to the node binary; below is a snapshot. It core dumps at this point.
Thread 1 "node" hit Breakpoint 10, v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0)
at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
132 /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc: No such file or directory.
(gdb) bt
#0 v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0) at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
#1 0x000003c7b3f04134 in ?? ()
(gdb) c
Continuing.
Node.js is not supported on the ppc64 little-endian architecture. On PPC, only the big-endian platform was supported, and only up to version 7.9.
I want to replace the default vsftpd.conf file with my own file.
My bitbake recipe looks like the following:
bbexample_1.0.bb
DESCRIPTION = "Configuration and extra files for TX28"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""
S = "${WORKDIR}"
SRC_URI += " \
file://ld.so.conf \
file://nginx/nginx.conf \
file://init.d/myscript.sh"
inherit allarch
do_install () {
install -d ${D}${sysconfdir}
install -d ${D}${sysconfdir}/nginx
install -d ${D}${sysconfdir}/init.d
rm -f ${D}${sysconfdir}/ld.so.conf
install -m 0755 ${WORKDIR}/ld.so.conf ${D}${sysconfdir}
install -m 0755 ${WORKDIR}/nginx/nginx.conf ${D}${sysconfdir}/nginx/
install -m 0755 ${WORKDIR}/init.d/myscript.sh ${D}${sysconfdir}/init.d/
}
bbexample_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PV}:"
SRC_URI += " \
file://vsftpd.conf"
do_install_append () {
install -m 0755 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
But, the file could not be replaced!
What is wrong?
What you need to do is use a bbappend in your own layer.
The vsftpd recipe is located in meta-openembedded/meta-networking/recipes-daemons.
Thus you need to create a file called vsftpd_%.bbappend (the % makes it valid for every version).
This file must be located in <your-layer>/meta-networking/recipes-daemons. You also need to put your custom vsftpd.conf in a <your-layer>/meta-networking/recipes-daemons/vsftpd folder.
Its content should be:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
do_install_append(){
install -m 644 ${WORKDIR}/vsftpd.conf ${D}${sysconfdir}
}
Example from meta-openembedded here
You should also add the file you installed to your recipe:
FILES_${PN} += " file you installed"
I'm not very experienced with the cabal workflow, so I suspect that the problem I am having is a simple one, but I cannot find a solution.
I have the following dependencies in my cabal file:
$ grep tasty ume.cabal
build-depends: base >=4.7 && <4.8, HDBC >=2.4 && <2.5, parsec >=3.1 && <3.2,
filepath >=1.4 && <1.5,
HDBC-sqlite3 >=2.3 && <2.4, time >=1.5 && <1.6, sqlite >=0.5 && <0.6,
bytestring >=0.10 && <0.11, unix >=2.7 && <2.8, cryptohash >=0.11 && <0.12, process >=1.2 && <1.3,
transformers >= 0.4 && < 0.5, text, base16-bytestring,
utf8-string, tasty >= 0.11 && < 0.12,tasty-hunit >= 0.9 && < 0.10
OK, when I try to run the test suite, I get this message:
$ cabal test
Re-configuring with test suites enabled. If this fails, please run configure
manually.
Resolving dependencies...
Configuring umecore-hs-0.0.1.0...
cabal: At least the following dependencies are missing:
tasty ==0.11.*, tasty-hunit ==0.9.*
However, when I try to install the missing packages, I get this message:
$ cabal install tasty tasty-hunit
Resolving dependencies...
All the requested packages are already installed:
tasty-0.11.0.1
tasty-hunit-0.9.2
Use --reinstall if you want to reinstall anyway.
So, the dependencies are met. How come I am not able to use the libraries?
EDIT:
Here is the whole cabal file:
name: umecore-hs
version: 0.0.1.0
synopsis: An infrastructure for querying phonetic data
description: A framwork for takeing transcription files into a database structure, so that phonetically relevant queries may be made on the transcriptions.
homepage: https://github.com/dargosch/umecore-hs/wiki
license: BSD3
license-file: LICENSE
author: Fredrik Karlsson
maintainer: fredrik.k.karlsson#umu.se
-- copyright:
category: Database
build-type: Simple
-- extra-source-files:
cabal-version: >=1.10
library
exposed-modules: Phonetic.Database.QueryGenerator, Phonetic.Database.SegmentList, Phonetic.Database.UmeDatabase, Phonetic.Database.UmeQuery, Phonetic.IPA.IPAParser, Phonetic.Database.DataTypes, Phonetic.Database.UmeQueryParser, Phonetic.FileParsers.TextgridParser
ghc-options: -W -fno-warn-unused-do-bind -i/Users/frkkan96/Documents/src/umecore-hs/src
-- other-modules:
-- other-extensions:
build-depends: base >=4.8 && < 4.9, HDBC >=2.4 && <2.5, parsec >=3.1 && <3.2, filepath >=1.4 && <1.5, HDBC-sqlite3 >=2.3 && <2.4, time >=1.5 && <1.6, sqlite >=0.5 && <0.6, bytestring >=0.10 && <0.11, unix >=2.7 && <2.8, cryptohash >=0.11 && <0.12, process >=1.2 && <1.3, transformers >= 0.4 && < 0.5, text >= 1.2 && <= 1.3, base16-bytestring >= 0.1.1 && < 1.1.2, utf8-string >= 1 && < 1.1, directory >=1.2 && <1.3, regex-base >= 0.9 && < 1.0, regex-pcre >= 0.94 && < 0.95, regex-base >= 0.93 && < 0.94
hs-source-dirs: src
default-language: Haskell2010
Executable textgrid_import
Main-Is: textgrid_import.hs
Hs-Source-Dirs: src
-- Other-Modules: Phonetic.FileParsers.TextgridParser
default-language: Haskell2010
build-depends: base >=4.8 && <4.9, HDBC >=2.4 && <2.5, parsec >=3.1 && <3.2, filepath >=1.4 && <1.5, HDBC-sqlite3 >=2.3 && <2.4, time >=1.5 && <1.6, sqlite >=0.5 && <0.6, bytestring >=0.10 && <0.11, unix >=2.7 && <2.8, cryptohash >=0.11 && <0.12, process >=1.2 && <1.3, filemanip >=0.3 && < 0.4, directory >=1.2 && <1.3, optparse-applicative >= 0.12 && < 0.13
Executable umequery
Main-Is: umequery.hs
Hs-Source-Dirs: src
default-language: Haskell2010
build-depends: base >=4.8 && <4.9, HDBC >=2.4 && <2.5, parsec >=3.1 && <3.2, filepath >=1.4 && <1.5, HDBC-sqlite3 >=2.3 && <2.4, time >=1.5 && <1.6, sqlite >=0.5 && <0.6, bytestring >=0.10 && <0.11, unix >=2.7 && <2.8, cryptohash >=0.11 && <0.12, optparse-applicative >= 0.12 && < 0.13, text >= 1.2 && <= 1.3, base16-bytestring >= 0.1.1 && < 1.1.2, utf8-string >= 1 && < 1.1, transformers >= 0.4 && < 0.5, regex-pcre >= 0.94 && < 0.95, regex-base >= 0.93 && < 0.94
source-repository head
type: git
location: git://github.com/dargosch/umecore-hs.git
test-suite test
default-language:
Haskell2010
type:
exitcode-stdio-1.0
hs-source-dirs:
src, tests
main-is:
tests.hs
build-depends: base >=4.7 && <4.8, HDBC >=2.4 && <2.5, parsec >=3.1 && <3.2, filepath >=1.4 && <1.5, HDBC-sqlite3 >=2.3 && <2.4, time >=1.5 && <1.6, sqlite >=0.5 && <0.6, bytestring >=0.10 && <0.11, unix >=2.7 && <2.8, cryptohash >=0.11 && <0.12, process >=1.2 && <1.3, transformers >= 0.4 && < 0.5, text, base16-bytestring, utf8-string, tasty >= 0.11 && < 0.12,tasty-hunit >= 0.9 && < 0.10
Running cabal install --enable-tests --only-dependencies should install the dependencies of the test suite.
OpenCV 2.4.9: VideoCapture can't open the MJPG-streamer stream:
VideoCapture cap;
cap.open("http://127.0.0.1:8080/?action=stream&type=.mjpg");
if (!cap.isOpened())  // if not success, exit program
{
    cout << "Cannot open the video cam" << endl;
    return -1;
}
I can see the video using gst-launch, and I have searched a lot, like this, and tried the fifo like this, but I still can't open it.
Then I wanted to debug into OpenCV, so I compiled OpenCV with CMAKE_BUILD_TYPE=DEBUG, but my GDB just can't step into the open function. Any idea?
my makefile:
OpencvDebugLibDir=/home/ry/lib
CFLAGS = -g -I$(OpencvDebugLibDir)/include/opencv -I$(OpencvDebugLibDir)
LIBS = $(OpencvDebugLibDir)/lib/*.so
target : main.cpp
	g++ $(CFLAGS) $(LIBS) -o $@ $<
By the way, I am on openSUSE 13.1; with the same code, I can open the video on Windows 7.
Thank you.
Update
Now I can step into some functions like imshow and waitKey, but I cannot step into others like imread and namedWindow; it shows:
29 image = cv::imread(name);
(gdb) s
std::allocator<char>::allocator (this=0x7fffffffdc7f)
at /usr/src/debug/gcc-4.8.1-20130909/obj-x86_64-suse-linux/x86_64-suse-linux/libstdc++-v3/include/bits/allocator.h:113
113 allocator() throw() { }
test4.cpp:
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int argc, char** argv)
{
    Mat image;
    image = imread("LinuxLogo.jpg", 1);
    if (!image.data)
    {
        printf("No image data \n");
        return -1;
    }
    namedWindow("Display Image", CV_WINDOW_AUTOSIZE);
    imshow("Display Image", image);
    waitKey(0);
    return 0;
}
my makefile:
OpencvDebugLibDir=/home/ry/lib
CFLAGS=-g -I$(OpencvDebugLibDir)/include/opencv -I$(OpencvDebugLibDir)
LIBS=$(OpencvDebugLibDir)/lib
test4:test4.cpp
	g++ $(CFLAGS) -o $@ $< -L$(LIBS) -lopencv_highgui -lopencv_core -Wl,-rpath=/home/ry/lib/lib
run gdb:
gdb test4 -d /home/ry/learn/opencv/install/OpenCV/opencv-2.4.9/modules/core/src -d /home/ry/learn/opencv/install/OpenCV/opencv-2.4.9/modules/highgui/src
OpenCV's VideoCapture couldn't open the MJPEG stream because I didn't compile OpenCV with FFMPEG support.
In detail:
When running cmake for OpenCV:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=ON -D WITH_TBB=ON -D WITH_OPENMP=ON -D WITH_OPENCL=ON -D WITH_CUDA=ON -D WITH_GTK=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=OFF -D BUILD_EXAMPLES=OFF -D WITH_QT=ON -D WITH_OPENGL=ON -D BUILD_JPEG=ON -D BUILD_PNG=ON -D BUILD_JASPER=ON -D BUILD_ZLIB=ON -D WITH_JPEG=ON -D WITH_PNG=ON -D WITH_JASPER=ON -D WITH_ZLIB=ON -D WITH_OPENEXR=OFF ..
you get something like:
FFMPEG: NO
-- codec: NO
-- format: NO
-- util: NO
-- swscale: NO
-- gentoo-style: NO
So cmake didn't find FFMPEG. I had to install libffmpeg-devel on my machine (openSUSE 13.1); then pkg-config can find FFMPEG. You can check with this:
pkg-config --list-all | grep libavcodec
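The same prerequisite check can be scripted when verifying a build environment; a small sketch using only subprocess (no OpenCV dependency, helper name is mine):

```python
import subprocess

def pkgconfig_exists(package):
    """Return True if pkg-config can resolve the given package."""
    try:
        proc = subprocess.run(["pkg-config", "--exists", package])
    except OSError:  # pkg-config itself is not installed
        return False
    return proc.returncode == 0

print("libavcodec found:", pkgconfig_exists("libavcodec"))
```

pkg-config --exists returns exit code 0 only when the package is known, which is exactly what OpenCV's cmake probes rely on.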
Then, after running the above cmake command again, I get:
FFMPEG: YES
-- codec: YES (ver 55.69.100)
-- format: YES (ver 55.48.100)
-- util: YES (ver 52.92.100)
-- swscale: YES (ver 2.6.100)
-- gentoo-style: YES
After make, OpenCV's VideoCapture is able to open the MJPG-streamer stream.
PS: to find the reason, I compiled a debug version of OpenCV and stepped into VideoCapture's open function; in icvInitFFMPEG() in opencv-2.4.9/modules/highgui/src/cap.cpp:
#elif defined HAVE_FFMPEG
icvCreateFileCapture_FFMPEG_p = (CvCreateFileCapture_Plugin)cvCreateFileCapture_FFMPEG;
icvReleaseCapture_FFMPEG_p = (CvReleaseCapture_Plugin)cvReleaseCapture_FFMPEG;
icvGrabFrame_FFMPEG_p = (CvGrabFrame_Plugin)cvGrabFrame_FFMPEG;
icvRetrieveFrame_FFMPEG_p = (CvRetrieveFrame_Plugin)cvRetrieveFrame_FFMPEG;
icvSetCaptureProperty_FFMPEG_p = (CvSetCaptureProperty_Plugin)cvSetCaptureProperty_FFMPEG;
icvGetCaptureProperty_FFMPEG_p = (CvGetCaptureProperty_Plugin)cvGetCaptureProperty_FFMPEG;
icvCreateVideoWriter_FFMPEG_p = (CvCreateVideoWriter_Plugin)cvCreateVideoWriter_FFMPEG;
icvReleaseVideoWriter_FFMPEG_p = (CvReleaseVideoWriter_Plugin)cvReleaseVideoWriter_FFMPEG;
icvWriteFrame_FFMPEG_p = (CvWriteFrame_Plugin)cvWriteFrame_FFMPEG;
#endif
it just steps over this code, so I know it's because HAVE_FFMPEG was not defined during compilation.