Inserting lines in a multiline command using a for loop in a bash script - linux

I have a bash script which executes a multi-line command multiple times, changing some values on each iteration. Here is my code:
for (( peer=1; peer<=$nodesNum;peer++ ))
do
echo "Starting peer $peer"
nodeos -p eosio -d /eosio_data/node$peer --config-dir /eosio_data/node$peer --http-server-address=127.0.0.1:$http \
--p2p-listen-endpoint=127.0.0.1:$p2p --access-control-allow-origin=* \
-p "user$peer" --http-validate-host=false --signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3 \
--max-transaction-time=1000 --genesis-json /eosio_data/genesis.json --wasm-runtime=wabt --max-clients=2000 -e \
--plugin eosio::chain_plugin --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin \
--plugin eosio::chain_api_plugin \
--p2p-peer-address localhost:8888 \
&>eosio_data/logs/nodeos_stderr$p2p.log & \
sleep 1
http=$((http+1))
p2p=$((p2p+1))
done
I need to add a --p2p-peer-address localhost:$((9010 + $peer)) option multiple times for each peer as part of the multi-line command. I'm new to bash scripting and I couldn't find a similar example.

It's not entirely clear what you need, but I think it's something like the following. An array of --p2p-peer-address options is created, then incorporated
into the larger set of common options. Each call to nodeos then has some peer-specific options in addition to the common options.
# for example
HTTP_BASE=8080
P2P_BASE=12345
# Build the set of --p2p-peer-address options shared by all calls.
for ((peer=1; peer <= $nodesNum; peer++)); do
  peer_args+=(
    --p2p-peer-address
    localhost:$((9010+$peer))
  )
done
# These are the same for all calls
fixed_args=(
-p eosio
"--access-control-allow-origin=*"
--http-validate-host=false
--signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
--max-transaction-time=1000
--genesis-json /eosio_data/genesis.json
--wasm-runtime=wabt
--max-clients=2000
-e
--plugin eosio::chain_plugin
--plugin eosio::producer_plugin
--plugin eosio::producer_api_plugin
--plugin eosio::chain_api_plugin
--p2p-peer-address localhost:8888
"${peer_args[@]}"
)
for ((peer=1; peer<=$nodesNum; peer++)); do
# Call-specific arguments (each incorporating $peer), followed by the common arguments.
nodeos -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$((HTTP_BASE + $peer)) \
--p2p-listen-endpoint=127.0.0.1:$((P2P_BASE + $peer)) \
-p "user$peer" \
"${fixed_args[@]}" \
&> eosio_data/logs/nodeos_stderr$((P2P_BASE + $peer)).log &
sleep 1
done
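As an aside, the quoted `[@]` expansion used for these arrays is what keeps each element a single word, even when it contains spaces or glob characters, so nodeos receives exactly the options listed, in order. A small illustration (the option values are examples):

```shell
# "${args[@]}" expands to one word per array element; the quoting prevents
# globbing and word splitting, so the * survives literally.
args=(
  "--access-control-allow-origin=*"
  --p2p-peer-address "localhost:9011"
  --p2p-peer-address "localhost:9012"
)
printf '<%s>\n' "${args[@]}"
```

An unquoted `${args[*]}` would instead join and re-split the elements, which is why the quoted `[@]` form is the one to use when passing arrays as command arguments.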
To avoid having each instance of nodeos try to connect to itself, build peer_args anew on each iteration of the loop. Remove "${peer_args[@]}" from the definition of fixed_args, then adjust the main loop like so:
for ((peer=1; peer <= $nodesNum; peer++)); do
  peer_args=()
  for ((neighbor=1; neighbor <= $nodesNum; neighbor++)); do
    (( neighbor == peer )) && continue
    peer_args+=( --p2p-peer-address localhost:$((9010 + neighbor)) )
  done
nodeos -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$((HTTP_BASE + $peer)) \
--p2p-listen-endpoint=127.0.0.1:$((P2P_BASE + $peer)) \
-p "user$peer" \
"${fixed_args[@]}" \
"${peer_args[@]}" \
&> eosio_data/logs/nodeos_stderr$((P2P_BASE + $peer)).log &
sleep 1
done

I think you were very close. The expression you authored --p2p-peer-address localhost:$((9010 + $peer)) can be inserted into your nodeos call as follows:
for (( peer=1; peer<=$nodesNum;peer++ ))
do
echo "Starting peer $peer"
nodeos -p eosio -d /eosio_data/node$peer \
--config-dir /eosio_data/node$peer \
--http-server-address=127.0.0.1:$http \
--p2p-listen-endpoint=127.0.0.1:$p2p \
--access-control-allow-origin=* \
-p "user$peer" \
--http-validate-host=false \
--signature-provider=EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3 \
--max-transaction-time=1000 --genesis-json /eosio_data/genesis.json --wasm-runtime=wabt --max-clients=2000 -e \
--plugin eosio::chain_plugin --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin \
--plugin eosio::chain_api_plugin \
--p2p-peer-address localhost:$((9010 + $peer)) &>eosio_data/logs/nodeos_stderr$p2p.log &
sleep 1
http=$((http+1))
p2p=$((p2p+1))
done


How to properly decrypt PGP encrypted file in a AWS Lambda function in Python?

I'm trying to create an AWS Lambda in python that:
downloads a compressed and encrypted file from an S3 bucket
decrypts the file using python-gnupg
stores the decrypted compressed contents in another S3 bucket
This is using python 3.8 and python-gnupg package in a Lambda layer.
I've verified the PGP key is correct, that it is being loaded into the keyring just fine, and that the encrypted file is being downloaded correctly.
However, when I attempt to run gnupg.decrypt_file I get output that looks like it's been successful, but
the decrypt status shows not ok and the decrypted file does not exist.
How can I get PGP decryption working in Lambda?
Here is the relevant code extracted from the lambda function:
import gnupg
from pathlib import Path
# ...
gpg = gnupg.GPG(gnupghome='/tmp')
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
# ...
# this works as expected
status = gpg.import_keys(MY_KEY_DATA)
# ...
print('Performing Decryption of', encrypted_path)
print(encrypted_path, "exists :", Path(encrypted_path).exists())
with open(encrypted_path, 'rb') as f:
    status = gpg.decrypt_file(f, output=decrypted_path, always_trust=True)
print('decrypt ok =', status.ok)
print('decrypt status = ', status.status)
print('decrypt stderr = ', status.stderr)
print(decrypted_path, "exists :", Path(decrypted_path).exists())
Expectation was to get output similar to the following in CloudWatch:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = True
2022-11-08T10:24:44.018-05:00 decrypt status = [SOME OUTPUT FROM GPG BINARY]
2022-11-08T10:24:44.018-05:00 decrypt stderr = ""
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : True
Instead what I get is:
2022-11-08T10:24:43.939-05:00 Performing Decryption of /tmp/encrypted.zip
2022-11-08T10:24:44.018-05:00 /tmp/encrypted.zip exists : True
2022-11-08T10:24:44.018-05:00 decrypt ok = False
2022-11-08T10:24:44.018-05:00 decrypt status = good passphrase
2022-11-08T10:24:44.018-05:00 decrypt stderr = [GNUPG:] ENC_TO XXXXXX 1 0
2022-11-08T10:24:44.214-05:00 /tmp/decrypted.zip exists : False
It appears as though decryption process starts to work, but something kills it, or perhaps the gpg binary is expecting some TTY input and halts?
I've tried locally running gpg decryption using the cli and it works as expected, although I am using GnuPG version 2.3.1, not sure what version exists on Lambda.
After a lot of digging I managed to get this working.
I'm not 100% sure whether the cause is the older GnuPG binary installed on the Lambda image by default, but to be safe I decided to build a GnuPG 2.3.1 layer for Lambda, which I confirmed was working as expected in a Docker container.
I used https://github.com/skeeto/lean-static-gpg/blob/master/build.sh as a foundation for compiling the binary in
Docker, but updated it to include compression, which was required for this use case.
Here is the updated build.sh script I used, optimized for building for Lambda:
#!/bin/sh
set -e
MUSL_VERSION=1.2.2
GNUPG_VERSION=2.3.1
LIBASSUAN_VERSION=2.5.5
LIBGCRYPT_VERSION=1.9.2
LIBGPGERROR_VERSION=1.42
LIBKSBA_VERSION=1.5.1
NPTH_VERSION=1.6
PINENTRY_VERSION=1.1.1
BZIP_VERSION=1.0.6-g10
ZLIB_VERSION=1.2.12
DESTDIR=""
PREFIX="/opt"
WORK="$PWD/work"
PATH="$PWD/work/deps/bin:$PATH"
NJOBS=$(nproc)
clean() {
rm -rf "$WORK"
}
distclean() {
clean
rm -rf download
}
download() {
gnupgweb=https://gnupg.org/ftp/gcrypt
mkdir -p download
(
cd download/
xargs -n1 curl -O <<EOF
https://www.musl-libc.org/releases/musl-$MUSL_VERSION.tar.gz
$gnupgweb/gnupg/gnupg-$GNUPG_VERSION.tar.bz2
$gnupgweb/libassuan/libassuan-$LIBASSUAN_VERSION.tar.bz2
$gnupgweb/libgcrypt/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
$gnupgweb/libgpg-error/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
$gnupgweb/libksba/libksba-$LIBKSBA_VERSION.tar.bz2
$gnupgweb/npth/npth-$NPTH_VERSION.tar.bz2
$gnupgweb/pinentry/pinentry-$PINENTRY_VERSION.tar.bz2
$gnupgweb/bzip2/bzip2-$BZIP_VERSION.tar.gz
$gnupgweb/zlib/zlib-$ZLIB_VERSION.tar.gz
EOF
)
}
clean
if [ ! -d download/ ]; then
download
fi
mkdir -p "$DESTDIR$PREFIX" "$WORK/deps"
tar -C "$WORK" -xzf download/musl-$MUSL_VERSION.tar.gz
(
mkdir -p "$WORK/musl"
cd "$WORK/musl"
../musl-$MUSL_VERSION/configure \
--prefix="$WORK/deps" \
--enable-wrapper=gcc \
--syslibdir="$WORK/deps/lib"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/zlib-$ZLIB_VERSION.tar.gz
(
mkdir -p "$WORK/zlib"
cd "$WORK/zlib"
../zlib-$ZLIB_VERSION/configure \
--prefix="$WORK/deps"
make -kj$NJOBS
make install
make clean
)
tar -C "$WORK" -xzf download/bzip2-$BZIP_VERSION.tar.gz
(
export CFLAGS="-fPIC"
cd "$WORK/bzip2-$BZIP_VERSION"
make install PREFIX="$WORK/deps"
make clean
)
tar -C "$WORK" -xjf download/npth-$NPTH_VERSION.tar.bz2
(
mkdir -p "$WORK/npth"
cd "$WORK/npth"
../npth-$NPTH_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgpg-error-$LIBGPGERROR_VERSION.tar.bz2
(
mkdir -p "$WORK/libgpg-error"
cd "$WORK/libgpg-error"
../libgpg-error-$LIBGPGERROR_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-nls \
--disable-doc \
--disable-languages
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libassuan-$LIBASSUAN_VERSION.tar.bz2
(
mkdir -p "$WORK/libassuan"
cd "$WORK/libassuan"
../libassuan-$LIBASSUAN_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libgcrypt-$LIBGCRYPT_VERSION.tar.bz2
(
mkdir -p "$WORK/libgcrypt"
cd "$WORK/libgcrypt"
../libgcrypt-$LIBGCRYPT_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--disable-doc \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/libksba-$LIBKSBA_VERSION.tar.bz2
(
mkdir -p "$WORK/libksba"
cd "$WORK/libksba"
../libksba-$LIBKSBA_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
--prefix="$WORK/deps" \
--enable-shared=no \
--enable-static=yes \
--with-libgpg-error-prefix="$WORK/deps"
make -kj$NJOBS
make install
)
tar -C "$WORK" -xjf download/gnupg-$GNUPG_VERSION.tar.bz2
(
mkdir -p "$WORK/gnupg"
cd "$WORK/gnupg"
../gnupg-$GNUPG_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libgcrypt-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--with-ksba-prefix="$WORK/deps" \
--with-npth-prefix="$WORK/deps" \
--with-agent-pgm="$PREFIX/bin/gpg-agent" \
--with-pinentry-pgm="$PREFIX/bin/pinentry" \
--enable-zip \
--enable-bzip2 \
--disable-card-support \
--disable-ccid-driver \
--disable-dirmngr \
--disable-gnutls \
--disable-gpg-blowfish \
--disable-gpg-cast5 \
--disable-gpg-idea \
--disable-gpg-md5 \
--disable-gpg-rmd160 \
--disable-gpgtar \
--disable-ldap \
--disable-libdns \
--disable-nls \
--disable-ntbtls \
--disable-photo-viewers \
--disable-scdaemon \
--disable-sqlite \
--disable-wks-tools
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
rm "$DESTDIR$PREFIX/bin/gpgscm"
)
tar -C "$WORK" -xjf download/pinentry-$PINENTRY_VERSION.tar.bz2
(
mkdir -p "$WORK/pinentry"
cd "$WORK/pinentry"
../pinentry-$PINENTRY_VERSION/configure \
CC="$WORK/deps/bin/musl-gcc" \
LDFLAGS="-static -s" \
--prefix="$PREFIX" \
--with-libgpg-error-prefix="$WORK/deps" \
--with-libassuan-prefix="$WORK/deps" \
--disable-ncurses \
--disable-libsecret \
--enable-pinentry-tty \
--disable-pinentry-curses \
--disable-pinentry-emacs \
--disable-inside-emacs \
--disable-pinentry-gtk2 \
--disable-pinentry-gnome3 \
--disable-pinentry-qt \
--disable-pinentry-tqt \
--disable-pinentry-fltk
make -kj$NJOBS
make install DESTDIR="$DESTDIR"
)
rm -rf "$DESTDIR$PREFIX/sbin"
rm -rf "$DESTDIR$PREFIX/share/doc"
rm -rf "$DESTDIR$PREFIX/share/info"
# cleanup
distclean
Below is the Dockerfile used to build the layer:
FROM public.ecr.aws/lambda/python:3.8
# the output volume to extract the build contents
VOLUME ["/opt/bin"]
RUN yum -y groupinstall 'Development Tools'
RUN yum -y install tar gzip zlib bzip2 file hostname
WORKDIR /opt
# copy the build script
COPY static-gnupg-build.sh .
# run the build script
RUN bash ./static-gnupg-build.sh
# when run output the version
ENTRYPOINT [ "/opt/bin/gpg", "--version" ]
Once the code was compiled in the image, I copied it to my local directory, zipped it, and published the layer:
docker cp MY_DOCKER_ID:/opt/bin ./gnupg
cd ./gnupg && zip -r gnupg-layer.zip bin
To publish the layer:
aws lambda publish-layer-version \
--layer-name gnupg \
--zip-file fileb://gnupg-layer.zip \
--compatible-architectures python3.8
I decided not to use the python-gnupg package, to have more control over the exact GnuPG binary flags, so I added my own wrapper function around the binary:
import subprocess

def gpg_run(flags: list, subprocess_kwargs: dict = None):
    gpg_bin_args = [
        '/opt/bin/gpg',
        '--no-tty',
        '--yes',             # don't prompt for input
        '--always-trust',    # always trust our keyring
        '--status-fd', '1',  # return status to stdout
        '--homedir', '/tmp'
    ]
    gpg_bin_args.extend(flags)
    print('running cmd', ' '.join(gpg_bin_args))
    result = subprocess.run(gpg_bin_args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            **(subprocess_kwargs or {}))
    return result.returncode, \
        result.stdout.decode('utf-8').split('\n'), \
        result.stderr.decode('utf-8').split('\n')
And then added an import key and decode function:
def gpg_import_keys(input):
    return gpg_run(flags=['--import'], subprocess_kwargs={'input': input})

def gpg_decrypt(input, output):
    return gpg_run(flags=['--output', output, '--decrypt', input])
And updated the relevant Lambda code with:
# ...
encrypted_path = '/tmp/encrypted.zip'
decrypted_path = '/tmp/decrypted.zip'
#...
# TODO: import the keys only needs to run once per instance
# ideally would be moved to a singleton
code, stdout, stderr = gpg_import_keys(bytes(MY_KEY_DATA, 'utf-8'))
if code > 0:
    raise Exception(f'gpg_import_keys failed with code {code}: {stdout} {stderr}')
print('import_keys stdout =', stdout)
print('import_keys stderr =', stderr)
# Perform decryption.
print('Performing Decryption of', encrypted_path)
code, stdout, stderr = gpg_decrypt(encrypted_path, output=decrypted_path)
if code > 0:
    raise Exception(f'gpg_decrypt failed with code {code}: {stderr}')
print('decrypt stdout =', stdout)
print('decrypt stderr =', stderr)
print('Status: OK')
print(decrypted_path, "exists :", Path(decrypted_path).exists())
And now the CloudWatch log output is as expected and I've confirmed the decrypted file is correct!
...
2022-11-17T09:25:22.732-06:00 running cmd:['/opt/bin/gpg', '--no-tty', '--batch', '--yes', '--always-trust', '--status-fd', '1', '--homedir', '/tmp', '--import']
2022-11-17T09:25:22.769-06:00 import_keys ok = True
2022-11-17T09:25:22.769-06:00 import_keys stdout = ['[GNUPG:] IMPORT_OK 0 XXX', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] IMPORT_OK 16 XXX', '[GNUPG:] IMPORT_RES 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0', '']
2022-11-17T09:25:22.769-06:00 import_keys stderr = ['']
2022-11-17T09:25:22.769-06:00 Performing Decryption of /tmp/test.txt.gpg
2022-11-17T09:25:22.769-06:00 running cmd: /opt/bin/gpg --no-tty --yes --always-trust --status-fd 1 --homedir /tmp --output /tmp/decrypted.zip --decrypt /tmp/encrypted.zip
2022-11-17T09:25:22.850-06:00 decrypt stdout = ['[GNUPG:] ENC_TO XXX 1 0', '[GNUPG:] KEY_CONSIDERED XXX 0', '[GNUPG:] DECRYPTION_KEY XXX -', '[GNUPG:] BEGIN_DECRYPTION', '[GNUPG:] DECRYPTION_INFO 0 9 2', '[GNUPG:] PLAINTEXT 62 1667796554 encrypted.zip', '[GNUPG:] PLAINTEXT_LENGTH 428', '[GNUPG:] DECRYPTION_OKAY', '[GNUPG:] GOODMDC', '[GNUPG:] END_DECRYPTION', '']
2022-11-17T09:25:22.850-06:00 decrypt stderr = ['gpg: encrypted with rsa2048 key, ID XXX, created 2022-11-07', ' "XXX"', '']
2022-11-17T09:25:22.850-06:00 /tmp/decrypted.zip exists: True
...

kubernetes config map - syntax error: unterminated quoted string

I am getting the error in the title when trying to mount a shell script as a ConfigMap.
I am not sure what the issue is, because the script works fine when it is not in a ConfigMap.
The error points to line 58, which doesn't even exist in the script.
Any help will be really appreciated.
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.metadata.name }}-micro
data:
micro-integrator.sh: |
#!/bin/sh
# micro-integrator.sh
while [ "$status" = "$START_EXIT_STATUS" ]
do
$JAVACMD \
-Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
$JVM_MEM_OPTS \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
$JAVA_OPTS \
-Dcom.sun.management.jmxremote \
-classpath "$CARBON_CLASSPATH" \
-Djava.io.tmpdir="$CARBON_HOME/tmp" \
-Dcatalina.base="$CARBON_HOME/wso2/lib/tomcat" \
-Dwso2.server.standalone=true \
-Dcarbon.registry.root=/ \
-Djava.command="$JAVACMD" \
-Dqpid.conf="/conf/advanced/" \
$JAVA_VER_BASED_OPTS \
-Dcarbon.home="$CARBON_HOME" \
-Dlogger.server.name="micro-integrator" \
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
-Dcarbon.config.dir.path="$CARBON_HOME/conf" \
-Dcarbon.repository.dir.path="$CARBON_HOME/repository" \
-Dcarbon.components.dir.path="$CARBON_HOME/wso2/components" \
-Dcarbon.dropins.dir.path="$CARBON_HOME/dropins" \
-Dcarbon.external.lib.dir.path="$CARBON_HOME/lib" \
-Dcarbon.patches.dir.path="$CARBON_HOME/patches" \
-Dcarbon.internal.lib.dir.path="$CARBON_HOME/wso2/lib" \
-Dcom.atomikos.icatch.hide_init_file_path=true \
-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false \
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true \
-Dcom.sun.jndi.ldap.connect.pool.authentication=simple \
-Dcom.sun.jndi.ldap.connect.pool.timeout=3000 \
-Dorg.terracotta.quartz.skipUpdateCheck=true \
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=UTF8 \
-Djava.net.preferIPv4Stack=true \
-DNonRegistryMode=true \
-DNonUserCoreMode=true \
-Dcom.ibm.cacheLocalHost=true \
-Dcarbon.use.registry.repo=false \
-DworkerNode=false \
-Dorg.apache.cxf.io.CachedOutputStream.Threshold=104857600 \
-DavoidConfigHashRead=true \
-Dproperties.file.path=default \
-DenableReadinessProbe=true \
-DenableManagementApi=true \
$NODE_PARAMS \
-Dorg.apache.activemq.SERIALIZABLE_PACKAGES="*" \
org.wso2.micro.integrator.bootstrap.Bootstrap $*
status="$?"
done
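One way to track this down: the "line 58" in the error refers to the rendered script as the container's shell sees it (templating or the `|` block can shift line numbers), not to the YAML file. You can reproduce the same check locally with `sh -n`, which parses a script without executing it. A minimal sketch with a deliberately broken sample script:

```shell
# Write a sample script containing an unterminated quote, then ask sh to
# parse it without running it (-n). The line number it reports is counted
# the same way as in the ConfigMap error message.
cat > /tmp/sample.sh <<'EOF'
#!/bin/sh
echo "this quote never closes
EOF
if ! sh -n /tmp/sample.sh 2>/tmp/sample.err; then
  cat /tmp/sample.err
fi
```

Running the rendered ConfigMap script through the same `sh -n` check pinpoints the offending line before the pod ever starts.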

How to create a user to connect to the database

ERROR :
[FATAL] [DBT-05509] Failed to connect to the specified database (cdb21).
CAUSE: OS Authentication might be disabled for this database (cdb21).
ACTION: Specify a valid sysdba user name and password to connect to the database.
First step:
./runInstaller -silent -responseFile /scratch/app/user/product/21.0.0/dbhome_1/install/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
UNIX_GROUP_NAME=oinstall \
ORACLE_BASE=/scratch/app/user \
INVENTORY_LOCATION=/scratch/app/oraInventory \
SELECTED_LANGUAGES=en \
oracle.install.db.InstallEdition=EE \
oracle.install.db.isCustomInstall=false \
oracle.install.db.OSDBA_GROUP=oinstall \
oracle.install.db.OSBACKUPDBA_GROUP=oinstall \
oracle.install.db.OSDGDBA_GROUP=oinstall \
oracle.install.db.OSKMDBA_GROUP=oinstall \
oracle.install.db.OSRACDBA_GROUP=oinstall \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
Second step:
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb21 \
-sid cdb21 \
-responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword Welcome1 \
-systemPassword Welcome1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb21 \
-pdbAdminPassword Welcome1 \
-databaseType MULTIPURPOSE \
-memoryMgmtType auto_sga \
-totalMemory 4096 \
-storageType FS \
-datafileDestination /scratch/oradata/ \
-emConfiguration NONE \
-ignorePreReqs
Start the listener using:
lsnrctl start
Then start the database with:
startup
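The DBT-05509 error means dbca could not authenticate as SYSDBA. Once the instance is up, you can usually connect with OS authentication (as a member of the oinstall/dba group, via `sqlplus / as sysdba`) and create an application user inside the PDB. A hypothetical sketch; the user name and password below are placeholders, not part of the original steps:

```sql
-- Connect first with OS authentication: sqlplus / as sysdba
-- Then switch into the PDB created by dbca and add a user.
ALTER SESSION SET CONTAINER = pdb21;
CREATE USER appuser IDENTIFIED BY Welcome1;
GRANT CREATE SESSION TO appuser;
```

With CREATE SESSION granted, `sqlplus appuser/Welcome1@//localhost:1521/pdb21` should then be able to connect through the listener started above.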

Catalina-Opts with string parameter is not working

On my Linux machine I want to configure Tomcat 8 with the following CATALINA_OPTS:
export CATALINA_OPTS="$CATALINA_OPTS -Dsina.elasticsearch.cluster.nodes=sina-1:9300 -Dsina.elasticsearch.cluster.name=sinasuite-dev -Dsina.rabbitmq.host=sina-1 -Dsina.rabbitmq.port=5672 -Dsina.rabbitmq.user=guest -Dsina.rabbitmq.password=guest -Dsina.images.directory=/home/dev/tmp -Dsina.forms.directory=/home/dev/tmp -Dsina.scheduler.rate=30000 -Dsina.alfresco.url=http://ares:8181/alfresco/api/-default-/public/cmis/versions/1.1/browser -Dsina.alfresco.site=/Sitios/sina-suite-dev/documentLibrary -Dsina.alfresco.repository=-default- -Dsina.alfresco.user=admin -Dsina.alfresco.password=admin -Dsina.cas.server.host=sina-1.alfatecsistemas.es -Dsina.cas.server.port=9444 -Dsina.cas.service.host=sina-1 -Dsina.cas.service.port=9443 -Dsina.cas.service.appname=sina-suite -Dsina.forms.pdf.files.directory=/home/dev/tmp -Dsina.fileupload.size=250000000 -Dsina.farhos.url.login=https://www.detots.com/farhos/token?usuario=%s&clave=%s -Dsina.farhos.url.component=https://www.detots.com/farhos/5/?vista=%s&paciente=%s&episodio=%s&token=%s -Dsina.nurse.profile.id=1 -Dsina.farhos.url.logout=https://www.detots.com/farhos/token?%s"
But on trying to start tomcat I'm getting the error:
/home/dev/tomcat/bin/catalina.sh: line 434: -Dsina.farhos.url.logout=https://www.detots.com/farhos/token?%s: No such file or directory
/home/dev/tomcat/bin/catalina.sh: line 434: -Dsina.nurse.profile.id=1: command not found
/home/dev/tomcat/bin/catalina.sh: line 434: -Dsina.farhos.url.component=https://www.detots.com/farhos/5/?vista=%s: No such file or directory
/home/dev/tomcat/bin/catalina.sh: line 434: -Dsina.nurse.profile.id=1: command not found
Please help
Try surrounding the URLs (which contain characters like ? and &) with single quotes:
export CATALINA_OPTS="$CATALINA_OPTS \
-Dsina.elasticsearch.cluster.nodes=sina-1:9300 \
-Dsina.elasticsearch.cluster.name=sinasuite-dev \
-Dsina.rabbitmq.host=sina-1 \
-Dsina.rabbitmq.port=5672 \
-Dsina.rabbitmq.user=guest \
-Dsina.rabbitmq.password=guest \
-Dsina.images.directory=/home/dev/tmp \
-Dsina.forms.directory=/home/dev/tmp \
-Dsina.scheduler.rate=30000 \
-Dsina.alfresco.url='http://ares:8181/alfresco/api/-default-/public/cmis/versions/1.1/browser' \
-Dsina.alfresco.site=/Sitios/sina-suite-dev/documentLibrary \
-Dsina.alfresco.repository=-default- \
-Dsina.alfresco.user=admin \
-Dsina.alfresco.password=admin \
-Dsina.cas.server.host=sina-1.alfatecsistemas.es \
-Dsina.cas.server.port=9444 \
-Dsina.cas.service.host=sina-1 \
-Dsina.cas.service.port=9443 \
-Dsina.cas.service.appname=sina-suite \
-Dsina.forms.pdf.files.directory=/home/dev/tmp \
-Dsina.fileupload.size=250000000 \
-Dsina.farhos.url.login='https://www.detots.com/farhos/token?usuario=%s&clave=%s' \
-Dsina.farhos.url.component='https://www.detots.com/farhos/5/?vista=%s&paciente=%s&episodio=%s&token=%s' \
-Dsina.nurse.profile.id=1 \
-Dsina.farhos.url.logout='https://www.detots.com/farhos/token?%s'"
FYI, it is recommended to set CATALINA_OPTS in bin/setenv.sh rather than editing catalina.sh directly.
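A minimal bin/setenv.sh sketch, trimmed to two of the properties from the question; catalina.sh sources setenv.sh automatically if it exists. The file is written to /tmp here only to keep the example self-contained — in a real install it belongs in CATALINA_BASE/bin:

```shell
# catalina.sh sources $CATALINA_BASE/bin/setenv.sh when present, so custom
# JVM options can live there and survive Tomcat upgrades. /tmp stands in
# for the real bin/ directory in this sketch.
cat > /tmp/setenv.sh <<'EOF'
#!/bin/sh
CATALINA_OPTS="$CATALINA_OPTS \
  -Dsina.rabbitmq.host=sina-1 \
  -Dsina.farhos.url.logout='https://www.detots.com/farhos/token?%s'"
export CATALINA_OPTS
EOF
. /tmp/setenv.sh
echo "$CATALINA_OPTS"
```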

Error: undefined reference to NetCDF functions

Background
I'm working on compiling MCIP, which stands for Meteorology-Chemistry Interface Processor, on a CentOS 5.9 system.
I use gcc version 4.9 for the build.
Setting
Here is some configuration setting in ~/.bashrc:
export DIR=/disk2/hyf/lib ## All libs are installed under this path
# NetCDF setting
export PATH="$DIR/netcdf/bin:$PATH"
export NETCDF="$DIR/netcdf"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF/lib
# IOAPI
export BIN=Linux2_x86_64gfort
export BASEDIR=/disk2/hyf/backup/software/ioapi
export PATH=$DIR/ioapi-3.1/bin:$PATH
export LD_LIBRARY_PATH=$DIR/ioapi-3.1/lib:$LD_LIBRARY_PATH
# Set M3LIB for the model
export M3LIB=/disk2/hyf/cmaq/CMAQv5.1/lib
I also create soft links for the CMAQ model like this:
ln -s $NETCDF $M3LIB/x86_64/gcc/netcdf
ln -s $IOAPI $M3LIB/x86_64/gcc/ioapi
Makefile
Here is some subroutine in the Makefile:
# Requirements: set M3LIB before running this script
.SUFFIXES:
.SUFFIXES: .o .f90 .F90
MODEL = mcip.exe
#...gfortran
FC = gfortran
NETCDF = $(M3LIB)/netcdf
IOAPI_ROOT = $(M3LIB)/ioapi
FFLAGS = -O3 -gdwarf-2 -gstrict-dwarf -I$(NETCDF)/include -I$(IOAPI_ROOT)/include \
-ffpe-trap='invalid','zero','overflow','underflow'
##FFLAGS = -g -O0 \
##-ffpe-trap='invalid','zero','overflow','underflow' \
##-I$(NETCDF)/include -I$(IOAPI_ROOT)/include
LIBS = -L$(IOAPI_ROOT)/lib -lioapi \
-L$(NETCDF)/lib -lnetcdf -lgomp
DEFS =
MODULES =\
const_mod.o \
const_pbl_mod.o \
coord_mod.o \
date_time_mod.o \
date_pack_mod.o \
files_mod.o \
groutcom_mod.o \
luvars_mod.o \
mcipparm_mod.o \
mcoutcom_mod.o \
mdoutcom_mod.o \
metinfo_mod.o \
metvars_mod.o \
vgrd_mod.o \
wrf_netcdf_mod.o \
xvars_mod.o \
sat2mcip_mod.o
OBJS =\
mcip.o \
alloc_ctm.o \
alloc_met.o \
alloc_x.o \
bcldprc_ak.o \
blddesc.o \
chkwpshdr.o \
chkwrfhdr.o \
close_files.o \
collapx.o \
comheader.o \
cori.o \
dealloc_ctm.o \
dealloc_met.o \
dealloc_x.o \
detangle_soil_px.o \
e_aerk.o \
dynflds.o \
getgist.o \
getluse.o \
getmet.o \
getpblht.o \
getsdt.o \
getversion.o \
graceful_stop.o \
gridout.o \
init_io.o \
init_met.o \
init_x.o \
julian.o \
layht.o \
ll2xy_lam.o \
.......
ERROR
The output after running make looks like this:
make[1]: Entering directory `/disk2/hyf/cmaq/CMAQv5.1/scripts/mcip/src'
gfortran -g -O0 -gdwarf-2 -gstrict-dwarf \
-I/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/netcdf/include \
-I/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/ioapi/include -c const_mod.f90
......
chkwpshdr.o: In function `chkwpshdr_':
/disk2/hyf/cmaq/CMAQv5.1/scripts/mcip/src/chkwpshdr.f90:109: \
undefined reference to `__netcdf_MOD_nf90_get_att_one_fourbyteint'
(many more lines show the same 'undefined reference' error)
/disk2/hyf/cmaq/CMAQv5.1/lib/x86_64/gcc/ioapi/lib/libioapi.a(open3.o): In
function `open3_':
open3.F:(.text+0x1531): undefined reference to `ncclos_'
.........
I think the compiler may have some conflict between the .F and .f90 files in some cases, but I don't know why. gcc has already been installed successfully, with $PATH defined.
I faced the same problem, and I managed to solve it by adding -lnetcdff and -lnetcdf (in this order) to the LIBS option in the MCIP Makefile. Make sure the NETCDF variable points to the correct path where NetCDF is installed on your system.
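For reference, the corrected LIBS in the MCIP Makefile would look something like the fragment below, assuming a NetCDF install that splits the Fortran and C libraries into libnetcdff and libnetcdf (on such installs, `nf-config --flibs` prints the exact flags to use):

```makefile
# Link the Fortran interface (-lnetcdff) before the C library (-lnetcdf);
# the linker resolves symbols left to right, so the order matters.
LIBS = -L$(IOAPI_ROOT)/lib -lioapi \
       -L$(NETCDF)/lib -lnetcdff -lnetcdf -lgomp
```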
