Why is a Linux command working inside the container but not in entrypoint.sh? - linux

I am using Ubuntu 16.04 as a VM.
While creating the container, some commands in my entrypoint.sh are not working or behaving as expected, but the same commands work when I run them manually inside the container. To be precise, below are my simple Linux cp command, which recursively copies from source to destination, and an unzip command.
In my entrypoint.sh I have these three commands:
cd /tmp/localization/Tpae7610
unzip \*.zip
cp -r /tmp/localization/Tpae7610/* /home/db2inst1/maximo/
The last two commands do not work when the container starts. By "not working" I mean they give no error, but the source contents are not copied to the destination and the .zip files are not unzipped.
NOTE: The same commands work as expected when I run them manually inside the container.
entrypoint.sh
#!/bin/bash
sysctl -w kernel.shmmni=1024
sysctl -w kernel.shmall=2097152
sysctl -w kernel.msgmnb=65536
sysctl -w kernel.msgmax=65536
sysctl -w kernel.msgmni=4096
sysctl -w kernel.shmmax=4294967296
#set -e
#
# Initialize DB2 instance in a Docker container
#
# # Authors:
# *
#
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
mkdir -p /db2fs
chown db2inst1:db2iadm1 /db2fs
chmod -R 755 /db2fs
#cp /tmp/maxinst.sh /home/db2inst1/maximo/Maximo-7.6-CD/tools/maximo/maxinst.sh
if [ -z "$DB2INST1_PASSWORD" ]; then
echo ""
echo >&2 'error: DB2INST1_PASSWORD not set'
echo >&2 'Did you forget to add -e DB2INST1_PASSWORD=... ?'
exit 1
else
echo "db2inst1:$DB2INST1_PASSWORD" | chpasswd
fi
if [ -z "$LICENSE" ];then
echo ""
echo >&2 'error: LICENSE not set'
echo >&2 "Did you forget to add '-e LICENSE=accept' ?"
exit 1
fi
if [ "${LICENSE}" != "accept" ];then
echo ""
echo >&2 "error: LICENSE not set to 'accept'"
echo >&2 "Please set '-e LICENSE=accept' to accept License before use the DB2 software contained in this image."
exit 1
fi
if [[ $1 = "db2start" ]]; then
echo "Performing botc database start"
if [ ! -d /db2fs/db2inst1 ]; then
echo "Database location does not exist, creating database"
chown -R db2inst1:db2iadm1 /maxdb7605
chown -R db2inst1:db2iadm1 /home/db2inst1/maximo
find /maxdb7605 -type d -exec chmod 755 \{\} \;
find /maxdb7605 -type f -exec chmod 644 \{\} \;
cd /home/db2inst1/maximo
#unzip -o tools.zip && rm tools.zip
#unzip -o applications.zip && rm applications.zip
set -x
cd /home/db2inst1/maximo/tools
if [ ! -f java ]; then
ln -s /home/db2inst1/sqllib/java java
fi
su - db2inst1 <<EOH
db2start
db2 create database maxdb76 on /db2fs dbpath on /db2fs using codeset UTF-8 territory us pagesize 32 K
db2 connect to maxdb76
db2 create bufferpool MAXBUFFPOOL pagesize 32K
db2 grant connect on database to user maximo
db2 GRANT DBADM,SECADM, CREATETAB,BINDADD,CONNECT,CREATE_NOT_FENCED_ROUTINE,IMPLICIT_SCHEMA,LOAD,CREATE_EXTERNAL_ROUTINE,QUIESCE_CONNECT ON DATABASE TO USER maximo
db2 GRANT USAGE on WORKLOAD SYSDEFAULTUSERWORKLOAD TO USER maximo;
db2 create schema maximo authorization maximo
db2 create regular tablespace MAXDATA pagesize 32k managed by automatic storage extentsize 16 overhead 12.67 prefetchsize 16 transferrate 0.18 bufferpool MAXBUFFPOOL dropped table recovery on NO FILE SYSTEM CACHING
db2 grant use of tablespace MAXDATA to user maximo
db2 update db cfg using LOGFILSIZ 5000
db2 update db cfg using LOGPRIMARY 50
db2 update db cfg using LOGSECOND 50
db2 connect reset
db2stop force
db2start
cd /maxdb7605
db2set DB2CODEPAGE=1208
db2 connect to maxdb76
db2 -t -f /maxdb7605/dbschema.sql
db2 -t -f /maxdb7605/dev_grants.sql
db2move maxdb76 LOAD -u maximo -p maximo -l lobs
db2 connect to maxdb76 user maximo using maximo
db2 -x "select 'values nextval for MAXIMO.',sequencename,';' from maxsequence" > /maxdb7605/sequence_update.sql
db2 -t -f /maxdb7605/sequence_update.sql
db2 connect reset
EOH
rm -rf /maxdb7605
set +x
nohup /usr/sbin/sshd -D 2>&1 > /dev/null &
cd /home/db2inst1/maximo/tools/maximo
chmod +x TDToolkit.sh
chmod +x updatedb.sh
dos2unix TDToolkit.sh
dos2unix updatedb.sh
./updatedb.sh
export JAVA_HOME=/opt/ibm/java-x86_64-70
export JRE_HOME=/opt/ibm/java-x86_64-70/jre
export PATH=${JAVA_HOME}/bin:$PATH
cd /
cd /tmp/localization/Tpae7610
unzip \*.zip
cp -a /tmp/localization/Tpae7610/* /home/db2inst1/maximo/
cd /tmp/localization/Lightning7604
unzip \*.zip
cp -a /tmp/localization/Lightning7604/* /home/db2inst1/maximo/
cd /tmp/localization/BOTC7610
unzip \*.zip
cp -a /tmp/localization/BOTC7610/* /home/db2inst1/maximo/
cd /tmp
#remove localization folder from tmp folder
rm -rf localization
cd /home/db2inst1/maximo/tools/maximo
#./TDToolkit.sh -addlangPT -useexpander
#./TDToolkit.sh -addlangJA -useexpander
#./TDToolkit.sh -addlangDE -useexpander
#./TDToolkit.sh -addlangIT -useexpander
#./TDToolkit.sh -addlangFR -useexpander
#./TDToolkit.sh -addlangES -useexpander
#./TDToolkit.sh -pmpupdatenxtgenui -useexpander
# ./TDToolkit.sh -pmpupdatez_botc -useexpander
chmod -R 777 /home/db2inst1/maximo/tools/maximo/log
#healthcheck looks for this file to indicate the container is initialized
touch /tmp/container_started
while true; do sleep 1000; done
exec "/bin/bash"
#statements
else
su - db2inst1 <<EOH
db2start
db2 catalog db maxdb76 on /db2fs
db2 terminate
db2 connect to maxdb76
EOH
touch /tmp/container_started
while true; do sleep 1000; done
exec "/bin/bash"
fi
sleep 10
fi

Perhaps the files under /tmp/localization/Tpae7610/ do not have the expected permissions at container start.
Try cp -v (verbose will show each file being copied).
Comment out the rm -rf localization line in the script, then debug the script.
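As a minimal debugging sketch (untested, assuming the same paths as in the question's entrypoint.sh), you could make the unzip and copy steps fail loudly instead of silently:
#!/bin/bash
# Debugging sketch: make the localization copy step fail loudly instead of silently.
# Paths are taken from the question's entrypoint.sh.
set -x                                   # trace each command as it runs

SRC=/tmp/localization/Tpae7610
DST=/home/db2inst1/maximo

if [ ! -d "$SRC" ]; then
    echo "ERROR: $SRC does not exist inside the container" >&2
    exit 1
fi
ls -la "$SRC"                            # confirm the .zip files are actually present at start

cd "$SRC" || exit 1
unzip -o '*.zip' || echo "WARNING: unzip exited with status $?" >&2

mkdir -p "$DST"                          # make sure the destination exists before copying
cp -rv "$SRC"/. "$DST"/ || echo "WARNING: cp exited with status $?" >&2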

Related

Limited a user by creating rbash and exporting PATH in .bashrc, but /bin/ls still works

I tried limiting the ls command for a specific user. It works, but when I execute /bin/ls it still runs successfully. How can I restrict that as well?
useradd -m $username -s /bin/rbash
echo "$username:$password" | chpasswd
mkdir /home/$username/bin
chmod 755 /home/$username/bin
echo "PATH=$HOME/bin" >> /home/$username/.bashrc
echo "export PATH" >> /home/$username/.bashrc
ln -s /bin/ls /home/$username/bin/

IBM Db2: How to automatically activate databases after (re)boot?

I have several Db2 databases that I want to automatically activate after a system reboot. Restarting the Db2 service after a reboot is not a problem, but activating the databases requires access to the instance profile.
Service start/stop is controlled by systemd / systemctl. Including some user-controlled setup scripts in those unit scripts doesn't seem like a good idea. I briefly looked into enable-linger for the Db2 instance user, or into using EnvironmentFile to set up the instance profile.
How do you activate all or a set of databases after reboot? Do you use user/group/EnvironmentFile with systemd? Do you enable linger or do you have any other method?
Here is a simple script which must be run as the Db2 instance owner.
It assumes that the Db2 instance is auto-started. If that is not the case, just comment out db2gcf -s and uncomment db2gcf -u.
The script waits a configured number of seconds for the instance to start, then activates all local databases found in the Db2 instance's system database directory.
The script may be scheduled to run at OS startup via the Db2 instance owner's crontab entry, as shown in the script header.
A log file (see the ${LOG} variable) with the command history is created in the Db2 instance owner's home directory.
#!/bin/sh
#
# Function: Activates all local DB2 databases
# Crontab entry:
# @reboot /home/scripts/db2activate.sh >/dev/null 2>&1
#
TIMEOUT=300
VERBOSE=${1:-"noverbose"}
export LC_ALL=C
if [ ! -x ~/sqllib/db2profile ]; then
echo "Must be run by a DB2 instance onwer" >&2
exit 1
fi
[ -z ${DB2INSTANCE} ] && . ~/sqllib/db2profile
if [ "${VERBOSE}" != "verbose" ]; then
LOG=~/.$(basename $0).log
exec 1>>${LOG}
exec 2>>${LOG}
fi
set -x
printf "\n*** %s ***\n" $(date +"%F-%H.%M.%S")
# Wait for the instance startup
# (or even start it with 'db2gcf -u' instead of checking status: 'db2gcf -s')
TIME1=${SECONDS}
while [ $((SECONDS-TIME1)) -le ${TIMEOUT} ]; do
db2gcf -s
# db2gcf -u
rc=$?
[ ${rc} -eq 0 ] && break
sleep 5
done
if [ ${rc} -ne 0 ]; then
echo "Instance startup timeout of ${TIMEOUT} sec reached" >&2
exit 2
fi
for dbname in $(db2 list db directory | awk -v RS='' '/= Indirect/' | grep '^ Database alias' | sort -u | cut -d'=' -f2); do
db2 activate db ${dbname}
done
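For completeness, a minimal sketch of how the @reboot entry from the script header could be installed (the instance owner name db2inst1 and the script path are placeholders taken over from the examples above):
# As the Db2 instance owner, append the @reboot entry to the existing crontab
su - db2inst1
(crontab -l 2>/dev/null; echo '@reboot /home/scripts/db2activate.sh >/dev/null 2>&1') | crontab -
crontab -l    # verify the entry is present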
Another simple approach, starting as the Db2 instance owner: enable auto-start for the instance.
su - <INSTANCE>
db2iauto -on <INSTANCE>
Then exit back out:
exit
Next, as the root user, set up the Db2 fault monitor:
./<INSTANCE>/sqllib/bin/db2fmcu -d;
cd /<INSTANCE>/sqllib/bin/
./db2fmcu -u -p /opt/ibm/db2/<VERSION DB2>/bin/db2fmcd
./db2fm -i <INSTANCE> -U
./db2fm -i <INSTANCE> -u
./db2fm -i <INSTANCE> -f on
ps -ef | grep db2fm | grep <INSTANCE>
Done.

Creating a docker Base Image

I have a private Linux distribution (based on RedHat 7).
I have an ISO file which holds the installation of that distribution and which can only be used to install the OS on a clean system.
I have some programs I would like to run as images on Docker, each program in a different image.
Each program can only run in my Linux environment, so I am looking for a way to create the appropriate images so they can be run under Docker.
I tried following Solomon's instructions here:
mkdir rootfs
mount -o loop /path/to/iso rootfs
tar -C rootfs -c . | docker import - rich/mybase
But I don't know how to proceed. I can't run any command since the machine isn't running yet (no /bin/bash, etc.).
How can I open the installation shell?
Is there a better way to run programs via Docker on a private Linux distribution?
(Just to be clear, the programs can run only on that specific OS, and that OS can only be installed on a clean machine. I'm not sure if I need a base image, but I'd like to run these programs with Docker, and that is possible only on this OS.)
I ran into many questions like mine (like this one) but I couldn't find an answer that helped me.
Assumptions:
Server A: where the ISO will be mounted
Server R: your private repository
Server N: where the container will be run
All servers can connect to Server R.
How to:
Build a base image as mentioned in your OP (named base/myimage).
Push the image to your private repository: https://docs.docker.com/registry/deploying/
Create application images from your base base/myimage, then push them to your private repo (see the sketch below).
From Server N, run the application image:
docker run application/myapp
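A rough sketch of that push/pull flow (the registry address serverR:5000 is a placeholder, not from the original answer):
# On Server A: tag the base image for the registry on Server R, then push it
docker tag base/myimage serverR:5000/base/myimage
docker push serverR:5000/base/myimage

# Build an application image on top of it (Dockerfile: FROM serverR:5000/base/myimage) and push
docker build -t serverR:5000/application/myapp .
docker push serverR:5000/application/myapp

# On Server N: pull and run the application image from the registry
docker run serverR:5000/application/myapp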
This script is from the official Docker contrib repo. It's used to create CentOS images from scratch. It should work with any Redhat/Centos based system and gives you plenty of control over the various steps. Anything beyond that you can then modify post-base-image through a Dockerfile.
The file is here
#!/usr/bin/env bash
#
# Create a base CentOS Docker image.
#
# This script is useful on systems with yum installed (e.g., building
# a CentOS image on CentOS). See contrib/mkimage-rinse.sh for a way
# to build CentOS images on other systems.
usage() {
cat <<EOOPTS
$(basename $0) [OPTIONS] <name>
OPTIONS:
-p "<packages>" The list of packages to install in the container.
The default is blank.
-g "<groups>" The groups of packages to install in the container.
The default is "Core".
-y <yumconf> The path to the yum config to install packages from. The
default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
EOOPTS
exit 1
}
# option defaults
yum_config=/etc/yum.conf
if [ -f /etc/dnf/dnf.conf ] && command -v dnf &> /dev/null; then
yum_config=/etc/dnf/dnf.conf
alias yum=dnf
fi
install_groups="Core"
while getopts ":y:p:g:h" opt; do
case $opt in
y)
yum_config=$OPTARG
;;
h)
usage
;;
p)
install_packages="$OPTARG"
;;
g)
install_groups="$OPTARG"
;;
\?)
echo "Invalid option: -$OPTARG"
usage
;;
esac
done
shift $((OPTIND - 1))
name=$1
if [[ -z $name ]]; then
usage
fi
target=$(mktemp -d --tmpdir $(basename $0).XXXXXX)
set -x
mkdir -m 755 "$target"/dev
mknod -m 600 "$target"/dev/console c 5 1
mknod -m 600 "$target"/dev/initctl p
mknod -m 666 "$target"/dev/full c 1 7
mknod -m 666 "$target"/dev/null c 1 3
mknod -m 666 "$target"/dev/ptmx c 5 2
mknod -m 666 "$target"/dev/random c 1 8
mknod -m 666 "$target"/dev/tty c 5 0
mknod -m 666 "$target"/dev/tty0 c 4 0
mknod -m 666 "$target"/dev/urandom c 1 9
mknod -m 666 "$target"/dev/zero c 1 5
# amazon linux yum will fail without vars set
if [ -d /etc/yum/vars ]; then
mkdir -p -m 755 "$target"/etc/yum
cp -a /etc/yum/vars "$target"/etc/yum/
fi
if [[ -n "$install_groups" ]];
then
yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
--setopt=group_package_types=mandatory -y groupinstall $install_groups
fi
if [[ -n "$install_packages" ]];
then
yum -c "$yum_config" --installroot="$target" --releasever=/ --setopt=tsflags=nodocs \
--setopt=group_package_types=mandatory -y install $install_packages
fi
yum -c "$yum_config" --installroot="$target" -y clean all
cat > "$target"/etc/sysconfig/network <<EOF
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF
# effectively: febootstrap-minimize --keep-zoneinfo --keep-rpmdb --keep-services "$target".
# locales
rm -rf "$target"/usr/{{lib,share}/locale,{lib,lib64}/gconv,bin/localedef,sbin/build-locale-archive}
# docs and man pages
rm -rf "$target"/usr/share/{man,doc,info,gnome/help}
# cracklib
rm -rf "$target"/usr/share/cracklib
# i18n
rm -rf "$target"/usr/share/i18n
# yum cache
rm -rf "$target"/var/cache/yum
mkdir -p --mode=0755 "$target"/var/cache/yum
# sln
rm -rf "$target"/sbin/sln
# ldconfig
rm -rf "$target"/etc/ld.so.cache "$target"/var/cache/ldconfig
mkdir -p --mode=0755 "$target"/var/cache/ldconfig
version=
for file in "$target"/etc/{redhat,system}-release
do
if [ -r "$file" ]; then
version="$(sed 's/^[^0-9\]*\([0-9.]\+\).*$/\1/' "$file")"
break
fi
done
if [ -z "$version" ]; then
echo >&2 "warning: cannot autodetect OS version, using '$name' as tag"
version=$name
fi
tar --numeric-owner -c -C "$target" . | docker import - $name:$version
docker run -i -t --rm $name:$version /bin/bash -c 'echo success'
rm -rf "$target"
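For reference, a possible invocation of the script (the image name and package group are illustrative; the tag comes from whatever version the script detects in /etc/redhat-release):
sudo ./mkimage-yum.sh -y /etc/yum.conf -g "Core" mybase
docker images | grep mybase               # shows the tag the script assigned
docker run --rm -it mybase:<version> /bin/bash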

delete backups older than 7 days from remote ftp using ncftp

I'm currently using this script from cyberciti
#!/bin/sh
# System + MySQL backup script
# Full backup day - Sun (rest of the day do incremental backup)
# Copyright (c) 2005-2006 nixCraft <http://www.cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# Automatically generated by http://bash.cyberciti.biz/backup/wizard-ftp-script.php
# ---------------------------------------------------------------------
### System Setup ###
DIRS="/home /etc /var/www"
BACKUP=/tmp/backup.$$
NOW=$(date +"%d-%m-%Y")
INCFILE="/root/tar-inc-backup.dat"
DAY=$(date +"%a")
FULLBACKUP="Sun"
### MySQL Setup ###
MUSER="admin"
MPASS="mysqladminpassword"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
GZIP="$(which gzip)"
### FTP server Setup ###
FTPD="/home/vivek/incremental"
FTPU="vivek"
FTPP="ftppassword"
FTPS="208.111.11.2"
NCFTP="$(which ncftpput)"
### Other stuff ###
EMAILID="admin@theos.in"
### Start Backup for file system ###
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
### See if we want to make a full backup ###
if [ "$DAY" == "$FULLBACKUP" ]; then
FTPD="/home/vivek/full"
FILE="fs-full-$NOW.tar.gz"
tar -zcvf $BACKUP/$FILE $DIRS
else
i=$(date +"%Hh%Mm%Ss")
FILE="fs-i-$NOW-$i.tar.gz"
tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
fi
### Start MySQL Backup ###
# Get all databases name
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BACKUP/mysql-$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
### Dump backup using FTP ###
#Start FTP backup using ncftp
ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF
### Find out if ftp backup failed or not ###
if [ "$?" == "0" ]; then
rm -f $BACKUP/*
else
T=/tmp/backup.fail
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup failed" >>$T
mail -s "BACKUP FAILED" "$EMAILID" <$T
rm -f $T
fi
It works nicely, but my backups take up too much space on the remote server, so I would like to modify this script so that backups older than 7 days are deleted.
Can someone tell me what to edit? I have no knowledge of shell scripting or ncftp commands, though.
I don't have a practical method of trying what I type below - I would suspect that this works, but if not it will at least show the right idea. Please don't use these mods without thorough testing first - if I have stuffed up it could delete your data, but here goes:
Underneath the line: NOW=$(date +"%d-%m-%Y")
add:
DELDATE=$(date -d "-7 days" +"%d-%m-%Y")
after the line: ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
add:
cd $FTPD/$DELDATE
rm *
cd $FTPD
rmdir $DELDATE
The theory behind these changes is as follows:
The first addition calculates what the date was 7 days ago.
The second addition attempts to delete the old information.
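Putting both additions in place, the affected part of the script would look roughly like this (same caveat: untested, so try it against non-critical data first):
NOW=$(date +"%d-%m-%Y")
DELDATE=$(date -d "-7 days" +"%d-%m-%Y")
# ... rest of the script unchanged until the FTP upload ...
ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
cd $FTPD/$DELDATE
rm *
cd $FTPD
rmdir $DELDATE
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF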

Linux Malware Detect on shared hosting

I am trying to install the excellent http://www.rfxn.com/projects/linux-malware-detect/ on shared hosting.
I have changed the inspath to my local directory, but it gives errors about creating symbolic links, a read-only /usr/lib/, and finally /usr/local/maldetect/conf.maldet not being found.
Thanks for any help. I think solving this would prove very useful to a lot of people.
Here's the error:
./install.sh
ln: creating symbolic link `/usr/local/sbin/maldet' to `/home6/anton/mal/maldet': No such file or directory
ln: creating symbolic link `/usr/local/sbin/lmd' to `/home6/anton/mal/maldet': No such file or directory
cp: cannot create regular file `/usr/lib/libinotifytools.so.0': Read-only file system
Linux Malware Detect v1.3.9
(C) 2002-2011, R-fx Networks <proj@r-fx.org>
(C) 2011, Ryan MacDonald <ryan@r-fx.org>
inotifywait (C) 2007, Rohan McGovern <rohan@mcgovern.id.au>
This program may be freely redistributed under the terms of the GNU GPL v2
maldet(15528): {glob} /usr/local/maldetect/conf.maldet not found, aborting.
installation completed to /home6/anton/mal
config file: /home6/anton/mal/conf.maldet
exec file: /home6/anton/mal/maldet
exec link: /usr/local/sbin/maldet
exec link: /usr/local/sbin/lmd
cron.daily: /etc/cron.daily/maldet
.ca.def: line 1: /usr/local/maldetect/conf.maldet: No such file or directory
imported config options from /home6/anton/mal.last/conf.maldet
maldet(15578): {glob} /usr/local/maldetect/conf.maldet not found, aborting.
And here's the install bash:
#!/bin/bash
#
##
# Linux Malware Detect v1.3.9
# (C) 2002-2011, R-fx Networks <proj@r-fx.org>
# (C) 2011, Ryan MacDonald <ryan@r-fx.org>
# inotifywait (C) 2007, Rohan McGovern <rohan@mcgovern.id.au>
# This program may be freely redistributed under the terms of the GNU GPL v2
##
#
inspath=/home6/anton/mal
logf=$inspath/event_log
cnftemp=.ca.def
if [ ! -d "$inspath" ] && [ -d "files" ]; then
mkdir -p $inspath
chmod 750 $inspath
cp -pR files/* $inspath
chmod 750 $inspath/maldet
ln -fs $inspath/maldet /usr/local/sbin/maldet
ln -fs $inspath/maldet /usr/local/sbin/lmd
cp $inspath/inotify/libinotifytools.so.0 /usr/lib/
else
$inspath/maldet -k >> /dev/null 2>&1
mv $inspath $inspath.bk$$
rm -f $inspath.last
ln -fs $inspath.bk$$ $inspath.last
mkdir -p $inspath
chmod 750 $inspath
cp -pR files/* $inspath
chmod 750 $inspath/maldet
ln -fs $inspath/maldet /usr/local/sbin/maldet
ln -fs $inspath/maldet /usr/local/sbin/lmd
cp $inspath/inotify/libinotifytools.so.0 /usr/lib/
cp -f $inspath.bk$$/sess/* $inspath/sess/ >> /dev/null 2>&1
cp -f $inspath.bk$$/tmp/* $inspath/tmp/ >> /dev/null 2>&1
cp -f $inspath.bk$$/quarantine/* $inspath/quarantine/ >> /dev/null 2>&1
fi
if [ -d "/etc/cron.daily" ]; then
cp -f cron.daily /etc/cron.daily/maldet
chmod 755 /etc/cron.daily/maldet
fi
touch $logf
$inspath/maldet --alert-daily
$inspath/maldet --alert-weekly
echo "Linux Malware Detect v1.3.9"
echo " (C) 2002-2011, R-fx Networks <proj#r-fx.org>"
echo " (C) 2011, Ryan MacDonald <ryan#r-fx.org>"
echo "inotifywait (C) 2007, Rohan McGovern <rohan#mcgovern.id.au>"
echo "This program may be freely redistributed under the terms of the GNU GPL"
echo ""
echo "installation completed to $inspath"
echo "config file: $inspath/conf.maldet"
echo "exec file: $inspath/maldet"
echo "exec link: /usr/local/sbin/maldet"
echo "exec link: /usr/local/sbin/lmd"
echo "cron.daily: /etc/cron.daily/maldet"
echo ""
if [ -f "$cnftemp" ] && [ -f "$inspath.bk$$/conf.maldet" ]; then
. files/conf.maldet
. $inspath.bk$$/conf.maldet
. $cnftemp
echo "imported config options from $inspath.last/conf.maldet"
fi
$inspath/maldet --update 1
Most shared hosting doesn't allow its users to access system folders;
/usr/lib/
/usr/local/
are two examples. So I guess you can't install that software due to this limitation.
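As a quick sanity check on a shared host (not part of the installer, just a trivial test of that limitation):
# If these paths are not writable, install.sh cannot create its links or copy the library
for d in /usr/lib /usr/local/sbin; do
    [ -w "$d" ] && echo "$d is writable" || echo "$d is NOT writable"
done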
