Shell script to read log files of a Docker containerised application on a remote VM is not working - linux

I have an application running on VM 100.111..*. I can use the commands below to connect to the VM and get into the docker bash to read the log file, which is available at the path /logs/MTServices/applogs/application.log.
The steps are:
SSH to the VM:
ssh -i ~/.ssh/dev_rsa infy@100.111.**.***
Once connected, run the docker command sudo docker exec -it MTWLS bash
Now I am in the docker bash and I can change to the log directory and read the logs (the full manual sequence is sketched below).
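For reference, the full interactive sequence the script is meant to reproduce looks roughly like this (log directory and search pattern are taken from the question; treat the exact paths as illustrative):
# manual session, typed step by step
ssh -i ~/.ssh/dev_rsa infy@100.111.**.***
sudo docker exec -it MTWLS bash
cd /logs/MTServices/applogs
grep "Performing asynchronous operation for tenant" application.log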
So far so good. But when I put the above commands into a shell script and run it, it doesn't seem to work. Below is the script.
#!/bin/bash
echo "starting"
VM_Host="100.111.**.***"
echo $VM_Host
pattern="Performing asynchronous operation for tenant"
echo $pattern
WLSType="MTWLS"
echo $WLSType
echo "running ssh"
ssh -i ~/.ssh/devus1_rsa opc@${VM_Host}
if [ "$?" -eq "0" ]
then
echo "last command was success"
sudo docker exec -it ${WLSType} bash
echo $(pwd)
fi
if [ "$?" -eq "0" ]
then
cd /logs/ETLServices/applogs
echo $(pwd)
fi
grep -s "$pattern" application_structured.log

I cannot try it myself, but I think the problem is here:
sudo docker exec -it ${WLSType} bash
Right there you're starting a new interactive process and handing control over to it: I think the following lines of your script will only be executed after you close that docker process.
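One way around this is to avoid opening any interactive shell at all and instead pass the whole command chain as arguments to ssh (and to docker exec). The same applies to the plain ssh call earlier in the script, which also opens an interactive login before the rest of the script continues locally. Below is a minimal sketch along those lines, reusing the names and paths from the question; it assumes opc can run docker via sudo without a password prompt (otherwise keep ssh -t so sudo can ask for one):
#!/bin/bash
VM_Host="100.111.**.***"
pattern="Performing asynchronous operation for tenant"
WLSType="MTWLS"

# Run grep inside the container in one shot; nothing interactive is started.
# The escaped quotes keep the pattern as a single argument on the remote side.
ssh -i ~/.ssh/devus1_rsa "opc@${VM_Host}" \
    "sudo docker exec ${WLSType} grep -s \"${pattern}\" /logs/MTServices/applogs/application.log"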

Related

scp command inside bash script not working under non-root user

I wrote a bash script which should move a file to a remote server. This is my code:
#!/bin/sh
# ensure running as root
if [ "$(id -u)" != "0" ]; then
exec sudo "$0" "$@"
fi
cat <<EOF > /etc/name.txt
EOF
sshpass -p 'password' scp -r /etc/name.txt root@192.168.1.50:/etc/name.txt
When I run this script as root it works perfectly, but when I run it as a non-root user the scp part doesn't work. When I check $UID after this part:
if [ "$(id -u)" != "0" ]; then
exec sudo "$0" "$@"
fi
it shows 0 and $USER is root, which means the user changed to root, but I don't know why it's not working. Any help?
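No answer is included for this one, but a common culprit in this kind of setup (an assumption, not something confirmed in the thread) is that root's ~/.ssh/known_hosts does not yet contain the target host, so scp blocks on the host-key confirmation prompt, which sshpass does not answer. A sketch of how one might check and work around it:
# Re-run the copy verbosely as root to see where it stops
sudo sshpass -p 'password' scp -v -r /etc/name.txt root@192.168.1.50:/etc/name.txt

# Workaround sketch: accept the host key non-interactively
# (this weakens host verification; pre-seeding known_hosts with ssh-keyscan is safer)
sshpass -p 'password' scp -o StrictHostKeyChecking=no -r /etc/name.txt root@192.168.1.50:/etc/name.txt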

DOCKER_OPTS are reset after system reboot

I am specifying my TLS certs in /etc/default/docker, like this:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/mynewca.pem
--tlscert=/etc/docker/mynewcert.pem
--tlskey=/etc/docker/mynewkey.pem -H=0.0.0.0:2376"
However, every time my Docker host restarts, my settings are overridden with the defaults:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/ca.pem
--tlscert=/etc/docker/cert.pem
--tlskey=/etc/docker/key.pem -H=0.0.0.0:2376"
This means that I cannot communicate with the Docker daemon remotely until I reconfigure DOCKER_OPTS and run
sudo service docker restart
upstart is starting the Docker daemon, and it looks like the script section of /etc/init/docker.conf is overriding DOCKER_OPTS, although I can't find where it's getting the defaults from.
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKERD=/usr/bin/dockerd
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKERD" $DOCKER_OPTS --raw-logs
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
DOCKER_SOCKET=
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
DOCKER_SOCKET=/var/run/docker.sock
else
DOCKER_SOCKET=$(printf "%s" "$DOCKER_OPTS" | grep -oP -e '(-H|--host)\W*unix://\K(\S+)' | sed 1q)
fi
if [ -n "$DOCKER_SOCKET" ]; then
while ! [ -e "$DOCKER_SOCKET" ]; do
initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1
echo "Waiting for $DOCKER_SOCKET"
sleep 0.1
done
echo "$DOCKER_SOCKET is up"
fi
end script
Which configuration is overriding my DOCKER_OPTS after a reboot?
You may want to use the docker configuration file that is usually located in /etc/docker/daemon.json. See here for more information on the configuration:
https://docs.docker.com/engine/reference/commandline/dockerd//#daemon-configuration-file
In your case, the "tlscacert" option might be of special interest.
Nevertheless, the location of the configuration file may depend on the OS and distribution (I remember the famous Gentoo /etc/conf.d/ directory).
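A minimal sketch of what that file could look like for the options in the question (certificate paths copied from the question, hosts entries mirroring the -H flags). Note that if the same options are still passed to dockerd on the command line via DOCKER_OPTS, the daemon will refuse to start because the directives are specified twice, so they should be removed from /etc/default/docker:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/mynewca.pem",
  "tlscert": "/etc/docker/mynewcert.pem",
  "tlskey": "/etc/docker/mynewkey.pem"
}
EOF
sudo service docker restart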

Commands don't echo after sudo as another user

I have a single command to ssh to a remote linux host and execute a shell script.
ssh -t -t $USER@somehost 'bash -s' < ./deploy.sh
Inside deploy.sh I have this:
#!/bin/bash
whoami; # I see this command echo
sudo -i -u someoneelse #I see this command echo
whoami; # I DON'T see this command echo, but response is correct
#subsequent commands don't echo
When I run the deploy.sh script locally all commands echo.
How do I get commands to echo after I sudo as another user over ssh?
I had to set -x AFTER the sudo to the other user:
#!/bin/bash
whoami;
sudo -i -u someoneelse
set -x # make sure echo is back on
whoami; #command echoed

Ksh script: How to remain in ssh and continue the script

For my script I want to ssh into a remote host, remain on the remote host after the script ends, and also have the directory changed to match the remote host when the script ends.
#!/bin/ksh
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X ssh mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD'
The PWD gets changed correctly before the second ssh. The reason for the second ssh is that it leaves the script on the correct remote host, but it will not retain the directory change; I attempted that by putting commands after it, but they won't execute.
Does anyone have any ideas?
Just launch a shell at the end of the command list:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X ssh mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD; ksh'
If you want the shell to be a login one (i.e. one that reads .profile), use exec -l:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec -l ksh'
If the remote server uses an old ksh release that doesn't support the exec -l builtin and if bash or ksh93 is available, here is a workaround:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec bash -c "exec -l ksh"'

How to write a bash script which automates entering a docker container and doing other things?

I want to implement an automated bash script which enters a running docker container and does some things:
# cat docker.sh
#!/bin/bash -x
docker exec -it hammerdb_net8 bash
cd /data/oracle/tablespaces/
pwd
Executing the script on terminal:
# ./docker.sh
+ docker exec -it hammerdb_net8 bash
[root@npar1 /]#
The output shows that the script only logs into the docker container; it won't do the other operations.
Is there any method to automate entering docker container and doing other things?
You can use bash -c:
docker exec -it hammerdb_net8 bash -c 'cd /data/oracle/tablespaces/; pwd; ls'
For running a series of commands, use a here-doc in bash:
docker exec -i hammerdb_net8 bash <<'EOF'
cd /data/oracle/tablespaces/
pwd
ls
EOF
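Because stdin is a here-doc rather than a terminal, -i without -t is the right combination here (-t would fail with "the input device is not a TTY"). As a usage sketch, the container output can also be captured to a local file (the file name is just an example):
docker exec -i hammerdb_net8 bash <<'EOF' > /tmp/tablespaces_listing.txt
cd /data/oracle/tablespaces/
pwd
ls
EOF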
