DOCKER_OPTS are reset after system reboot - linux

I am specifying my TLS certs in /etc/default/docker, like this:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/mynewca.pem
--tlscert=/etc/docker/mynewcert.pem
--tlskey=/etc/docker/mynewkey.pem -H=0.0.0.0:2376"
However, every time my Docker host restarts, my settings are overridden with the defaults:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/ca.pem
--tlscert=/etc/docker/cert.pem
--tlskey=/etc/docker/key.pem -H=0.0.0.0:2376"
This means that I cannot communicate with the Docker daemon remotely until I reconfigure DOCKER_OPTS and run
sudo service docker restart
upstart is starting the Docker daemon, and it looks like the script section of /etc/init/docker.conf is overriding DOCKER_OPTS, although I can't find where it's getting the defaults from.
script
    # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
    DOCKERD=/usr/bin/dockerd
    DOCKER_OPTS=
    if [ -f /etc/default/$UPSTART_JOB ]; then
        . /etc/default/$UPSTART_JOB
    fi
    exec "$DOCKERD" $DOCKER_OPTS --raw-logs
end script

# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
    DOCKER_OPTS=
    DOCKER_SOCKET=
    if [ -f /etc/default/$UPSTART_JOB ]; then
        . /etc/default/$UPSTART_JOB
    fi

    if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
        DOCKER_SOCKET=/var/run/docker.sock
    else
        DOCKER_SOCKET=$(printf "%s" "$DOCKER_OPTS" | grep -oP -e '(-H|--host)\W*unix://\K(\S+)' | sed 1q)
    fi

    if [ -n "$DOCKER_SOCKET" ]; then
        while ! [ -e "$DOCKER_SOCKET" ]; do
            initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1
            echo "Waiting for $DOCKER_SOCKET"
            sleep 0.1
        done
        echo "$DOCKER_SOCKET is up"
    fi
end script

You may want to use the Docker configuration file, usually located at /etc/docker/daemon.json. See here for more information on the configuration:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
In your case, the "tlscacert" option might be of special interest.
Nevertheless, the location of the configuration file may depend on the OS and distribution (remember the famous Gentoo /etc/conf.d/ directory).
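For example, here is a minimal sketch of such a daemon.json, reusing the certificate paths from the question and written via a heredoc. Note that an option such as hosts must then not also be passed via the -H flags in DOCKER_OPTS, or the daemon will typically refuse to start because the same setting appears both in the file and on the command line:
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "tlsverify": true,
  "tlscacert": "/etc/docker/mynewca.pem",
  "tlscert": "/etc/docker/mynewcert.pem",
  "tlskey": "/etc/docker/mynewkey.pem",
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}
EOF
sudo service docker restart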

Related

How to develop a Condition to close program only when log file has been updated in Bash Script [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories, use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
inotifywait --recursive \
--event modify,move,create,delete \
$DIRECTORY_TO_OBSERVE
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
bash $BUILD_SCRIPT
}
build
while block_for_change; do
build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to read the file's change time (ctime, %Z) and runs a command whenever that time changes.
#!/bin/bash
while true
do
ATIME=`stat -c %Z /path/to/the/file.txt`
if [[ "$ATIME" != "$LTIME" ]]
then
echo "RUN COMMNAD"
LTIME=$ATIME
fi
sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
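For instance, a minimal sketch of that hack for an append-only (log-style) file; the handler script ./on_change.sh is a hypothetical placeholder:
#!/bin/bash
# Run a command for every line appended to watched.log (append-only files only).
# -n0 starts at the current end of the file, -F keeps following across rotations.
tail -n0 -F watched.log | while read -r line; do
    ./on_change.sh "$line"
done
This only helps for files that grow by appending; for general modifications the inotify-based answers above are more appropriate.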
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1" # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
if [ "$t1" != "$t2" ];then t1="$t2"; $command; fi # If different, run command
sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for macOS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
if [ -z "$1" -o -z "$2" ]; then
echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
elif ! [ -r "$1" ]; then
echo "Can't react to $1, permission denied"
else
TARGET="$1"; shift
ACTION="$#"
while sleep 1; do
ATIME=$(stat -c %Z "$TARGET")
if [[ "$ATIME" != "${LTIME:-}" ]]; then
LTIME=$ATIME
$ACTION
fi
done
fi
}
Quick solution for fish shell users who want to track a single file:
while true
set old_hash $hash
set hash (md5sum file_to_watch)
if [ "$hash" != "$old_hash" ]
command_to_execute
end
sleep 1
end
Replace md5sum with md5 if on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo ${path}${file} ': '${action} # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref=$t_curr; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (e.g. shared folders on VirtualBox with Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu in the virtual box, for example.
Warning
The -printf option of find works on Ubuntu but does not work on macOS.
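A possible macOS-friendly variant of the same one-liner, assuming BSD stat(1) is available (untested sketch):
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f '%m' {} + | sort -rn | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref=$t_curr; rake test; fi; sleep 1; done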

IBM Db2: How to automatically activate databases after (re)boot?

I have several Db2 databases that I want to automatically activate after a system reboot. Restarting the Db2 service after a reboot is not a problem, but activating the databases requires access to the instance profile.
Service start/stop is controlled by systemd/systemctl. Including some user-controlled setup scripts in those unit scripts doesn't seem like a good idea. I briefly looked into enable-linger for the Db2 instance user, or into using EnvironmentFile to set up the instance profile.
How do you activate all or a set of databases after reboot? Do you use user/group/EnvironmentFile with systemd? Do you enable linger or do you have any other method?
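For reference, a minimal sketch of the systemd route mentioned above; the instance name db2inst1, database name MYDB, and the profile path are placeholders, and the unit is untested:
# /etc/systemd/system/db2-activate.service
[Unit]
Description=Activate Db2 databases after boot
# ideally also order this After= the unit that starts the Db2 instance
After=network.target

[Service]
Type=oneshot
User=db2inst1
# source the instance profile, then activate the database(s)
ExecStart=/bin/sh -c '. /home/db2inst1/sqllib/db2profile && db2 activate db MYDB'

[Install]
WantedBy=multi-user.target
Enable it once with systemctl enable db2-activate.service.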
Here is a simple script which must be run as the Db2 instance owner.
It assumes that the Db2 instance is auto-started; if that's not the case, just comment out db2gcf -s and uncomment db2gcf -u.
The script waits up to a configured number of seconds for the instance to start, then activates all local databases found in the instance's system database directory.
The script can be scheduled to run at OS startup via a crontab entry of the instance owner, as shown in the header comment.
A log file with the command history (see the ${LOG} variable) is created in the instance owner's home directory.
#!/bin/sh
#
# Function: Activates all local DB2 databases
# Crontab entry:
# @reboot /home/scripts/db2activate.sh >/dev/null 2>&1
#
TIMEOUT=300
VERBOSE=${1:-"noverbose"}
export LC_ALL=C
if [ ! -x ~/sqllib/db2profile ]; then
echo "Must be run by a DB2 instance onwer" >&2
exit 1
fi
[ -z "${DB2INSTANCE}" ] && . ~/sqllib/db2profile
if [ "${VERBOSE}" != "verbose" ]; then
LOG=~/.$(basename $0).log
exec 1>>${LOG}
exec 2>>${LOG}
fi
set -x
printf "\n*** %s ***\n" $(date +"%F-%H.%M.%S")
# Wait for the instance startup
# (or even start it with 'db2gcf -u' instead of checking status: 'db2gcf -s')
TIME1=${SECONDS}
while [ $((SECONDS-TIME1)) -le ${TIMEOUT} ]; do
db2gcf -s
# db2gcf -u
rc=$?
[ ${rc} -eq 0 ] && break
sleep 5
done
if [ ${rc} -ne 0 ]; then
echo "Instance startup timeout of ${TIMEOUT} sec reached" >&2
exit 2
fi
for dbname in $(db2 list db directory | awk -v RS='' '/= Indirect/' | grep '^ Database alias' | sort -u | cut -d'=' -f2); do
db2 activate db ${dbname}
done
Another simple approach. Switch to the Db2 instance owner and enable automatic instance startup:
su - <INSTANCE>
db2iauto -on <INSTANCE>
Exit back to root:
exit
Then, as root, configure the fault monitor:
./<INSTANCE>/sqllib/bin/db2fmcu -d;
cd /<INSTANCE>/sqllib/bin/
./db2fmcu -u -p /opt/ibm/db2/<VERSION DB2>/bin/db2fmcd
./db2fm -i <INSTANCE> -U
./db2fm -i <INSTANCE> -u
./db2fm -i <INSTANCE> -f on
ps -ef | grep db2fm | grep <INSTANCE>
Done.

How to check if ssh-agent is already running in bash?

I have a sample sh script in my Linux environment which basically runs ssh-agent for the current shell, adds a key to it, and runs two git commands:
#!/bin/bash
eval "$(ssh-agent -s)"
ssh-add /home/duvdevan/.ssh/id_rsa
git -C /var/www/duvdevan/ reset --hard origin/master
git -C /var/www/duvdevan/ pull origin master
The script actually works fine, but every time I run it I get a new agent process, so I might end up with a pile of useless processes over time.
An example of the output:
Agent pid 12109
Identity added: /home/duvdevan/.ssh/custom_rsa (rsa w/o comment)
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
No, really, how to check if ssh-agent is already running in bash?
Answers so far don't appear to answer the original question...
Here's what works for me:
if ps -p $SSH_AGENT_PID > /dev/null
then
echo "ssh-agent is already running"
# Do something knowing the pid exists, i.e. the process with $PID is running
else
eval `ssh-agent -s`
fi
This was taken from here
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
Yes. We can store the connection info in a file:
# Ensure agent is running
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Could not open a connection to your authentication agent.
# Load stored agent connection info.
test -r ~/.ssh-agent && \
eval "$(<~/.ssh-agent)" >/dev/null
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Start agent and store agent connection info.
(umask 066; ssh-agent > ~/.ssh-agent)
eval "$(<~/.ssh-agent)" >/dev/null
fi
fi
# Load identities
ssh-add -l &>/dev/null
if [ "$?" == 1 ]; then
# The agent has no identities.
# Time to add one.
ssh-add -t 4h
fi
This code is from "pitfalls of ssh agents", which describes the pitfalls of what you're currently doing and of this approach, and how you can use ssh-ident to handle this for you.
If you only want to run ssh-agent if it's not running and do nothing otherwise:
if [ $(ps ax | grep [s]sh-agent | wc -l) -gt 0 ] ; then
echo "ssh-agent is already running"
else
eval $(ssh-agent -s)
if [ "$(ssh-add -l)" == "The agent has no identities." ] ; then
ssh-add ~/.ssh/id_rsa
fi
# Don't leave extra agents around: kill it on exit. You may not want this part.
trap "ssh-agent -k" exit
fi
However, this doesn't ensure the ssh-agent will be accessible (just because one is running doesn't mean we have SSH_AUTH_SOCK set in this shell for ssh-add to connect to).
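A small sketch of a stronger check: ask the agent itself via ssh-add, which exits with 2 when it cannot reach an agent and with 0 or 1 when it can:
# Succeeds only if an agent is reachable through SSH_AUTH_SOCK (0 = has keys, 1 = no keys yet).
ssh-add -l >/dev/null 2>&1
if [ $? -ne 2 ]; then
    echo "an agent is reachable"
fi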
If you want it to be killed right after the script exits, you can just add this after the eval line:
trap "kill $SSH_AGENT_PID" exit
Or:
trap "ssh-agent -k" exit
$SSH_AGENT_PID gets set in the eval of ssh-agent -s.
You should be able to find running ssh-agents by scanning through /tmp/ssh-* and reconstructing the SSH_AGENT variables (SSH_AUTH_SOCK and SSH_AGENT_PID) from what you find.
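A rough sketch of that scan, assuming the default /tmp/ssh-XXXXXX/agent.* socket naming (a more complete version appears in a later answer):
# Probe each candidate socket; ssh-add exits 0 or 1 when it reaches a live agent.
for sock in /tmp/ssh-*/agent.*; do
    if SSH_AUTH_SOCK="$sock" ssh-add -l >/dev/null 2>&1 || [ $? -eq 1 ]; then
        export SSH_AUTH_SOCK="$sock"
        break
    fi
done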
ps -p $SSH_AGENT_PID > /dev/null || eval "$(ssh-agent -s)"
A single-line command: the first run starts ssh-agent; subsequent runs will not start another one. Simple and elegant.
Using $SSH_AGENT_PID only tests whether the ssh-agent is running; it misses the case where no identity has been added yet:
$ eval `ssh-agent`
Agent pid 9906
$ echo $SSH_AGENT_PID
9906
$ ssh-add -l
The agent has no identities.
So it is safer to check with ssh-add -l, together with an expect script, as in the example below:
$ eval `ssh-agent -k`
Agent pid 9906 killed
$ ssh-add -l
Could not open a connection to your authentication agent.
$ ssh-add -l &>/dev/null
$ [[ "$?" == 2 ]] && eval `ssh-agent`
Agent pid 9547
$ ssh-add -l &>/dev/null
$ [[ "$?" == 1 ]] && expect $HOME/.ssh/agent
spawn ssh-add /home/user/.ssh/id_rsa
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
So when both the ssh-agent and ssh-add -l checks are put into a bash script:
#!/bin/bash
ssh-add -l &>/dev/null
[[ "$?" == 2 ]] && eval `ssh-agent`
ssh-add -l &>/dev/null
[[ "$?" == 1 ]] && expect $HOME/.ssh/agent
then it will always check and ensure that the agent connection is up:
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
You can also keep repeating the commands in the script above with a loop.
The accepted answer did not work for me under Ubuntu 14.04.
The test I have to use to check whether the ssh-agent is running is:
[[ ! -z ${SSH_AGENT_PID+x} ]]
And I am starting the ssh-agent with:
exec ssh-agent bash
Otherwise the SSH_AGENT_PID is not set.
The following seems to work under both Ubuntu 14.04 and 18.04.
#!/bin/bash
sshkey=id_rsa
# Check ssh-agent
if [[ ! -z ${SSH_AGENT_PID+x} ]]
then
echo "[OK] ssh-agent is already running with pid: "${SSH_AGENT_PID}
else
echo "Starting new ssh-agent..."
`exec ssh-agent bash`
echo "Started agent with pid: "${SSH_AGENT_PID}
fi
# Check ssh-key
if [[ $(ssh-add -L | grep ${sshkey} | wc -l) -gt 0 ]]
then
echo "[OK] SSH key already added to ssh-agent"
else
echo "Need to add SSH key to ssh-agent..."
# This should prompt for your passphrase
ssh-add ~/.ssh/${sshkey}
fi
Thanks to all the answers here. I've used this thread a few times over the years to tweak my approach. Wanted to share my current ssh-agent.sh checker/launcher script that works for me on Linux and OSX.
The following block is my $HOME/.bash.d/ssh-agent.sh
function check_ssh_agent() {
if [ -f $HOME/.ssh-agent ]; then
source $HOME/.ssh-agent > /dev/null
else
# no agent file
return 1
fi
if [[ ${OSTYPE//[0-9.]/} == 'darwin' ]]; then
ps -p $SSH_AGENT_PID > /dev/null
# gotcha: does not verify the PID is actually an ssh-agent
# just that the PID is running
return $?
fi
if [ -d /proc/$SSH_AGENT_PID/ ]; then
# verify PID dir is actually an agent
grep ssh-agent /proc/$SSH_AGENT_PID/cmdline > /dev/null 2> /dev/null;
if [ $? -eq 0 ]; then
# yep - that is an agent
return 0
else
# nope - that is something else reusing the PID
return 1
fi
else
# agent PID dir does not exist - dead agent
return 1
fi
}
function launch_ssh_agent() {
ssh-agent > $HOME/.ssh-agent
source $HOME/.ssh-agent
# load up all the pub keys
for I in $HOME/.ssh/*.pub ; do
echo adding ${I/.pub/}
ssh-add ${I/.pub/}
done
}
check_ssh_agent
if [ $? -eq 1 ];then
launch_ssh_agent
fi
I launch the above from my .bashrc using:
if [ -d $HOME/.bash.d ]; then
for I in $HOME/.bash.d/*.sh; do
source $I
done
fi
Hope this helps others get up and going quickly.
Created a public gist if you want to hack/improve this with me: https://gist.github.com/dayne/a97a258b487ed4d5e9777b61917f0a72
cat > /usr/local/bin/ssh-agent-pro << 'EOF'
#!/usr/bin/env bash
SSH_AUTH_CONST_SOCK="/var/run/ssh-agent.sock"
if [[ x$(wc -w <<< $(pidof ssh-agent)) != x1 ]] || [[ ! -e ${SSH_AUTH_CONST_SOCK} ]]; then
kill -9 $(pidof ssh-agent) 2>/dev/null
rm -rf ${SSH_AUTH_CONST_SOCK}
ssh-agent -s -a ${SSH_AUTH_CONST_SOCK} 1>/dev/null
fi
echo "export SSH_AUTH_SOCK=${SSH_AUTH_CONST_SOCK}"
echo "export SSH_AGENT_PID=$(pidof ssh-agent)"
EOF
echo "eval \$(/usr/local/bin/ssh-agent-pro)" >> /etc/profile
. /etc/profile
Then you can run ssh-add xxxx once and use the same ssh-agent every time you log in.
I've noticed that having a running agent is not enough, because sometimes the SSH_AUTH_SOCK variable is not set, or points to a socket file that no longer exists.
Therefore, to connect to an already running ssh-agent on your machine, you can do this :
$ pgrep -u $USER -n ssh-agent -a
1906647 ssh-agent -s
$ ssh-add -l
Could not open a connection to your authentication agent.
$ test -z "$SSH_AGENT_PID" && export SSH_AGENT_PID=$(pgrep -u $USER -n ssh-agent)
$ test -z "$SSH_AUTH_SOCK" && export SSH_AUTH_SOCK=$(ls /tmp/ssh-*/agent.$(($SSH_AGENT_PID-1)))
$ ssh-add -l
The agent has no identities.
Regarding finding running ssh-agents: previous answers either don't work or rely on a magic file like $HOME/.ssh_agent. These approaches require us to assume that the user never runs an agent without saving its output to that file.
My approach instead relies on a rarely changed default UNIX domain socket template to find an accessible ssh-agent among available possibilities.
# (Paste the below code to your ~/.bash_profile and ~/.bashrc files)
C=$SSH_AUTH_SOCK
R=n/a
unset SSH_AUTH_SOCK
for s in $(ls $C /tmp/ssh-*/agent.* 2>/dev/null | sort -u) ; do
if SSH_AUTH_SOCK=$s ssh-add -l >/dev/null ; then R=$? ; else R=$? ; fi
case "$R" in
0|1) export SSH_AUTH_SOCK=$s ; break ;;
esac
done
if ! test -S "$SSH_AUTH_SOCK" ; then
eval $(ssh-agent -s)
unset SSH_AGENT_PID
R=1
fi
echo "Using $SSH_AUTH_SOCK"
if test "$R" = "1" ; then
ssh-add
fi
In this approach, SSH_AGENT_PID remains unknown, since it is hard to deduce for non-root users. I assume it is not actually required, since users don't normally want to stop agents. On my system, setting SSH_AUTH_SOCK is enough to communicate with the agent, e.g. for passwordless authentication.
The code should work with any sh-compatible shell.
You can modify the first line of the script (the eval of ssh-agent) to:
PID_SSH_AGENT=`eval ssh-agent -s | grep -Po "(?<=pid\ ).*(?=\;)"`
And then at the end of the script you can do:
kill -9 $PID_SSH_AGENT
I made this bash function to count and return the number of running ssh-agent processes. It looks for ssh-agent processes via procfs instead of relying on ps -p $SSH_AGENT_PID or on the SSH_AUTH_SOCK variable, because those environment variables can still hold stale values after the ssh-agent process has been killed (for example if ssh-agent -k or $(ssh-agent -k) was run instead of eval $(ssh-agent -k)).
function count_agent_procfs(){
declare -a agent_list=( )
for folders in $(ls -d /proc/*[[:digit:]] | grep -v /proc/1$);do
fichier="${folders}/stat"
pid=${folders/\/proc\//}
[[ -f ${fichier} ]] && [[ $(cat ${fichier} | cut -d " " -f2) == "(ssh-agent)" ]] && agent_list+=(${pid})
done
return ${#agent_list[@]}
}
...and then, if there are many ssh-agent processes running, you can get their PIDs from the "${agent_list[@]}" array.
A very simple way to see which ssh-agent processes are running (works for any program): pidof ssh-agent
or:
pgrep ssh-agent
And a very simple command to kill all ssh-agent processes (or those of any program):
kill $(pidof ssh-agent)

Check if service exists in bash (CentOS and Ubuntu)

What is the best way in bash to check if a service is installed? It should work across both Red Hat (CentOS) and Ubuntu?
Thinking:
service="mysqld"
if [ -f "/etc/init.d/$service" ]; then
# mysqld service exists
fi
Could also use the service command and check the return code.
service mysqld status
if [ $? = 0 ]; then
# mysqld service exists
fi
What is the best solution?
To get the status of one service without "pinging" all other services, you can use the command:
systemctl list-units --full --all | grep -Fq "$SERVICENAME.service"
By the way, this is what is used in bash (auto-)completion (see in file /usr/share/bash-completion/bash_completion, look for _services):
COMPREPLY+=( $( systemctl list-units --full --all 2>/dev/null | \
awk '$1 ~ /\.service$/ { sub("\\.service$", "", $1); print $1 }' ) )
Or a more elaborate solution:
service_exists() {
local n=$1
if [[ $(systemctl list-units --all -t service --full --no-legend "$n.service" | sed 's/^\s*//g' | cut -f1 -d' ') == $n.service ]]; then
return 0
else
return 1
fi
}
if service_exists systemd-networkd; then
...
fi
Hope this helps.
Rustam Mamat gets the credit for this:
If you list all your services, you can grep the results to see what's in there. E.g.:
# Restart apache2 service, if it exists.
if service --status-all | grep -Fq 'apache2'; then
sudo service apache2 restart
fi
On a systemd system:
serviceName="Name of your service"
if systemctl --all --type service | grep -q "$serviceName";then
echo "$serviceName exists."
else
echo "$serviceName does NOT exist."
fi
On an Upstart system:
serviceName="Name of your service"
if initctl list | grep -q "$serviceName";then
echo "$serviceName exists."
else
echo "$serviceName does NOT exist."
fi
On a SysV (System V) system:
serviceName="Name of your service"
if service --status-all | grep -q "$serviceName";then
echo "$serviceName exists."
else
echo "$serviceName does NOT exist."
fi
On systemd (especially on Debian), the approaches in the various answers here don't seem to work reliably. Some services, such as pure-ftpd, will not show up in the service list when they are disabled, if you run this command:
systemctl --all --type service
Once you start pure-ftpd again with systemctl start pure-ftpd, it shows up in the list again. So listing services with systemctl --all --type service will not work for all services. Take a look at this for more information.
So, this is the best code so far (an improvement on @jehon's answer) to check whether a service exists (even if its status is inactive, dead, or anything else):
#!/bin/bash
is_service_exists() {
local x=$1
if systemctl status "${x}" 2> /dev/null | grep -Fq "Active:"; then
return 0
else
return 1
fi
}
if is_service_exists 'pure-ftpd'; then
echo "Service found!"
else
echo "Service not found!"
fi
Explanation:
If systemctl status finds the service, its output contains the text 'Active:'; we filter for it with grep and the function returns 0. If there is no 'Active:' text, it returns 1.
If systemctl status does not find the service at all, it prints a message to standard error, so I added the 2> /dev/null redirection. For example, if you look for a non-existent service without that redirection, you get this error message:
Unit pure-ftpdd.service could not be found.
We don't want that error message showing up when scripting.
EDIT:
Another method is to list unit files, which can also detect disabled services, as pointed out by @Anthony Rutledge for Debian systems:
systemctl list-unit-files --type service | grep -F "pure-ftpd"
But this method will not always work, especially on older systems, because some unit files might not be detected by this command, as explained here. It is also slower when there are many unit files to filter (as @ygoe commented, it can put a heavy load on a small computer).
To build off of Joel B's answer, here it is as a function (with a bit of flexibility added in. Note the complete lack of parameter checking; this will break if you don't pass in 2 parameters):
#!/bin/sh
serviceCommand() {
if sudo service --status-all | grep -Fq ${1}; then
sudo service ${1} ${2}
fi
}
serviceCommand apache2 status
After reading some systemd man pages ...
https://www.freedesktop.org/software/systemd/man/systemd.unit.html
... and systemd.service(5) ...
... and a nice little article ...
https://www.linux.com/learn/understanding-and-using-systemd
I believe this could be an answer.
systemctl list-unit-files --type service
Pipe to awk {'print $1'} to just get a listing of the service units.
Pipe to awk again to get the service names exclusively. Change the field separator to the period with -F.
awk -F. {'print $1'}
In summary:
systemctl list-unit-files --type service | awk {'print $1'} | awk -F. {'print $1'}
As a variation on the base solution, you can determine the state of your system's services by combining a for loop with systemctl is-active $service, as sketched below.
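A minimal sketch of that loop (the service names are just examples):
# Report the state of a few services; systemctl is-active prints e.g. active / inactive / failed.
for service in ssh cron mysql; do
    state=$(systemctl is-active "$service")
    echo "$service: $state"
done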
#!/bin/sh
service=mysql
status=$(/etc/init.d/mysql status)
echo "$status"
#echo $status > /var/log/mysql_status_log
var=$(service --status-all | grep -w "$Service")
if [ "$var" != "" ]; then
    : # executes if service exists
else
    : # executes if service does not exist
fi
$Service is the name of the service you want to check for.
$var will contain something like
[+] apache2
if the service does exist
if systemctl cat xxx >/dev/null 2>&1; then
echo yes
fi
Try this: since the ps command is available on both Ubuntu and RHEL, it should work on both platforms (note that it checks whether the mysqld process is running, not merely installed).
#!/bin/bash
ps cax | grep mysqld > /dev/null
if [ $? -eq 0 ]; then
echo "mysqld service exists"
else
echo "mysqld service not exists"
fi
