send notification from virtual machine (docker container) to host machine - linux

I'm trying to make notify-send pass messages to the host and create a notification on my desktop from inside the container. But it's not that simple :(.
My host operating system is Ubuntu and the VM's is Debian.
I know that containers communicate over Docker's "bridge" network or via a "volume".
My approach was to install dbus-monitor and libnotify-bin in the virtual machine,
write intercepted notify-send messages to a file called notifications via a daemon,
then have some sort of daemon on the host machine which "watches" that file; every time a new line of text is appended to it from the virtual machine, the daemon runs notify-send "that line".
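That host-side watcher can be sketched in a few lines; this is a hypothetical sketch assuming the container appends one message per line to a file on a bind-mounted volume (the path below is made up):

```shell
#!/bin/bash
# Hypothetical path shared between container and host via a bind mount.
file=/path/to/shared/notifications

# -n 0 skips lines that already exist; -F keeps following the file even
# if it is rotated or recreated.
tail -n 0 -F "$file" | while IFS= read -r line; do
    notify-send "$line"
done
```

The `while IFS= read -r` loop hands each appended line to notify-send unmodified.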
Use case:
I work in a Docker container workspace.
I'm developing websites, and I find it pretty annoying that when a "live build" script fails I don't get a notification.
So I wanted to get one while keeping the same environment across my teammates using ddev.
Problem:
To intercept the messages and write them to a file I tried to use D-Bus. My dbus-monitor script:
#!/bin/bash
file=notifications
dbus-monitor "interface='org.freedesktop.Notifications'" |\
grep --line-buffered "string" |\
grep --line-buffered -e method -e ":" -e '""' -e urgency -e notify -v |\
grep --line-buffered '.*(?=string)|(?<=string).*' -oPi |\
grep --line-buffered -v '^\s*$' |\
xargs -I '{}' echo {} >> $file
seems not to work without a display configuration, and I have no idea how to fix that:
Failed to open connection to session bus: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
To make this work I need some way to "watch" the
notify-send command and intercept the message. Please help :)

Related

LDAP - SSH script across multiple VM's

So I'm ssh'ing into a router that has several VMs. It is set up using LDAP so that each VM has the same files, settings, etc. However, they have different cores allocated and different libraries and packages installed. Instead of logging into each VM individually and running the command, I want to automate it by putting the script in .bashrc.
So what I have so far:
export LD_LIBRARY_PATH=/lhome/username
# .so files are in ~/ to avoid permission denied problems
output=$(cat /proc/cpuinfo | grep "^cpu cores" | uniq | tail -c 2)
current=server_name
if [[ $(hostname -s) != $current ]]; then
ssh $current
fi
/path/to/program --hostname $(hostname -s) --threads $((output*2))
Each VM, upon logging in, will execute this script, so I have to check if the current VM has the hostname to avoid an SSH loop. The idea is to run the program, then exit back out to the origin to resume the script. The problem is of course that the process will die upon logging out.
It's been suggested to me to use tmux on an array of the hostnames, but I have no idea how to approach this.
You could install clusterSSH, set up a list of hostnames, and execute things from the terminal windows opened. You may use screen/tmux/nohup to allow processes started to keep running, even after logout.
Yet, if you still want to play around with scripting, you may install tmux, and use:
while read host; do
scp "script_to_run_remotely" ${host}:~/
ssh ${host} tmux new-session -d '~/script_to_run_remotely'\; detach
done < hostlist
Note: hostlist should be a list of hostnames, one per line.

I can't restart my dnsmasq service, so my fog server won't work

I have a FOG server set up at work; every now and then our useless internet fails and I have to restart dnsmasq to get it working again. (We don't have a DHCP server set up and I can't modify the hub's settings, so that won't change.) Whenever I try sudo dnsmasq restart, I get the message:
junk found in command line.
First of all, can someone please explain to me in simple terms what this actually means? I am no Linux expert and nobody seems to have a simple explanation for it...
Secondly, I have always used the command posted on a FOG forum to correct this error:
sudo /etc/init.d/dnsmasq restart
This always worked perfectly; however, now when I try to run this command I get the message:
command not found.
Edit your /etc/init.d/dnsmasq (my Linux distribution is Debian 9, "stretch").
Change these lines:
ROOT_DS="/usr/share/dns/root.ds"
if [ -f $ROOT_DS ]; then
DNSMASQ_OPTS="$DNSMASQ_OPTS `sed -e s/". IN DS "/--trust-anchor=.,/ -e s/" "/,/g $ROOT_DS | tr '\n' ' '`"
fi
To :
ROOT_DS="/usr/share/dns/root.ds"
if [ -f $ROOT_DS ]; then
DNSMASQ_OPTS="$DNSMASQ_OPTS `sed -e s/".*IN[[:space:]]DS[[:space:]]"/--trust-anchor=.,/ -e s/"[[:space:]]"/,/g $ROOT_DS | tr '\n' ' '`"
fi
This problem occurs due to an update of the dns-root-data package, more precisely of the file /usr/share/dns/root.ds.
The structure of this file changed: the fields used to be separated by spaces and are now separated by tabs (\t).
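To see why the [[:space:]] classes fix it, here is the corrected sed expression run against a hypothetical tab-separated root.ds line (the key tag and digest below are made-up values):

```shell
# Newer dns-root-data format: fields separated by tabs instead of spaces.
printf '.\tIN\tDS\t12345 8 2 ABCD1234\n' > /tmp/root.ds.sample

# [[:space:]] matches tabs as well as spaces, so the rewrite still works:
sed -e s/".*IN[[:space:]]DS[[:space:]]"/--trust-anchor=.,/ \
    -e s/"[[:space:]]"/,/g /tmp/root.ds.sample
# → --trust-anchor=.,12345,8,2,ABCD1234
```

The original expression matched a literal ". IN DS " with spaces, which no longer appears in the tab-separated file, so dnsmasq ended up with an unparsed line and complained about "junk" on its command line.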
sudo service dnsmasq start
That worked for me
Try sudo restart dnsmasq. The /etc/init.d/ directory is the location of System V init scripts. If dnsmasq is not there, it has probably been converted to use Upstart, and its configuration is in /etc/init/.

Invocation command using SSH getting failed?

As per project requirements, I need to check the contents of a zip file generated on a remote machine. The entire activity is done by an automation framework written in shell scripts. I perform it using the ssh command, executing unzip with the -l and -q switches. But the command fails and shows the error messages below.
[SOMEUSER@MACHINE_IP Function]$ ./TESTS.sh
ssh SOMEUSER@MACHINE_IP unzip -l -q SOME_PATH/20130409060734*.zip | grep -i XML | wc -l
unzip: cannot find or open SOME_PATH/20130409060734*.zip, SOME_PATH/20130409060734*.zip.zip or SOME_PATH/20130409060734*.zip.ZIP.
No zipfiles found.
0
The same command works properly when I run it manually. I really have no idea why it fails when executed via the shell script.
[SOMEUSER@MACHINE_IP Function]$ ssh SOMEUSER@MACHINE_IP unzip -l -q SOME_PATH/20130409060734*.zip | grep -i XML | wc -l
2
Kindly help me to resolve that issue.
Thanks in Advance,
Priyank Shah
When you run the command from your local machine, the asterisk is expanded on your local machine before it is passed to ssh. So your command looks for SOME_PATH/20130409060734*.zip files on the local machine and inserts those names into the ssh command, whereas you (I'm assuming) mean the SOME_PATH/20130409060734*.zip files on the remote machine.
To avoid that, precede the * character with a backslash (\) and see if that helps. In some shells the escape character is defined differently; if yours is one of them, find that character and use it instead. Also, put quotes around the command being passed to the other server. Your command line should look something like this, in my opinion:
ssh SOMEUSER@MACHINE_IP "/usr/bin/unzip -l -q SOME_PATH/20130409060734\*.zip | grep -i XML | wc -l"
Hope this helps
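The local-versus-remote expansion is easy to demonstrate without ssh at all (the directory and file names below are made up):

```shell
# Create a scratch directory with two matching files.
d=$(mktemp -d) && cd "$d"
touch 20130409060734_a.zip 20130409060734_b.zip

echo unzip -l 20130409060734*.zip
# → unzip -l 20130409060734_a.zip 20130409060734_b.zip  (local shell expanded it)

echo unzip -l 20130409060734\*.zip
# → unzip -l 20130409060734*.zip  (escaped: the * survives for the remote side)
```

In the failing script the local machine had no matching files, so the glob was passed through literally and the remote unzip was asked to open a file literally named 20130409060734*.zip.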

Running Jconsole from a service: CentOS

I installed Tomcat on my CentOS 6.3 machine and made it a service by creating the /etc/init.d/tomcat file.
It works fine with the basic start, stop, restart and status functionality.
I use jconsole on the servers often, so I thought it would be nice to build this functionality into the service (by running service tomcat monitor) instead of having to run ps aux | grep java and then jconsole <Java PID>.
Here is my service script (Just the monitor section):
monitor)
# Check for Tomcat PID (greps are separated to prevent returning the single grep PID)
FOUND_PID=$(ps aux |grep $JAVA_HOME/bin/ | grep java |awk -F' ' '{print $2}')
if [[ $FOUND_PID ]]
then
echo -e $JAVA_HOME/bin/jconsole $FOUND_PID
$JAVA_HOME/bin/jconsole $FOUND_PID
else
echo -e "Failed: Tomcat is not currently running";
fi
;;
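As an aside, the split-grep trick in the script above (separating the greps so the grep process does not match itself) can also be written with pgrep; a sketch, assuming pgrep is installed and the java binary lives at $JAVA_HOME/bin/java:

```shell
# pgrep -f matches against the full command line and never matches itself,
# so no grep-filtering gymnastics are needed.
FOUND_PID=$(pgrep -f "$JAVA_HOME/bin/java")
```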
Everything inside the monitor section works when I run the bash script directly, but when the service calls it, it just hangs at the jconsole line and doesn't do anything.
And when I run service tomcat monitor, I do get the correct path output, so I know that the path is correct.
Is there a way to get the jconsole to work when called from the services script?

How do I execute a Perl program on a remote machine?

I wrote a Perl program to capture a live data stream from a tail command on a Linux machine, using the following command in the console:
tail -f xyz.log | myperl.pl
It works fine. But now I have to run this Perl program against a log file that lives on a different machine. Can anyone tell me how I can do that?
You could say
ssh remotemachine tail -f xyz.log | myperl.pl
I suppose, or mount the remote log directories locally onto your administrative machine and do the processing there.
Or you could even say
ssh remotemachine 'tail -f xyz.log | myperl.pl'
to run the whole pipeline on the remote machine (useful if your script produces output files and you want them on the remote machine). Note the quotes: they keep the pipe from being interpreted by your local shell.
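One quoting pitfall worth noting: quotes typed on the local command line are consumed by the local shell, and ssh then joins its remote-command arguments with single spaces. So a form like ssh remotemachine bash -c "tail -f xyz.log | myperl.pl" arrives remotely without the inner quotes, and bash -c ends up running only tail. A tiny simulation (remote_cmd is a stand-in for what the remote shell receives):

```shell
# Stand-in that joins its arguments with spaces, which is essentially what
# ssh does before handing the command string to the remote shell.
remote_cmd() { echo "$*"; }

remote_cmd bash -c "tail -f xyz.log | myperl.pl"
# → bash -c tail -f xyz.log | myperl.pl   (inner quotes are gone)

remote_cmd 'tail -f xyz.log | myperl.pl'
# → tail -f xyz.log | myperl.pl           (the pipeline stays intact)
```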
