How to list the machine name along with the list of running programs in Linux?

I have a shell script to check if Firefox is running on my Linux machine, like:
ps -ef|grep firefox
This will list all the instances of Firefox running on my machine, showing their PIDs, so that I can manually kill them. My question is: is it possible to also display the machine name in this list? If there are multiple instances, each line should contain the machine name (or IP) as well. In my shell script, I did something like:
hostname
ps -ef|grep firefox
which prints the hostname once, with the instances listed below it one by one. How can I print the machine name (or IP) along with each line?

Like this:
ps -ef | egrep '[/ ]firefox' | sed "s/^/$(hostname -s) : /"

This will do it:
ps -ef | grep [f]irefox | xargs -I{} echo "$(hostname) {}"
Notice the brackets around 'f' in firefox. This prevents your grep command from showing up in the results.
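A quick way to see the difference the brackets make (a sketch; the exact counts depend on what is running on your machine):
ps -ef | grep firefox | wc -l
ps -ef | grep [f]irefox | wc -l
The first count includes the grep process itself; the second counts only real firefox processes, because the string "grep [f]irefox" does not match the pattern.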

Related

Checking information of different computers in a network using a bash script

I am trying to write a bash script that reads a file (nodeNames) containing the IP addresses of the computers in a cluster network, sshes into each of these computers, and outputs some basic information, namely: hostname, host IP address, load average, and the process using the most memory, appending all of this information to a file with each field separated by commas. All of the computers have the same user and password. This is my code so far, but it isn't working; please, I need help here.
egrep -ve '^#|^$'nodeNames | while read a
do
ssh $a "$@" &
output1=`hostname`
#This will display the server's IP address
output2=`hostname -i`
#This will output the server's load average
output3=`uptime | grep -oP '(?<=average:).*'| tr -d ','`
#This outputs memory Information
output4=`ps aux --sort=-%mem | awk 'NR<=1{print $0}'`
#This concatenates all output into a single line of text written to the output file
echo "$output1, $output2, $output3, $output4" | tee clusterNodeInfo
done
You need to understand what is executed on which computer. The shell script you start is executed on your host A, but you want information from your host B. ssh $a "$@" & will not suddenly make all of the following commands execute on the remote host B. Therefore, the
output1=`hostname`
will be executed on host A and output1 will have the hostname of host A.
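To make the distinction concrete (hostB is a placeholder for one of the addresses in nodeNames):
hostname
ssh hostB hostname
The first command runs locally and prints host A's name; the second runs hostname on host B and prints host B's name.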
You may also want to put the tee outside the loop or use tee -a to prevent overwriting your output file.
For bash, use $() instead of backticks.
So, that would make your script:
egrep -ve '^#|^$' nodeNames | while read a
do
output1=$(ssh $a hostname)
#This will display the server's IP address
output2=$(ssh $a hostname -i)
#This will output the server's load average
output3=$(ssh $a "uptime | grep -oP '(?<=average:).*'| tr -d ','")
#This outputs memory Information
output4=$(ssh $a "ps aux --sort=-%mem | awk 'NR<=1{print \$0}'")
#This concatenates all output into a single line of text written to the output file
echo "$output1, $output2, $output3, $output4" | tee -a clusterNodeInfo
done
(have not tested it, but it should be something like this)
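As a variation on the same idea, the whole collection step can be pushed into a single ssh call per host, so each node is contacted only once. This is an untested sketch, and the quoting of the remote command (single quotes around everything that must run remotely) is the delicate part:
egrep -ve '^#|^$' nodeNames | while read a
do
ssh "$a" 'echo "$(hostname), $(hostname -i), $(uptime | grep -oP "(?<=average:).*" | tr -d ","), $(ps aux --sort=-%mem | head -1)"'
done | tee clusterNodeInfo
Here head -1 is equivalent to the awk 'NR<=1{print $0}' above, and piping the loop into tee once avoids overwriting the output file on every iteration.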

Assign output of command to environment variable different from original output (bash)

I have encountered a problem with a script I originally designed.
I am trying to get the number of lines a command displays, and if the number is bigger than a value, something should happen.
My problem is that this originally worked fine, but now it doesn't.
In my script I am using the following command:
NO_LINES=$(ps -ef | grep "sh monitor.sh" | wc -l)
echo $NO_LINES
echo $NO_LINES prints 0 even though it should print 1, the line for the grep command.
If I execute the command separately (not assigning the result to an environment variable) like this
ps -ef | grep "sh monitor.sh" | wc -l
This will print out 1, which is the correct result.
Why is it that, by assigning the result to the variable, the value is lower by 1 than the original result?
The bash version of the machine is 4.3.46(1)-release.
Thanks
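One way to make the count independent of whether grep matches its own line is the bracket trick mentioned in an earlier answer; a sketch (assuming the processes you care about really appear as "sh monitor.sh" in the ps output):
NO_LINES=$(ps -ef | grep -c "[s]h monitor.sh")
echo $NO_LINES
Here grep -c replaces the separate wc -l, and the pattern [s]h never matches the grep command's own line.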

A script that places the HDD location (e.g. sda) into a variable

What I'm trying to do via the command line is have a script take a newly connected HDD and put its device location (e.g. sda, sdb, sdc, etc.) into a variable I can use.
I've tried:
tail -f /var/log/messages | grep GB/
which picks out the line containing "GB/", which has the device location.
But I cannot manipulate that line further using sed or anything equivalent, as I don't know how to exit the command above once it has found the most recent, relevant information, and I also can't get that information into a position where I can manipulate it.
I have tried > and >> to output to a file, but that didn't work, and I have also tried putting the above code in brackets and redirecting that, which also didn't work.
It's not clear when you would be doing this, like if it's just after you "know" that a new HDD has been connected.
What you could do is capture the output of "ls -1d /sys/block/sd*" before, and then again after, and diff them, which would give the added device.
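A minimal sketch of that before/after approach (it assumes the script can simply wait for you to plug the drive in between the two listings):
before=$(ls -1d /sys/block/sd*)
read -p "Connect the new drive, then press Enter " _
after=$(ls -1d /sys/block/sd*)
HDD=$(comm -13 <(echo "$before") <(echo "$after") | xargs -r -n1 basename)
echo $HDD
comm -13 prints only the entries that are new in the second listing, and basename strips the /sys/block/ prefix, leaving just the device name such as sdb.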
This works on my machine,
$ HDD=$(dmesg | grep blocks | cut -f3 -d\[ | cut -f1 -d\] | tail -n1)
$ echo $HDD
sdk

Is it possible to pipe the output of a command from a server to a local machine?

I have a series of functionally identical servers provided by my school that run various OS and hardware configurations. For the most part, I can use 5 of these interchangeably. Unfortunately, other students tend to bunch up on some machines, and it's a pain to find one that isn't bogged down.
What I want to do is ssh into a machine and run the command:
w | wc -l
to get a rough estimate of the load on that server, and use that information to select the least impacted one. A sort of client-side load balancer.
Is there a way to do this or achieve the same result?
I'd put this in your .bashrc file:
function choose_host(){
hosts="host1 ... hostn"
for host in $hosts
do
echo $(ssh $host 'w|wc -l') $host
done | sort -n | head -1 | awk '{print $2}'
}
function ssh_host(){
ssh $(choose_host)
}
choose_host should give you the one you're looking for. This is absolutely overkill, but I was feeling playful :D
sort -n orders the output numerically by the result of w|wc -l, then head -1 takes the first line and awk just prints the hostname!
You can call ssh_host and it should log you in automatically.
You can use the pdsh command from your desktop, which runs the specified command on the set of machines you specify and returns the results. That way you can find the one which is least loaded, without having to ssh into every single machine and run w | wc -l yourself.
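For example, something like this (the host list is a placeholder; pdsh prefixes each output line with the host it came from, so the counts are easy to compare):
pdsh -w host1,host2,host3 'w | wc -l'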
Yes. See e.g.:
ssh me@host "ls /etc | sort" | wc -l
The part inside the double quotes is executed remotely. The part after that runs locally.
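Applied to the load check from the question, the placement of the quotes decides where each stage runs (me@host is a placeholder):
ssh me@host "w | wc -l"
ssh me@host "w" | wc -l
In the first line both w and wc run on the remote machine; in the second, w runs remotely and wc -l counts the lines locally.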

Bash command substitution with a variable

I'm new to bash scripting and I've been learning as I go with a small project I'm taking on. However, I've run into a problem that I cannot seem to get past.
I have a variable that I need to include in a command. When run directly in the shell (with the variable's value typed manually), the command returns the expected result. However, I can't get it to work when using a variable.
So, if I manually run this, it correctly returns 0 or 1, depending on whether the process is running or not.
ps -ef | grep -v grep | grep -c ProcessName
However, when I try to embed that into this while clause, it always evaluates to 0 because it's not searching for the correct text.
while [ `ps -ef | grep -v grep | grep -c {$1}` -ne 0 ]
do
sleep 5
done
Is there a way I can accomplish this? I've tried a myriad of different things to no avail. I also tried using the $() syntax for command substitution, but I had no luck with that either.
Thanks!
I think that instead of {$1} you mean "$1". Also, you can just do pgrep -c "$1" instead of the two pipes.
In addition, there's also no need to compare the output of grep -c with 0, since you can just see if the command failed or not. So, a much simplified version might be:
while pgrep "$1" > /dev/null
do
sleep 4
done
You should really use -C with ps rather than the messy pipes if you're using the full process name. If you're interested in substring matching, then your way is the only thing I can think of.
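A sketch of the -C variant (assuming $1 is the exact command name rather than a substring; --no-headers drops the column header so only real process lines are counted):
while [ "$(ps -C "$1" --no-headers | wc -l)" -ne 0 ]
do
sleep 5
done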
