Run a shell script with MAC address validation - Linux

Can we run a shell script that first validates the system MAC address and only then proceeds?
#!/bin/bash
### here I need to check the system MAC address
### if the MAC does not match, the script must not run
killall gnome-terminal
echo "End Of Day Session Complete"
Please suggest.

Why not? First, get the MAC address of your system and store it in a variable in your shell script or in a config file. Every time you run the script, read the system's MAC address and compare it with the MAC already stored: if they match, continue; if not, just exit. Use the ifconfig -a command to get the MAC address of your system.

A hardware address can be obtained like this:
LC_ALL=C ifconfig | egrep "HWaddr" | awk '{print $5}'
but note:
there might be more than one interface with a mac
on many systems, the mac can be spoofed
To stop the session, just exit. Don't killall, and if you do, don't expect later echo messages to be seen by anyone.
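Putting those cautions together, here is a minimal sketch of the gating logic, assuming a placeholder whitelist MAC and reading /sys/class/net directly (so it needs neither ifconfig nor ip):

```shell
#!/bin/bash
# Whitelisted MAC -- a placeholder value, replace with your machine's address.
ALLOWED_MAC="00:11:22:33:44:55"

# Return success if any interface's MAC matches ALLOWED_MAC.
mac_matches() {
    local f mac
    for f in /sys/class/net/*/address; do
        [ -r "$f" ] || continue
        read -r mac < "$f"
        [ "$mac" = "$ALLOWED_MAC" ] && return 0
    done
    return 1
}

if mac_matches; then
    killall gnome-terminal
    echo "End Of Day Session Complete"
else
    echo "MAC not authorized; exiting." >&2
fi
```

Remember the caveats above: a machine can have several interfaces, and MACs can be spoofed, so this is access discouragement rather than real security.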

As you want to run your code on multiple systems, you could use an associative array to store the MAC addresses of the systems where the script is allowed to run:
declare -A valid_mac
valid_mac["00:00:00:00:00:00"]=1
valid_mac["00:00:00:00:00:01"]=1
valid_mac["00:00:00:00:00:02"]=1
Your script's job is to terminate all gnome-terminal sessions on each machine where it is run, provided the MAC address is valid.
Let's define a function
function kill_all_terms(){
killall gnome-terminal
echo "End Of Day Session Complete"
}
Now on each machine you need the list of that machine's MAC addresses. There are many ways to do it. The function below lists the MAC addresses of all IPv4 interfaces using the shell's string parsing capabilities:
function macs(){
local macs line l1
while read line ; do
l1="${line#*link/ether }" ;
macs="${l1% brd*} ${macs}" ;
done < <( ip -4 -o link ) ;
echo "${macs}"
}
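The two parameter expansions can be sanity-checked against a canned ip -o link line (the interface name and address below are made up):

```shell
# A sample line in the format produced by `ip -o link`
line='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff'

# Strip everything up to and including "link/ether ", then everything from " brd" on
l1="${line#*link/ether }"
mac="${l1% brd*}"

echo "$mac"    # aa:bb:cc:dd:ee:ff
```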
Now we can define the main loop
function main() {
for mac in $(macs) ; do
if [ -n "${valid_mac[$mac]}" ] ; then
kill_all_terms
break
fi
done
}
Finally, just call main:
main "$@"

Related

Collecting system data from Red Hat Linux

I am planning to write a small program. The input should be the IP, username and password of a Linux machine, and it should give me the system details of that machine as output.
I am planning to write this in shell, using RSH for the login. I am in no way asking for a solution, but could you please point me towards the other options I have? I am not really comfortable with shell scripts.
Thanks in advance
I had the same requirement, and what I did was:
first, write a script that will be executed on the target host (T), something like this:
> cat check_server.sh
#!/usr/bin/env bash
# execute at target host
all_cmd=(
"uname -a"
"lscpu"
"free -m"
)
function _check {
for one_cmd in "${all_cmd[@]}"; do
echo -e "\n\n$one_cmd" >> /tmp/server_info.txt
eval "$one_cmd" >> /tmp/server_info.txt
done
}
then execute it on the target and copy the result back, like this:
_cmd=`base64 -w0 check_server.sh`
ssh $user@$ip "echo $_cmd | base64 -d | bash"
scp $user@$ip:/tmp/server_info.txt ./
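The base64 trick is easy to verify locally before involving ssh at all; this sketch assumes GNU coreutils base64 (for -w0) and uses a stand-in script in place of the real check_server.sh:

```shell
# Create a stand-in for check_server.sh
printf '%s\n' '#!/usr/bin/env bash' 'echo remote-ok' > check_server.sh

# Encode, then decode-and-run exactly as the ssh command would on the target
_cmd=$(base64 -w0 check_server.sh)
out=$(echo "$_cmd" | base64 -d | bash)

echo "$out"    # remote-ok
```

Encoding the script avoids all quoting problems when passing it through the remote shell.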

Detect IP-Address change on an interface

I would like to trigger a service when a change of an ip address on a specific interface occurs. Is there a target for this or some other method I am not aware of to achieve this using systemd on Linux (Kernel 3.19)?
The service would be used to send a SIGNAL to a defined process. The Linux is running on an embedded system.
Thanks!
Because you use systemd, you might already be using systemd-networkd for managing your devices instead of relying on third-party code.
You could use the structured journal output to get the last two ADDRESS fields of the current BOOT_ID (sadly, there is no notification mechanism for address changes in systemd-networkd):
→ sudo journalctl -F ADDRESS -u systemd-networkd -n 2
192.168.178.29
So, if there is only one line output, there was no address change.
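The "one line means no change" test can be scripted; here the journalctl output is faked with a here-string so the counting logic itself is visible (on a real systemd machine, swap in the journalctl command from the comment):

```shell
# On a real host: addrs=$(sudo journalctl -F ADDRESS -u systemd-networkd -n 2)
# Faked here with two addresses, i.e. an address change has happened:
addrs=$'192.168.178.29\n192.168.178.40'

n=$(printf '%s\n' "$addrs" | wc -l)
if [ "$n" -gt 1 ]; then
    echo "address changed"
else
    echo "no change"
fi
```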
There is a solution in another Stack Overflow question:
Detecting a change of IP address in Linux
I like this code; it's simple, and you only need a cron job at whatever frequency you need (I made a small change):
#!/bin/bash
OLD_IP=`cat ip.txt`
NEW_IP=`/sbin/ifconfig | awk -F "[: ]+" '/inet addr/{ print $4 }'`
if [ "$NEW_IP" != "$OLD_IP" ]; then
    YOUR_COMMAND <commands>
    echo "$NEW_IP" > ip.txt
fi
exit 0
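A slightly more defensive variant of the same idea, with the ifconfig call stubbed out so the change-detection logic can be run anywhere (the ip.txt state file follows the answer above; on a real host you would set new_ip from ifconfig or hostname -I):

```shell
#!/bin/bash
state=./ip.txt

# On a real host: new_ip=$(hostname -I | awk '{print $1}')
new_ip="192.168.1.50"                    # faked for the demo

old_ip=$(cat "$state" 2>/dev/null)       # empty on the first run
if [ "$new_ip" != "$old_ip" ]; then
    echo "IP changed from '${old_ip:-none}' to $new_ip"
    echo "$new_ip" > "$state"            # remember for the next run
fi
```

Run it twice: the second run finds the stored address unchanged and stays silent, which is exactly what you want from a cron job.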

Getting stty: standard input: Inappropriate ioctl for device when using scp through an ssh tunnel

Per the title, I'm getting the following warning when I try to scp through an ssh tunnel. In my case, I cannot scp directly to foo because port 1234 on device foo is being forwarded to another machine bar on a private network (and bar is the machine that is giving me a tunnel to 192.168.1.23).
$ # -f and -N don't matter and are only to run this example in one terminal
$ ssh -f -N -p 1234 userA@foo -L3333:192.168.1.23:22
$ scp -P 3333 foo.py ubuntu@localhost:
ubuntu@localhost's password:
stty: standard input: Inappropriate ioctl for device
foo.py 100% 1829 1.8KB/s 00:00
Does anyone know why I might be getting this warning about Inappropriate ioctl for device?
I got the exact same problem when I included the following line in my ~/.bashrc:
stty -ixon
The purpose of this line is to allow the use of Ctrl-s in bash's reverse history search.
This gmane link has a solution: (original link dead) => Web Archive version of gmane link
'stty' applies to ttys, which you have for interactive login sessions.
.kshrc is executed for all sessions, including ones where stdin isn't
a tty. The solution, other than moving it to your .profile, is to
make execution conditional on it being an interactive shell.
There are several ways to check for an interactive shell. The following solves the problem for bash:
[[ $- == *i* ]] && stty -ixon
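You can confirm that the $- guard really skips non-interactive shells: bash -c runs non-interactively, so its option-flag string contains no "i" and the stty call would never execute there:

```shell
# $- holds the shell's option flags; interactive shells include "i".
flags=$(bash -c 'echo $-')

case "$flags" in
    *i*) echo "interactive shell" ;;
    *)   echo "non-interactive shell" ;;   # this branch fires under bash -c
esac
```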
I got the same issue while executing a script remotely. After many tries I had no luck resolving the error; then I found an article about running a shell script through ssh. The issue is related to ssh, not to any other command: use ssh -t "command". The -t flag allocates a pseudo-TTY for the ssh session, and the error won't appear.
In the end I created a blank .cshrc file (for Ubuntu 18.04) and it worked.

Parallel SSH with Custom Parameters to Each Host

There are plenty of threads and documentation about parallel ssh, but I can't find anything on passing custom parameters to each host. Using pssh as an example, the hosts file is defined as:
111.111.111.111
222.222.222.222
However, I want to pass custom parameters to each host via a shell script, like this:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Or, better, the hosts and parameters would be split between 2 files.
Because this isn't common, is this misuse of parallel ssh? Should I just create many ssh processes from my script? How should I approach this?
You could use GNU parallel.
Suppose you have a file argfile:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Then running
parallel --colsep ' ' ssh {1} prog {2} {3} ... :::: argfile
would run prog on each host with the corresponding parameters. It is important that the number of parameters be the same for each host.
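If GNU parallel is not available, the same argfile can be consumed with a plain while/read loop; here the ssh invocation is replaced by a local echo so the column splitting is visible (in real use you would background the ssh calls and wait):

```shell
# argfile: one host per line, parameters after it
printf '%s\n' \
    '111.111.111.111 param1a param1b' \
    '222.222.222.222 param2a param2b' > argfile

# read splits on whitespace; "rest" soaks up any extra parameters
while read -r host p1 p2 rest; do
    # in real use: ssh "$host" prog "$p1" "$p2" &
    echo "$host -> $p1 $p2"
done < argfile
```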
Here is a solution that you can use, after tailoring it to suit your needs:
#!/bin/bash
#filename: example.sh
#usage: ./example.sh <par1> <par2> <par3> ... <par6>
#set your ip addresses
traf1=1.1.1.1
traf2=2.2.2.2
traf3=3.3.3.3
#set some custom parameters for your scripts and use them as you wish.
#In this example, I use the first 6 command-line parameters passed when running example.sh
ssh -T $traf1 -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null &
echo "Fetching data from traffic server 2..."
ssh -T $traf2 -l username "/export/home/path/to/script.sh $3 $4" 1> traf2.txt 2>/dev/null &
echo "Fetching data from traffic server 3..."
ssh -T $traf3 -l username "/export/home/path/to/script.sh $5 $6" 1> traf3.txt 2>/dev/null &
#your script will block on this line and will only continue once all
#3 remotely executed scripts have completed
wait
Keep in mind that the above requires passwordless login set up between the machines; otherwise the script will stop and prompt for a password.
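The fan-out-then-wait pattern works the same with any background jobs, so it can be tried without three servers; here short sleeps stand in for the three ssh invocations:

```shell
# Three background jobs stand in for the three ssh commands
( sleep 0.2; echo one   > traf1.txt ) &
( sleep 0.1; echo two   > traf2.txt ) &
( sleep 0.3; echo three > traf3.txt ) &

wait    # blocks until all three background jobs have finished
echo "all fetches done"
```

Because each job writes to its own file, there is no interleaving problem, and wait guarantees every file is complete before the script continues.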
If you can use Perl:
use Net::OpenSSH::Parallel;
use Data::Dumper;
my $pssh = Net::OpenSSH::Parallel->new;
$pssh->add_host('111.111.111.111');
$pssh->add_host('222.222.222.222');
$pssh->push('111.111.111.111', $cmd, $param11, $param12);
$pssh->push('222.222.222.222', $cmd, $param21, $param22);
$pssh->run;
if (my %errors = $pssh->get_errors) {
print STDERR "ssh errors:\n", Dumper \%errors;
}

using a variable in a BASH command?

I have 20 machines, each running a process. The machines are named:
["machine1", "machine2", ...., "machine20"]
To inspect how the process is doing on machine1, I issue the following command from a remote machine:
ssh machine1 cat log.txt
For machine2, I issue the following command:
ssh machine2 cat log.txt
Similarly, for machine20, I issue the following command:
ssh machine20 cat log.txt
Is there a bash command that will allow me to view the output from all machines using one command?
If the machines are nicely numbered like in your example:
for i in {1..20} ; do ssh machine$i cat log.txt; done
If you have the list of machines in a file, you can use:
cat machinesList.txt | xargs -I{} ssh -n {} cat log.txt
You could store all your machine names in an array or text file, and loop through it.
declare -a machineList=('host1' 'host2' 'otherHost') # and more...
for machine in "${machineList[@]}"
do
ssh $machine cat log.txt
done
I assume your machines aren't literally named 'machine1', 'machine2', etc.
Some links:
bash Array Tutorial
GNU Bash Array Documentation
for i in {1..20}
do
ssh machine$i cat log.txt
done
Use a loop?
for i in {1..20}
do
ssh machine$i cat log.txt
done
But note that you're running cat within a remote shell session, not the current one, so this might not quite work as you expect. Try it and see.
Put your hosts in a file and use a while loop as shown below. Note the use of the -n flag on ssh:
while read host; do ssh -n $host cat log.txt; done < hosts-file
Alternatively you can use PSSH:
pssh -h hosts-file -i "cat log.txt"
I would recommend a program called Shmux. Despite the name, it works really well. I've used it with more than 100 machines with good results. It also gracefully handles machine failures for you, which can be awkward with a bash for loop approach.
I think the coolest thing about this program is its ability to run your commands in multiple threads, which lets you hit all 20 machines in parallel.
Aside from the suggestions for using a loop, you might want to take a look at tools, like pssh or dsh, designed for running commands on multiple clients.
