How to sort lines multiple times? - linux

I have to figure out which physical disk number belongs to each device in an OmniOS (Solaris 10) storage array. I can get the list of devices by running
cfgadm -al | grep disk-path | cut -c 6-21 | tr 'a-z' 'A-Z'
where the output could look like
5000C5005CF65F14
5000C5004F30CC82
...
So my idea is to write a script where I dd each device, watch the LEDs, and then enter the number of the LED that flashed. As there are LEDs on both sides of the storage array, I need to be able to run the script multiple times, and once I have entered a disk location I shouldn't have to enter it again.
My current idea is to loop over the list of device names I get from the above command and then do something like this
system("dd if=/dev/dsk/c1t${device}d0p0 of=/dev/null bs=1k count=100");
print "which led flashed: ";
my $disk = <STDIN>;
chomp $disk;
system("echo $disk $device >> disk.sorted");
which would produce lines like these
21 5000C5005CF65F14
09 5000C5004F30CC82
...
where LED 21 flashed for the first device and LED 9 for the second. There are 70 disks.
My problem
I cannot come up with a good way to write a script that can be run multiple times, where each run does not destroy the values I entered in previous runs.
Any ideas how to do this?
I am prototyping it on Linux.

For each run of your script, write the output into a different file, say out.1, out.2, and so on. Afterwards run
sort -k 2 out.*
and you will have all results for one disk next to each other. sort sorts the contents of all the given files by the second column, which is the device ID.
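If you do not want to pick the file name by hand for every run, a minimal sketch (the out.N naming and the $disk/$device shell variables are assumptions for illustration, not part of the answer) could choose the next free file automatically:
# find the first unused out.N for this run
run=1
while [ -e "out.$run" ]; do run=$((run + 1)); done
out="out.$run"
# inside the identification loop, append each answer to this run's file
echo "$disk $device" >> "$out"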

Script 1
mkdir -p /tmp/a
mkdir -p /tmp/b
# /tmp/a: one empty marker file per device still to identify
# /tmp/b: answers, one file per led number, containing the device name
rm -f /tmp/a/*
rm -f /tmp/b/*
for f in $(cfgadm -al | grep disk-path | cut -c 6-21 | tr 'a-z' 'A-Z'); do touch /tmp/a/$f; done
Script 2
controller="c3t"
for f in /tmp/a/*; do
# flash this disk's led by reading from it
dd if=/dev/dsk/$controller${f##*/}d0p0 of=/dev/null bs=1k count=100
echo "Which led flashed? Press RET to skip to next"
read n
# record the answer and mark the disk as done
if [ -n "$n" ]; then echo "$controller${f##*/}" > "/tmp/b/$n" && rm -f "$f"; fi
done
cat /tmp/b/*
Script 3
# verification pass: flash each already-identified disk again
for f in /tmp/b/*; do
echo "$f"
dd if=/dev/dsk/$(cat "$f")d0p0 of=/dev/null bs=1k count=100
done
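Once all disks have been identified, a short listing step (my addition, not part of the scripts above) prints the final led-to-device mapping sorted by led number:
# each file in /tmp/b is named after a led and contains the device name
for f in /tmp/b/*; do
echo "${f##*/} $(cat "$f")"
done | sort -n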

Related

2 Linux scripts nearly identical. Variables getting confused between two different scripts

I have two scripts. The only difference between the two scripts is the log file name and the device IP address that it fetches the data from. The problem is that the log file being concatenated continuously gets mixed up and starts writing the contents of one device onto the log of the other. So one particular log file randomly switches from showing the data of one device to the other device.
Here is a sample of what it gets from the curl call.
{"method":"uploadsn","mac":"04786364933C","version":"1.35","server":"HT","SN":"267074DE","Data":[7.2]}
I'm 99% sure the issue is with the log variable. One script runs every 30 minutes and one script runs every 15 minutes, so I can tell by the date stamps that the issue is not from fetching from the wrong device, but from the concatenating of the files. It appears to concat the wrong file to the new file.
Here is the code of both.
#!/bin/bash
log="/scripts/cellar.log"
if [ ! -f "$log" ]
then
touch "$log"
fi
now=`date +%a,%m/%d/%Y#%I:%M%p`
json=$(curl -m 3 --user *****:***** "http://192.168.1.146/monitorjson" --silent --stderr -)
celsius=$(echo $json | cut -d "[" -f2 | cut -d "]" -f1)
temp=$(echo "scale=4; $celsius*1.8 + 32" | bc)
line=$(echo $now : $temp)
echo $line
echo $line | cat - $log > temp && mv temp $log | sed -n '1,192p' $log
and here is the second
#!/bin/bash
log="/scripts/gh.log"
if [ ! -f "$log" ]
then
touch "$log"
fi
now=`date +%a,%m/%d/%Y#%I:%M%p`
json=$(curl -m 3 --user *****:***** "http://192.168.1.145/monitorjson" --silent --stderr -)
celsius=$(echo $json | cut -d "[" -f2 | cut -d "]" -f1)
temp=$(echo "scale=4; $celsius*1.8 + 32" | bc)
line=$(echo $now : $temp)
#echo $line
echo $line | cat - $log > temp && mv temp $log | sed -n '1,192p' $log
Example of bad log file (shows contents of both devices when should only contain 1):
Mon,11/28/2022#03:30AM : 44.96
Mon,11/28/2022#03:00AM : 44.96
Mon,11/28/2022#02:30AM : 44.96
Tue,11/29/2022#02:15AM : 60.62
Tue,11/29/2022#02:00AM : 60.98
Tue,11/29/2022#01:45AM : 60.98
The problem is that you use "temp" as the filename for a temporary file in both scripts.
I'm not good at sed, but as I read it, you print only the first 192 lines of the logfile with your command. You don't need a temporary file for that.
First: logfiles are usually written from oldest to newest entry (top to bottom), so you probably want to view the 192 newest lines? Then you can make use of the >> output redirection to append your output to the file and use tail to get only the bottom of the file. If necessary, you could reverse that final output.
That last line of your script would then be replaced by:
sed -i '1i '"$line"'
192,$d' $log
This inserts the new entry before the first line and deletes input lines 192 onward in place, so the file stays at 192 entries, newest first, without any temporary file.
Further possible improvements:
Use a single script that gets URL and log filename as parameters (see the sketch after this list)
Use the usual log file order (newest entries appended at the end)
Don't truncate log files inside the script, but use logrotate to not exceed a certain filesize
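A minimal sketch of such a single parameterized script, applying the first two points (the script name templog.sh and the argument order are assumptions for illustration):
#!/bin/bash
# usage: templog.sh <monitor-url> <logfile>
url=$1
log=$2
now=$(date +%a,%m/%d/%Y#%I:%M%p)
json=$(curl -m 3 --user *****:***** "$url" --silent --stderr -)
celsius=$(echo "$json" | cut -d "[" -f2 | cut -d "]" -f1)
temp=$(echo "scale=4; $celsius*1.8 + 32" | bc)
# append the newest entry at the end; no temporary file needed
echo "$now : $temp" >> "$log"
Each device then gets its own scheduled call, e.g. templog.sh "http://192.168.1.146/monitorjson" /scripts/cellar.log.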

Trying to make a live /proc/ reader using bash script for live process monitoring

I'm trying to make a little side-project script to sit and monitor all of the /proc/ directories. For the most part I have the concept running and it works (to a degree). What I'm aiming for is to scan through all the /proc/ directories, cat their status files and pull out the appropriate info, and then run this in an infinite loop to give me live updates of when something starts running on and drops off of the scheduler. Right now, every time you run the script it prints 50+ blank lines, and every time it hits the proper regex it prints it correctly, but I'm aiming for it to not roll down the screen the way it does. Any help at all would be appreciated.
regex="[0-9]"
temp=""
for f in /proc/*; do
if [[ -d $f && $f =~ /proc/$regex ]]; then
output=$(cat $f/status | grep "^State") #> /dev/null
process_id=$(cut -b 7- <<< $f)
state=$(cut -b 10-19 <<< $output)
tabs 4
if [[ $state =~ "(running)" ]]; then
echo -e "$process_id:$state\n" | sort >> temp
fi
fi
done
cat temp
rm temp
To get the PID and state of all running processes, try:
awk -F':[[:space:]]*' '/^State:/{s=$2} /^Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status
To get this output updated every second:
while sleep 1; do awk -F':[[:space:]]*' '/^State:/{s=$2} /^Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status; done
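If the goal is a live display that refreshes in place rather than scrolling, one option (my addition, not part of the original answer) is to clear the screen before each pass:
while sleep 1; do clear; awk -F':[[:space:]]*' '/^State:/{s=$2} /^Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status; done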

Bash scripting wanting to find a size of a directory and if size is greater than x then do a task

I have put the following together from a couple of other articles, but it does not seem to be working. What I eventually want it to do is check the directory size and, if the directory has new content above a certain total size, let me know.
#!/bin/bash
file=private/videos/tv
minimumsize=2
actualsize=$(du -m "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
echo "nothing here to see"
else
echo "time to sync"
fi
this is the output:
./sync.sh: line 5: [: too many arguments
time to sync
I am new to bash scripting so thank you in advance.
The error:
[: too many arguments
seems to indicate that either $actualsize or $minimumsize is expanding to more than one argument.
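For example (numbers invented for illustration), if du -m prints one line per subdirectory, $actualsize expands to several words and the unquoted test effectively becomes
[ 10 20 -ge 2 ]
which is exactly what bash rejects with "[: too many arguments".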
Change your script as follows:
#!/bin/bash
set -x # Add this line.
file=private/videos/tv
minimumsize=2
actualsize=$(du -m "$file" | cut -f 1)
echo "[$actualsize] [$minimumsize]" # Add this line.
if [ $actualsize -ge $minimumsize ]; then
echo "nothing here to see"
else
echo "time to sync"
fi
The set -x will echo commands before attempting to execute them, something which assists greatly with debugging.
The echo "[$actualsize] [$minimumsize]" will assist in trying to establish whether these variables are badly formatted or not, before the attempted comparison.
If you do that, you'll no doubt find that the du -m command produces a lot of output, since it descends into subdirectories and gives you one line per directory.
If you want a single line of output for all the subdirectories aggregated, you have to use the -s flag as well:
actualsize=$(du -ms "$file" | cut -f 1)
If instead you don't want any of the subdirectories taken into account, you can take a slightly different approach, limiting the depth to one and tallying up all the sizes:
actualsize=$(find "$file" -maxdepth 1 -type f -print0 | xargs -0 ls -l | awk '{s += $5} END {print int(s/1024/1024)}')
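Putting the pieces together, the corrected script from the question (keeping its original messages and using the aggregated du from above) would look roughly like this:
#!/bin/bash
file=private/videos/tv
minimumsize=2
# -s aggregates subdirectories into a single line, so cut yields one number
actualsize=$(du -ms "$file" | cut -f 1)
if [ "$actualsize" -ge "$minimumsize" ]; then
echo "nothing here to see"
else
echo "time to sync"
fi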

linux redirect last line of dwdiff result to a file

I am trying to create a shell script that generates a report displaying the differences between the files of two folders/Java projects. I am using dwdiff, and I need only the last line from each comparison (I don't care about the individual differences, the percentage is what I need). I've created the following script:
DEST_FOLDER='./target/'
CLASSES_P1='classesP1.txt'
CLASSES_P2='classesP2.txt'
DEST_FILE='report.txt'
rm -r "$DEST_FOLDER"
mkdir -p "$DEST_FOLDER"
rm -f "$DEST_FOLDER/$CLASSES_P1"
rm -f "$DEST_FOLDER/$CLASSES_P2"
rm -f "$DEST_FOLDER/$DEST_FILE"
find ./p1 -name "*.java" >> "$DEST_FOLDER/$CLASSES_P1"
find ./p2 -name "*.java" >> "$DEST_FOLDER/$CLASSES_P2"
while read p; do
while read t; do
dwdiff -s $p $t | tail --lines=0 >> "$DEST_FOLDER/$DEST_FILE"
done < $DEST_FOLDER/$CLASSES_P2
done < $DEST_FOLDER/$CLASSES_P1
It almost works, but the results are not redirected to the given file. The file is created but stays empty, and the last line of each dwdiff result is displayed on the console instead. Any ideas?
There are a few things going on
The output you want is going to stderr, not stdout. You can merge them with 2>&1 on the dwdiff command
The output you want actually appears to be printed first by dwdiff, but you see a different order because the two output streams are interleaved. So you want head instead of tail.
You want 1 line, not 0
So try dwdiff -s ... 2>&1 | head --lines=1
$ dwdiff -s /etc/motd /etc/issue 2>&1 | head --lines=1
old: 67 words 3 4% common 2 2% deleted 62 92% changed
Alternatively, if you want the new line instead of the old, and to simplify the ordering, try throwing away the diff output:
$ dwdiff -s /etc/motd /etc/issue 2>&1 1>/dev/null | tail --lines=1
new: 5 words 3 60% common 0 0% inserted 2 40% changed
Note that the order of redirections is important: first point stderr at stdout's current destination (2>&1), then redirect stdout to /dev/null.
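Applied to the script in the question, the line inside the nested loops would then become something like this (a sketch, keeping the original variable names):
dwdiff -s "$p" "$t" 2>&1 | head --lines=1 >> "$DEST_FOLDER/$DEST_FILE"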

Processing file with xargs for concurrency

There is an input like:
folder1
folder2
folder3
...
foldern
I would like to iterate over it, taking multiple lines at once, and process each line: remove the first / (and more later, but for now this is enough) and echo it. Iterating over it in bash with a single thread can be slow sometimes. The alternative would be to split the input file into N pieces and run the same script with different input and output N times, merging the results at the end.
I was wondering if this is possible with xargs.
Update 1:
Input:
/a/b/c
/d/f/e
/h/i/j
Output:
mkdir a/b/c
mkdir d/f/e
mkdir h/i/j
Script:
for i in $(<test); do
echo mkdir $(echo $i | sed 's/\///') ;
done
Doing it with xargs does not work as I would expect:
xargs -a test -I line --max-procs=2 echo mkdir $(echo $line | sed 's/\///')
Obviously I need a way to execute the sed on the input for each line, but using $() does not work.
You probably want:
--max-procs=max-procs, -P max-procs
Run up to max-procs processes at a time; the default is 1. If
max-procs is 0, xargs will run as many processes as possible at
a time. Use the -n option with -P; otherwise chances are that
only one exec will be done.
http://unixhelp.ed.ac.uk/CGI/man-cgi?xargs
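Combining -P with -n and a small shell wrapper gives a plain-xargs version of the loop (a sketch; it assumes the paths contain no embedded whitespace and that only the leading slash needs stripping):
xargs -a test -P 2 -n 1 sh -c 'echo mkdir "${1#/}"' _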
With GNU Parallel you can do:
cat file | perl -pe s:/:: | parallel mkdir -p
or:
cat file | parallel mkdir -p {= s:/:: =}
