A script that places HDD location (e.g. sda) into a variable - Linux

What I'm trying to do via the command line is have a script take a newly connected HDD and put its device location (e.g. sda, sdb, sdc, etc.) into a variable I can use.
I've tried:
tail -f /var/log/messages | grep GB/
which grabs the line containing "GB/", which includes the device location.
But I can't manipulate that line with sed or anything equivalent, because I don't know how to exit the command above once it has found the most recent, relevant information, and I also can't get that information into a position where I can manipulate it.
I have tried > and >> to redirect to a file, but that didn't work, and I have also tried putting the above command in brackets and redirecting that, which also didn't work.

It's not clear when you would be doing this, e.g. whether it's just after you "know" that a new HDD has been connected.
What you could do is capture the output of "ls -1d /sys/block/sd*" before, and then again after, and diff them, which would give the added device.
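A minimal sketch of that idea (assuming bash, and that the disk shows up as /sys/block/sd*):
before=$(ls -1d /sys/block/sd*)
read -p "Connect the drive, then press Enter... "
after=$(ls -1d /sys/block/sd*)
# comm -13 keeps lines unique to the second (sorted) listing; sed strips the path
HDD=$(comm -13 <(echo "$before") <(echo "$after") | sed 's|.*/||')
echo "$HDD"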

This works on my machine:
$ HDD=$(dmesg | grep blocks | cut -f3 -d\[ | cut -f1 -d\] | tail -n1)
$ echo $HDD
sdk

Related

referencing stdout in a command that has been piped into

I want to make a simple dmenu command that reads a file of commands and names, displays the names using dmenu, then takes dmenu's output and uses the file again to run the associated command.
I got to the point where dmenu displays the names, but I don't really know where to go from there. Learning bash is a really daunting task to me and I don't really know where to start with this seemingly simple script/command.
here is the file:
Pushbullet
google-chrome-stable --app=https://www.pushbullet.com
Steam
steam
Chrome
google-chrome-stable
Libre Office
libreoffice
Transmission
transmission-qt
Audio Control Panel
sudo pavucontrol & bluberry
and here is what I have so far for my command:
awk 'NR % 2 != 0' /home/rocco/programlist | dmenu | ??(grep -l "stdout" /home/rocco/programlist....)
My thinking was that I could somehow pipe the name of the application into grep or awk, get its line number, add one, and pipe that line into sh.
Thanks
I have no experience with dmenu, but if I understand how it works correctly, this should do what you want. Wrapping a command in $(…) captures its output in a variable, which we can pass on to another command.
#!/bin/bash
plist="/home/rocco/programlist"
# pipe every second line (the names) to dmenu
selected=$(awk 'NR % 2 != 0' "$plist" | dmenu)
# match the selected name exactly (-x -F avoids partial matches), then take the command on the next line
cmd=$(grep -x -F -A1 "$selected" "$plist" | tail -n 1)
# run the command (left unquoted so its arguments are word-split)
$cmd
Worth mentioning a mistake in your question: dmenu writes to stdout (standard output), while the next program in the pipeline reads stdin (standard input). In any case, grep takes its pattern as an argument (or from a file via -f), not from standard input, which is why I've saved the selection in a variable instead of trying to pipe it somewhere.
Assuming you have programlist.txt in the working directory, you can use:
awk 'NR%2 !=0' programlist.txt |dmenu |awk '{system("grep --no-group-separator -A 1 '"'"'"$0"'"'"' programlist.txt");}' |awk '{if(NR==2){system($0);}}'
Note the quoting of the $0 in the first awk invocation. This is necessary to handle names with spaces in them, like "Libre Office".
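If that quoting gets too unwieldy, here is an untested sketch of the same idea (not from the original answer) that passes the selection in with awk -v instead:
sel=$(awk 'NR % 2 != 0' programlist.txt | dmenu)
# match the selected name exactly, then read and run the following line
awk -v sel="$sel" '$0 == sel { getline; system($0) }' programlist.txt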

Continuous grep, output at same spot on console

I use
tail -f file | grep pattern
all the time for continuous grep.
However, is there a way I can make grep print its matches at the same spot, say at the top of the screen, so that the screen doesn't scroll all the time?
My case is something like this: tail -f log_file | grep Status -A 2 will show the current status and what changed it to that status. The problem is that the screen scrolls, which becomes annoying. I'd rather have the output stuck on the first 3 lines of the screen.
Thank you!
You could use the watch command, which executes the same command repeatedly while the position on the screen stays the same. The process might eat some more CPU or memory, though:
watch "tail file | grep pattern"
By default, watch executes the command every 2 seconds. You can adjust the interval down to 0.1 seconds using:
watch -n 0.1
NOTE
As noted by @etanReisner: this is not exactly the same as tail -f. tail -f reacts immediately when something is added to your logfile, whereas watch only notices the change when it executes, i.e. every 2 (or 0.1) seconds.
Assuming you are using a vt100-compatible emulator...
This command will tail a file, pipe it into grep, read it a line at a time and then display it in reverse video on the top line of the screen:
TOSL=$(tput sc;tput cup 0 0;tput rev;tput el)
FROMSL=$(tput sgr0; tput rc)
tail -f file | grep --line-buffered pattern | while IFS= read -r line
do
echo -n "$TOSL${line}$FROMSL"
done
It assumes your output appears a line at a time. If you want more than one line, you can read more than a line, but you have to decide how you want to buffer the output. You could also use the csr terminfo command to set up an entire separate scrolling region instead of just having one line.
Here is the scrolling-region version with a ten-line status area at the top:
TOSL=$(tput sc; tput csr 0 10; tput cup 10 0;tput rev;tput el)
FROMSL=$(tput sgr0; tput rc;tput csr 10 50;tput rc)
tail -f file | grep --line-buffered pattern | while IFS= read -r line
do
echo -n "$TOSL${line}
$FROMSL"
done
Note that your display may get corrupted from time to time, since the output from your main shell and your background task can get mixed up.
Simply replace the newlines with carriage returns.
tail -f file | grep --line-buffered whatever | tr '\012' '\015'
The line buffering is to avoid jumpy output; see http://mywiki.wooledge.org/BashFAQ/009
This is quick and dirty. As noted in comments, this will leave the previous contents of the line underneath, so a shorter line will not completely overlay a longer line. You could add some control codes to address that, but then you might as well use Curses for the formatting too, like in rghome's answer.
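For completeness, a rough sketch of that control-code fix (assuming a terminfo-aware terminal; tput el clears from the cursor to the end of the line):
EL=$(tput el)
tail -f file | grep --line-buffered pattern | while IFS= read -r line
do
printf '\r%s%s' "$line" "$EL"
done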

Redirecting Linux output to a variable and the screen in a script

I am currently trying to make a script file that runs multiple other script files on a server. I would like to display the output of these scripts to the screen IN ADDITION to passing it into grep so I can do error testing. Currently I have written this:
status=$(SOMEPROCESS | grep -i "SOMEPROCESS started completed correctly")
I do further error handling below this using the variable status, so I would like to display SOMEPROCESS's output to the screen for error reference. This is a read-only server and I cannot save the output to a log file.
You need to use the tee command. It will be slightly fiddly, since tee writes to a file rather than to another command. However, you could create a file descriptor using a pipe.
Or, simpler for your use case:
Start the script without grep and pipe it through tee: SOMEPROCESS | tee /my/safely/generated/filename. Then run tail -f /my/safely/generated/filename | grep -i "my grep pattern" separately.
You can use process substitution together with tee:
SOMEPROCESS | tee >(grep ...)
This will use an anonymous pipe and pass /dev/fd/... as file name to tee (or a named pipe on platforms that don't support /dev/fd/...).
Because SOMEPROCESS is likely to buffer its output when not talking to a terminal, you might see significant lag in screen output.
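For the question's concrete case, a hedged variant that sends a copy to the terminal via /dev/tty while grep's result lands in the variable (this assumes the script is run from a terminal):
status=$(SOMEPROCESS | tee /dev/tty | grep -i "SOMEPROCESS started completed correctly")
If the buffering mentioned above is a problem, prefixing the command with stdbuf -oL (GNU coreutils) can force line buffering for programs that use stdio.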
I'm not sure whether I understood your question exactly.
I think you want to get the output of SOMEPROCESS, test it, and print it out when there are errors. If so, I think the code below may help you:
s=$(SOMEPROCESS)
grep -q 'SOMEPROCESS started completed correctly' <<< "$s"
if [[ $? -ne 0 ]]; then
    # expected string not found in the output: SOMEPROCESS did not start correctly
    echo "$s"
fi
But note that this code stores the whole output in memory; if the output is big enough, there is an OOM risk.

Is it possible to pipe the output of a command from a server to a local machine?

I have a series of functionally identical servers provided by my school that run various OS and hardware configurations. For the most part, I can use 5 of these interchangeably. Unfortunately, other students tend to bunch up on some machines, and it's a pain to find one that isn't bogged down.
What I want to do is ssh into a machine, run the command:
w | wc -l
to get a rough estimate of the load on that server, and use that information to select the least impacted one. A sort of client-side load balancer.
Is there a way to do this or achieve the same result?
I'd put this in your .bashrc file:
function choose_host(){
    hosts="host1 ... hostn"
    for host in $hosts
    do
        # print "<login count> <host>" for each server
        echo $(ssh $host 'w|wc -l') $host
    done | sort -n | head -1 | awk '{print $2}'
}
function ssh_host(){
    ssh $(choose_host)
}
choose_host should give you the one you're looking for. This is absolutely overkill but I was feeling playful :D
sort -n orders the output numerically by the result of w|wc -l, then head -1 takes the first line and awk just prints the hostname!
You can call ssh_host and it should log you in automatically.
You can use the pdsh command from your desktop, which runs the specified command on the set of machines you specify and returns the results. This way you can find out which one is least loaded without ssh-ing into every single machine and running w | wc -l yourself.
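A rough sketch, assuming pdsh is installed and host1 through host3 are placeholders for your servers:
# run the load check on every host at once; pdsh prefixes each line with "host:"
pdsh -w host1,host2,host3 'w | wc -l' | sort -t: -k2 -n | head -1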
Yes. See e.g.:
ssh me@host "ls /etc | sort" | wc -l
The part inside "" is done remotely. The part afterwards is local.

Search within data displayed as a result of a tail operation?

I am working on a Java EE application whose logs are generated on a Linux server.
I have used the command tail -f -n -10000 MyLog
It displayed the last 10000 lines of that log file.
Then I pressed Ctrl+C in PuTTY to stop the log output from updating (as I feared it would be updated with new requests and I would lose my data).
In the displayed result, how can I search for a particular keyword? (I tried / followed by the string, as in less, but it's not working.)
Pipe your output to a pager:
tail -f -n LINE_CNT LOG_FILE | less
Then you can use:
/SEARCH_STRING
Two ways:
tail -n 10000 MyLog | grep -i "search phrase"
tail -f -n 10000 MyLog | less
The 2nd method will allow you to search with /. It will only search down but you can press g to go back to the top.
Edit: On testing it seems method 2 doesn't work all that well... if you hit the end of the file it will freeze until you Ctrl+C the tail command.
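As an aside (not from the original answer): less can follow a file by itself, which avoids that frozen tail:
less +F MyLog
# Ctrl+C stops following so you can search with ? or /; Shift+F resumes following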
You need to redirect the output from tail into a search utility (e.g. grep). You could do this in two steps: save the output to a file, then search in the file; or in one go: pipe the output to the search utility.
To see what goes into the file (so you can hit Ctrl+C), you can use the tee command, which duplicates the output to the screen and to a file:
tail -f -n -10000 MyLog | tee <filename>
Then search within the file.
If you want to pipe the result into the search utility, you can use the same trick as above, but with your search program instead of tee.
Controlling terminal output on the fly
While running any command in a terminal such as PuTTY, you can use Ctrl+S and Ctrl+Q to stop and start output to the terminal.
Excluding lines using grep
If you want to exclude lines that contain a specific pattern, use grep -v. The following would remove all lines that contain the string INFO:
tail -f logfile | grep -v INFO
Show lines that do not contain the words INFO or DEBUG:
tail -f logfile | grep -v -E 'INFO|DEBUG'
Finally, the MOTHER AND FATHER of all tailing tools is xtail.pl.
If you have Perl on your host, xtail.pl is a very nice tool to learn; in a nutshell, you can use it to tail multiple files. Very handy.
You can just open it with the less command:
less logfile_name
When you open the file you can use this guide here.
Tip: I suggest first pressing G to go to the end of the file and then using backward search.
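For reference, the standard less keystrokes involved:
less logfile_name
# G          jump to the end of the file
# ?keyword   search backward for "keyword"
# n / N      repeat the search / reverse the direction
# q          quit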
