How can I get position of cursor in terminal? - linux

I know I may save the position using tput sc, but how can I read its position into a variable? I need the row number. I don't want to use curses/ncurses.

On ANSI-compatible terminals, printing the sequence ESC[6n will cause the terminal to report the cursor position to the application (as though it were typed at the keyboard) as ESC[n;mR, where n is the row and m is the column.
Example:
~$ echo -e "\033[6n"
EDITED:
You should make sure you are reading the keyboard input. The terminal will "type" just the ESC[n;mR sequence (no ENTER key). In bash you can use something like:
echo -ne "\033[6n" # ask the terminal for the position
read -s -d\[ garbage # discard the first part of the response
read -s -d R foo # store the position in bash variable 'foo'
echo -n "Current position: "
echo "$foo" # print the position
Explanation: the -d R (delimiter) argument makes read stop at the character R instead of the default record delimiter (newline). The first read uses [ as its delimiter to discard everything up to and including the [, so the second read leaves n;m (row;column) in $foo.
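Putting the pieces together, here is a minimal sketch of a helper that leaves the row and column in variables (assuming bash and an ANSI-compatible terminal; the function and variable names are mine):
get_cursor_pos() {
    local _esc
    # -p sends the query, -d R stops reading at the final 'R' of the
    # ESC[n;mR reply, and IFS='[;' splits the payload into fields
    IFS='[;' read -rs -d R -p $'\033[6n' _esc ROW COL
}
get_cursor_pos
echo "Cursor is at row $ROW, column $COL"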
I don't know about other shells. Your best shot is a one-liner in Perl, Python or something. In Perl you can start with the following (untested) snippet:
~$ perl -e '$/ = "R";' -e 'print "\033[6n";my $x=<STDIN>;my($n, $m)=$x=~m/(\d+)\;(\d+)/;print "Current position: $m, $n\n";'
For example, if you enter:
~$ echo -e "z033[6n"; cat > foo.txt
Press [ENTER] a couple of times and then [CTRL]+[D]. Then try:
~$ cat -v foo.txt
^[[47;1R
The n and m values are 47 and 1. Check the Wikipedia article on ANSI escape codes for more information.
Before the Internet, in the golden days of the BBS, old farts like me had a lot of fun with these codes.

Related

bash interactive line replacement

I have a bash loop moving through lines in a file and am wondering if there is a way to interactively replace each line with content.
while read p
do
    echo "$p"
    read input
    if [ "$input" == "y" ]; then
        : # DO SOME REPLACEMENT ON P HERE
    fi
done < "$fname"
From read(3), I know that read copies from the file descriptor into a *buffer. I realize that I can use sed substitution directly but cannot get it to work in this bash loop context. For example, say I want to wrap selected lines:
sed 's/\(.*\)/wrap \(\1\)/'
Complication: the bash read command swallows '\' and continues reading the 'line' (this is the behaviour I'm looking for). sed seems NOT to. This means the line counts will differ, so a naive counter seems not the way to go if it's to work with sed.
Use ex, which is a non-visual mode of vim (it's like a newer ed):
ex -c '%s/\(.*\)/wrap \(\1\)/c' FILE
Note that I needed to add % (do the operation for all lines) and c (prompt before substitution) at the beginning and end of your sed expression, respectively.
When prompted, input y<CR> to substitute, n<CR> to not substitute, q<CR> to stop the substitute command. After inputting q<CR> or reaching the end of file you can save changes with w<CR> (that will overwrite the file) and quit with q<CR>.
Alternatively, you can use ed, but I won't help you with that. ;)
For more general information about ex, check out this question:
https://superuser.com/questions/22455/vim-what-is-the-ex-mode-for-batch-processing-for
I'm not sure I understand what you need; maybe you can give us more details, like a sample input and an expected output. Maybe this is helpful:
while read p
do
    echo "$p"
    read input </dev/tty    # read the answer from the terminal, not from $fname
    if [ "$input" == "y" ]
    then
        # Sed is fed with "p" and then it replaces any input with a given string.
        # In this case "wrap <matched_text>". Its output is then assigned again to "p".
        p="$(sed -nre 's/(.*)/wrap \1/p' <<< "$p")"
    fi
done < "$fname"

How to achieve AJAX(interactive) kind of SEARCH in LINUX to FIND files?

I am interested in typing a search keyword in the terminal and being able to see the output immediately and interactively. That means, like searching in Google, I want to get results immediately after every character or word keyed in.
I thought of doing this by combining the WATCH command and the FIND command but was unable to achieve the interactiveness.
Let's assume, to search for a file with 'hint' in the filename, I use the command
$ find | grep -i hint
This pretty much gives me decent output results.
But what I want is the same behaviour interactively: without retyping the command, only typing the SEARCH STRING.
I thought of writing a shell script which reads from STDIN and executes the above PIPED-COMMAND every second, so whatever I type is taken as input to the command each time. But the WATCH command is not interactive.
I am interested in below kind of OUTPUT:
$ hi
./hi
./hindi
./hint
$ hint
./hint
If anyone can help me with a better alternative to my PSEUDO-CODE, that would also be nice.
Stumbled across this old question, found it interesting and thought I'd give it a try. This bash script worked for me:
#!/bin/bash
# Set MINLEN to the minimum number of characters needed to start the
# search.
MINLEN=2
clear
echo "Start typing (minimum $MINLEN characters)..."
# get one character without need for return
while read -n 1 -s i
do
    # get ascii value of character to detect backspace
    n=`echo -n $i|od -i -An|tr -d " "`
    if (( $n == 127 ))           # if character is a backspace...
    then
        if (( ${#in} > 0 ))      # ...and search string is not empty
        then
            in=${in:0:${#in}-1}  # shorten search string by one
            # could use ${in:0:-1} for bash >= 4.2
        fi
    elif (( $n == 27 ))          # if character is an escape...
    then
        exit 0                   # ...then quit
    else                         # if any other char was typed...
        in=$in$i                 # add it to the search string
    fi
    clear
    echo "Search: \"$in\""       # show search string on top of screen
    if (( ${#in} >= $MINLEN ))   # if search string is long enough...
    then
        find "$@" -iname "*$in*" # ...call find, passing along any parameters given
    fi
done
Hope this does what you intend(ed) to do. I included a "start dir" option, because the listings can get quite unwieldy if you search through a whole home folder or something. Just dump the $1 if you don't need it.
Using the ascii value in $n it should be easily possible to include some hotkey functionality like quitting or saving results, too.
EDIT:
If you start the script it will display "Start typing..." and wait for keys to be pressed. If the search string is long enough (as defined by the variable MINLEN), any key press will trigger a find run with the current search string (the grep seems kind of redundant here). The script passes any parameters given to find. This allows for better search results and shorter result lists. -type d for example will limit the search to directories, -xdev will keep the search on the current file system, etc. (see man find). Backspace will shorten the search string by one, while pressing Escape will quit the script. The current search string is displayed on top. I used -iname for the search to be case-insensitive. Change this to -name to get case-sensitive behaviour.
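For example, saved as livesearch.sh (the name is mine), any extra arguments go straight to find:
./livesearch.sh /etc -type d     # live-search directory names under /etc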
The code below takes its input on stdin, a filtering method as a macro in "$1", and writes its output to stdout.
You can use it e.g., as follows:
#Produce basic output, dynamically filter it in the terminal,
#and output the final, confirmed results to stdout
vi `find . | terminalFilter`
The default filtering macro is
grep -F "$pattern"
the script provides the pattern variable as whatever is currently entered.
The immediate results for what is currently entered are displayed on the terminal. When you press <Enter>, the results become final and are output to stdout.
#!/usr/bin/env bash
## terminalFilter
del=`printf "\x7f"`   # backspace character
input="`cat`"         # create initial set from all input
# take the filter macro from the first argument or use
#   'grep -F "$pattern"'
filter=${1:-'grep -F "$pattern"'}
pattern=              # what's inputted by the keyboard at any given time

printSelected(){
    echo "$input" | eval "$filter"
}
printScreen(){
    clear
    printSelected
    # Print search pattern at the bottom of the screen
    tput cup $(tput lines); echo -n "PATTERN: $pattern"
} >/dev/tty
# ^ only the confirmed results go to stdout; this goes to the terminal only

printScreen
# read from the terminal as `cat` has already consumed the `stdin`
exec 0</dev/tty
while IFS=$'\n' read -s -n1 key; do
    case "$key" in
        "$del") pattern="${pattern%?}";;  # backspace deletes the last character
        "") break;;                       # enter breaks the loop
        *) pattern="$pattern$key";;       # everything else gets appended
                                          # to the pattern string
    esac
    printScreen
done
clear
printSelected
fzf is a fast and powerful command-line fuzzy finder that exactly suits your needs.
Check it out here: https://github.com/junegunn/fzf.
For your example, simply run fzf on the command line and it should work fine.
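For instance, a typical idiom is to open whatever you pick in your editor:
vim "$(fzf)"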

Line from bash command output stored in variable as string

I'm trying to find a solution to a problem analogous to this one:
#command_A
A_output_Line_1
A_output_Line_2
A_output_Line_3
#command_B
B_output_Line_1
B_output_Line_2
Now I need to compare A_output_Line_2 and B_output_Line_1 and echo "Correct" if they are equal and "Not Correct" otherwise.
I guess the easiest way to do this is to copy a line of output into a variable, and then after executing the two commands, simply compare the variables and echo something.
This needs to be implemented in a bash script, and any information on how to get a certain line of output stored in a variable would help me put the pieces together.
Also, it would be cool if anyone could tell me not only how to copy/store a line, but also just a word or a sequence, like: line 1, bytes 4-12, stored as a string in a variable.
I am not a complete beginner, but I'm also nowhere near an advanced Linux bash user. Thanks for any help in advance, and sorry for my bad English!
An easier way might be to use diff, no?
Something like:
command_A > command_A.output
command_B > command_B.output
diff command_A.output command_B.output
This will work for comparing multiple strings.
But, since you want to know about single lines (and words in the lines) here are some pointers:
# first line of output of command_A
command_A | head -n 1
The -n 1 option says to use only the first line (the default is 10).
# second line of output of command_A
command_A | head -n 2 | tail -n 1
that will take the first two lines of the output of command_A and then the last of those two lines. Happy times :)
You can now store this information in a variable:
export output_A=`command_A | head -n 2 | tail -n 1`
export output_B=`command_B | head -n 1`
And then compare it:
if [ "$output_A" == "$output_B" ]; then echo 'Correct'; else echo 'Not Correct'; fi
To just get parts of a string, try looking into cut or (for more powerful stuff) sed and awk.
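For instance, a sketch of the asker's "line 1, bytes 4-12" case, with command_A standing in for whatever produces the output:
fragment=$(command_A | head -n 1 | cut -b 4-12)   # bytes 4-12 of the first line
echo "$fragment"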
Also, just learning a good general-purpose scripting language like Python or Ruby (even Perl) can go a long way with this kind of problem.
Use the IFS (internal field separator) to separate on newlines and store the outputs in an array.
#!/bin/bash
IFS='
'
array_a=( $(./a.sh) )
array_b=( $(./b.sh) )

if [ "${array_a[1]}" = "${array_b[0]}" ]; then
    echo "CORRECT"
else
    echo "INCORRECT"
fi
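On bash 4 or newer, mapfile does the same newline splitting without fiddling with IFS (a sketch, reusing the same a.sh/b.sh):
mapfile -t array_a < <(./a.sh)
mapfile -t array_b < <(./b.sh)
if [ "${array_a[1]}" = "${array_b[0]}" ]; then
    echo "CORRECT"
else
    echo "INCORRECT"
fi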

"tail -f" alternate which doesn't scroll the terminal window

I want to check a file at continuous intervals for contents which keep changing. "tail -f" doesn't suffice as the file doesn't grow in size.
I could use a simple while loop in bash to the same effect:
while [ 1 ]; do cat /proc/acpi/battery/BAT1/state ; sleep 10; done
It works, although it has the unwanted effect of scrolling my terminal window.
So now I'm wondering, is there a linux/shell command that would display the output of this file without scrolling the terminal?
watch -n 10 cat /proc/acpi/battery/BAT1/state
You can add the -d flag if you want it to highlight the differences from one iteration to the next.
watch is your friend. It uses curses so it won't scroll your terminal.
Usage: watch [-dhntv] [--differences[=cumulative]] [--help] [--interval=<n>] [--no-title] [--version] <command>
-d, --differences[=cumulative] highlight changes between updates
(cumulative means highlighting is cumulative)
-h, --help print a summary of the options
-n, --interval=<seconds> seconds to wait between updates
-v, --version print the version number
-t, --no-title turns off showing the header
So taking your example it'll be:
watch -n 10 cat /proc/acpi/battery/BAT1/state
Combining several ideas from other answers plus a couple of other tricks, this will output the file without clearing the screen or scrolling (except for the first cycle if the prompt is at the bottom of the screen).
up=$(tput cuu1)$(tput el); while true; do (IFS=$'\n'; a=($(</proc/acpi/battery/BAT1/state)); echo "${a[*]}"; sleep 1; printf "%.0s$up" "${a[@]}"); done
It's obviously something you wouldn't type by hand, so you can make it a function that takes the filename, the number of seconds between updates, starting line and number of lines as arguments.
watchit () {
    local up=$(tput cuu1)$(tput el) IFS=$'\n' lines
    local start=${3:-0} end
    while true
    do
        lines=($(<"$1"))
        end=${4:-${#lines[@]}}
        echo "${lines[*]:$start:$end}"
        sleep ${2:-1}
        # go up and clear each line
        printf "%.0s$up" "${lines[@]:$start:$end}"
    done
}
Run it:
watchit /proc/acpi/battery/BAT1/state .5 0 6
The second argument (seconds between updates) defaults to 1. The third argument (starting line) defaults to 0. The fourth argument (number of lines) defaults to the whole file. If you omit the number of lines and the file grows it may cause scrolling to accommodate the new lines.
Edit: I added an argument to control the frequency of updates.
My favorite, which works in places that don't have watch, is this:
while true; do clear ; cat /proc/acpi/battery/BAT1/state ; sleep 10; done
The canonical (and easiest, and most flexible) answer is watch, as others have said. But if you want to see just the first line of a file, here's an alternative that neither clears nor scrolls the terminal:
while line=`head -n 1 /proc/acpi/battery/BAT1/state` \
&& printf "%s\r" "$line" \
&& sleep 10
do
printf "%s\r" "`echo -n "$line" | sed 's/./ /g'`"
done
echo
The carriage return is the core concept here. It tells the cursor to return to the beginning of the current line, like a newline but without moving to the next line. The printf command is used here because (1) it doesn't automatically add a newline, and (2) it translates \r into a carriage return.
The first printf prints your line. The second one clears it by overwriting it with spaces, so that you don't see garbage if the next line to be printed is shorter.
Note that if the line printed is longer than the width of your terminal, the terminal will scroll anyway.

Quick unix command to display specific lines in the middle of a file?

Trying to debug an issue with a server and my only log file is a 20GB log file (with no timestamps even! Why do people use System.out.println() as logging? In production?!)
Using grep, I've found an area of the file that I'd like to take a look at, line 347340107.
Other than doing something like
head -<$LINENUM + 10> filename | tail -20
... which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 - 347340200 (for example) to the console?
Update: I totally forgot that grep can print the context around a match... this works well. Thanks!
I found two other solutions if you know the line number but nothing else (no grep possible):
Assuming you need lines 20 to 40,
sed -n '20,40p;41q' file_name
or
awk 'FNR>=20 && FNR<=40' file_name
When using sed it is more efficient to quit processing after printing the last wanted line than to continue processing until the end of the file. This is especially important with large files when printing lines near the beginning. To do so, the sed command above adds the instruction 41q to stop processing after line 41, because in the example we are interested in lines 20-40 only. You will need to change the 41 to whatever your last line of interest is, plus one.
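Applied to the line numbers from the question, that becomes:
sed -n '347340100,347340200p;347340201q' filename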
# print line number 52
sed -n '52p'   # method 1
sed '52!d'     # method 2
sed '52q;d'    # method 3, efficient on large files
Method 3 is the fastest way to display specific lines in large files.
with GNU-grep you could just say
grep --context=10 ...
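For example (the pattern and file name here are made up):
grep --context=10 'OutOfMemoryError' server.log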
No, there isn't; files are not line-addressable.
There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.
Use the simplest/fastest tool you have to do the job. To me, using head makes much more sense than grep, since the latter is way more complicated. I'm not saying "grep is slow", it really isn't, but I would be surprised if it's faster than head for this case. That'd be a bug in head, basically.
What about:
tail -n +347340107 filename | head -n 100
I didn't test it, but I think that would work.
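You can convince yourself with seq standing in for the big file:
seq 1 1000000 | tail -n +100 | head -n 3   # prints 100, 101, 102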
I prefer just going into less and
typing 50% to go halfway through the file,
43210G to go to line 43210,
:43210 to do the same,
and stuff like that.
Even better: hit v to start editing (in vim, of course!), at that location. Now, note that vim has the same key bindings!
You can use the ex command, a standard Unix editor (part of Vim now), e.g.
display a single line (e.g. 2nd one):
ex +2p -scq file.txt
corresponding sed syntax: sed -n '2p' file.txt
range of lines (e.g. 2-5 lines):
ex +2,5p -scq file.txt
sed syntax: sed -n '2,5p' file.txt
from the given line till the end (e.g. 5th to the end of the file):
ex +5,p -scq file.txt
sed syntax: sed -n '5,$p' file.txt
multiple line ranges (e.g. 2-4 and 6-8 lines):
ex +2,4p +6,8p -scq file.txt
sed syntax: sed -n '2,4p;6,8p' file.txt
Above commands can be tested with the following test file:
seq 1 20 > file.txt
Explanation:
+ or -c followed by the command - execute the (vi/vim) command after file has been read,
-s - silent mode, also uses current terminal as a default output,
the q after -c is the command to quit the editor (add ! to force quit, e.g. -scq!).
I'd first split the file into few smaller ones like this
$ split --lines=50000 /path/to/large/file /path/to/output/file/prefix
and then grep on the resulting files.
If the line number you want to read is 100:
head -100 filename | tail -1
Get ack
Ubuntu/Debian install:
$ sudo apt-get install ack-grep
Then run:
$ ack --lines=$START-$END filename
Example:
$ ack --lines=10-20 filename
From $ man ack:
--lines=NUM
Only print line NUM of each file. Multiple lines can be given with multiple --lines options or as a comma separated list (--lines=3,5,7). --lines=4-7 also works.
The lines are always output in ascending order, no matter the order given on the command line.
sed will need to read the data too, to count the lines.
A shortcut would only be possible if there were some context/order in the file to exploit. For example, if the log lines were prepended with a fixed-width time/date, you could use the look unix utility to binary search through the file for particular dates/times.
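For example, if every line began with a sorted, fixed-width timestamp (the format here is hypothetical), look could jump straight to a prefix:
look "2011-06-01 14:" sorted.log   # binary search for lines starting with the prefix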
Use
x=`cat -n <file> | grep <match> | awk '{print $1}'`
Here you will get the line number where the match occurred.
Now you can use the following command to print 100 lines
awk -v var="$x" 'NR>=var && NR<=var+100{print}' <file>
or you can use "sed" as well
sed -n "${x},${x+100}p" <file>
With sed -e '1,N d; M q' you'll print lines N+1 through M. This is probably a bit better than grep -C, as it doesn't try to match lines to a pattern.
Building on Sklivvz' answer, here's a nice function one can put in a .bash_aliases file. It is efficient on huge files when printing stuff from the front of the file.
function middle()
{
    startidx=$1
    len=$2
    endidx=$(($startidx+$len))
    filename=$3

    awk "FNR>=${startidx} && FNR<=${endidx} { print NR\" \"\$0 }; FNR>${endidx} { print \"END HERE\"; exit }" "$filename"
}
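Called with the numbers from the question:
middle 347340100 100 filename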
To display a line from a <textfile> by its <line#>, just do this:
perl -wne 'print if $. == <line#>' <textfile>
If you want a more powerful way to show a range of lines with regular expressions -- I won't say why grep is a bad idea for doing this, it should be fairly obvious -- this simple expression will show you your range in a single pass which is what you want when dealing with ~20GB text files:
perl -wne 'print if m/<regex1>/ .. m/<regex2>/' <filename>
(tip: if your regex has / in it, use something like m!<regex>! instead)
This would print out <filename> starting with the line that matches <regex1> up until (and including) the line that matches <regex2>.
It doesn't take a wizard to see how a few tweaks can make it even more powerful.
Last thing: perl, since it is a mature language, has many hidden enhancements that favor speed and performance. With this in mind, it's the obvious choice for such an operation, since it was originally developed for handling large log files, text, databases, etc.
print line 5
sed -n '5p' file.txt
sed -n '5{p;q}' file.txt   # quits after printing, faster on large files
print everything except line 5
sed '5d' file.txt
And my creation, using Google:
#!/bin/bash
# removeline.sh
# removes a line from INPUTFILE; appending it to OUTPUTFILE makes it a "move" xD

usage() {                 # Function: Print a help message.
    echo "Usage: $0 -l LINENUMBER -i INPUTFILE [ -o OUTPUTFILE ]"
    echo "line is removed from INPUTFILE"
    echo "line is appended to OUTPUTFILE"
}
exit_abnormal() {         # Function: Exit with error.
    usage
    exit 1
}

while getopts l:i:o: flag
do
    case "${flag}" in
        l) line=${OPTARG};;
        i) input=${OPTARG};;
        o) output=${OPTARG};;
    esac
done

if [ -f tmp ]; then
    echo "Temp file 'tmp' exists. Delete it yourself :)"
    exit
fi

if [ -f "$input" ]; then
    re_isanum='^[0-9]+$'
    if ! [[ $line =~ $re_isanum ]]; then
        echo "Error: LINENUMBER must be a positive, whole number."
        exit 1
    elif [ "$line" -eq "0" ]; then
        echo "Error: LINENUMBER must be greater than zero."
        exit_abnormal
    fi
    if [ ! -z "$output" ]; then
        sed -n "${line}p" "$input" >> "$output"
    fi
    if [ ! -z "$input" ]; then
        # remove this sed command and the line is copied rather than moved
        sed "${line}d" "$input" > tmp && cp tmp "$input"
    fi
fi

if [ -f tmp ]; then
    rm tmp
fi
You could try this command:
egrep -n "*" <filename> | egrep "<line number>"
Easy with perl! If you want to get lines 1, 3 and 5 from a file, say /etc/passwd:
perl -e 'while(<>){if(++$l~~[1,3,5]){print}}' < /etc/passwd
(note that the ~~ smartmatch operator is experimental/deprecated in recent Perl versions)
I am surprised only one other answer (by Ramana Reddy) suggested to add line numbers to the output. The following searches for the required line number and colours the output.
file=FILE
lineno=LINENO
wb="107"; bf="30;1"; rb="101"; yb="103"
cat -n ${file} | { GREP_COLORS="se=${wb};${bf}:cx=${wb};${bf}:ms=${rb};${bf}:sl=${yb};${bf}" grep --color -C 10 "^[[:space:]]\\+${lineno}[[:space:]]"; }
