How can I get my bash script to remove the first n and last n lines from a variable? - linux

I'm making a script to perform "dig ns google.com" and cut off all of the result except for the answers section.
So far I have:
#!/bin/bash
echo -n "Please enter the domain: "
read d
echo "You entered: $d"
dr="$(dig ns $d)"
sr="$(sed -i 1,10d $dr)"
tr="$(head -n -6 $sr)"
echo "$tr"
Theoretically, this should work. The sed and head commands work individually outside of the script to cut off the first 10 and last 6 lines respectively, but when I put them inside my script, sed comes back with an error, and it looks like it's trying to read the variable as part of the command rather than as the input. The error is:
sed: invalid option -- '>'
So far I haven't been able to find a way for it to read the variable as input. I've tried surrounding it in "" and '', but that doesn't work. I'm new to this whole bash scripting thing, obviously; any help would be great!

You're assigning the lines to variables; instead, pipe them. For example,
seq 25 | tail -n +11 | head -n -6
will remove the first 10 and last 6 lines and print from 11 to 19.
In your case, replace seq 25 with your dig command:
dig ns "$d" | tail -n +11 | head -n -6
There's no need for the echo either.
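Putting it together, the whole script might look like this (a sketch; the 11/6 boundaries assume your dig output has the same layout as in the question, and head -n -6 needs GNU coreutils):
#!/bin/bash
# prompt for a domain and show only the answer section of the dig output
read -p "Please enter the domain: " d
echo "You entered: $d"
# drop the first 10 and last 6 lines of the dig output
dig ns "$d" | tail -n +11 | head -n -6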

Related

removing first n and last n lines from multiple text files

I have been stuck for some time now.
I have two text files, from which I would like to remove the first two and the last three lines.
So far I have
$tail -n +3 text_1.txt text_2.txt | head -n -3
When I enter this into the console, I see that text_2.txt indeed comes out with the proper format, but text_1.txt still has the last three lines that need to be removed. I presume the head command is not being applied to text_1.txt.
How can I solve this problem?
for i in text_1.txt text_2.txt; do tail -n +3 "$i" | head -n -3; done
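The reason is that head sees both files as one concatenated stream, so only the very end of the combined output is trimmed; the loop runs the pipeline once per file. If you want the trimmed content written back out, a sketch (the .trimmed suffix is just an illustration):
for i in text_1.txt text_2.txt; do
    # trim each file separately and save the result alongside it
    tail -n +3 "$i" | head -n -3 > "$i.trimmed"
done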

redirect output from one function to another

I'm trying to create a pipeline from user input, but when I redirect the output I'm getting output with no newlines; it's just one huge single line. Here's the code:
function stack(){
    echo $(history|tail -1|cut -d" " -f5-|cut -d "|" -f1) >> ~/commands
    local last=$(tail -1 ~/commands)
    echo $(eval $last) >> ~/output
}
Is there a better way to pipe the output from this to a file? echo seems to corrupt the output.
I'm not sure I understand the purpose of the cuts, but quotes are missing around $(), so the output is split into words using IFS:
echo "$(eval "$last")"
cut -c8- may be safer than cut -d" " -f5- for history entries whose number has more or fewer than 3 digits.
Also, cut -d"|" -f1 can fail if | is used as a literal, for example echo '|'.
Maybe you can look at event designators in the bash manual: in an interactive bash session, the following will run the last command:
$ !-1
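Putting the quoting fix together with the cut suggestions, the function might look like this (a sketch, not the only way to do it; it is meant for an interactive shell, e.g. sourced from .bashrc, since history is empty in plain scripts):
function stack(){
    # record the previous command: drop the history number, keep text up to any pipe
    history | tail -1 | cut -c8- | cut -d"|" -f1 >> ~/commands
    local last
    last=$(tail -1 ~/commands)
    # quoting the command substitution preserves the newlines in the output
    echo "$(eval "$last")" >> ~/output
}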

Bash shell script update and print a variable overwriting the same line [duplicate]

This question already has answers here:
Displaying only single most recent line of a command's output
(2 answers)
Closed 5 years ago.
I've been trying to print a variable on the same line, for a script that automates a process. The content is the output of this:
sed "s/Read/\n/g" /tmp/Air/test.txt | tail -1 test.txt | grep ARP
So I put this in a while loop:
do
out= sed "s/Read/\n/g" /tmp/Air/test.txt | tail -1 test.txt | grep ARP
echo -n "$out"
sleep 1
done
I read other questions here and tried different options like echo -ne, echo -ne "$out" \r, printf "\r" or printf "%s", and had no luck with any of them. All the other examples don't have a variable to print, just counters or system variables.
Update
It seems that echo -n repeats $out on the same line: if out="this is a test", the output of echo -n is "this is a test this is a test this is a test this is a test ....". Maybe I'm missing some option?
Update 2
Sorry for the misunderstanding; perhaps I was not very clear. What I want is to overwrite the same line with the value of $out. The source of $out is the output of the aireplay-ng command that executes along with the script.
The output is something like this:
102415 packets (got 5 ARP requests and 15438 ACKs), sent 37085 packets...(499 pps)
but the number of ARP requests changes constantly.
This code, for example, uses echo -ne and overwrites the same line:
#!/bin/bash
for pc in $(seq 1 100); do
echo -ne "$pc%\033[0K\r"
sleep 1
done
The output of this is like a percent indicator that shows "10%" counting up on the same line, instead of "1% 2% 3% 4% 5% ..". I already tried it like this, but with no luck.
If you are just trying to execute the sed pipeline, run it directly:
sed "s/Read/\n/g" /tmp/Air/test.txt | tail -1 test.txt | grep ARP
But first of all, you are assigning the output of the command to the variable incorrectly. It should be:
out=$(sed "s/Read/\n/g" /tmp/Air/test.txt | tail -1 test.txt | grep ARP)
Then you can print all your output in one line, as you wrote:
echo -n $out
The recent addendum to your question reads like you're miscommunicating your intent: this is a test this is a test this is a test is what a plain reading of your question indicates you to be asking for (printing this is a test over and over in a loop without newlines, after all, can be expected to do nothing else); why you'd describe this in a context that makes it sound like a bug is thus surprising.
If you want to send the cursor back to the beginning of your current line and overwrite that line, that might be something like the following:
#!/bin/bash
# ^^^^ not /bin/sh; this enables bash extensions
# ask the shell to keep $COLUMNS up-to-date
shopt -s checkwinsize
# defaults to 80-character terminal width, but uses $COLUMNS if available
printf "%-${COLUMNS:-80}s\r" "$out"
...which prints your string, pads out to 80 characters with spaces, and then returns the cursor to the beginning of the line, such that the next thing you write will overwrite that string.
Of course, if you print that line and then return to a shell prompt, the prompt will start at the beginning of the same line and overwrite the text, so be sure to follow up with an echo.
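Putting both answers together for the question's loop, a sketch (note that tail -1 is given no filename here, so it actually reads from the pipe, unlike the question's tail -1 test.txt):
#!/bin/bash
# ask the shell to keep $COLUMNS up to date
shopt -s checkwinsize
while true; do
    out=$(sed "s/Read/\n/g" /tmp/Air/test.txt | tail -1 | grep ARP)
    # pad to the terminal width, then return the cursor to the start of the line
    printf "%-${COLUMNS:-80}s\r" "$out"
    sleep 1
done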

Error with a script in bash

I have a little error with a script I wrote in bash and I can't figure out what I'm doing wrong.
Note that I'm using this script for thousands of calculations, and this error happened only a few times (like 20 or so), but it still happened.
What the script does is this: basically it takes as input a web page that I got from a site with the utility w3m, and it counts all the occurrences of the words in it. Afterwards, it orders them from the most common to the ones that occur only once.
This is the code:
#!/bin/bash
# counts the numbers of words from specific sites #
# writes in a file the occurrences ordered from the most common #
touch check # file used to analyze the occurrences
touch distribution # final file ordered
page=$1 # the web page that needs to be analyzed
occurrences=$2 # temporary file for the occurrences
dictionary=$3 # dictionary used for another purpose (ignore this)
# write the words one by column
cat $page | tr -c [:alnum:] "\n" | sed '/^$/d' > check
# loop to analyze the words
cat check | while read words
do
word=${words}
strlen=${#word}
# ignores blacklisted words or small ones
if ! grep -Fxq $word .blacklist && [ $strlen -gt 2 ]
then
# if the word isn't in the file
if [ `egrep -c -i "^$word: " $occurrences` -eq 0 ]
then
echo "$word: 1" | cat >> $occurrences
# else if it is already in the file, it calculates the occurrences
else
old=`awk -v words=$word -F": " '$1==words { print $2 }' $occurrences`
### HERE IS THE ERROR, EITHER THE LET OR THE SED ###
let "new=old+1"
sed -i "s/^$word: $old$/$word: $new/g" $occurrences
fi
fi
done
# orders the words
awk -F": " '{print $2" "$1}' $occurrences | sort -rn | awk -F" " '{print $2": "$1}' > distribution
# ignore this, not important
grep -w "1" distribution | awk -F ":" '{print $1}' > temp_dictionary
for line in `cat temp_dictionary`
do
if ! grep -Fxq $line $dictionary
then
echo $line >> $dictionary
fi
done
rm check
rm temp_dictionary
This is the error (I'm translating it, so it could be different in English):
./wordOccurrences line:30 let:x // where x is a number, usually 9 or 10 (but also 11, 13, etc)
1: syntax error in the expression (the error token is 1)
sed: expression -e #1, character y: command 's' not terminated // where y is another number (this one is also usually 9 or 10) with y being different from x
EDIT:
Talking with kev, it looks like it's a newline problem.
I added an echo between the let and the sed to print the sed command, and it worked perfectly for 5 to 10 minutes until that error appeared. Usually the sed without the error looked like this:
s/^CONSULENTI: 6$/CONSULENTI: 7/g
but when I got the error it was like this:
s/^00145: 1
1$/00145: 4/g
How do I fix this?
If you get a newline in $old, it means awk prints two lines, so there is a duplicate in $occurrences.
The script seems complicated for counting words, and it is not efficient because it launches many processes and processes the file in a loop; maybe you can do something similar with:
sort | uniq -c
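For example, a minimal sketch of that idea, using the question's $page variable (output format is "count word" rather than "word: count"):
tr -cs '[:alnum:]' '\n' < "$page" | sed '/^$/d' | sort | uniq -c | sort -rn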
You should also consider that your case-insensitivity is not consistent throughout the program. I created a page with just "foooo" in it and ran the program, then created one with "Foooo" in it and ran the program again. The 'old=`awk...' line sets 'old' to the empty string because awk is matching case sensitively. This results in the occurrences file not being updated. The subsequent sed and possibly some of the greps are also case sensitive.
This may not be the only error since it doesn't explain the error message you saw, but it is an indication that the same word with different capitalization will be handled erroneously by your script.
The following would separate the words, lowercase them, and then remove the ones smaller than three characters:
tr -cs '[:alnum:]' '\n' <foo | tr '[:upper:]' '[:lower:]' | egrep -v '^.{0,2}$'
Using this at the front of your script would mean that the rest of the script would not have to be case insensitive to be correct.
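A sketch of the whole counting stage built on that front end; it reuses the question's $page and .blacklist and keeps the "word: count" output format (the -x/-F/-f grep flags filter out exact blacklisted words):
tr -cs '[:alnum:]' '\n' < "$page" \
    | tr '[:upper:]' '[:lower:]' \
    | egrep -v '^.{0,2}$' \
    | grep -vxFf .blacklist \
    | sort | uniq -c | sort -rn \
    | awk '{print $2": "$1}' > distribution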

Quick unix command to display specific lines in the middle of a file?

Trying to debug an issue with a server and my only log file is a 20GB log file (with no timestamps even! Why do people use System.out.println() as logging? In production?!)
Using grep, I've found an area of the file that I'd like to take a look at, line 347340107.
Other than doing something like
head -<$LINENUM + 10> filename | tail -20
... which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 - 347340200 (for example) to the console?
Update: I totally forgot that grep can print the context around a match ... this works well. Thanks!
I found two other solutions if you know the line number but nothing else (no grep possible):
Assuming you need lines 20 to 40,
sed -n '20,40p;41q' file_name
or
awk 'FNR>=20 && FNR<=40' file_name
When using sed it is more efficient to quit processing after having printed the last line than continue processing until the end of the file. This is especially important in the case of large files and printing lines at the beginning. In order to do so, the sed command above introduces the instruction 41q in order to stop processing after line 41 because in the example we are interested in lines 20-40 only. You will need to change the 41 to whatever the last line you are interested in is, plus one.
# print line number 52
sed -n '52p' # method 1
sed '52!d' # method 2
sed '52q;d' # method 3, efficient on large files
Method 3 is the fastest way to display a specific line in a large file, because sed stops reading as soon as it reaches that line.
With GNU grep you could just say:
grep --context=10 ...
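For example, with a hypothetical pattern and log file, and -n added to show line numbers:
grep -n --context=10 'OutOfMemoryError' server.log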
No, there isn't: files are not line-addressable.
There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.
Use the simplest/fastest tool you have to do the job. To me, using head makes much more sense than grep, since the latter is way more complicated. I'm not saying "grep is slow", it really isn't, but I would be surprised if it's faster than head for this case. That'd be a bug in head, basically.
What about:
tail -n +347340107 filename | head -n 100
I didn't test it, but I think that would work.
I prefer just going into less and
typing 50% to go halfway through the file,
43210G to go to line 43210
:43210 to do the same
and stuff like that.
Even better: hit v to start editing (in vim, of course!), at that location. Now, note that vim has the same key bindings!
You can use the ex command, a standard Unix editor (part of Vim now), e.g.
display a single line (e.g. 2nd one):
ex +2p -scq file.txt
corresponding sed syntax: sed -n '2p' file.txt
range of lines (e.g. 2-5 lines):
ex +2,5p -scq file.txt
sed syntax: sed -n '2,5p' file.txt
from the given line till the end (e.g. 5th to the end of the file):
ex +5,p -scq file.txt
sed syntax: sed -n '5,$p' file.txt
multiple line ranges (e.g. 2-4 and 6-8 lines):
ex +2,4p +6,8p -scq file.txt
sed syntax: sed -n '2,4p;6,8p' file.txt
Above commands can be tested with the following test file:
seq 1 20 > file.txt
Explanation:
+ or -c followed by a command: execute the (vi/vim) command after the file has been read,
-s - silent mode, also uses current terminal as a default output,
q used with -c is the command to quit the editor (add ! to force quit, e.g. -scq!).
I'd first split the file into a few smaller ones, like this:
$ split --lines=50000 /path/to/large/file /path/to/output/file/prefix
and then grep on the resulting files.
If the line number you want to read is 100:
head -100 filename | tail -1
Get ack
Ubuntu/Debian install:
$ sudo apt-get install ack-grep
Then run:
$ ack --lines=$START-$END filename
Example:
$ ack --lines=10-20 filename
From $ man ack:
--lines=NUM
Only print line NUM of each file. Multiple lines can be given with multiple --lines options or as a comma separated list (--lines=3,5,7). --lines=4-7 also works.
The lines are always output in ascending order, no matter the order given on the command line.
sed will need to read the data too to count the lines.
The only way a shortcut would be possible would be if there were context/order in the file to operate on. For example, if the log lines were prepended with a fixed-width time/date, you could use the look Unix utility to binary-search through the file for particular dates/times.
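A sketch of that idea; look does a binary search, so the file must be sorted on the prefix, and the timestamp format here is hypothetical:
# binary-searches the lexically sorted file for lines starting with the prefix
look '2012-03-14 15:' huge.log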
Use
x=`cat -n <file> | grep <match> | awk '{print $1}'`
This gives you the line number where the match occurred.
Now you can use the following command to print 100 lines
awk -v var="$x" 'NR>=var && NR<=var+100{print}' <file>
or you can use "sed" as well (note the arithmetic expansion; ${x+100} would not add 100):
sed -n "${x},$((x+100))p" <file>
With sed -e '1,N d; M q' you'll print lines N+1 through M. This is probably a bit better than grep -C, as it doesn't try to match lines against a pattern.
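For example, to print lines 10 through 12 and stop reading there:
sed -e '1,9d; 12q' filename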
Building on Sklivvz' answer, here's a nice function one can put in a .bash_aliases file. It is efficient on huge files when printing stuff from the front of the file.
function middle()
{
    local startidx=$1
    local len=$2
    local endidx=$((startidx + len))
    local filename=$3
    awk "FNR>=${startidx} && FNR<=${endidx} { print NR\" \"\$0 }; FNR>${endidx} { print \"END HERE\"; exit }" "$filename"
}
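For the line numbers from the original question, usage would look like this (file name hypothetical):
middle 347340100 100 biglogfile.log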
To display a line from a <textfile> by its <line#>, just do this:
perl -wne 'print if $. == <line#>' <textfile>
If you want a more powerful way to show a range of lines with regular expressions -- I won't say why grep is a bad idea for doing this; it should be fairly obvious -- this simple expression will show you your range in a single pass, which is what you want when dealing with ~20GB text files:
perl -wne 'print if m/<regex1>/ .. m/<regex2>/' <filename>
(tip: if your regex has / in it, use something like m!<regex>! instead)
This would print out <filename> starting with the line that matches <regex1> up until (and including) the line that matches <regex2>.
It doesn't take a wizard to see how a few tweaks can make it even more powerful.
Last thing: perl, since it is a mature language, has many hidden enhancements to favor speed and performance. This makes it an obvious choice for such an operation, since it was originally developed for handling large log files, text, databases, etc.
print line 5
sed -n '5p' file.txt
sed '5q;d' file.txt
print everything except line 5
sed '5d' file.txt
And my creation, using Google:
#!/bin/bash
#removeline.sh
#removes a line from INPUTFILE; with -o the line is appended to OUTPUTFILE, i.e. moved
usage() { # Function: Print a help message.
echo "Usage: $0 -l LINENUMBER -i INPUTFILE [ -o OUTPUTFILE ]"
echo "line is removed from INPUTFILE"
echo "line is appended to OUTPUTFILE"
}
exit_abnormal() { # Function: Exit with error.
usage
exit 1
}
while getopts l:i:o: flag
do
case "${flag}" in
l) line=${OPTARG};;
i) input=${OPTARG};;
o) output=${OPTARG};;
esac
done
if [ -f tmp ]; then
echo "Temp file:tmp exist. delete it yourself :)"
exit
fi
if [ -f "$input" ]; then
re_isanum='^[0-9]+$'
if ! [[ $line =~ $re_isanum ]] ; then
echo "Error: LINENUMBER must be a positive, whole number."
exit 1
elif [ $line -eq "0" ]; then
echo "Error: LINENUMBER must be greater than zero."
exit_abnormal
fi
if [ ! -z "$output" ]; then
sed -n "${line}p" $input >> $output
fi
if [ ! -z "$input" ]; then
# deleting the line here (after appending it above) turns the copy into a move
sed "${line}d" $input > tmp && cp tmp $input
fi
fi
if [ -f tmp ]; then
rm tmp
fi
You could try this command:
egrep -n "*" <filename> | egrep "<line number>"
Easy with perl! If you want to get lines 1, 3 and 5 from a file, say /etc/passwd:
perl -e 'while(<>){if(++$l~~[1,3,5]){print}}' < /etc/passwd
I am surprised only one other answer (by Ramana Reddy) suggested adding line numbers to the output. The following searches for the required line number and colours the output.
file=FILE
lineno=LINENO
wb="107"; bf="30;1"; rb="101"; yb="103"
cat -n ${file} | { GREP_COLORS="se=${wb};${bf}:cx=${wb};${bf}:ms=${rb};${bf}:sl=${yb};${bf}" grep --color -C 10 "^[[:space:]]\\+${lineno}[[:space:]]"; }
