I have a bash loop moving through lines in a file and am wondering if there is a way to interactively replace each line with content.
while read p
do
    echo "$p"
    read input
    if [ "$input" == "y" ]; then
        : # DO SOME REPLACEMENT ON P HERE
    fi
done < "$fname"
From read(3), I know that read copies from the file descriptor into a *buffer. I realize that I can use sed substitution directly but cannot get it to work in this bash loop context. For example, say I want to wrap selected lines:
sed 's/\(.*\)/wrap \(\1\)/'
Complication: the bash read command (without -r) swallows '\' and treats a trailing backslash as a continuation, reading on into the next physical line (this is the behaviour I want). sed seems NOT to. This means the two tools' line counts will differ, so a naive counter seems not the way to go if it's to work with sed.
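To illustrate that behaviour, a quick sketch (read without -r removes the backslash-newline pair and keeps reading):
printf 'one \\\ntwo\n' > demo.txt                 # two physical lines joined by a trailing backslash
while read p; do echo "[$p]"; done < demo.txt     # one iteration: [one two]
while read -r p; do echo "[$p]"; done < demo.txt  # two iterations: [one \] and [two]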
Use ex, which is a non-visual mode of vim (it's like a newer ed):
ex -c '%s/\(.*\)/wrap \(\1\)/c' FILE
Note that I needed to add % (do the operation for all lines) and c (prompt before substitution) at the beginning and end of your sed expression, respectively.
When prompted, input y<CR> to substitute, n<CR> to not substitute, q<CR> to stop the substitute command. After inputting q<CR> or reaching the end of file you can save changes with w<CR> (that will overwrite the file) and quit with q<CR>.
Alternatively, you can use ed, but I won't help you with that. ;)
For more general information about ex, check out this question:
https://superuser.com/questions/22455/vim-what-is-the-ex-mode-for-batch-processing-for
I'm not sure I understand what you need; maybe you can give us more details, like a sample input and an expected output. Maybe this is helpful:
while read p
do
    echo "$p"
    read input < /dev/tty   # Read the confirmation from the keyboard, not from $fname.
    if [ "$input" == "y" ]
    then
        # Sed is fed with "p" and then it replaces any input with a given string.
        # In this case "wrap <matched_text>". Its output is then assigned again to "p".
        p="$(sed -nre 's/(.*)/wrap \1/p' <<< "$p")"
    fi
done < "$fname"
I need my script to send an email from terminal. Based on what I've seen here and many other places online, I formatted it like this:
/var/mail -s "$SUBJECT" "$EMAIL" << EOF
Here's a line of my message!
And here's another line!
Last line of the message here!
EOF
However, when I run this I get this warning:
myfile.sh: line x: warning: here-document at line y delimited by end-of-file (wanted 'EOF')
myfile.sh: line x+1: syntax error: unexpected end of file
...where line x is the last written line of code in the program, and line y is the line with /var/mail in it. I've tried replacing EOF with other things (ENDOFMESSAGE, FINISH, etc.) but to no avail. Nearly everything I've found online has it done this way, and I'm really new at bash so I'm having a hard time figuring it out on my own. Could anyone offer any help?
The EOF token must be at the beginning of the line, you can't indent it along with the block of code it goes with.
If you write <<-EOF you may indent it, but it must be indented with Tab characters, not spaces. So it still might not end up even with the block of code.
Also make sure you have no whitespace after the EOF token on the line.
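A minimal sketch of the two working forms (using the mail command for illustration; note that /var/mail is usually the spool directory, not a binary). The indentation in the <<- variant must be real tab characters:
# Delimiter at the start of the line: always works.
mail -s "$SUBJECT" "$EMAIL" << EOF
Here's a line of my message!
EOF

# Indented delimiter: only with <<-, and only with tabs.
if true; then
	mail -s "$SUBJECT" "$EMAIL" <<-EOF
	Here's a line of my message!
	EOF
fi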
The line that starts or ends the here-doc probably has some non-printable or whitespace characters (for example, carriage return) which means that the second "EOF" does not match the first, and doesn't end the here-doc like it should. This is a very common error, and difficult to detect with just a text editor. You can make non-printable characters visible for example with cat:
cat -A myfile.sh
Once you see the output from cat -A the solution will be obvious: remove the offending characters.
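For instance, a stray carriage return at the end of the delimiter line would show up like this (illustrative output; GNU cat -A marks line ends with $ and carriage returns with ^M):
cat -A myfile.sh | tail -n 1
# EOF^M$   <- the delimiter is really "EOF\r", so it never matches EOF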
Try removing the preceding spaces before EOF:
/var/mail -s "$SUBJECT" "$EMAIL" <<-EOF
Using a <tab> instead of <spaces> for the indentation AND using <<-EOF works fine.
The "-" strips leading <tab>s, not <spaces>, but at least this works.
Note that one can also get this error if you do this:
while read line; do
    echo $line
done << somefile
Because << somefile should read < somefile in this case.
This may be old, but in my case I had a space after the ending EOF:
<< EOF
blah
blah
EOF <-- this line had a trailing space; that was the issue. Had it for years, finally looked it up here.
For anyone stumbling here who googled "bash warning: here-document delimited by end-of-file", it may be that you are getting the
warning: here-document at line 74 delimited by end-of-file
...type warning because you accidentally used a here document symbol (<<) when you meant to use a here string symbol (<<<). That was my case.
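A quick way to see the difference (a sketch, not from the original answer):
wc -w <<< "some words here"   # here string: the text itself is stdin; prints 3
wc -w << "some words here"    # here document: "some words here" is now a
                              # delimiter the shell waits for; if it never
                              # appears you get the end-of-file warning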
Here is a flexible way to deal with multiple indented lines without using a heredoc.
echo 'Hello!'
sed -e 's:^\s*::' < <(echo '
    Some indented text here.
    Some indented text here.
')

if [[ true ]]; then
    sed -e 's:^\s\{4,4\}::' < <(echo '
    Some indented text here.
        Some extra indented text here.
    Some indented text here.
    ')
fi
Some notes on this solution:
if the content is expected to contain single quotes, either escape them using \ or replace the string delimiters with double quotes. In the latter case, be careful that constructions like $(command) will be interpreted. If the string contains both single and double quotes, you'll have to escape at least one kind.
the given example prints a trailing empty line; there are numerous ways to get rid of it, not included here to keep the proposal free of clutter.
the flexibility comes from the ease with which you can control how much leading space should stay or go, provided that you know some sed regexp, of course.
When I want to have docstrings for my bash functions, I use a solution similar to the suggestion of user12205 in a duplicate of this question.
See how I define USAGE for a solution that:
auto-formats well for me in my IDE of choice (sublime)
is multi-line
can use spaces or tabs as indentation
preserves indentations within the comment.
function foo {
    # Docstring
    read -r -d '' USAGE <<'END'
    # This method prints foo to the terminal.
    #
    # Enter `foo -h` to see the docstring.
    # It has indentations and multiple lines.
    #
    # Change the delimiter if you need hashtag for some reason.
    # This can include $$ and = and eval, but won't be evaluated
END
    if [ "$1" = "-h" ]
    then
        echo "$USAGE" | cut -d "#" -f 2 | cut -c 2-
        return
    fi
    echo "foo"
}
So foo -h yields:
This method prints foo to the terminal.
Enter `foo -h` to see the docstring.
It has indentations and multiple lines.
Change the delimiter if you need hashtag for some reason.
This can include $$ and = and eval, but won't be evaluated
Explanation
cut -d "#" -f 2: Retrieve the second portion of the # delimited lines. (Think a csv with "#" as the delimiter, empty first column).
cut -c 2-: Retrieve the 2nd to end character of the resultant string
Also note that if [ "$1" = "-h" ] evaluates as false if there is no first argument, without an error, since "$1" becomes an empty string.
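For illustration, running the pipeline on a single hypothetical docstring line:
echo '    # This method prints foo to the terminal.' | cut -d "#" -f 2 | cut -c 2-
# -> This method prints foo to the terminal.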
Make sure the ending EOF is at the beginning of its own line, with nothing before it.
Along with the other answers mentioned by Barmar and Joni, I've noticed that I sometimes have to leave a blank line before and after my EOF when using <<-EOF.
I am interested in typing a search keyword in the terminal and being able to see the output immediately and interactively. That means, like searching in Google, I want to get results immediately after every character or word keyed in.
I thought of doing this by combining the watch command and the find command, but was unable to achieve the interactiveness.
Let's assume that, to search for a file with 'hint' in its name, I use the command
$ find | grep -i hint
This pretty much gives me decent output.
But what I want is the same behaviour interactively: without retyping the command, only typing the search string.
I thought of writing a shell script which reads from stdin and executes the above piped command every second, taking whatever I type as the search string each time. But watch is not interactive.
I am interested in the below kind of output:
$ hi
./hi
./hindi
./hint
$ hint
./hint
If anyone can help me with a better alternative to my pseudo-code, that would also be nice.
Stumbled across this old question, found it interesting and thought I'd give it a try. This bash script worked for me:
#!/bin/bash
# Set MINLEN to the minimum number of characters needed to start the
# search.
MINLEN=2
clear
echo "Start typing (minimum $MINLEN characters)..."
# get one character without need for return
while read -n 1 -s i
do
    # get ascii value of character to detect backspace
    n=$(echo -n "$i" | od -i -An | tr -d " ")
    if (( n == 127 ))            # if character is a backspace...
    then
        if (( ${#in} > 0 ))      # ...and search string is not empty
        then
            in=${in:0:${#in}-1}  # shorten search string by one
            # could use ${in:0:-1} for bash >= 4.2
        fi
    elif (( n == 27 ))           # if character is an escape...
    then
        exit 0                   # ...then quit
    else                         # if any other char was typed...
        in=$in$i                 # add it to the search string
    fi
    clear
    echo "Search: \"$in\""       # show search string on top of screen
    if (( ${#in} >= MINLEN ))    # if search string is long enough...
    then
        find "$@" -iname "*$in*" # ...call find, passing it any parameters given
    fi
done
Hope this does what you intend(ed) to do. I included the option to pass a start directory as the first parameter, because the listings can get quite unwieldy if you search through a whole home folder or something. Just drop the parameters if you don't need them.
Using the ascii value in $n it should be easily possible to include some hotkey functionality like quitting or saving results, too.
EDIT:
If you start the script it will display "Start typing..." and wait for keys to be pressed. If the search string is long enough (as defined by the variable MINLEN), any key press will trigger a find run with the current search string (the grep seems kind of redundant here). The script passes any parameters given to find. This allows for better search results and shorter result lists. -type d for example will limit the search to directories, -xdev will keep the search on the current file system, etc. (see man find). Backspace will shorten the search string by one, while pressing Escape will quit the script. The current search string is displayed on top. I used -iname for the search to be case-insensitive. Change this to -name to get case-sensitive behaviour.
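For example (hypothetical invocations, assuming the script above is saved as isearch.sh):
./isearch.sh ~/projects        # search below ~/projects
./isearch.sh . -type d         # current directory, directories only
./isearch.sh . -xdev           # stay on the current file system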
The code below takes input on stdin and a filtering method as a macro in "$1"; output goes to stdout.
You can use it e.g., as follows:
#Produce basic output, dynamically filter it in the terminal,
#and output the final, confirmed results to stdout
vi `find . | terminalFilter`
The default filtering macro is
grep -F "$pattern"
the script provides the pattern variable as whatever is currently entered.
The immediate results as a function of what is currently entered are displayed on the terminal. When you press <Enter>, the results become final and are outputted to stdout.
#!/usr/bin/env bash
##terminalFilter
del=$(printf "\x7f")   #backspace character
input="$(cat)"         #create initial set from all input
#take the filter macro from the first argument or use
# 'grep -F "$pattern"'
filter=${1:-'grep -F "$pattern"'}
pattern=               #what's inputted by the keyboard at any given time
printSelected(){
    echo "$input" | eval "$filter"
}
printScreen(){
    clear
    printSelected
    #Print search pattern at the bottom of the screen
    tput cup $(tput lines); echo -n "PATTERN: $pattern"
} >/dev/tty
#^only the confirmed results go to `stdout`; this goes to the terminal only
printScreen
#read from the terminal as `cat` has already consumed the `stdin`
exec 0</dev/tty
while IFS=$'\n' read -s -n1 key; do
    case "$key" in
        "$del") pattern="${pattern%?}";; #backspace deletes the last character
        "")     break;;                  #enter breaks the loop
        *)      pattern="$pattern$key";; #everything else gets appended
                                         #to the pattern string
    esac
    printScreen
done
clear
printSelected
fzf is a fast and powerful command-line fuzzy finder that exactly suits your needs.
Check it out here: https://github.com/junegunn/fzf.
For your example, simply run fzf on the command line and it should work fine.
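For instance, to reproduce the example from the question (assuming fzf is installed; --query merely pre-fills the search string):
find . | fzf --query hint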
Say I have some arbitrary multi-line text file:
sometext
moretext
lastline
How can I remove only the last character (the e, not the newline or null) of the file without making the text file invalid?
A simpler approach (outputs to stdout, doesn't update the input file):
sed '$ s/.$//' somefile
$ is a Sed address that matches the last input line only, thus causing the following function call (s/.$//) to be executed on the last line only.
s/.$// replaces the last character on the (in this case last) line with an empty string; i.e., effectively removes the last char. (before the newline) on the line.
. matches any character on the line, and following it with $ anchors the match to the end of the line; note how the use of $ in this regular expression is conceptually related, but technically distinct from the previous use of $ as a Sed address.
Example with stdin input (assumes Bash, Ksh, or Zsh):
$ sed '$ s/.$//' <<< $'line one\nline two'
line one
line tw
To update the input file too (do not use if the input file is a symlink):
sed -i '$ s/.$//' somefile
Note:
On macOS, you'd have to use -i '' instead of just -i; for an overview of the pitfalls associated with -i, see the bottom half of this answer.
If you need to process very large input files and/or performance / disk usage are a concern and you're using GNU utilities (Linux), see ImHere's helpful answer.
truncate
truncate -s-1 file
Removes one character (-1) from the end of the same file, just as >> appends to the same file.
The problem with this approach is that it doesn't retain a trailing newline if it existed.
The solution is:
if [ -n "$(tail -c1 file)" ]   # if the file does not end with a newline...
then
    truncate -s-1 file         # ...just remove the last char, as the question asks
else
    truncate -s-2 file         # remove the last two characters
    echo "" >> file            # and add the trailing newline back
fi
This works because tail takes the last byte (not char).
It takes almost no time even with big files.
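A quick round trip to check the newline handling (a sketch; assumes GNU coreutils truncate):
printf 'sometext\nmoretext\nlastline\n' > file
# the file ends with a newline, so the else branch applies:
truncate -s-2 file && echo "" >> file
tail -c 8 file   # -> "lastlin" plus the restored trailing newline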
Why not sed
The problem with a sed solution like sed '$ s/.$//' file is that it reads the whole file first (taking a long time with large files), then you need a temporary file (of the same size as the original):
sed '$ s/.$//' file > tempfile
rm file; mv tempfile file
And then move the tempfile to replace the file.
Here's another using ex, which I find not as cryptic as the sed solution:
printf '%s\n' '$' 's/.$//' wq | ex somefile
The $ goes to the last line, the s deletes the last character, and wq is the well known (to vi users) write+quit.
After a whole bunch of playing around with different strategies (and avoiding sed -i or perl), the best way I found to do this was with:
sed '$! { P; D; }; s/.$//' somefile
If the goal is to remove the last character in the last line, this awk should do:
awk '{a[NR]=$0} END {for (i=1;i<NR;i++) print a[i];sub(/.$/,"",a[NR]);print a[NR]}' file
sometext
moretext
lastlin
It stores all lines in an array, then prints them out, changing the last line.
Just a remark: sed -i will replace the file (it writes a temporary copy and renames it over the original).
So if you are tailing the file, you'll get a "No such file or directory" warning until you reissue the tail command.
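As an aside (not from this answer): GNU tail's -F option follows the file by name, so it keeps working after sed -i replaces the file:
tail -F file   # re-opens the file after it is replaced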
EDITED ANSWER
I put your text in a test file on my Desktop, saved as "old_file.txt":
sometext
moretext
lastline
Afterwards I wrote a small script to take the old file and eliminate the last character of the last line:
#!/bin/bash
no_of_new_line_characters=$(wc -l < '/root/Desktop/old_file.txt')
let "no_of_lines=no_of_new_line_characters+1"
sed -n 1,"$no_of_new_line_characters"p '/root/Desktop/old_file.txt' > '/root/Desktop/my_new_file'
sed -n "$no_of_lines","$no_of_lines"p '/root/Desktop/old_file.txt' | sed 's/.$//g' >> '/root/Desktop/my_new_file'
Opening the new file I created showed the following output:
sometext
moretext
lastlin
I apologize for my previous answer (wasn't reading carefully)
sed '$ s/.$//' filename | tee newFilename
This should do your job.
A couple of Perl solutions, for comparison/reference:
(echo 1a; echo 2b) | perl -e '$_=join("",<>); s/.$//; print'
(echo 1a; echo 2b) | perl -e 'while(<>){ if(eof) {s/.$//}; print }'
I find the first, read-whole-file-into-memory approach generally quite useful (less so for this particular problem). You can then write regexes which span multiple lines, for example to combine every 3 lines of a certain format into 1 summary line.
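For instance, a sketch of that slurp idea on a different task, joining every 3 lines into one:
printf '%s\n' a b c d e f | perl -e '$_=join("",<>); s/(.*)\n(.*)\n(.*)\n/$1 $2 $3\n/g; print'
# a b c
# d e f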
For this problem, truncate would be faster and the sed version is shorter to type. Note that truncate requires a file to operate on, not a stream. Normally I find sed to lack the power of perl and I much prefer the extended-regex / perl-regex syntax. But this problem has a nice sed solution.
I am foxed by the following situation.
I have a file list.txt that I want to run through line by line, in a loop, in bash. A typical line in list.txt has spaces in it. The problem is that the loop contains a "read" command. I want to write this loop in bash rather than something like perl. I can't do it :-(
Here's how I would usually write a loop to read from a file line by line:
while read p; do
    echo $p
    echo "Hit enter for the next one."
    read x
done < list.txt
This doesn't work though, because of course "read x" will be reading from list.txt rather than the keyboard.
And this doesn't work either:
for i in `cat list.txt`; do
    echo $i
    echo "Hit enter for the next one."
    read x
done
because the lines in list.txt have spaces in.
I have two proposed solutions, both of which stink:
1) I could edit list.txt, and globally replace all spaces with "THERE_SHOULD_BE_A_SPACE_HERE" . I could then use something like sed, within my loop, to replace THERE_SHOULD_BE_A_SPACE_HERE with a space and I'd be all set. I don't like this for the stupid reason that it will fail if any of the lines in list.txt contain the phrase THERE_SHOULD_BE_A_SPACE_HERE (so malicious users can mess me up).
2) I could use the while loop with stdin and then in each loop I could actually launch e.g. a new terminal, which would be unaffected by the goings-on involving stdin in the original shell. I tried this and I did get it to work, but it was ugly: I want to wrap all this up in a shell script and I don't want that shell script to be randomly opening new windows. What would be nice, and what might somehow be the answer to this question, would be if I could figure out how to somehow invoke a new shell in the command and feed commands to it without feeding stdin to it, but I can't get it to work. For example this doesn't work and I don't really know why:
while read p; do
    bash -c "echo $p; echo ""Press enter for the next one.""; read x;";
done < list.txt
This attempt seems to fail because "read x", despite being in a different shell somehow, is still seemingly reading from list.txt. But I feel like I might be close with this one -- who knows.
Help!
You must open the file on a different file descriptor:
while read p <&3; do
    echo "$p"
    echo 'Hit enter for the next one'
    read x
done 3< list.txt
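An alternative with the same effect (a sketch, not part of this answer): keep stdin on the file and read the keyboard explicitly from /dev/tty:
while read -r p; do
    echo "$p"
    echo 'Hit enter for the next one'
    read -r x < /dev/tty   # the keyboard, regardless of what stdin is
done < list.txt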
I would probably count the lines in the file and iterate over each of them using e.g. sed. It is also possible to read infinitely from stdin by changing the while condition to while true, and to exit with Ctrl+C.
line=0 lines=$(sed -n '$=' in.file)
while [ $line -lt $lines ]
do
    let line++
    sed -n "${line}p" in.file
    echo "Hit enter for the next ${line} of ${lines}."
    read -s x
done
AWK is also a great tool for this. A simple way to iterate through input would be:
awk '{ print $0; printf "%s", "Hit enter for the next"; getline < "-" }' file
As an alternative, you can read from stderr, which by default is connected to the tty as well. The following then also includes a test for that assumption:
(
    tty -s <&2 || exit 1
    while read -r line; do
        echo "$line"
        echo 'Hit enter'
        read x <&2
    done < file
)
I tried to run this script:
for line in $(cat song.txt)
do
    echo "$line" >> out.txt
done
I am running it on Ubuntu 11.04. "song.txt" contains:
I read the news today oh boy
About a lucky man who made the grade
After running the script, "out.txt" looks like this:
I
read
the
news
today
oh
boy
About
a
lucky
man
who
made
the
grade
Can anyone tell me what I am doing wrong here?
For per-line input you should use while read, for example:
cat song.txt | while read line
do
    echo "$line" >> out.txt
done
Better (more efficient really) would be the following method:
while read line
do
    echo "$line"
done < song.txt > out.txt
That's because the for command takes every word from the list it is given (in your case, the content of the song.txt file), whether the words are separated by spaces or newline characters.
What's misleading you here is that your for variable is named line. for only works with words. Reread your script replacing line with word and it should make sense.
In the for-each-in loop, the specified list is assumed to be whitespace-separated, where whitespace includes spaces, newlines, tabs, etc. In your case the list is the entire text of the file, and hence the loop runs once for every word in the file.
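A small demonstration of the difference (hypothetical two-line file):
printf 'one two\nthree four\n' > demo.txt

for w in $(cat demo.txt); do echo "[$w]"; done
# [one] [two] [three] [four]   <- four iterations, split on any whitespace

while IFS= read -r line; do echo "[$line]"; done < demo.txt
# [one two] [three four]       <- two iterations, one per line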