Reading a file using shell script - Linux

I have a text file named sqlfile, with the following content:
a.sql
b.sql
c.sql
d.sql
What I want is to store them in a variable and then print them using a for loop.
But here I get only d.sql in the output of the script.
The script:
#!/bin/bash
while read line
do
files=`echo $line`
done < /home/abdul_old/Desktop/My_Shell_Script/sqlfile
for file in $files
do
echo $file
done

A variable can only hold one value; what you want is an array:
#!/bin/bash
while read line
do
files+=( "$line" )
done < /home/abdul_old/Desktop/My_Shell_Script/sqlfile
for file in "${files[@]}"
do
echo "$file"
done

while read line
do files="$files $line"
done < /home/abdul_old/Desktop/My_Shell_Script/sqlfile
or
files=$(</home/abdul_old/Desktop/My_Shell_Script/sqlfile)
or
files=$(cat /home/abdul_old/Desktop/My_Shell_Script/sqlfile)
You're doing way too much work in your loop.
The middle alternative works with bash; the other two work with most shells. Prefer $(...) to back-quotes.
This code assumes there are no spaces in file names to mess things up. If you do use blanks in file names, you have to work marginally harder - see the array-based solution by SiegeX
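A quick sketch of what goes wrong (hypothetical names; "my report.sql" contains a space):
files='a.sql my report.sql'
for file in $files; do echo "$file"; done        # unquoted scalar: prints 'my' and 'report.sql' separately
files=( 'a.sql' 'my report.sql' )
for file in "${files[@]}"; do echo "$file"; done # quoted array: prints 'my report.sql' intact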

I think you need to make files an array; otherwise, as soon as the while loop finishes, files stores only the latest line.
try:
files=( "${files[@]}" "$line" )

That's right, you assign only the last value to files.
You must use, for instance, += instead of =:
#!/bin/bash
while read line
do
files+=" $line"
done < /home/abdul_old/Desktop/My_Shell_Script/sqlfile
for file in $files
do
echo $file
done

Using read is fine, but you have to set the IFS environment variable first, else leading and trailing white space are removed from each line: see Preserving leading white space while reading/writing a file line by line in bash.

All you have to do is:
readarray myData < sqlfile
This will put file lines into an array called myData
Now you can access any of these lines like this:
printf "%s\n" "${myData[0]}" #outputs first line
printf "%s\n" "${myData[2]}" #outputs third line
And you can iterate over it:
for curLine in "${myData[@]}"; do
echo "$curLine"
done
Note that these lines will contain the trailing \n character as well. To remove trailing newlines you can use the -t flag like this:
readarray -t myData < sqlfile
readarray is a synonym for mapfile. You can read about it in man bash.
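For instance, a small usage sketch once the file is loaded:
readarray -t myData < sqlfile
echo "Read ${#myData[@]} lines"   # number of elements in the array
printf '%s\n' "${myData[@]:1:2}"  # slice: the second and third lines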

Related

For loop in command line runs bash script reading from text file line by line

I have a bash script which asks for two arguments with a space between them. Now I would like to automate filling out the prompt in the command line with reading from a text file. The text file contains a list with the argument combinations.
So something like this in the command line I think;
for line in 'cat text.file' ; do script.sh ; done
Can this be done? What am I missing/doing wrong?
Thanks for the help.
A while loop is probably what you need. Put the space-separated strings in the file text.file:
cat text.file
bingo yankee
bravo delta
Then write the script in question like below.
#!/bin/bash
while read -r arg1 arg2
do
/path/to/your/script.sh "$arg1" "$arg2"
done<text.file
Don't use for to read files line by line
Try something like this:
#!/bin/bash
ARGS=
while IFS= read -r line; do
ARGS="${ARGS} ${line}"
done < ./text.file
script.sh "$ARGS"
This would add each line to a variable which then is used as the arguments of your script.
'cat text.file' is a string literal; $(cat text.file) would expand to the output of the command. However, cat is useless here because bash can read a file using redirection. Also, with quotes the expansion is treated as a single argument, and without them it is split at spaces, tabs, and newlines.
Bash syntax to read a file line by line (this will be slow for big files):
while IFS= read -r line; do ... "$line"; done < text.file
Setting IFS to empty for the read command preserves leading spaces.
The -r option preserves backslashes (\).
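A one-liner to see both effects (a minimal sketch):
printf '   indented \\here\n' | { IFS= read -r line; printf '[%s]\n' "$line"; }
# prints [   indented \here] -- spaces kept by IFS=, backslash kept by -r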
Another way to read the whole file is content=$(<file); note the < inside the command substitution. So a creative way to read a file into an array, each element a non-empty line:
read_to_array () {
local oldsetf=${-//[^f]} oldifs=$IFS
set -f
IFS=$'\n' array_content=($(<"$1")) IFS=$oldifs
[[ $oldsetf ]]||set +f
}
read_to_array "file"
for element in "${array_content[@]}"; do ...; done
oldsetf stores the current set -f / set +f setting
oldifs stores the current IFS
IFS=$'\n' splits on newlines (multiple newlines are treated as one)
set -f avoids glob expansion, in case a line contains, for example, a single *
note the () around $() to store the result of the splitting in an array
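To see why set -f matters, consider a line that is just a * (a throwaway sketch; run it in a directory that contains files):
line='*'
arr=($line)         # without set -f, the * glob-expands to the directory contents
echo "${#arr[@]}"   # number of files in the current directory
set -f
arr=($line)         # with set -f, the * stays literal
echo "${#arr[@]}"   # 1
set +f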
If I were to create a solution determined by the literal of what you ask for (using a for loop and parsing lines from a file), I would use iterations determined by the number of lines in the file (if it isn't too large).
Assuming each line has two strings separated by a single space (to be used as positional parameters in your script):
file="$1"
f_count="$(wc -l < "$file")"
for line in $(seq 1 $f_count)
do
script.sh $(head -n $line $file | tail -n1) && wait
done
You may have a much better time using sjsam's solution however.

Simple sed substitution

I have a text file with a list of files with the structure ABC123456A or ABC123456AA. What I would like to do is check whether the file ABC123456ZZP also exists, i.e. I want to substitute the letter(s) after ABC123456 with ZZP.
Can I do this using sed?
Like this?
X=ABC123456 ; echo ABC123456AA | sed -e "s,\(${X}\).*,\1ZZP,"
You could use sed as wilx suggests but I think a better option would be bash.
while read file; do
base=${file:0:9}
[[ -f ${base}ZZP ]] && echo "${base}ZZP exists!"
done < file
This will loop over each line in file
then base is set to the first 9 characters of the line (excluding whitespace)
then check to see if a file exists with ZZP on the end of base and print a message if it does.
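The ${file:0:9} expansion used for base is plain bash substring syntax; for example:
file='ABC123456AA'
base=${file:0:9}    # first 9 characters
echo "$base"        # ABC123456
echo "${base}ZZP"   # ABC123456ZZP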
Look:
$ str="ABC123456AA"
$ echo "${str%[[:alpha:]][[:alpha:]]*}"
ABC123456
so do this:
while IFS= read -r tgt; do
tgt="${tgt%[[:alpha:]][[:alpha:]]*}ZZP"
[[ -f "$tgt" ]] && printf "%s exists!\n" "$tgt"
done < file
It will still fail for file names that contain newlines, so let us know if you have that situation. Unlike the other posted solutions, though, it will work for file names with other than 9 key characters; for file names containing spaces, commas, backslashes, globbing characters, etc.; and it is efficient.
Since you now say that you only need the first 9 characters of each line and you were happy with piping every line to sed, here's another solution you might like:
cut -c1-9 file |
while IFS= read -r tgt; do
[[ -f "${tgt}ZZP" ]] && printf "%sZZP exists!\n" "$tgt"
done
It'd be MUCH more efficient and more robust than the sed solution, and similar in both contexts to the other shell solutions.

Print output of cat statement in bash script loop

I'm trying to execute a command for each line coming from a cat command. I'm basing this on sample code I got from a vendor.
Here's the script:
for tbl in 'cat /tmp/tables'
do
echo $tbl
done
So I was expecting the output to be each line in the file. Instead I'm getting this:
cat
/tmp/tables
That's obviously not what I wanted.
I'm going to replace the echo with an actual command that interfaces with a database.
Any help in straightening this out would be greatly appreciated.
You are using the wrong type of quotes.
You need to use back-quotes rather than single quotes so that the argument is run as a command and its output is fed to the for loop.
for tbl in `cat /tmp/tables`
do
echo "$tbl"
done
Also for better readability (if you are using bash), you can write it as
for tbl in $(cat /tmp/tables)
do
echo "$tbl"
done
If you expect to get each line (the for loops above will give you each word), then you may be better off using xargs, like this:
cat /tmp/tables | xargs -L1 echo
or as a loop
cat /tmp/tables | while read line; do echo "$line"; done
The single quotes should be backticks:
for tbl in `cat /etc/tables`
Note that this will not get you the input line by line, but word by word. To process it line by line, try something like:
cat /etc/tables | while read line
do
echo "$line"
done
With while loop:
while read line
do
echo "$line"
done < "file"
while IFS= read -r tbl; do echo "$tbl" ; done < /etc/tables
You can do a lot of parsing in bash by redefining the IFS (Input Field Separator), for example
IFS=$'\t\n' # You must use $'...' quoting for the escape sequences to be interpreted.
for tbl in `cat /tmp/tables`
do
echo "$tbl"
done

How do I use the lines of a file as arguments of a command?

Say, I have a file foo.txt specifying N arguments
arg1
arg2
...
argN
which I need to pass to the command my_command
How do I use the lines of a file as arguments of a command?
If your shell is bash (amongst others), a shortcut for $(cat afile) is $(< afile), so you'd write:
mycommand "$(< file.txt)"
Documented in the bash man page in the 'Command Substitution' section.
Alternately, have your command read from stdin, so: mycommand < file.txt
As already mentioned, you can use the backticks or $(cat filename).
What was not mentioned, and I think is important to note, is that you must remember that the shell will break apart the contents of that file according to whitespace, giving each "word" it finds to your command as an argument. And while you may be able to enclose a command-line argument in quotes so that it can contain whitespace, escape sequences, etc., reading from the file will not do the same thing. For example, if your file contains:
a "b c" d
the arguments you will get are:
a
"b
c"
d
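You can see that splitting for yourself with a throwaway file:
printf 'a "b c" d\n' > /tmp/args.txt
printf '<%s>\n' $(cat /tmp/args.txt)   # unquoted substitution splits on whitespace
# <a>
# <"b>
# <c">
# <d>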
If you want to pull each line as an argument, use the while/read/do construct:
while read i ; do command_name $i ; done < filename
command `< file`
will pass the file contents to the command as arguments (word splitting collapses the newlines), meaning you couldn't iterate over each line individually. For that you could write a script with a 'for' loop:
for line in `cat input_file`; do some_command "$line"; done
Or (the multi-line variant):
for line in `cat input_file`
do
some_command "$line"
done
Or (multi-line variant with $() instead of ``):
for line in $(cat input_file)
do
some_command "$line"
done
References:
For loop syntax: https://www.cyberciti.biz/faq/bash-for-loop/
You do that using backticks:
echo World > file.txt
echo Hello `cat file.txt`
If you want to do this in a robust way that works for every possible command line argument (values with spaces, values with newlines, values with literal quote characters, non-printable values, values with glob characters, etc), it gets a bit more interesting.
To write to a file, given an array of arguments:
printf '%s\0' "${arguments[#]}" >file
...replace with "argument one", "argument two", etc. as appropriate.
To read from that file and use its contents (in bash, ksh93, or another recent shell with arrays):
declare -a args=()
while IFS='' read -r -d '' item; do
args+=( "$item" )
done <file
run_your_command "${args[@]}"
To read from that file and use its contents (in a shell without arrays; note that this will overwrite your local command-line argument list, and is thus best done inside of a function, such that you're overwriting the function's arguments and not the global list):
set --
while IFS='' read -r -d '' item; do
set -- "$@" "$item"
done <file
run_your_command "$@"
Note that -d (allowing a different end-of-line delimiter to be used) is a non-POSIX extension, and a shell without arrays may also not support it. Should that be the case, you may need to use a non-shell language to transform the NUL-delimited content into an eval-safe form:
quoted_list() {
## Works with either Python 2.x or 3.x
python -c '
import sys, pipes, shlex
quote = pipes.quote if hasattr(pipes, "quote") else shlex.quote
print(" ".join([quote(s) for s in sys.stdin.read().split("\0")][:-1]))
'
}
eval "set -- $(quoted_list <file)"
run_your_command "$@"
If all you need to do is to turn file arguments.txt with contents
arg1
arg2
argN
into my_command arg1 arg2 argN then you can simply use xargs:
xargs -a arguments.txt my_command
You can put additional static arguments in the xargs call, like xargs -a arguments.txt my_command staticArg which will call my_command staticArg arg1 arg2 argN
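If the argument lines themselves may contain spaces, GNU xargs can be told to split on newlines only (note that -d is a GNU extension, not POSIX):
xargs -d '\n' -a arguments.txt my_command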
Here's how I pass contents of a file as an argument to a command:
./foo --bar "$(cat ./bar.txt)"
None of the answers seemed to work for me or were too complicated. Luckily, it's not complicated with xargs (Tested on Ubuntu 20.04).
This works with each arg on a separate line in the file as the OP mentions and was what I needed as well.
cat foo.txt | xargs my_command
One thing to note is that it doesn't seem to work with aliased commands.
The accepted answer works if the command accepts multiple args wrapped in a string. In my case, using (Neo)Vim, it does not, and the args are all stuck together.
xargs does it properly and actually gives you separate arguments supplied to the command.
I suggest using:
command $(echo $(tr '\n' ' ' < parameters.cfg))
Simply replace the end-of-line characters with spaces, then push the resulting string through an unquoted echo so it is split into separate arguments.
In my bash shell the following worked like a charm:
cat input_file | xargs -I % sh -c 'command1 %; command2 %; command3 %;'
where input_file is
arg1
arg2
arg3
As evident, this allows you to execute multiple commands with each line from input_file, a nice little trick I learned here.
Both solutions work even when lines have spaces:
readarray -t my_args < foo.txt
my_command "${my_args[@]}"
If readarray doesn't work, replace it with mapfile; they're synonyms.
I formerly tried this one below, but had problems when my_command was a script:
xargs -d '\n' -a foo.txt my_command
After editing @Wesley Rice's answer a couple times, I decided my changes were just getting too big to continue changing his answer instead of writing my own. So, I decided I need to write my own!
Read each line of a file in and operate on it line-by-line like this:
#!/bin/bash
input="/path/to/txt/file"
while IFS= read -r line
do
echo "$line"
done < "$input"
This comes directly from author Vivek Gite here: https://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/. He gets the credit!
Syntax: Read file line by line on a Bash Unix & Linux shell:
1. The syntax is as follows for bash, ksh, zsh, and all other shells to read a file line by line
2. while read -r line; do COMMAND; done < input.file
3. The -r option passed to read command prevents backslash escapes from being interpreted.
4. Add IFS= option before read command to prevent leading/trailing whitespace from being trimmed -
5. while IFS= read -r line; do COMMAND_on $line; done < input.file
And now to answer this now-closed question which I also had: Is it possible to `git add` a list of files from a file? - here's my answer:
Note that FILES_STAGED is a variable containing the absolute path to a file which contains a bunch of lines where each line is a relative path to a file I'd like to do git add on. This code snippet is about to become part of the "eRCaGuy_dotfiles/useful_scripts/sync_git_repo_to_build_machine.sh" file in this project, to enable easy syncing of files in development from one PC (ex: a computer I code on) to another (ex: a more powerful computer I build on): https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.
while IFS= read -r line
do
echo " git add \"$line\""
git add "$line"
done < "$FILES_STAGED"
References:
Where I copied my answer from: https://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/
For loop syntax: https://www.cyberciti.biz/faq/bash-for-loop/
Related:
How to read contents of file line-by-line and do git add on it: Is it possible to `git add` a list of files from a file?

Looping through the content of a file in Bash

How do I iterate through each line of a text file with Bash?
With this script:
echo "Start!"
for p in (peptides.txt)
do
echo "${p}"
done
I get this output on the screen:
Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'
(Later I want to do something more complicated with $p than just output to the screen.)
The environment variable SHELL is (from env):
SHELL=/bin/bash
/bin/bash --version output:
GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
cat /proc/version output:
Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006
The file peptides.txt contains:
RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL
One way to do it is:
while read p; do
echo "$p"
done <peptides.txt
As pointed out in the comments, this has the side effects of trimming leading whitespace, interpreting backslash sequences, and skipping the last line if it's missing a terminating linefeed. If these are concerns, you can do:
while IFS="" read -r p || [ -n "$p" ]
do
printf '%s\n' "$p"
done < peptides.txt
Exceptionally, if the loop body may read from standard input, you can open the file using a different file descriptor:
while read -u 10 p; do
...
done 10<peptides.txt
Here, 10 is just an arbitrary number (different from 0, 1, 2).
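This matters, for example, when the loop body itself reads from the terminal; a minimal sketch:
while read -u 10 p; do
read -p "Process $p? [y/n] " answer   # still reads the terminal on fd 0
[[ $answer == y ]] && echo "processing $p"
done 10<peptides.txt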
cat peptides.txt | while read line
do
# do something with $line here
done
and the one-liner variant:
cat peptides.txt | while read line; do something_with_$line_here; done
These options will skip the last line of the file if there is no trailing line feed.
You can avoid this by the following:
cat peptides.txt | while read line || [[ -n $line ]];
do
# do something with $line here
done
Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
echo "$p"
done < "$filename"
Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).
#!/bin/bash
filename='peptides.txt'
exec 4<"$filename"
echo Start
while read -u4 p ; do
echo "$p"
done
This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done
This format allows me to put it all in one command-line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done
Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt
I've used these as written above because my text files have one word per line. (See comments.) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:
OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS
This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
Best of luck!
A few more things not covered by other answers:
Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
# process the fields
# if the line has less than three fields, the missing fields will be set to an empty string
# if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
Reading from the output of another command, using process substitution
while read -r line; do
# process the line
done < <(command ...)
This approach is better than command ... | while read -r line; do ... because the while loop here runs in the current shell rather than a subshell as in the case of the latter. See the related post A variable modified inside a while loop is not remembered.
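A quick way to see the difference (minimal sketch):
count=0
printf '1\n2\n3\n' | while read -r line; do ((count++)); done
echo "$count"   # 0 -- the pipeline ran the loop in a subshell
count=0
while read -r line; do ((count++)); done < <(printf '1\n2\n3\n')
echo "$count"   # 3 -- process substitution keeps the loop in the current shell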
Reading from a null delimited input, for example find ... -print0
while read -r -d '' line; do
# logic
# use a second 'read ... <<< "$line"' if we need to tokenize the line
done < <(find /path/to/dir -print0)
Related read: BashFAQ/020 - How can I find and safely handle file names containing newlines, spaces or both?
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
# process the lines
# note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Based on @chepner's answer here:
-u is a bash extension. For POSIX compatibility, each call would look something like read -r X <&3.
Reading a whole file into an array (Bash versions earlier than 4)
while read -r line; do
my_array+=("$line")
done < my_file
If the file ends with an incomplete line (newline missing at the end), then:
while read -r line || [[ $line ]]; do
my_array+=("$line")
done < my_file
Reading a whole file into an array (Bash version 4 and later)
readarray -t my_array < my_file
or
mapfile -t my_array < my_file
And then
for line in "${my_array[@]}"; do
# process the lines
done
More about the shell builtins read and readarray commands - GNU
More about IFS - Wikipedia
BashFAQ/001 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Related posts:
Creating an array from a text file in Bash
What is the difference between these approaches to reading a file that has just one line?
Bash while read loop extremely slow compared to cat, why?
Use a while loop, like this:
while IFS= read -r line; do
echo "$line"
done <file
Notes:
If you don't set the IFS properly, you will lose indentation.
You should almost always use the -r option with read.
Don't read lines with for
If you don't want your read to be broken by newline character, use -
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "$line"
done < "$1"
Then run the script with file name as parameter.
Suppose you have this file:
$ cat /tmp/test.txt
Line 1
Line 2 has leading space
Line 3 followed by blank line
Line 5 (follows a blank line) and has trailing space
Line 6 has no ending CR
There are four elements that will alter the meaning of the file output read by many Bash solutions:
The blank line 4;
Leading or trailing spaces on two lines;
Maintaining the meaning of individual lines (i.e., each line is a record);
The line 6 not terminated with a CR.
If you want to read the text file line by line, including blank lines and terminating lines without CR, you must use a while loop and you must have an alternate test for the final line.
Here are the methods that may change the file (in comparison to what cat returns):
1) Lose the last line and leading and trailing spaces:
$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR)
2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:
$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
Line 2 has leading space
Line 3 followed by blank line
Line 5 (follows a blank line) and has trailing space
Line 6 has no ending CR'
(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)
The most robust and simplest way to read a file line-by-line and preserve all spacing is:
$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
' Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space '
'Line 6 has no ending CR'
If you want to strip leading and trailing spaces, remove the IFS= part:
$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'
(A text file without a terminating \n, while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)
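You can reproduce the effect with a throwaway file that lacks the final newline:
printf 'complete line\nno newline at end' > /tmp/broken.txt
while read -r line; do echo "$line"; done < /tmp/broken.txt
# prints only: complete line
while read -r line || [[ -n $line ]]; do echo "$line"; done < /tmp/broken.txt
# prints both lines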
More at the BASH FAQ
I like to use xargs instead of while. xargs is powerful and command line friendly
cat peptides.txt | xargs -I % sh -c "echo %"
With xargs, you can also add verbosity with -t and validation with -p
This might be the simplest answer and maybe it won't work in all cases, but it is working great for me:
while read line;do echo "$line";done<peptides.txt
if you need literal quotes around lines that contain spaces:
while read line;do echo \"$line\";done<peptides.txt
Ahhh, this is pretty much the same as the answer that got upvoted most, but it's all on one line.
#!/bin/bash
#
# Change the file name from "test" to desired input file
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
echo $x
done
Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess quite a few people end up asking these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
echo ParseFPS $line
FPS=parse
fi
if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
echo ParseFPS $line
FPS=${line##*=}
FPS="${FPS%\"}"
FPS="${FPS#\"}"
fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then
echo ParseFPS Unknown frame rate
fi
echo Found $FPS
Declaring the variable outside of the loop, setting its value inside, and using it after the loop requires the done <<< "$(...)" syntax; the application needs to run within the context of the current console, and the quotes around the command substitution preserve the newlines of the output stream.
The loop matches for substrings, then reads the name=value pair, splits off the right-hand side of the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
This is coming rather late, but in the hope that it may help someone, I am adding this answer. Also, it may not be the best way. The head command can be used with the -n argument to read n lines from the start of a file, and likewise the tail command can be used to read from the bottom. Now, to fetch the nth line from a file, we head n lines and pipe the data to tail to get only 1 line from the piped data.
TOTAL_LINES=$(wc -l < "$USER_FILE")
echo $TOTAL_LINES # To validate total lines in the file
for (( i=1 ; i <= $TOTAL_LINES; i++ ))
do
LINE=`head -n$i $USER_FILE | tail -n1`
echo $LINE
done
@Peter: This could work out for you:
echo "Start!";for p in $(cat ./pep); do
echo $p
done
This would return the output-
Start!
RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL
Another way to go about it, using xargs:
<file_name xargs -I {} echo {}
echo can be replaced with other commands or piped further.
for p in `cat peptides.txt`
do
echo "${p}"
done
