"No such file or directory" using cut in a while read loop in bash [duplicate] - linux

This question already has answers here:
How to cut an existing variable and assign to a new variable in bash
(1 answer)
printing first word in every line of a txt file unix bash
(5 answers)
Take nth column in a text file
(6 answers)
Bash - While read line from file print first and second column [closed]
(3 answers)
Closed 3 years ago.
I have this sample text file text.txt that is in the form
fruits vegetables
apples cucumbers
oranges squash
and it is tab delimited.
I would like to loop through the file line by line, and extract each column value.
Below is the code I have tried.
while read p
do
echo "Line"
fruit="$(cut -f 1 $p)"
echo "${fruit}"
done <test.txt
My expected output should be something like:
Line
fruits
Line
apples
Line
oranges
Instead I get this output:
Line
cut: fruits: No such file or directory
cut: vegetables: No such file or directory
Line
cut: apples: No such file or directory
cut: cucumbers: No such file or directory
Line
cut: oranges: No such file or directory
cut: squash: No such file or directory

I would like to loop through the file line by line, and extract each column value
The error comes from how cut works: it expects file names as arguments, so in cut -f 1 $p the unquoted $p expands to the words of the line, and cut tries to open each word as a file. Is awk not suitable here?
awk '{ print "Line"; print $1 }' < test.txt
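Alternatively, plain read can do the splitting itself: set IFS to a tab for the duration of the read and name one variable per column. A sketch of the same loop, assuming the file really is tab-delimited:

```shell
#!/bin/bash
# IFS=$'\t' (for this read only) splits each line on tabs,
# so the first column lands in $fruit and the second in $vegetable.
while IFS=$'\t' read -r fruit vegetable; do
    echo "Line"
    echo "$fruit"
done < test.txt
```

This also avoids spawning a cut process for every line, which makes the loop noticeably faster on large files.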

Removing new lines from text file in order for text to appear on a single line using awk,tr or sed [duplicate]

This question already has answers here:
How can I replace each newline (\n) with a space using sed?
(43 answers)
Closed 2 years ago.
How do we remove newlines in a test.txt file so that the text appears on a single line, using tr, awk or sed?
E.g.:
My name is mo
Learning linux
live in CAD
If I want that text to appear on one line and save it to a new file called passed.txt, what command should I run?
With awk:
awk '{ printf "%s",$0 }' file > passed.txt # print each line ($0) of file with no new lines
With tr:
tr -d '\n' < file > passed.txt # Use -d to delete new lines (\n)
xargs with printf is also an option:
xargs printf "%s" < file > passed.txt # xargs splits the input on whitespace and printf joins the words without newlines
Output is redirected to passed.txt with each example
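A sed variant was asked for as well. One sketch collects the whole file in the pattern space before substituting (GNU sed users can write the shorter sed -z 's/\n/ /g'):

```shell
# :a marks a label; N appends the next input line to the pattern space;
# $!ba loops back until the last line; then every \n becomes a space.
sed ':a;N;$!ba;s/\n/ /g' file > passed.txt
```

Unlike tr -d '\n', this keeps a space between the joined lines.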

How to Save 'specific' line from terminal output to file? [duplicate]

This question already has answers here:
Bash tool to get nth line from a file
(22 answers)
Closed 4 years ago.
I am currently using the following to save terminal outputs to file:
$command -someoptions >> output.txt
However, I am only interested in one line from the terminal output.
Is there a way to do this by changing the above expression, or will I have to delete lines after the output.txt file is created?
For example: If my output is:
line 1
line 2
line 3
line 4
line 5
and all I want to save is:
line 4
where line 4 contains unknown information.
I am asking as I will later wish to script this command.
Many thanks,
Solution Found:
I ended up using:
$command -someoptions | sed -n '4p' >> output.txt
This is a classic simple grep issue.
$command -someoptions | grep 'line 4' >> output.txt
You could refine that with more pattern complexity, and might need it depending on how precisely you need to match the data.
Try with this command:
$command -someoptions | grep " filter " >> output.txt
filter must be replaced by an element that distinguishes your line 4 from the other lines.
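If the line's position is known but its content is not (as the question states), selecting by line number is more robust than pattern matching. An awk sketch, keeping the question's $command placeholder:

```shell
# NR is awk's record (line) counter; print the 4th line, then stop reading.
$command -someoptions | awk 'NR == 4 { print; exit }' >> output.txt
```

The exit stops awk as soon as the line is printed, so the rest of the output is not scanned.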

How to delete 1 or more matching line(s) while reading a file in bash script [duplicate]

This question already has answers here:
How to pass a variable containing slashes to sed
(7 answers)
Combining two sed commands
(2 answers)
Linux, find replace on a folder of files using a list of items for replacement?
(1 answer)
Closed 4 years ago.
I want to read a file using a bash script and delete the line(s) matching my specific scenario (lines starting with 'z').
My code works fine if the 'inputFile' contains only alphabetic characters.
But if a line contains characters that are special to sed (e.g. z-2.10.3.2 x/y/z F (&)[]+* ), I get an error: sed: -e expression #1, char 29: unterminated `y' command.
#!/bin/bash
inputFile="test.txt"
while IFS= read -r line
do
echo "$line"
if [[ $line == z* ]];
then
sed -i "/$line/d" $inputFile
fi
done < "$inputFile"
I want to delete lines like 'z-2.10.3.2 x/y/z F (&)[]+*'; how can I do this?
As you mentioned, you don't need the lines that start with z.
Simply use grep -v:
grep -vE "^[[:blank:]]*z" file
I have created one scenario where I have a file which contains
root@ubuntu:~/T/e/s/t# cat file
hello world
sample line 1
sample line 2 world
sample line 3
sample line 4
In my case, I want to remove the line contains "world"
root@ubuntu:~/T/e/s/t# grep -v "world" file
sample line 1
sample line 3
sample line 4
If you want you can redirect your output in another file.
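To rewrite the file in place without sed at all, one sketch filters with grep -v into a temporary file and moves it back. Since no line content is ever interpolated into a pattern, sed metacharacters in the data cannot break anything:

```shell
#!/bin/bash
inputFile="test.txt"
# Keep only lines that do not start with 'z' (optionally after blanks),
# then replace the original file with the filtered copy.
grep -vE '^[[:blank:]]*z' "$inputFile" > "$inputFile.tmp" && mv "$inputFile.tmp" "$inputFile"
```

Note that grep exits non-zero when every line matches, in which case the mv is skipped and the original file is left untouched.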

Bash shell script: appending text to a specific line of an existing file without line break [duplicate]

This question already has answers here:
How to add to the end of lines containing a pattern with sed or awk?
(6 answers)
Closed 4 years ago.
I have a file called tmp.mount with the following contents
[Mount]
Options=mode=1777,strictatime,noexec,nosuid
I'd like to:
1. Search the file for the word Options= to get the line number.
2. Search that line for the word nodev.
3. If it does not exist, insert ,nodev at the end of that line, via the given line number.
With the results
[Mount]
Options=mode=1777,strictatime,noexec,nosuid,nodev
All without a line break. Most of the solutions use sed, but I'm clueless about how I could incorporate the line search with sed.
awk '/Options=/ && ! /nodev/ {print $0 ",nodev"; next};1' file
No need to get the line number; just append ",nodev" to the matching line.
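If sed is preferred, the same condition-and-append can be expressed with nested addresses. A sketch (add -i to edit the file in place):

```shell
# On lines matching Options=, and only if nodev is absent,
# substitute the end of line ($) with ,nodev.
sed '/Options=/{/nodev/!s/$/,nodev/;}' file
```

The outer address selects the Options= line; the inner negated address makes the substitution a no-op when nodev is already present, so re-running the command never appends a duplicate.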

Read a file line by line from last [duplicate]

This question already has answers here:
How can I reverse the order of lines in a file?
(24 answers)
Closed 6 years ago.
I have a requirement to read the file line by line from last line until the first line. Right now I am able to read the file line by line from start with the below piece of code.
while IFS= read line
do
#Logic here
done <"$Input_File"
Kindly help me out with a solution to read the file line by line from last line.
You can use tac to read the file from the last line until the first. Using your example you could do:
while IFS= read line
do
#Logic here
done < <(tac "$Input_File")
See the manual page for tac (this may not be installed by default in your distribution but should be available using the package manager).
file="path/to/your/file.txt"
awk '{ print NR ":" $0 }' "$file" | sort -t: -k1,1nr | sed 's/^[0-9][0-9]*://'
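The same reversal can also be done in awk alone, buffering the lines in an array and printing them back-to-front; a sketch for systems without tac:

```shell
# Store each line under its line number, then print from NR down to 1.
awk '{ lines[NR] = $0 } END { for (i = NR; i >= 1; i--) print lines[i] }' "$file"
```

Like the sort pipeline above, this holds the whole file in memory, so it is best suited to files of moderate size.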
