how to delete a line that contains a word in all text files of a folder? [closed] - linux

So, in linux, I have a folder with lots of big text files.
I want to delete all the lines of these files that contain a specific keyword.
Is there any easy way to do that across all files?

There are already many similar answers. I'd like to add that if you want to match this is a line containing a keyword but not this is a line containing someoneelseskeyword, then you had better add word-boundary brackets around the word:
sed -i '/\<keyword\>/d' *.txt
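A quick way to convince yourself (file name and contents are made up for the demo; GNU sed's \< \> word boundaries and -i are assumed):

```shell
# Create a throwaway file with one line for each case, then delete
# only the lines containing the whole word "keyword".
printf '%s\n' 'this is a line containing a keyword' \
              'this is a line containing someoneelseskeyword' > demo.txt
sed -i '/\<keyword\>/d' demo.txt
cat demo.txt   # only the someoneelseskeyword line survives
```

The \< and \> anchors keep sed from matching "keyword" when it is merely a suffix of a longer word.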

I cannot test this right now, but it should get you started
find /path/to/folder -type f -exec sed -i '/foo/d' {} ';'
find files in the directory /path/to/folder
find lines in these files containing foo
delete those lines from those files
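A small sandbox run of the same idea (directory and file names are invented for the demo; GNU sed's -i assumed):

```shell
# Build two files under a throwaway folder, then delete every
# line containing "foo" from all regular files beneath it.
mkdir -p demo_dir
printf 'keep me\nfoo bar\n' > demo_dir/a.txt
printf 'foo\nstill here\n' > demo_dir/b.txt
find demo_dir -type f -exec sed -i '/foo/d' {} ';'
cat demo_dir/a.txt demo_dir/b.txt   # keep me / still here
```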

Sure:
for x in files*
do
grep -v your_pattern "$x" > "$x.tmp" && mv "$x.tmp" "$x"
done

try this:
find your_path_filename | xargs sed -i '/key_word/d'

sed -i '/keyword/d' *.txt -- run this in your directory.
sed - stream editor; used here to delete lines in individual files
-i option : makes the changes permanent in the input files
'/keyword/' : specifies the pattern or key to be searched for in the files
option d : tells sed that matching lines need to be deleted.
*.txt : tells sed to use all the text files in the directory as input for processing; you can specify an individual file or a glob like *.txt the way I did.

Related

Sort files in a directory by their text character length and copy to other directory [closed]

I'm trying to find the smallest file by character length inside of a directory and, once it is found, I want to rename it and copy it to another directory.
For example, I have two files in one directory ~/Files and these are cars.txt and rabbits.txt
Text in cars.txt:
I like red cars that are big.
Text in rabbits.txt:
I like rabbits.
So far I know how to get the character length of a single file with the command wc -m 'filename', but I don't know how to do it for all the files and sort them in order. I know rabbits.txt is smaller in character length, but how do I compare both of them?
You could sort the files by size, then select the name of the first one:
file=$(wc -m ~/Files/* 2>/dev/null | sort -n | head -n 1 | awk '{print $2}')
echo "$file"
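For example, with the two files from the question recreated in a scratch directory (the directory name here is invented):

```shell
# Recreate the example files, then pick the one with the fewest characters.
mkdir -p demo_files
printf 'I like red cars that are big.\n' > demo_files/cars.txt
printf 'I like rabbits.\n' > demo_files/rabbits.txt
file=$(wc -m demo_files/* 2>/dev/null | sort -n | head -n 1 | awk '{print $2}')
echo "$file"   # demo_files/rabbits.txt
```

Two caveats: awk '{print $2}' breaks on file names containing spaces, and the "total" line wc prints for multiple files is harmless only because sort -n pushes it to the bottom.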

How to print output twice in Linux? [closed]

Which command is used to print the file name twice on output?
I want to write a pipe that lists all the files beginning with the character 'P' on the screen twice in succession.
Something like:
ls -1 | while read -r i ; do echo "$i" "$i" ; done
… should do the trick.
ls | sed -E 's/^(P.*)/\1 \1/'
ls, when used with a pipe, puts 1 file per line.
We use sed with extended RE support -E.
We capture the name of any file beginning with P: ^(P.*)
and replace it with itself, a space, then itself again. \1 is a back-reference to what was captured in the parentheses ( ... ).
I suggest using the find utility:
find . -maxdepth 1 -type f -name 'P*' -print -print
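A sandbox run (file names invented) showing the find approach printing each matching name twice; sort is added only to make the output order deterministic:

```shell
# Two files starting with P and one that doesn't.
mkdir -p demo_p
touch demo_p/Pears demo_p/Plums demo_p/apples
find demo_p -maxdepth 1 -type f -name 'P*' -print -print | sort
# demo_p/Pears
# demo_p/Pears
# demo_p/Plums
# demo_p/Plums
```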

Combine PDFs with spaces in file names [closed]

I have a directory with lots of PDFs that have spaces in their file names.
file 1.pdf
file 2.pdf
file 3.pdf
# And so on
I ran this command in that directory.
pdftk `ls -v` cat output combined-report.pdf
But the terminal spat out a bunch of errors like this.
Error: Unable to find file.
Error: Failed to open PDF file:
file
Error: Unable to find file.
Error: Failed to open PDF file:
1.pdf
How do I combine the PDFs using pdftk or any other package in Arch Linux? To clarify, I want to combine the files in the order printed by ls -v
Just use a wildcard when combining the PDFs, like:
pdftk *.pdf cat output newfile.pdf
Or else you could use something like this:
pdftk file\ 1.pdf file\ 2.pdf cat output newfile.pdf
Try this:
find . -name 'file*.pdf' -print0 | sort -z -V | xargs -0 -I{} pdftk {} cat output combined-report.pdf
or this:
ls -v file*.pdf | xargs -d'\n' -I{} pdftk {} cat output combined-report.pdf
In the first command, "-print0", "-z", and "-0" tell the corresponding programs to use NUL as the delimiter, so the spaces in the file names survive. The "-V" option to sort specifies "version sort", which should produce the ordering you want. Normally xargs appends the piped-in arguments to the end of the command; "-I{}" specifies a placeholder, "{}", that lets you put them in the middle.
The second command is similar, except that it takes its arguments from "ls" and uses newline '\n' as the delimiter.
Note: there are potential problems with using "ls". See the link posted by @stephen-p.
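You can preview the order the NUL-delimited pipeline would hand to pdftk by substituting echo for it (the file names below are empty stand-ins, not real PDFs):

```shell
# Version sort puts "file 10.pdf" after "file 2.pdf", and the
# NUL delimiters keep the embedded spaces intact.
mkdir -p demo_pdfs
touch 'demo_pdfs/file 1.pdf' 'demo_pdfs/file 2.pdf' 'demo_pdfs/file 10.pdf'
find demo_pdfs -name 'file*.pdf' -print0 | sort -zV | xargs -0 -n1 echo
# demo_pdfs/file 1.pdf
# demo_pdfs/file 2.pdf
# demo_pdfs/file 10.pdf
```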

Extract the file name of last slash from path [closed]

I am finding files in a specific location, and now I need to extract the file name after the last slash from each path, without its extension (e.g. *.war), using shell scripting.
I tried below to find out the data in the path:
find /data1/jenkins_devops/builds/develop/5bab159c1c40cfc44930262d30511ac7337805fa -mindepth 1 -type f -name '*.war'
Ex. - This folder "5bab159c1c40cfc44930262d30511ac7337805fa" contains multiple .war files like interview.war and auth.war, so the expected output is interview and auth.
Can someone please help?
There are more elegant ways to achieve the objective; the following uses awk:
find /data1/jenkins_devops/builds/develop/5bab159c1c40cfc44930262d30511ac7337805fa -mindepth 1 -type f -name '*.war' | awk -F "/" '{print $NF}' | awk -F "." '{print $1}'
Awk's NF holds the number of fields, so $NF prints the last column. First separate the columns with / as the field separator and print the last one (the file name); then use . as the separator and print the first column (the name without its extension).
Just use basename:
the_path="/data1/jenkins_devops/builds/develop/ab7f302d157d839b4ac3d7917cfa2d550ba2e73e/auth.war"
basename "$the_path" .war
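The same split can also be done with pure parameter expansion, with no external command at all (the path below is an example from the question):

```shell
# ${p##*/} drops everything up to and including the last slash;
# ${f%.war} then drops the extension.
p="/data1/jenkins_devops/builds/develop/5bab159c1c40cfc44930262d30511ac7337805fa/interview.war"
f="${p##*/}"      # interview.war
echo "${f%.war}"  # interview
```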

Create multiple files in multiple directories [closed]

I've got a tree of folders like:
00 -- 0
-- 1
...
-- 9
...
99 -- 0
-- 1
...
-- 9
What is the simplest way to create in every single subfolder a file like:
/00/0/00_0.txt
and save some kind of data to every file?
I tried with touch and with loop but without success.
Any ideas how to make it very simple?
List all directories using globs. Modify the listed paths with sed so that 37/4 becomes 37/4/37_4.txt. Use touch to create empty files for all modified paths.
touch $(printf %s\\n */*/ | sed -E 's|(.*)/(.*)/|&\1_\2.txt|')
This works even if 12/3 was just a placeholder and your actual paths are something like abcdef/123. However it will fail when your paths contain any special symbols like whitespaces, *, or ?.
To handle arbitrary path names use the following command. It even supports linebreaks in path names.
mapfile -td '' a < <(printf %s\\0 */*/ | sed -Ez 's|(.*)/(.*)/|&\1_\2.txt|')
touch "${a[@]}"
You may use find and then run commands using -exec
find . -mindepth 2 -maxdepth 2 -type d -exec bash -c 'f="$1"; touch "${f}/${f%/*}_${f##*/}.txt"' _ {} \;
(passing the path to bash as "$1" instead of splicing {} into the script avoids quoting problems with unusual directory names)
The bash substitution ${f%/*}_${f##*/} rebuilds the directory path with its last / replaced by _, which becomes the file name.
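A miniature reproduction (two branches of the tree only) to check that the glob-plus-sed variant shown earlier really creates the expected files:

```shell
# Recreate a slice of the tree, then run the substitution:
# "00/0/" becomes "00/0/00_0.txt" and gets touched.
mkdir -p demo_tree/00/0 demo_tree/01/1
( cd demo_tree && touch $(printf '%s\n' */*/ | sed -E 's|(.*)/(.*)/|&\1_\2.txt|') )
ls demo_tree/00/0   # 00_0.txt
```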
