I have a small question and would appreciate your help with it, please.
I need to merge different text files together using the paste command, as:
paste -d, ~/Desktop/*.txt > ~/Desktop/Out/merge.txt
However, the files got merged out of order (the text files are numbered 1, 2, 3, etc.).
I am using *.txt since a different number of files exists for each scenario.
Would you mind helping me with this, please?
If you use modern bash you can write:
paste -d, ~/Desktop/{1..10}.txt > ~/Desktop/Out/merge.txt
If not, you must use something like:
paste -d, $(seq 1 10 | sed "s#.*#$HOME/Desktop/&.txt#") > ~/Desktop/Out/merge.txt
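For reference, the seq | sed pipeline just builds the list of paths that paste then receives; a quick check of its output (shown for a hypothetical user named me):
$ seq 1 3 | sed "s#.*#$HOME/Desktop/&.txt#"
/home/me/Desktop/1.txt
/home/me/Desktop/2.txt
/home/me/Desktop/3.txt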
If you don't know which files you have in the directory,
you can list and sort them:
cd ~/Desktop/
paste -d, $(ls -1d *.txt| sort -n) > ~/Desktop/Out/merge.txt
Example:
$ touch {1..20}.txt
$ echo $(ls -1 | sort -n)
1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt 11.txt 12.txt 13.txt 14.txt 15.txt 16.txt 17.txt 18.txt 19.txt 20.txt
Example 2:
$ echo hello > 1.txt
$ echo dear > 5.txt
$ echo friend > 11.txt
$ paste -d, $(ls -1d *.txt| sort -n)
hello,dear,friend
Here's a rather long way of doing the same thing, but in one line.
paste -d, $(ls ~/Desktop/*.txt | awk -F/ '{print $NF"/"$0}' | sort -n | cut -d/ -f2-) > ~/Desktop/merge.txt
I like one liners :-)
paste -d, $(ls ~/Desktop/*.txt) > ~/Desktop/Out/merge.txt
The * is replaced by an alphabetically sorted list of the filenames in your directory.
3.5.8 Filename Expansion
Bash scans each word for the characters ‘*’, ‘?’, and ‘[’. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of file names matching the pattern.
So the filenaming does not have to be consecutive ;)
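A quick illustration of that expansion order, in an empty test directory; it also shows why the sort -n steps above matter once the numbers reach double digits:
$ touch {1..12}.txt
$ echo *.txt
1.txt 10.txt 11.txt 12.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt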
I'm doing an online Linux course but I'm stuck on a question; you can find the question below.
You will get three files called a.bf, b.bf and c.bf. Merge the contents of these three files and write it to a new file called abc.bf. Respect the order: abc.bf must contain the contents of a.bf first, followed by those of b.bf, followed by those of c.bf.
Example
Suppose the given files have the following contents:
a.bf contains +++.
b.bf contains [][][][].
c.bf contains <><><>.
The file abc.bf should then have
+++[][][][]<><><>
as its content.
I know how to merge the 3 files, but when I use cat my output is:
+++
[][][][]
<><><>
When I use paste my output is "+++ 'a lot of spaces' [][][][] 'a lot of spaces' <><><>".
The output I need is +++[][][][]<><><>; I don't want the spaces between the contents. Can someone help me?
What you want to do is delete the newline characters.
With tr:
cat {a,b,c}.bf | tr --delete '\n' > abc.bf
With echo & sed:
echo $(cat {a,b,c}.bf) | sed -E 's/ //g' > abc.bf
With xargs & sed:
cat {a,b,c}.bf | xargs | sed -E 's/ //g' > abc.bf
Note that sed is only used to remove the spaces.
With cat & sed:
cat {a,b,c}.bf | sed -z 's/\n//g' > abc.bf
echo -n "$(cat a.bf)$(cat b.bf)$(cat c.bf)" > abc.bf
echo -n does not append a trailing newline, and each $(cat ...) command substitution already strips the trailing newline from its file.
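A quick check with the sample contents from the question (printf is only used here to create the test files):
$ printf '+++\n' > a.bf
$ printf '[][][][]\n' > b.bf
$ printf '<><><>\n' > c.bf
$ echo -n "$(cat a.bf)$(cat b.bf)$(cat c.bf)" > abc.bf
$ cat abc.bf
+++[][][][]<><><>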
I have more than 50000 files in a directory, such as file1.txt, file2.txt, ....., file50000.txt. I would like to concatenate some of the files whose file numbers are listed in the following text file (need.txt).
need.txt
1
4
35
45
71
.
.
.
I tried the following. Though it is working, I am looking for a simpler and shorter way.
n1=1
n2=$(wc -l < need.txt)
while [ $n1 -le $n2 ]
do
f1=$(awk -v n="$n1" 'NR==n {print $1}' need.txt)
cat file$f1.txt >> out.txt
(( n1++ ))
done
This might also work for you:
sed 's/.*/file&.txt/' < need.txt | xargs cat > out.txt
Something like this should work for you:
sed -e 's/.*/file&.txt/' need.txt | xargs cat > out.txt
It uses sed to translate each line into the appropriate file name and then hands the filenames to xargs to hand them to cat.
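For instance, with the first few numbers from the need.txt above, the sed step alone produces the list of filenames that xargs then hands to cat (a minimal sketch):
$ printf '1\n4\n35\n45\n71\n' > need.txt
$ sed 's/.*/file&.txt/' need.txt
file1.txt
file4.txt
file35.txt
file45.txt
file71.txt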
Using awk it could be done this way:
awk 'NR==FNR{ARGV[ARGC]="file"$1".txt"; ARGC++; next} {print}' need.txt > out.txt
This adds each needed file to awk's ARGV array of files to process and then prints every line it reads from them.
It is possible to do it without any sed or awk command, using only bash and cat (of course).
for i in $(cat need.txt); do cat file${i}.txt >> out.txt; done
And, as you wanted, it is quite simple.
I have file.txt with names one per line as shown below:
ABCB8
ABCC12
ABCC3
ABCC4
AHR
ALDH4A1
ALDH5A1
....
I want to grep each of these from an input.txt file.
Manually, I do this one at a time as:
grep "ABCB8" input.txt > output.txt
Could someone help me automatically grep all the strings in file.txt from input.txt and write the results to output.txt?
You can use the -f flag, as described in "Bash, Linux, Need to remove lines from one file based on matching content from another file":
grep -o -f file.txt input.txt > output.txt
Flags:
-f FILE, --file=FILE:
Obtain patterns from FILE, one per line. The empty file
contains zero patterns, and therefore matches nothing. (-f is
specified by POSIX.)
-o, --only-matching:
Print only the matched (non-empty) parts of a matching line, with
each such part on a separate output line.
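A minimal sketch, assuming a hypothetical input.txt (the contents below are made up purely for illustration):
$ cat file.txt
ABCB8
AHR
$ cat input.txt
ABCB8 is on the list
nothing relevant here
the AHR entry follows
$ grep -o -f file.txt input.txt > output.txt
$ cat output.txt
ABCB8
AHR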
for line in `cat text.txt`; do grep $line input.txt >> output.txt; done
Contents of text.txt:
ABCB8
ABCC12
ABCC3
ABCC4
AHR
ALDH4A1
ALDH5A1
Edit:
A safer solution with while read:
cat text.txt | while read line; do grep "$line" input.txt >> output.txt; done
Edit 2:
Sample text.txt:
ABCB8
ABCB8XY
ABCC12
Sample input.txt:
You were hired to do a job; we expect you to do it.
You were hired because ABCB8 you kick ass;
we expect you to kick ass.
ABCB8XY You were hired because you can commit to a rational deadline and meet it;
ABCC12 we'll expect you to do that too.
You're not someone who needs a middle manager tracking your mouse clicks
If you don't care about the order of lines, a quick workaround is to pipe the solution through sort | uniq:
cat text.txt | while read line; do grep "$line" input.txt >> output.txt; done; cat output.txt | sort | uniq > output2.txt
The result is then in output2.txt.
Edit 3:
cat text.txt | while read line; do grep "\<${line}\>" input.txt >> output.txt; done
Is that fine?
So here is the task which I can't solve. I have a directory with .h files and a directory with .i files, which have the same names as the .h files. I want, just by typing a command, to get all the .h files that are not found as .i files. It's not a hard problem; I could do it in some programming language, but I'm just curious what it would look like on the command line :). To be more specific, here is the algorithm:
1. get the file names without extensions from ls *.h
2. get the file names without extensions from ls *.i
3. compare them
4. print all the names from 1 that do not appear in 2
Good luck!
diff \
<(ls dir.with.h | sed 's/\.h$//') \
<(ls dir.with.i | sed 's/\.i$//') \
| grep '^<' \
| cut -c3-
diff <(ls dir.with.h | sed 's/\.h$//') <(ls dir.with.i | sed 's/\.i$//') executes ls on the two directories, cuts off the extensions, and compares the two lists. Then grep '^<' finds the files that are only in the first listing, and cut -c3- cuts off the "< " characters that diff inserted.
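A short walk-through with hypothetical directory contents (the names are made up for illustration):
$ ls dir.with.h
a.h  b.h  c.h
$ ls dir.with.i
a.i  c.i
$ diff <(ls dir.with.h | sed 's/\.h$//') <(ls dir.with.i | sed 's/\.i$//')
2d1
< b
grep '^<' then keeps the "< b" line and cut -c3- leaves just b, the header with no matching .i file.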
ls ./dir_h/*.h | sed -r -n 's:.*dir_h/([^.]*).h$:dir_i/\1.i:p' | xargs ls 2>&1 | \
grep "No such file or directory" | awk '{print $4}' | sed -n -r 's:dir_i/([^:]*).*:dir_h/\1:p'
ls -1 dir1/*.hh dir2/*.ii | awk -F"/" '{print $NF}' | awk -F"." '{a[$1]++;b[$0]=1}END{for(i in a)if(a[i]==1 && b[i".hh"]) print i}'
explanation:
ls -1 dir1/*.hh dir2/*.ii
the above will list all the *.hh and *.ii files in the two directories.
awk -F"/" '{print $NF}'
the above will print just the file name, stripping off the leading path.
awk -F"." '{a[$1]++;b[$0]}END{for(i in a)if(a[i]==1 && b[i".hh"]) print i}'
the above builds two associative arrays: a, keyed on the name without its extension, and b, keyed on the full file name.
If both the .hh and the .ii file exist, the count in array a will be 2; if there is only one file, the count will be 1. So we need the array items whose count is 1 and which correspond to a header file (.hh).
That header check is done against the associative array b in the END block.
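A quick sketch with hypothetical directories: dir1 holds a.hh, b.hh and c.hh, dir2 holds a.ii and c.ii, so only b has no .ii counterpart:
$ ls -1 dir1/*.hh dir2/*.ii | awk -F"/" '{print $NF}' | awk -F"." '{a[$1]++;b[$0]=1}END{for(i in a)if(a[i]==1 && b[i".hh"]) print i}'
b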
Assuming bash is your shell:
for file in $( ls dir_with_h/*.h ); do
name=${file%\.h}; # trim trailing ".h" file extension
name=${name#dir_with_h/}; # trim leading folder name
if [ ! -e dir_with_i/${name}.i ]; then
echo ${name};
fi
done
Undoubtedly this can be ported to virtually all other shells. I find this less cryptic than some other approaches (although this is surely my problem), but it is a little wordy. As such, a shell script might help recall it.
Inside a directory, how can I delete files that lack any of the specified words, so that only files containing ALL the words are left? I tried to write a simple bash shell script using the grep and rm commands, but I got lost. I am totally new to Linux; any help would be appreciated.
How about:
grep -L foo *.txt | xargs rm
grep -L bar *.txt | xargs rm
If a file does not contain foo, then the first line will remove it.
If a file does not contain bar, then the second line will remove it.
Only files containing both foo and bar should be left.
-L, --files-without-match
Suppress normal output; instead print the name of each input
file from which no output would normally have been printed. The
scanning will stop on the first match.
See also @Mykola Golubyev's post for placing this in a loop.
list="Word1 Word2 Word3 Word4 Word5"
for word in $list
do
    grep -L "$word" *.txt | xargs rm
done
Addition to the answers above: Use the newline character as delimiter to handle file names with spaces!
grep -L "$word" "$file" | xargs -d '\n' rm
grep -L word * | xargs rm
To do the same by matching filenames (not the contents of files, as most of the solutions above do), you can use the following:
for file in `ls --color=never | grep -ve "\(foo\|bar\)"`
do
rm $file
done
As per comments:
for file in `ls`
shouldn't be used. The below does the same thing without using ls:
for file in *
do
if [ x`echo $file | grep -ve "\(test1\|test3\)"` == x ]; then
rm $file
fi
done
The -ve options invert the match (-v) against the given pattern (-e), selecting the filenames that do not contain foo or bar; those files are then removed.
Any further words to be added to the list need to be separated by \|, e.g. one\|two\|three.
First, remove the file-list:
rm flist
Then, for each of the words, add the file to the filelist if it contains that word:
grep -l WORD * >>flist
Then sort, uniqify and get a count:
sort flist | uniq -c >flist_with_count
All those files in flist_with_count that don't have a count equal to the number of words should be deleted. The format will be:
2 file1
7 file2
8 file3
8 file4
If there were 8 words, then file1 and file2 should be deleted. I'll leave the writing/testing of the script to you.
Okay, you convinced me, here's my script:
#!/bin/bash
rm -rf flist
for word in fopen fclose main ; do
grep -l ${word} *.c >>flist
done
rm $(sort flist | uniq -c | awk '$1 != 3 {print $2}')
This removes the files in the directory that didn't have all three words.
You could try something like this, but it may break if the patterns contain shell or grep metacharacters (in this example one, two, and three are the patterns):
for f in *; do
cmd=true  # seed the chain with a command that always succeeds; the loop prepends "fgrep ... &&" in front of it
for p in one two three; do
cmd="fgrep \"$p\" \"$f\" && $cmd"
done
eval "$cmd" >/dev/null || rm "$f"
done
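For a file named, say, notes.txt (a hypothetical name), the assembled command ends up as
fgrep "three" "notes.txt" && fgrep "two" "notes.txt" && fgrep "one" "notes.txt" && true
so the file is kept only if every fgrep finds its pattern.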
This will remove all files that contain neither Ping nor Sent:
grep -L 'Ping\|Sent' * | xargs rm