Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
By using UNIX/Linux commands, pipes (“|”) and redirections (“>”, “>>”)
make a listing of the smallest 5 files in the “/etc” directory whose names
contain the string “.conf”, sorted by increasing file size.
This will work:
ls -l /etc | sort -k 5 -n | grep '\.conf' | head -n 5
First list the files in long format, then sort the lines numerically on the 5th column (the file size), then filter the lines whose names contain ".conf" (the dot is escaped so it is matched literally), and finally show only the first 5 lines.
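The same pipeline can be tried out on a throwaway directory instead of /etc; this is a small sketch with illustrative file names, extracting the name of the smallest matching file from the first line of the result:

```shell
# Sketch: the answer's pipeline applied to a temporary directory.
dir=$(mktemp -d)
printf 'aaaaa'    > "$dir/a.conf"   # 5 bytes
printf 'aa'       > "$dir/b.conf"   # 2 bytes
printf 'aaaaaaaa' > "$dir/c.txt"    # 8 bytes, no ".conf" -> filtered out
# Sort by the 5th column (size), keep only .conf names, take the first line,
# and print the last field (the file name).
smallest=$(ls -l "$dir" | sort -k 5 -n | grep '\.conf' | head -n 5 \
           | head -n 1 | awk '{print $NF}')
echo "$smallest"
```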
Closed 2 years ago.
I have a list of files that are symlinks to original files.
I was trying to get the size of those files with:
filePath="/ready/PPMI_3651^SI82-SI108^PPMI_WES/PPMI_3651.r1.fq.gz"
size="$(du -ch "$filePath" | tail -1 | cut -f 1)"
but it only gives me a size of 0. How do I get the space actually occupied by those files?
This is how the file looks:
[un#xc transfer_data]$ ls -lht /ready/PPMI_3651^SI82-SI108^PPMI_WES/PPMI_3651.r1.fq.gz
lrwxrwxrwx 1 uuds parkd 70 May 16 2017 /ready/PPMI_3651.r1.fq.gz -> ../splitRG/PPMI_3651.r1.fq.gz
From man du:
-L, --dereference
dereference all symbolic links
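Adding -L to the original command makes du report the target's size instead of the link's. A minimal sketch with a temporary file standing in for the real path; -b (apparent size in bytes, a GNU du option) is used here instead of -ch so the output is byte-exact:

```shell
# Sketch: du on a symlink, with and without dereferencing.
target=$(mktemp)
printf '0123456789' > "$target"    # a 10-byte "original" file
link="$target.lnk"
ln -s "$target" "$link"            # symlink pointing at it
# -L dereferences the symlink, so du reports the target's size.
size=$(du -bL "$link" | tail -1 | cut -f 1)
echo "$size"
```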
Closed 4 years ago.
I'm wondering if I can merge multiple ls results.
I have to combine 3 ls -l results: two directories should list only the directories inside them, and the third should list only symbolic links.
The merged list should finally be sorted by name or by date.
I did it in Java code, but I want to do it with a single Linux command line if possible.
Thanks
I think you are better off doing this with find, using ls -l to print the file information and sort to order it. Like:
find /path/dir1 /path/dir2 /path/dir3 -maxdepth 1 -exec ls -l --full-time {} + | sort -k 6,7
Closed 7 years ago.
I tried
ls */ | grep "\.txt$"
to find all the .txt files in the subdirectories, but it does not seem to work reliably.
The pattern you want can easily be matched with a single glob:
ls */*.txt
The ls isn't necessary; it just demonstrates that it works. You can also use
echo */*.txt
printf '%s\n' */*.txt
files=( */*.txt )
for f in */*.txt; do ....
The pattern itself (*/*.txt) will expand to the list of the matching files; what you can do with that list is fairly broad.
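A small sketch of the expansion in action, using a temporary directory tree with illustrative names; the shell itself expands */*.txt to the matching paths, one directory level down:

```shell
# Sketch: the glob expands at the shell level, no ls or grep needed.
base=$(mktemp -d)
mkdir "$base/docs" "$base/misc"
touch "$base/docs/note.txt" "$base/misc/todo.txt" "$base/docs/image.png"
cd "$base"
# printf prints one matched path per line; image.png does not match.
matches=$(printf '%s\n' */*.txt)
echo "$matches"
```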
Closed 9 years ago.
I want to find patterns that are listed in one file and find them in another file.
The second file has those patterns separated by commas.
For example, the first file F1 has genes
ENSG00000187546
ENSG00000113492
ENSG00000166971
and the second file F2 has those genes along with some more columns which I need
ENSG00000164252
ENSG00000187546
ENSG00000113492
ENSG00000166971,ENSG00000186106
So the gene ENSG00000166971, which is present in the second file, does not show up in the grep output because it has another gene next to it, separated by a comma.
My code is:
grep -f "F1.txt" "F2.txt" >output.txt
I want those lines even if only one of the patterns is present, along with the data associated with them. Is there any way to do this?
I tried to recreate the same situation and I do get ENSG00000166971 in the grep result, so your problem may be due to a different version.
I am using Fedora release 20 with grep 2.14.56-1e3d.
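The reproduction can be sketched with temporary files in place of F1.txt and F2.txt; grep -f does a substring match per pattern line, so the comma-separated line should match too:

```shell
# Sketch: grep -f matches each pattern as a substring of each line,
# so "ENSG00000166971,ENSG00000186106" is matched by "ENSG00000166971".
f1=$(mktemp); f2=$(mktemp)
printf 'ENSG00000187546\nENSG00000113492\nENSG00000166971\n' > "$f1"
printf 'ENSG00000164252\nENSG00000187546\nENSG00000113492\nENSG00000166971,ENSG00000186106\n' > "$f2"
out=$(grep -f "$f1" "$f2")
echo "$out"
```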
Closed 8 years ago.
I want to recursively compare two directories, A and B, under Linux using:
diff -r ./A ./B
but I want to ignore some subdirectory names, e.g. a subdirectory called "svn".
How do I do it under Linux?
You can write:
diff -r --exclude=svn ./A ./B
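A quick sketch with temporary trees (names illustrative) showing the exclusion taking effect; --exclude may be repeated to ignore several names:

```shell
# Sketch: a difference hidden inside an excluded "svn" subdirectory
# is not reported, so the trees compare as identical.
a=$(mktemp -d); b=$(mktemp -d)
mkdir "$a/svn"
echo "only-in-a" > "$a/svn/ignored.txt"   # exists only under $a/svn
echo "same" > "$a/keep.txt"
echo "same" > "$b/keep.txt"
if diff -r --exclude=svn "$a" "$b" > /dev/null; then
  status=identical
else
  status=different
fi
echo "$status"
```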