ls after cat does not work in Linux

I have a file containing file paths. When I try to list all the paths with the following command:
cat whitelist.txt | xargs ls
it displays: No such file or directory.
whitelist.txt contains valid file paths like:
../work/DRA.I3OKGZ.G0200.IB* ../work/DFL.KA6KGZ.G0320.IB*
....
ls itself works, and files matching those patterns do exist.
So what's the problem?

The * does not get expanded: xargs invokes ls directly, without a shell, so the glob patterns are passed to ls as literal arguments.
If you want to keep the "cat | xargs" style, you could do something like
cat whitelist.txt | xargs -I# sh -c "ls #"
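Since xargs runs ls directly, no shell is present to expand the patterns. As an alternative sketch (assuming one pattern per line in whitelist.txt and no newlines in the paths), you can let the current shell expand each line:

```shell
# Expand each glob line from whitelist.txt in the shell itself.
while IFS= read -r pattern; do
    ls -d -- $pattern   # deliberately unquoted so the shell expands the glob
done < whitelist.txt
```

The -d flag makes ls list matching directories themselves instead of their contents.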

Related

Trying to diff two files with generated paths - no such file or directory - but files exist

I want to diff two files in the same directory in a bash script. To get the full paths of these two files (I need this because the script isn't running in the same directory), I did:
pathToOld=$(ls -Art /dir/path/here | grep somestring | tail -n2 | head -n1)
pathToOld="/dir/path/here/${pathToOld}"
and
pathToNew=$(ls -Art /dir/path/here | grep somestring | tail -n 1)
pathToNew="/dir/path/here/${pathToNew}"
I was able to figure out the above from the following links: link1, link2, link3
If I echo these paths in the .sh script, they come out correctly, like:
>echo "${pathToOld}"
/dir/path/here/oldFile
But when I try to diff the files like so:
diff pathToOld pathToNew
It tells me:
diff: pathToOld: No such file or directory
diff: pathToNew: No such file or directory
How do I make this work?
Btw, I have also tried to pipe sed -z 's/\n/ /g' (inspired by this) to both lines but that hasn't helped.
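For the record: as written, diff receives the literal words pathToOld and pathToNew, not the variables' values. The names need $ expansion, and quoting protects paths that contain spaces. A minimal self-contained sketch of the likely fix:

```shell
# diff "$var", not diff var: expand and quote the variables.
dir=$(mktemp -d)
pathToOld="$dir/oldFile"
pathToNew="$dir/newFile"
printf 'a\n' > "$pathToOld"
printf 'b\n' > "$pathToNew"
diff "$pathToOld" "$pathToNew"
```

Here the file names and contents are invented for the demonstration; only the quoting and $ expansion are the point.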

Filter directories in piped input

I have a bash command that lists a number of files and directories. I want to remove everything that is not an existing directory. Is there any way I can do this without creating a script of my own? I.e. I want to use pre-existing programs available in Linux.
E.g. Given that I have this folder:
dir1/
dir2/
file.txt
I want to be able to run something like:
echo dir1 dir2 file.txt somethingThatDoesNotExist | xargs [theCommandIAmLookingFor]
and get
dir1
dir2
It would be better if the command generating the putative paths used a better delimiter, but you might be looking for something like:
... | xargs -n 1 sh -c 'test -d "$0" && echo "$0"'
You can use this command line with grep -v:
your_command | grep -vxFf <(printf '%s\n' */ | sed 's/.$//') -
This will filter out all the sub-directories in current path from your list.
If instead you want to list only the existing directories, remove the -v:
your_command | grep -xFf <(printf '%s\n' */ | sed 's/.$//') -
Note that the glob */ matches all sub-directories in the current path with a trailing /, and sed is used to remove that trailing /.
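If the names may contain spaces, a hedged newline-delimited variant of the test -d approach reads one name per line:

```shell
# Keep only lines naming an existing directory (newline-delimited input).
printf '%s\n' dir1 dir2 file.txt somethingThatDoesNotExist |
while IFS= read -r name; do
    if [ -d "$name" ]; then printf '%s\n' "$name"; fi
done
```

This sketch assumes the folder layout from the question (dir1/, dir2/, file.txt) and no newlines in the names.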

viewing file's content for each file-name appearing in a list

I'm creating a list of file-names using the command:
ls | grep "\.txt$"
I'm getting a list of files:
F1.txt
F2.txt
F3.txt
F4.txt
I want to view the contents of these files (using less / more / cat / ...).
Is there a way to do this by piping?
(Btw, I got the list of file names using a more complex command; this is just a simpler example for clarification.)
Would this be enough?
$ cat *txt
For richer queries, you could use find and xargs:
$ find . -name "*txt" | xargs cat
You can try something like this:
#!/bin/bash
for i in *.txt
do
echo "Displaying file $i ..."
more "$i"
done
What about:
cat $(ls | grep "\.txt$")
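As a sketch of a variant that avoids parsing ls output entirely (filenames with spaces or glob characters would break cat $(ls | grep ...)), find can hand the names to cat directly:

```shell
# cat every .txt file in the current directory, without parsing ls.
find . -maxdepth 1 -name '*.txt' -exec cat {} +
```

-maxdepth 1 keeps find from descending into subdirectories; drop it to recurse.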

ls in a directory for a list of files

I have a C codebase, all resides in the same directory.
I want to find all the header files that have a code file with the same name.
Right now, I have the following command:
ls *.h | sed s/.h/.c/
This returns a 'list' of filenames that I want to search for. How can I pass this list to another command so that I can see which header files have code files sharing the same name?
Without any external command:
$ for i in *.h
> do
> [ -f "${i/.h/.c}" ] && echo "$i"
> done
The first line loops through every .h file.
The third line is a test construct. The -f flag to test (aka man [) checks whether the file exists; if it does, the test returns 0 (which is considered true in shell). The && runs the following command only if the previous one succeeded.
${i/.h/.c} is an in-shell pattern substitution (a bash parameter expansion, not a regex), so the file tested is the corresponding .c for the .h.
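A quick illustration of that expansion (bash's ${var/pattern/replacement}, where the pattern is a glob, not a regex):

```shell
#!/bin/bash
i=F1.h
echo "${i/.h/.c}"   # prints F1.c
```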
You could use xargs, which transforms its input:
a
b
c
to an argument list:
a b c
So this should print "a b c":
echo -e "a\nb\nc" | xargs echo
ls `ls *.h|sed s/.h/.c/` 2>/dev/null
should do the trick
ls -1 *.c* *.h* | awk -F. '{a[$1]++} END {for (i in a) if (a[i] == 2) print i".h"}'
ls *.h | sed s/.h/.c/ | xargs ls 2>/dev/null
The remainder of the command runs ls again with the new *.c filenames. ls will complain about every file that doesn't exist, so we redirect stderr to nowhere.
Example without 2>/dev/null:
$ ls
a.c a.h b.c c.c d.h
$ ls *.h | sed s/.h/.c/ | xargs ls
ls: d.c: No such file or directory
a.c
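Another sketch, assuming filenames without newlines: comm can compare the sorted stems of the two globs and print only the headers that have a matching .c file:

```shell
#!/bin/bash
# Print each .h file whose stem also appears among the .c files.
comm -12 <(ls *.h | sed 's/\.h$//' | sort) \
         <(ls *.c | sed 's/\.c$//' | sort) |
sed 's/$/.h/'
```

comm -12 suppresses the lines unique to either input, leaving only the stems common to both.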

Linux: cat matching files in date order?

I have a few files in a directory with names similar to
_system1.log
_system2.log
_system3.log
other.log
but they are not created in that order.
Is there a simple, non-hardcoded, way to cat the files starting with the underscore in date order?
Quick 'n' dirty:
cat `ls -t _system*.log`
Safer:
ls -1t _system*.log | xargs -d'\n' cat
Use ls -t to sort by modification time:
ls -1t _system*.log | xargs cat
ls -1 _system*.log | xargs cat
You can also concatenate the files in order of their modification time and store the result in a single file, selecting only files whose names contain a common string. I find this very useful. The following command concatenates the files, ordered by time, whose names contain the string 'xyz', and stores them all in outputfile.
cat $(ls -t |grep xyz)>outputfile
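On GNU systems, a find-based sketch avoids parsing ls output and survives odd filenames: print each file's modification time, sort numerically, strip the timestamp, and cat oldest first (use sort -z -rn for newest first):

```shell
# GNU find/sort/cut: cat _system*.log in modification-time order.
find . -maxdepth 1 -name '_system*.log' -printf '%T@ %p\0' |
sort -z -n |
cut -z -d' ' -f2- |
xargs -0 cat
```

The NUL delimiters (-printf '\0', sort -z, cut -z, xargs -0) keep names with spaces or newlines intact; these flags are GNU extensions.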
