I am trying something like this:
rpm -ql `rpm -qa | grep openssh-5`
This lists all the files that are installed using the openssh-5 rpm.
I wish to copy all these files to a folder, e.g. myfolder.
So I try this:
cp ``rpm -ql `rpm -qa | grep openssh-5`` myfolder
but it fails.
Is there any way to do this, perhaps with a shell script?
You can't nest backticks like that. This should work:
cp $(rpm -ql $(rpm -qa | grep openssh-5)) myfolder
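Strictly speaking, backticks can be nested by escaping the inner pair, but it's hard to read and easy to get wrong; shown only for completeness:
cp `rpm -ql \`rpm -qa | grep openssh-5\`` myfolder
The $(...) form nests cleanly, which is why it's preferred.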
Another way would be:
rpm -qa | grep openssh-5 | xargs -d $'\n' rpm -ql | xargs -d $'\n' cp -t myfolder
(This also handles filenames containing spaces; it won't handle filenames with newlines, though...)
Try
rpm -qla \*openssh-5\* | while read -r file; do
cp -a "$file" "$myfolder"
done
Note the quotes ("") around the variables, which take account of whitespace in filenames (which are, admittedly, quite rare in RPM packages - but not impossible).
Note as well the rpm -qla construct which makes things easier.
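One usage note: $myfolder is a shell variable in that snippet and must be set (and the directory must exist) before the loop runs. A minimal sketch, with /tmp/myfolder as a placeholder path:
myfolder=/tmp/myfolder   # placeholder target directory
mkdir -p "$myfolder"     # make sure it exists before copying into it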
files=$(rpm -ql $(rpm -qa | grep openssh-5))
while read -r file; do
cp "$file" "myfolder"
done <<< "$files"
maybe?
I have searched about this on the internet, but all the answers say to use the -T option or find /DIR | tar -c.
But I have the list of files in a variable and I have to feed the input to tar via a pipe.
example,
grep -v "*" ${filelist} | tar -cf gk.tar
I don't want to create an intermediate file and read the list from it.
Try
grep -v "*" ${filelist} | xargs tar -cf gk.tar
Check out the xargs manual; this may not work as expected if the file list is too long.
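One caveat worth spelling out: if the list is long enough that xargs splits it across several tar invocations, each -cf will silently overwrite the previous archive. A hedged workaround (assuming GNU tar, reusing gk.tar and the grep stage from the example above) is to create an empty archive once and then append with -r:
tar -cf gk.tar --files-from /dev/null            # create an empty archive
grep -v "*" ${filelist} | xargs tar -rf gk.tar   # -r appends, so multiple xargs batches are safe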
I found the answer: using '-T -' makes tar take its input file list from the pipe.
For example:
grep -v "*" ${cpiolist} | tar -T - -cf gk.tar
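If the list might contain awkward filenames, GNU tar can also read NUL-delimited names via --null; a sketch pairing it with find -print0 (assumes GNU tar):
find /DIR -type f -print0 | tar --null -T - -cf gk.tar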
I have txt file with content like this
/home/username/Desktop/folder/folder3333/IMAGw488.jpg
/home/username/Desktop/folder/folder3333/IMAG04f88.jpg
/home/username/Desktop/folder/folder3333/IMAGe0488.jpg
/home/username/Desktop/folder/folder3333/IMAG0r88.jpg
/home/username/Desktop/folder/folder3333/
/home/username/Desktop/folder/
/home/username/Desktop/folder/IMAG0488.jpg
/home/username/Desktop/folder/fff/fff/feqw/123.jpg
/home/username/Desktop/folder/fffa/asd.png
....
These are paths of files, but also paths of folders.
The problem I want to solve is to create all the folders that don't exist.
I want to call the mkdir command for every folder that does not exist.
How can I do this in an easy way?
Thanks
This can be done with bash's native string manipulation, without calling out to dirname:
while read -r line; do mkdir -p "${line%/*}"; done < infile
Or perhaps with just a single call to mkdir if you have bash 4.x:
mapfile -t arr < infile; mkdir -p "${arr[@]%/*}"
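For readers unfamiliar with ${line%/*}: it removes the shortest suffix matching /*, i.e. the final path component, so it behaves like dirname here. A quick illustration using a path from the question:
line=/home/username/Desktop/folder/IMAG0488.jpg
echo "${line%/*}"    # prints /home/username/Desktop/folder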
How about the following? (Note that the unquoted $(xargs ...) expansion word-splits, so this variant is not safe for paths containing spaces.)
for p in $(xargs < somefile.txt);
do
mkdir -p "$(dirname "${p}")"
done
xargs -n 1 dirname <somefile.txt | xargs mkdir -p
It can also be done without a loop (provided the input file is not huge):
mkdir -p $(perl -pe 's#/(?!.*/).*$##' file.txt)
If you have file "file1" with filenames you could try this oneliner:
cat file1 | xargs -I{} dirname "{}" | sort -u | xargs -I{} mkdir -p "{}"
Use of:
xargs -I{} mkdir -p "{}"
ensures that even path names containing spaces are created correctly
Using a perl one-liner and File::Path qw(make_path):
perl -MFile::Path=make_path -lne 'make_path $_' dirlist.txt
I untarred something into a directory that already contained a lot of things. I wanted to untar into a separate directory instead. Now there are too many files to distinguish between. However, the files that I have untarred have been created just now (right?) and the original files haven't been modified for long (at least a day). Is there a way to delete just these untarred files based on their creation information?
Tar usually restores file timestamps, so filtering by time is not likely to work.
If you still have the tar file, you can use it to delete what you unpacked with something like:
tar tf file.tar --quoting-style=shell-always | xargs rm -i
The above will work in most cases, but not all (filenames that have a carriage return in them will break it), so be careful.
You could remove the directories by adding -r to that, but it's probably safer to just remove the toplevel directories manually.
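As with any bulk rm, a dry run first is cheap; a sketch using the same listing, with echo in place of rm:
tar tf file.tar --quoting-style=shell-always | xargs echo rm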
find . -mtime -1 -type f | xargs rm
but test first with
find . -mtime -1 -type f | xargs echo
(Since tar usually restores modification times, -mtime may miss the extracted files; the inode change time, which tar cannot restore, is more reliable - see find's -ctime/-cmin tests.)
There are several different answers to this question in order of increasing complexity.
First, if this is a one-off, and in this particular instance you are absolutely sure that there are no weird characters in your filenames (spaces are OK, but not tabs, newlines or other control characters, nor unicode characters) this will work:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | egrep -v '^\./$' | tr '\n' '\0' | xargs -0 rm -r
All that egrepping is there to keep only the top-level entries of the archive, skipping all the subdirectories of the subdirectories.
Another way to do this that works with funky filenames is this:
mkdir foodir
cd foodir
tar -xf ../file.tar
for file in *; do rm -rf ../"$file"; done
That will create a directory in which your archive has been expanded, but it sounds like you wanted that already anyway. It also will not handle any files whose names start with a dot (.).
To make that method work with files that start with ., do this:
mkdir foodir
cd foodir
tar -xf ../file.tar
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 sh -c 'for file in "$@"; do rm -rf ../"$file"; done' junk
Lastly, taking from Mat's answer, you can do this and it will work for any filename and not require you to untar the directory again:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | grep -v '^\./$' | tr '\n' '\0' | xargs -0 bash -c 'for fname in "$@"; do fname="$(echo -ne "$fname")"; echo -n "$fname"; echo -ne "\0"; done' junk | xargs -0 rm -r
You can handle files and directories in one pass with:
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort | bash
You can see what is going to happen if you leave off the pipe to 'bash':
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort
To handle filenames with linefeeds you need more processing.
I have a lot of different types of files in one folder. I need to delete all of the files except the PDF files.
I managed to display only the PDF files, but I still need to delete everything other than the PDFs.
ls -1 | xargs file | grep 'PDF document,' | sed 's/:.*//'
You could do the following - I've used echo rm instead of rm for safety:
for i in *
do
[ x"$(file --mime-type -b "$i")" != xapplication/pdf ] && echo rm "$i"
done
The --mime-type -b options to file make the output of file easier to deal with in a script.
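A hedged variant of the same idea that also copes with odd filenames, using find -print0 with a NUL-delimited read (bash syntax; echo is kept in for safety):
find . -maxdepth 1 -type f -print0 | while IFS= read -r -d '' f; do
    [ "$(file --mime-type -b "$f")" != application/pdf ] && echo rm "$f"
done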
$ ls
aa.txt a.pdf bb.cpp b.pdf
$ ls | grep -v '\.pdf$' | xargs rm -rf
$ ls
a.pdf b.pdf
:) !
ls | xargs file | awk -F":" '!($2~/PDF document/){print $1}' | xargs rm -rf
Try inverting the grep match:
ls -1 | xargs file | grep -v 'PDF document,' | sed 's/:.*//'
It's rare in my experience to encounter PDF files which don't have a .pdf extension. You don't state why "file" is necessary in the example, but I'd write this as:
# find . -not -name '*.pdf' -delete
Note that this will recurse into subdirectories; use "-maxdepth 1" to limit to the current directory only.
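A sketch of the restricted form (note that -maxdepth must precede the other tests, and -type f keeps directories out of the deletion):
# find . -maxdepth 1 -type f -not -name '*.pdf' -delete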
I am trying to delete erroneous emails based on finding the email address in the file via Linux CLI.
I can get the files with
find . | xargs grep -l email@example.com
But I cannot figure out how to delete them from there as the following code doesn't work.
rm -f | xargs find . | xargs grep -l email@example.com
Solution for your command:
grep -l email@example.com * | xargs rm
Or
for file in $(grep -l email@example.com *); do
rm -i "$file";
# ^ prompt for delete
done
For safety I normally pipe the output from find to something like awk and create a batch file with each line being "rm filename"
That way you can check it before actually running it, and manually fix any odd edge cases that are difficult to handle with a regex.
find . | xargs grep -l email@example.com | awk '{print "rm "$1}' > doit.sh
vi doit.sh   # check for Murphy and his law
source doit.sh
You can use find's -exec and -delete; the file is only deleted if the grep command succeeds. grep -q is used so that nothing is printed; replace the -q with -l to see which files contain the string.
find . -exec grep -q 'email@example.com' '{}' \; -delete
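A hedged two-step variant of the same idea, previewing first and adding -type f so directories are never touched:
find . -type f -exec grep -l 'email@example.com' '{}' \;           # preview which files match
find . -type f -exec grep -q 'email@example.com' '{}' \; -delete   # then delete them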
I liked Martin Beckett's solution but found that file names with spaces could trip it up (like who uses spaces in file names, pfft :D). Also I wanted to review what was matched so I move the matched files to a local folder instead of just deleting them with the 'rm' command:
# Make a folder in the current directory to put the matched files
$ mkdir -p './matched-files'
# Create a script to move files that match the grep
# NOTE: Remove "-name '*.txt'" to allow all file extensions to be searched.
# NOTE: Edit the grep argument 'something' to what you want to search for.
$ find . -name '*.txt' -print0 | xargs -0 grep -al 'something' | awk -F '\n' '{ print "mv \""$0"\" ./matched-files" }' > doit.sh
Or, because it's possible (in Linux; I don't know about other OSes) to have newlines in a file name, you can use this longer, untested version (who puts newlines in filenames? pfft :D):
$ find . -name '*.txt' -print0 | xargs -0 grep -alZ 'something' | awk -F '\0' '{ for (x=1; x<NF; x++) print "mv \""$x"\" ./matched-files" }' > doit.sh
# Evaluate the file following the 'source' command as a list of commands executed in the current context:
$ source doit.sh
NOTE: I had issues where grep could not match inside files that had utf-16 encoding.
See here for a workaround. In case that website disappears: the fix is to use grep's -a flag, which makes grep treat files as text, together with a regex pattern that matches any first byte in each extended character. For example, to match Entité do this:
grep -a 'Entit.e'
and if that doesn't work then try this:
grep -a 'E.n.t.i.t.e'
Despite Martin's safe answer, if you're certain of what you want to delete, such as when writing a script, I've used this with greater success than any other one-liner suggested before around here (note that grep -l needs the filenames as arguments, not on stdin, hence the xargs):
$ find . -type f | xargs grep -l email@example.com | xargs -I {} rm -rf {}
But I'd rather find by name:
$ find . -iname '*something*' | xargs -I {} echo {}
rm -f `find . | xargs grep -li email@example.com`
does the job better. Use `...` to run the command that yields the names of the files containing email@example.com (grep -l lists them, -i ignores case), and remove them with rm (-f forcibly / -i interactively).
find . | xargs grep -l email@example.com
How to remove them:
rm -f `find . | xargs grep -l email@example.com`
Quick and efficient. Replace find_files_having_this_text with the text you want to search for.
grep -Ril 'find_files_having_this_text' . | xargs rm
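If the tree may contain filenames with spaces or newlines, GNU grep's -Z flag emits NUL-terminated names that pair safely with xargs -0 (a sketch assuming GNU grep and xargs):
grep -RilZ 'find_files_having_this_text' . | xargs -0 rm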