Extracting nested archives of different types from different folders - Linux

I have got an archive of many fonts, but I'm having trouble extracting them all into one folder. I've been trying to write a script for three hours now, and it keeps breaking on a path issue. I tried piping, like find . -name *.zip | unzip -d ~/fonts, but it doesn't work. I've changed so much in the script that it is not really presentable anymore :(.
Each font file is supposedly (I didn't check them all; there are really many) inside a rar archive, which together with a readme is in a zip archive, which together with another readme sits in its own folder. Can this be done in one line?

Try changing that one line like this (unzip does not read file names from standard input, so the pipe has to go through xargs):
find . -name "*.zip" | xargs unzip -d ~/fonts

Try this
find . -name "*.zip" -exec unzip -d ~/fonts {} \;

Related

How to pipe find results to unzip?

I have a lot of folders with a zip file in each. Most of the zip files in the folders have been opened already. I just want to unzip those which have not been opened, which I know all have the same date.
I'm trying to use the following, but I'm getting hit back with unzip's usage message. The first part finds all the files I need, but piping the results to unzip, as I have done, isn't enough.
find *2019-01-05* | unzip
You can use xargs to take the names that find produces and pass them to unzip:
find *2019-01-05* | xargs unzip
A more robust version:
find . -type f -name '*2019-01-05*.zip' -exec unzip {} \;
-type f is there for good measure, in case there are similarly named directories, and unzip is invoked once per archive (\; rather than +) because unzip treats any extra arguments as patterns of members to extract, not as further archives.

Find text in files in subfolders

So this question might have been asked before, but after some hours of searching (or searching wrongfully) I decided to ask this question.
If it's already been answered before, please link me the question and close this one.
here's my issue.
I have a folder on my filesystem, e.g. "files". This folder has a lot of subfolders, with their own subfolders. Some levels deep, they all have a file with the same name. That file contains a lot of text, but it's not ALL the same. I need a list of the files that contain a certain string.
I KNOW I can do this with
find ./ -type f -exec grep -H 'text-to-find-here' {} \;
but the main problem is that it will visit every single file on that filesystem. As the filesystem contains MILLIONS of files, this would take a LONG time, especially since I know the exact file this piece of text should be in.
visually it looks like this:
foobar/foo/bar/file.txt
foobar/foobar/bar/file.txt
foobar/barfoo/bar/file.txt
foobar/raboof/bar/file.txt
foobar/oof/bar/file.txt
I need a specific string out of file.txt (if that string exists..)
(and yes: the file in /bar/ is ALWAYS called file.txt...)
Can anyone help me with how to do this? I'm racking my brain over an "easy" solution :o
Thanks,
Daniel
Use the -name option to filter by name:
find . -type f -name file.txt -exec grep -H 'text-to-find-here' {} +
And if it's always in a directory named bar, you can use -path with a wildcard:
find . -type f -path '*/bar/file.txt' -exec grep -H 'text-to-find-here' {} +
With single GNU grep command:
grep -rl 'pattern' --include='*file.txt'
--include=glob
Search only files whose base name matches glob (using wildcard matching).
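A quick runnable check of the --include behaviour on a throwaway tree (the /tmp/grep_demo path is made up; note that a file with the string but the wrong name is skipped):

```shell
# Throwaway tree under /tmp/grep_demo: the target string lives in one
# of several file.txt copies, plus in an unrelated other.txt.
demo=/tmp/grep_demo
rm -rf "$demo" && mkdir -p "$demo/foo/bar" "$demo/oof/bar"
printf 'text-to-find-here\n' > "$demo/foo/bar/file.txt"
printf 'something else\n'     > "$demo/oof/bar/file.txt"
printf 'text-to-find-here\n' > "$demo/foo/bar/other.txt"

# -r: recurse, -l: list matching files, --include: only look at file.txt.
grep -rl 'text-to-find-here' --include='file.txt' "$demo" > /tmp/grep_demo_result
cat /tmp/grep_demo_result
```

Only foo/bar/file.txt is listed: other.txt contains the string but fails the --include filter, and oof/bar/file.txt passes the filter but lacks the string.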

Copy files matching a name in different folders

I am using
find ../../ -type f -name '<filename>*.PDF' -print0 | xargs -0 cp --target-directory='<directory name with path>'
but it copies only one file. It doesn't copy all the files that have the same name. I need all the files with the same name, created on different dates and in different folders, to be found and copied. How do I solve this issue? I have already tried a lot more, but I am still facing this problem.
This will give you the duplicate file names. Once you have a name, you can find the copies and handle them in your script:
for i in $(find . | grep pom.xml); do
    basename "$i"
done | sort | uniq -c | sort -n | cut -b9-
PS: everyone's in a hurry. Adding urgency to your posts is usually frowned upon on Stack Overflow, and it might prompt the opposite reaction.
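The reason only one file survives is that cp into a single target directory silently overwrites same-named files. A runnable sketch on throwaway data (the /tmp/cp_demo path and file names are made up; --backup=numbered is a GNU cp option):

```shell
# Throwaway demo under /tmp/cp_demo: two same-named PDFs in different
# folders; GNU cp's --backup=numbered keeps both instead of letting the
# second copy overwrite the first.
demo=/tmp/cp_demo
rm -rf "$demo" && mkdir -p "$demo/2019/jan" "$demo/2019/feb" "$demo/dest"
printf 'one' > "$demo/2019/jan/report.PDF"
printf 'two' > "$demo/2019/feb/report.PDF"

find "$demo/2019" -type f -name '*.PDF' -print0 |
    xargs -0 cp --backup=numbered --target-directory="$demo/dest"
```

The destination ends up with report.PDF plus a numbered backup (report.PDF.~1~), so no copy is lost.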

Finding return chars in unix filenames

This is part of a bigger problem but, of course, it's the bit that's giving me the most trouble. So there's something going on with our users where they're creating files with return characters by doing something like this:
touch "tpseports
old"
That's another problem and not for me to mess with. What I'm trying to do is find files like that with our script to remove outdated files. Right now, we run a find command to place the old file names into a temp file and then remove based off of that list. Something like this here:
find /home/userid \( -type f -a -mtime +365 \) 1>> TEMP
while read FILELIST
do
rm -f $FILELIST
done < TEMP
The problem is when we come across a file like:
/home/userid/tpseports
old
Because it will try to remove "/home/userid/tpseports" AND "old".
Has anybody run into something like this before and know the solution? I'm still searching around the web for ideas so if I find a solution I'll post it here.
Cheers
find has the -print0 option for cases like this:
find /home/userid -type f -mtime +365 -print0 | xargs -0 rm -f
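If you want to keep the temp-file workflow (say, to review the list before deleting), NUL-terminated names survive intact where a line-based list splits them. A runnable sketch on throwaway data (the /tmp/rm_demo path is made up; read -d '' is a bashism):

```shell
# Throwaway demo under /tmp/rm_demo: one old file with an embedded
# newline in its name, one recent file that must survive.
demo=/tmp/rm_demo
rm -rf "$demo" && mkdir -p "$demo"
printf 'old' > "$demo/tpseports
old"
printf 'new' > "$demo/keep.txt"
# Age the newline-named file past the 365-day cutoff (GNU touch).
touch -d '2 years ago' "$demo/tpseports
old"

# NUL-separated list instead of a newline-separated temp file:
find "$demo" -type f -mtime +365 -print0 > /tmp/rm_demo_list
while IFS= read -r -d '' f; do
    rm -f -- "$f"
done < /tmp/rm_demo_list
```

The newline-named file is removed as one path, and nothing named "old" is touched by accident.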

Zip multiple folders and files depending on filesize in Linux/Ubuntu

I have a directory "mapnik" with hundreds of sub-directories, each containing more than 10,000 files. I would like to zip "mapnik" recursively, preserving the folder structure but only adding files larger than 103 bytes to the archive.
How can I accomplish this? I tried using find and pipes, but with the wrong syntax and the huge number of files, "trial and error" is not the best way to get it done ;)
Thanks for your help guys!
How about
find . -size +103c -print0 | xargs -0 zip -r outname.zip
Delan's suggestion produced some kind of zip error with files of the same name. But it got me on the right track. This is what worked for me (-@ makes zip read the file names from standard input):
cd mapnik
find . -size +103c -print | zip archive.zip -@
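Note that +103c means strictly greater than 103 bytes, so a file of exactly 103 bytes is excluded. A quick runnable check on throwaway files (the /tmp/size_demo path and file names are made up):

```shell
# Throwaway check under /tmp/size_demo: -size +103c matches only files
# strictly larger than 103 bytes.
demo=/tmp/size_demo
rm -rf "$demo" && mkdir -p "$demo"
head -c 103 /dev/zero > "$demo/exactly103"
head -c 104 /dev/zero > "$demo/just-over"

find "$demo" -type f -size +103c > /tmp/size_demo_result
cat /tmp/size_demo_result
```

Only the 104-byte file is listed; the 103-byte file falls outside the +103c cutoff.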
