Find a zip file, print path and zip contents - Linux

I have a series of numbered sub-directories that may or may not contain zip files, and within those zip files are some single-line .txt files I need. Is it possible to use a combination of find and unzip -p to list the file path and the single-line contents on the same output line? I'd like to save the results to a .txt file and import it into Excel to work with.
From the main directory I can successfully find and output the single line:
find . -name 'file.zip' -exec unzip -p {} file.txt \;
How can I prefix the find output (i.e. the file path) to the output of this unzip command? Ideally, I'd like each line of the text file to resemble:
./path/to/file1.zip "Single line of file1.txt file"
./path/to/file2.zip "Single line of file2.txt file"
and so on. Can anyone provide some suggestions? I'm not very experienced with the Linux command line beyond simple commands.
Thank you.

Put all the code you want to execute into a shell script, then use find's -exec feature to call the shell script, i.e.
cat finder.bash
#!/bin/bash
printf '%s : ' "$1" # prints just the /path/to/file/file.zip, without a trailing newline
unzip -p "$1" file.txt
For now, just get that working; you can make it generic to handle files other than file.txt later.
Make the script executable
chmod 755 finder.bash
Call it from find, i.e.
find . -name 'file.zip' -exec /path/to/finder.bash {} \;
(I don't have an easy way to test this, so reply in comments with error msgs).
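If you'd rather not maintain a separate script file, the same idea fits on one line by having find spawn a small inline shell. This is a hedged sketch of the common sh -c idiom (the entry name file.txt is taken from your example):
find . -name 'file.zip' -exec sh -c 'printf "%s " "$1"; unzip -p "$1" file.txt' _ {} \; > results.txt
The _ fills $0 of the inline shell and each matched path arrives as $1; the redirection at the end collects every line into results.txt for the Excel import.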

Related

Shell script to search a string and output lines for specific files in all directories

How can I search for a string in specific files in all directories and output the results to a file?
I am using the code below, but how can I write a shell script for this?
find. -name *.txt | grep *%text* > result.xls
Your command line is close, but it needs a space after find and quotes around the patterns so the shell doesn't expand them before the commands see them. Also, grep matches substrings, so the shell-style *s around %text aren't needed.
To make this into a script, just save the corrected command line into a text file and add execute permission.
Save the following content into a file, e.g. xxx.sh:
#!/bin/bash
find . -name '*.txt' | grep '%text' > result.xls
Add execute (x) permission to the file with the following command.
chmod +x xxx.sh
Run the script.
./xxx.sh
Check the output file result.xls.
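As a hedged alternative, find can do the name filtering itself, with no pipe (note -name matches only the file name, whereas the grep above matched the whole path):
find . -name '*%text*.txt' > result.xls
Either way, result.xls is a plain text file despite its extension; Excel will treat it as text on import.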

Why the file command in a for loop behaves so ridiculously

#!/bin/sh
file_list=`find . -type f`
IFS=$(echo) #this enables for loop to break on newline
for file_ in $file_list; do
file $file_
done
This shell script surprisingly reports "File name too long". I can only guess that the script is feeding file the entire $file_list at once!
But if I replace the file command with a simple echo, the script prints all the files in the current directory line by line, as expected.
It's not a good idea to iterate over the results of find. Use its -exec option instead:
find . -type f -exec file {} \;
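As for why the original script misbehaves: IFS=$(echo) does not set IFS to a newline. Command substitution strips trailing newlines, so IFS ends up as the empty string, which disables word splitting entirely; the unquoted $file_list then expands as one enormous argument holding every path, hence "File name too long". If you do want the loop form, here is a minimal corrected sketch (same names as the original, assuming ordinary file names):
#!/bin/sh
file_list=$(find . -type f)
# Set IFS to a literal newline so the expansion splits one path per line.
IFS='
'
for file_ in $file_list; do
file "$file_"
done
This still breaks on file names that themselves contain newlines, which is why -exec is preferred; you can also use find . -type f -exec file {} + to batch many files into each file invocation.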

Rename file in Linux if file exists in a single command

I need to rename a file in Linux, but only if it exists, using a single command.
Suppose I want to find the file test.text and rename it to test.text.bak. Currently I run the following command
find / -name test.text
and if it exists, I then run
mv test.text test.text.bak
In this scenario I am executing two commands, but I want this to happen in a single command.
Thanks
Just:
mv test.text test.text.bak
If the file doesn't exist, nothing will be renamed.
To suppress the error message when the file doesn't exist, use this syntax:
mv test.text test.text.bak 2>/dev/null
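If you'd rather not discard mv's error output wholesale, a hedged alternative is to test for the file first, still on a single command line:
[ -e test.text ] && mv test.text test.text.bak
The mv only runs when the test succeeds, so no error message is produced either way.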
If you want to find test.txt somewhere in a subdirectory of dir and move it, try
find dir -name test.txt -exec mv {} {}.bak \;
This will move all files matching the conditions. If you want to traverse from the current directory, use . as the directory instead of dir.
Technically, this will spawn a separate command in a separate process for each file matched by find, but it's "one command" in the sense that you are using find as the only command you are actually starting yourself. (Think of find as a crude programming language if you will.)
for FILE in `find . -name test.text 2>/dev/null`; do mv "$FILE" "$FILE.bak"; done
This will search for all files named test.text in the current directory as well as in child directories, and rename each one to have a .bak suffix.
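Note that iterating over command substitution like this breaks on file names containing spaces. A slightly more robust sketch (still not newline-proof) pipes find into a read loop instead:
find . -name test.text 2>/dev/null | while IFS= read -r f; do mv "$f" "$f.bak"; done
For truly arbitrary names, the -exec form above remains the safest choice.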

Shell Script to Recursively Loop Through Directory and print location of important files

So I am trying to write a command-line one-liner or a shell script that can recursively loop through a directory, all its files, and its sub-directories, look for certain files, and then print the locations of these files to a text file.
I know that this is possible using Bash commands such as find, locate, exec, and >.
This is what I have so far:
find <top-directory> -name '*.class' -exec locate {} > location.txt \;
This does not work, though. Can any Bash/shell scripting experts help me out, please?
Thank you for reading this.
The default behavior of find (if you don't specify any other action) is to print the filename. So you can simply do:
find <top-directory> -name '*.class' > location.txt
Or if you want to be explicit about it:
find <top-directory> -name '*.class' -print > location.txt
You can save the redirection by using find's -fprint option:
find <top-directory> -name '*.class' -fprint location.txt
From the man page:
-fprint file
[...] print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated.
A less preferred way to do it is to use ls:
ls -d $PWD/**/* | grep class
let's break it down:
ls -d # lists the directory itself (returns `.`)
ls -d $PWD # lists the directory - but this time $PWD provides the full path
ls -d $PWD/** # lists the directory with full paths and every file directly under it (not recursively) - the effect of the /** part
ls -d $PWD/**/* # like the previous one, only now descending into the folders below (achieved by adding /* at the end); note that for ** to match recursively, bash's globstar option must be on (shopt -s globstar), otherwise ** behaves like a plain *
A better way of doing it:
After reading this (recommended by Charles Duffy), it appears to be a bad idea to use either ls or find this way (the article also says: "find is just as bad as ls in this context"). The reason it's a bad idea is that you can't control the output of ls: for example, you can't configure ls to terminate filenames with NUL. That matters because Unix allows all kinds of weird characters in a file name (newline, pipe, etc.), which will "break" ls's output in ways you can't anticipate.
Better to use a shell script for the task, and it's a pretty simple task too:
Create a file my_script.sh and edit it to contain:
#!/bin/bash
shopt -s globstar # needed so ** actually matches recursively
for i in **/*; do
echo "$PWD/$i"
done
Give it execute permissions (by running: chmod +x my_script.sh).
Run it from the same directory with:
./my_script.sh
and you're good to go!
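If the list may later be consumed by other tools, a hedged aside: find can NUL-terminate its output with the -print0 action (supported by GNU and BSD find), which sidesteps the weird-character problem described above:
find <top-directory> -name '*.class' -print0 > location.txt
Consumers such as xargs -0 can then read the list without being confused by spaces or newlines embedded in the names.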

Linux shell script - find all files and run a command on each one of them

I'm trying to port a Windows batch file to a Linux shell script (bash).
Here is the part that is giving me some headache:
for /r %%G in (*Metadata*.xml) DO (
java -jar %SAXON%\saxon9he.jar -o:%%~nG.csv "%%G" %WORKDIR%\transformBQT.xsl)
What this does is find all .xml files whose names contain the text Metadata and then run an XSLT transformation on each of these files. The transformation takes 3 arguments:
-o is the output file (this will be a .csv with the same name as the .xml)
the next argument is the target .xml file
the final argument is the .xsl stylesheet
I am thinking of using the following:
find /results/ -type f -name "*Metadata*.xml" -exec
java -jar $SAXON/saxon9he.jar -o:??? {} $WORKDIR/transformXMI.xsl
but this doesn't quite work as I don't know how to make the output file have the same name as the .xml (with .csv extension)
Any tips?
You could process the results from find line by line and transform <file>.xml into <file>.csv:
find /results/ -type f -name "*Metadata*.xml" | while IFS= read -r file; do java -jar "$SAXON/saxon9he.jar" -o:"${file%.xml}.csv" "$file" "$WORKDIR/transformXMI.xsl"; done
With the variables quoted, spaces in paths are handled; the approach still fails if a file name contains a newline.
Per the docs, the string {} is replaced by the current file name being processed. With that, along with Bash's parameter expansion, you should be able to rename the output files to CSV.
find /results/ -type f -name "*Metadata*.xml" -exec
bash -c 'fname="{}"; java -jar $SAXON/saxon9he.jar -o:"${fname%.xml}.csv" "$fname" $WORKDIR/transformXMI.xsl' \;
The important bit here is the shell parameter you create when you execute Bash: fname="{}" creates a new shell parameter holding the current XML file's path. Once you have that, you can use parameter expansion: ${fname%.xml} strips off the .xml extension, which is then replaced with .csv.
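One hedged refinement: embedding {} inside the bash -c string can misbehave, or even execute unintended code, if a file name contains quotes or other shell syntax. Passing the path as a positional argument keeps it inert:
find /results/ -type f -name "*Metadata*.xml" -exec bash -c 'java -jar "$SAXON/saxon9he.jar" -o:"${1%.xml}.csv" "$1" "$WORKDIR/transformXMI.xsl"' _ {} \;
Here _ becomes $0 of the inline shell and the matched file arrives as $1, so the name is never parsed as shell code.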
