How to find a file in a jar without unjarring it and open it in an editor from the command line? - linux

I have a foo.jar which has classes, a lib directory and a META-INF directory.
I am trying to find out whether persistence.xml exists in this jar and, if so, open it in the vi editor.
I tried the following.
tar tf foo.jar | grep 'persistence.xml'
This shows META-INF/persistence.xml.
I am wondering if it is possible to first find the file in the jar and then open it in vi, all in a single command line.

It's not actually a single command, but you can do it in one line as follows:
jar=foo.jar; filename=$(unzip -l "$jar" | grep 'persistence.xml' | awk '{print $4}'); test -n "$filename" && vim <(unzip -qc "$jar" "$filename")
First you set the jar file to be examined, then the command looks for a file name matching the pattern. Finally, if the file name is a non-zero-length string, the file is opened in the vim editor.
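If you expect to do this often, the same idea can be wrapped in a small shell function. This is only a sketch (jarvi is a made-up name); it relies on unzip's -Z1 option to list entry names and -p to pipe an entry to stdout:

jarvi() {
    # List entry names only, take the first match, and open it read-only in vim
    # via process substitution, without extracting the jar.
    local jar=$1 pattern=$2 name
    name=$(unzip -Z1 "$jar" | grep "$pattern" | head -n 1)
    [ -n "$name" ] && vim -R <(unzip -p "$jar" "$name")
}

Usage would then be: jarvi foo.jar persistence.xml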

Related

How to clean Ctrl+Z from files in Linux?

I have copied some text files from Windows to a Red Hat machine using PuTTY. When I try to execute the files, I get errors because Ctrl+Z characters were added to them.
I have used this command
tr -d '\15\32' < /path/gems/spec/rms.spec > /path/gems/spec/rms.spec
But the above command is tedious because I have 1000+ .spec files under various folders.
Is there a Linux command that can find the .spec files in the directory tree and clean the Ctrl+Z characters added to them?
Thanks in advance
If you are using bash, you can try this:
find . -type f -name '*.spec' -print0 |
while IFS= read -r -d '' dot_spec_file; do
    cat "$dot_spec_file" | tr -d '\15\32' | sponge "$dot_spec_file"
done
Execute it on the parent directory of your .spec files.
sponge reads all of its input before writing the file; from its manual:
sponge reads standard input and writes it out to the specified file.
Unlike a shell redirect, sponge soaks up all its input before opening
the output file. This allows constructing pipelines that read from and
write to the same file.
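If installing sponge is not an option, a rough alternative is to let find drive sed directly. This assumes GNU sed, which supports in-place editing with -i and the \r and \x1a escapes (carriage return and Ctrl+Z, the same characters tr removes above):

find . -type f -name '*.spec' -exec sed -i 's/\r//g; s/\x1a//g' {} +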

Script to open latest text file from a directory

I need a shell script to open the latest text file from a given directory; it will then be copied to another directory. How can I achieve this?
I need logic that will search a directory and return the latest file (the name of the text file can be anything, not fixed, so I need to find the latest text file).
You can do something like this:
#!/bin/sh
SOURCE_DIR=/home/juned/Downloads
DEST_DIR=/tmp/
LAST_MODIFIED_FILE=`ls -t ${SOURCE_DIR}| head -1`
echo $LAST_MODIFIED_FILE
#Open file
vim $SOURCE_DIR/$LAST_MODIFIED_FILE
#Copy file
cp $SOURCE_DIR/$LAST_MODIFIED_FILE $DEST_DIR
echo "File copied successfully"
You can specify the name of any application you want to open the file with, such as gedit, kate, etc. Here I've used vim.
xdg-open - opens a file or URL in the user's preferred application
I'm not an expert in bash, but you can try this logic:
First, grab the latest file using ls -t (-t sorts by modification time) and head -1 (gets the first entry):
F=`ls -t * | head -1`
Then open the file using an editor:
xdg-open $F
gedit $F
...
As suggested by @AJefferiss, you can do it directly:
xdg-open $(ls -t * | head -1)
gedit $(ls -t * | head -1)
For editing the latest modified / created,
vim $(ls -t | head -1)
For editing the latest in alphanumerical order,
vim $(ls -1 | tail -1)
In one line (if you are sure that there are only files in the directory):
vim `ls -t .|head -1`
It will be opened in vim (or use another text editor).
If there are directories, you should write a script with a loop and test every entry to make sure it's not a directory:
if [ -f $FILE ];
Or you can use find, or use a pipe to get the latest file:
ls -lt .|sed -n 2p|grep -v '^d'
The existing answers are helpful, but fall short when it comes to dealing with filenames with embedded spaces or other shell metacharacters.[1]
# Get the most recently modified *.txt file.
# (On *assignment*, names with spaces, ... are not a concern.)
f=$(ls -t *.txt | head -n 1)
# *Use* the variable enclosed in *double-quotes* to ensure that it is passed
# to the target command unmodified.
xdg-open "$f" # could also use "$(ls -t *.txt | head -n 1)" directly
Additionally, some answers use all-uppercase shell variable names, which should be avoided to prevent conflicts with environment variables.
[1] Due to use of ls, filenames with embedded newlines won't be handled correctly, but that's rarely a real-world concern.
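For completeness, here is a rough sketch of a more robust variant that uses GNU find's -printf instead of ls; the directory and the *.txt pattern are placeholders. It sorts entries by modification timestamp and, like the ls-based approaches, still assumes filenames contain no newlines:

src_dir=/path/to/source   # placeholder directory
newest=$(find "$src_dir" -maxdepth 1 -type f -name '*.txt' -printf '%T@\t%p\n' \
    | sort -rn | head -n 1 | cut -f2-)
[ -n "$newest" ] && xdg-open "$newest"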

Using grep to overwrite its current file

I have directories nested within directories, and this is what I am trying to do:
find files of a specific format, .xml
within all these .xml files, read the contents and remove line 3
Line 3 contains the following string: dxflib <Name of whatever folder it is in>.dxb
I tried using find -name "*.xml" | xargs grep -v "dxflib" in the terminal (I am using Linux) and found that while the command works and displays the results, it does not write the changes back to the files.
From what I found online, it seems I would need to append >> output.txt, etc.
So, is there any way I can make it save/overwrite the original file?
Removes the third line in the file:
sed -i '3d' file
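Since the question is about doing this for every .xml file found with find, the two steps can be combined. A minimal sketch, assuming GNU sed's -i option for in-place editing:

find . -type f -name '*.xml' -exec sed -i '3d' {} +

Using a backup suffix (-i.bak) keeps a copy of each original file in case something goes wrong.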

Remove all files of a certain type except for one type in linux terminal

On my computer running Ubuntu, I have a folder full of hundreds of files, all named "index.html.n" where n starts at one and continues upwards. Some of those files are actual HTML files, some are image files (png and jpg), and some are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipe with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find locates the matching files, then file reports their types. sed drops the Zip archives and strips everything but the filenames from file's output; lastly, xargs rm deletes what is left:
find -name 'index.html.[0-9]*' | \
xargs file | \
sed -n '/: Zip archive/!s/\([^:]*\):.*/\1/p' | \
xargs rm
I would run:
for f in index.html.*
do
    file "$f" | grep -qi zip
    [ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option if you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
for i in index.html.*
do
    type=$(file "$i")
    if [[ ! $type =~ "Zip" ]]
    then
        rm "$i"
    fi
done
Change the rm to ls for testing purposes.
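A variation on the same loop keys off file's MIME type output rather than its free-text description (this assumes a file version that supports --mime-type); as above, swap rm for ls to dry-run it first:

for f in index.html.*; do
    mime=$(file --brief --mime-type -- "$f")
    [ "$mime" = "application/zip" ] || rm -- "$f"
done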

Command line tool to search docx file under ms dos or cygwin

Is there a command line tool that is able to search docx files under MS-DOS or Cygwin?
I have tried grep; it doesn't work with docx, although it works fine with txt files.
I know I could always convert the docx to txt first and then search with grep, but I am wondering:
is there a tool I can use to search docx files directly from the command line?
Thanks
I wrote a small bash script which might help you:
#!/bin/bash
export DOCKEY="$@"
function searchdoc(){
    VK1=$(cat "$@" | grep -i "$DOCKEY" | wc -c)
    VK2=$(unzip -c "$@" | grep -i "$DOCKEY" | wc -c)
    let NUM=$VK1+$VK2
    if [ "$NUM" -gt 0 ]; then
        echo "$NUM occurrences in $@"
        echo "opening file."
        gnome-open "$@"
    fi
}
export -f searchdoc
echo searching for $DOCKEY ...
find . -exec bash -c 'searchdoc "{}" 2>/dev/null' \;
save it as docfind.sh and you can invoke
$ docfind.sh searchterm
from any folder you want to scan.
After trying out the various approaches, I found the easiest way is to use a Linux utility to batch-convert all the docx files into txt files, then grep those txt files easily.
zgrep might work for you? It usually works in OpenOffice documents, and both are compressed archives containing XML:
zgrep "some string" *.xdoc
I have no .xdoc files to test this with, but in theory it should work...
You can use zipgrep, which calls grep on all files of a zip archive (which a docx file is).
You might be disappointed with the result, though, as it returns raw content of XML files containing both the text and XML tags.
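If the tag noise from zipgrep is a problem, you can limit the search to the main body of the document: a .docx is a zip archive whose body text lives in the word/document.xml entry, so unzip -p can pipe just that entry into grep. The filename and search term below are only examples:

unzip -p somefile.docx word/document.xml | grep -q 'searchterm' && echo "found in somefile.docx"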
save it as docfind.sh and you can invoke
Newbies like me might need to be told that, for the .sh script to be executable from any directory, it needs to have the executable bit set and must be located in /usr/bin or elsewhere in your PATH.
I was able to set up the nemo file manager in Linux Mint to open a terminal from any folder's context menu (information here).
