Script to open the latest text file from a directory - Linux

I need a shell script to open the latest text file from a given directory; it will then be copied to another directory. How can I achieve this?
I need logic that will search a directory and return the latest file (the name of the text file can be anything, i.e. it is not fixed, so I need to find the latest text file).

Here you can do something like this:
#!/bin/sh
SOURCE_DIR=/home/juned/Downloads
DEST_DIR=/tmp/
# ls -t sorts by modification time, newest first; head -1 picks the newest entry
LAST_MODIFIED_FILE=$(ls -t "$SOURCE_DIR" | head -1)
echo "$LAST_MODIFIED_FILE"
# Open the file
vim "$SOURCE_DIR/$LAST_MODIFIED_FILE"
# Copy the file
cp "$SOURCE_DIR/$LAST_MODIFIED_FILE" "$DEST_DIR"
echo "File copied successfully"
You can open the file in any application you like, such as gedit or kate; here I've used vim.
xdg-open - opens a file or URL in the user's preferred application
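For example, reusing the variables from the script above, you could let the desktop pick the application instead of hard-coding vim:
xdg-open "$SOURCE_DIR/$LAST_MODIFIED_FILE"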

I'm not an expert in bash, but you can try this logic:
First, grab the latest file using ls -t (-t sorts by modification time) and head -1 (which takes the first entry):
F=$(ls -t * | head -1)
Then open the file using an editor:
xdg-open "$F"
gedit "$F"
...
As suggested by @AJefferiss, you can also do it directly:
xdg-open "$(ls -t * | head -1)"
gedit "$(ls -t * | head -1)"

To edit the most recently modified/created file:
vim $(ls -t | head -1)
To edit the latest file in alphanumerical order:
vim $(ls -1 | tail -1)

In one line (if you are sure the directory contains only files):
vim "$(ls -t . | head -1)"
It will be opened in vim (or substitute another text editor).
If there are directories, you should write a script with a loop and test every entry to make sure it is not a directory:
if [ -f "$FILE" ];
Or you can use find, or a pipeline, to get the latest file:
ls -lt . | sed -n 2p | grep -v '^d'
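For example, a find-based one-liner (a sketch assuming GNU find and coreutils; %T@ prints the modification time as a sortable number, and filenames containing newlines are not handled):
# Print the newest regular file in the current directory
find . -maxdepth 1 -type f -printf '%T@ %p\n' | sort -rn | head -n 1 | cut -d' ' -f2-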

The existing answers are helpful, but fall short when it comes to dealing with filenames with embedded spaces or other shell metacharacters.[1]
# Get the most recently modified *.txt file.
# (On *assignment*, names with spaces, ... are not a concern.)
f=$(ls -t *.txt | head -n 1)
# *Use* the variable enclosed in *double-quotes* to ensure that it is passed
# to the target command unmodified.
xdg-open "$f" # could also use "$(ls -t *.txt | head -n 1)" directly
Additionally, some answers use all-uppercase shell variable names, which should be avoided so as to prevent conflicts with environment variables.
[1] Due to use of ls, filenames with embedded newlines won't be handled correctly, but that's rarely a real-world concern.


Keeping *nix Format

My code works, but not exactly in the way I want. Basically, what my code does is look through the current directory, search for folders, and within those folders look at the files. If one of the file extensions is the value of the $Example variable, it should delete all other files with the same beginning file name, regardless of extension, and rename the one with the $Example extension to the same name, just without the $Example extension. Here is the code:
#!/bin/sh
set -o errexit
Example=dummy
for d in *; do
    if test "$(ls -A "$d" 2>/dev/null)"; then
        if [ $(ls -1 ${d}/*.$Example 2>/dev/null | wc -l) -ge 1 ]; then
            cd $(pwd)/$d;
            for f in *.$Example; do
                fileName="${f%.$Example}";
                mv "$f" "${f%.$Example}";
                #tr "\r" "\n" < "${f%.$Example}" > "${f%.$Example}"
                listToDelete=$(find -name "$fileName.*");
                for g in $listToDelete; do
                    rm $g;
                done;
            done;
            cd ..;
        fi;
    fi;
done
The files being used have been created in Vim, so are supposed to have Linux formatting rather than Windows formatting. For some reason, once the extension has been stripped using this code, the file gets formatted with \r and fails to run. I added a comment where my temporary solution is located, but I was wondering whether there is some way to alter the mv step to keep the Linux formatting, or whether there is another way to achieve what I want. Thanks
The files being used have been created in Vim, so are supposed to have Linux formatting rather than Windows formatting.
That has no impact on the line separator being used.
But I can think of three possible causes.
ViM substitution
The \r and \n may have been mixed up, or the Windows line separator (\r\n) may have been stripped out incorrectly. If you're just trying to convert the Windows line separators at some point, convert the files to Unix line-separated files with dos2unix instead of sed or Vim magic if possible, and then edit them. If you are adding or replacing line separators using Vim, remember that in Vim line separators are searched for with \n but replaced with \r, since Vim just loads the file data into a buffer, e.g. %s/\n/somestring/ and %s/somestring/\r/g.
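For example (assuming dos2unix is installed; the file name here is hypothetical):
# Convert one file in place
dos2unix myscript.sh
# Or batch-convert every .sh file under the current directory
find . -name '*.sh' -exec dos2unix {} +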
Cygwin
If you're using Cygwin, I'm pretty sure Vim defaults to Windows line separators there, but you can change it to Unix line separators. I don't remember how off the top of my head, though.
ViM default line separators
Not sure how this would have happened on a GNU/Linux system, but the line separator Vim uses can be set to Unix with :e ++ff=unix
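Relatedly, to convert an already-open buffer to Unix line endings and save it (standard Vim commands):
:set fileformat=unix
:w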
This answer does not address the possible stray \r in the script's source code, but rather gives an alternative way of doing what I believe the user wants to achieve:
#!/bin/bash
ext='dummy'
tmpfile=$(mktemp)
find . -type f -name "*.$ext" \
    -execdir bash -x -c \
    'cp "$3" "$2" &&
    tee "${3%.$1}"* <"$2" >/dev/null' bash "$ext" "$tmpfile" {} \;
rm -f "$tmpfile"
This finds all files with the given extension and for each of them executes
cp "$3" "$2" && tee "${3%.$1}"* <"$2" >/dev/null
$1 will be the extension without the dot.
$2 will be the name of a temporary file.
$3 will be the found file.
The command first copies the found file to a temporary file, then feeds that temporary file through tee, which duplicates its contents to all files with the same prefix (${3%.$1} strips the extension from the found filename, and ${3%.$1}* expands to all files in the same directory that have the same prefix).
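To see that parameter expansion in isolation (a quick illustration):
$ set -- dummy tmpfile f1.dummy
$ echo "${3%.$1}"
f1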
Most modern implementations of find support -execdir, which works like -exec with the difference that the given utility is executed in the directory of the found name. Also, {} will be the pathless basename of the found file.
Given the following files in a directory:
$ ls test/
f1.bar f1.foo f2.bar f2.foo f3.bar f3.foo
f1.dummy f1.txt f2.dummy f2.txt f3.dummy f3.txt
This script does the following:
$ bash -x script.sh
+ ext=dummy
++ mktemp
+ tmpfile=/tmp/tmp.9v5JMAcA12
+ find . -type f -name '*.dummy' -execdir bash -x -c 'cp "$3" "$2" &&
tee "${3%.$1}"* <"$2" >/dev/null' sh dummy /tmp/tmp.9v5JMAcA12 '{}' ';'
+ cp f1.dummy /tmp/tmp.9v5JMAcA12
+ tee f1.bar f1.dummy f1.foo f1.txt
+ cp f2.dummy /tmp/tmp.9v5JMAcA12
+ tee f2.bar f2.dummy f2.foo f2.txt
+ cp f3.dummy /tmp/tmp.9v5JMAcA12
+ tee f3.bar f3.dummy f3.foo f3.txt
+ rm -f /tmp/tmp.9v5JMAcA12
Remove -x from the invocation of bash (and from bash on the command line) to disable tracing.
I used bash rather than sh because my sh can't grok ${3%.$1} properly.

Linux bash: output directory files to a text file with xargs and add new lines

I want to generate a text file with the list of files present in the folder:
ls | xargs echo > text.txt
I want to prepend the IP address to each filename so that I can run parallel wget as per this post: Parallel wget in Bash
So my text.txt file content will have these lines :
123.123.123.123/file1
123.123.123.123/file2
123.123.123.123/file3
How can I prepend a string to each filename as ls feeds xargs (and also add a line break at the end of each)?
Thank you
Simply use printf and globbing to get the filenames:
printf '123.123.123.123/%s\n' * >file.txt
Or, as a longer approach, leverage a for loop together with globbing:
for f in *; do echo "123.123.123.123/$f"; done >file.txt
This assumes no filename contains a newline.
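If you specifically want to keep the ls | xargs shape from the question, xargs's -I option reads one line at a time and substitutes it into the command (a sketch with the same no-newlines-in-filenames assumption):
ls | xargs -I{} echo '123.123.123.123/{}' > text.txt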

Move files and rename - one-liner

I'm encountering many files with the same content and the same name on some of my servers. I need to quarantine these files for analysis, so I can't just remove the duplicates. The OS is Linux (CentOS and Ubuntu).
I enumerate the file names and locations and put them into a text file.
Then I use a for loop to move the files to quarantine.
for file in $(cat bad-stuff.txt); do mv $file /quarantine ;done
The problem is that they have the same file name, and I just need to add something unique to the filename to get it to save properly. I'm sure it's something simple, but I'm not good with regex. Thanks for the help.
Since you're using Linux, you can take advantage of GNU mv's --backup option.
while read -r file
do
    mv --backup=numbered "$file" "/quarantine"
done < "bad-stuff.txt"
Here's an example that shows how it works:
$ cat bad-stuff.txt
./c/foo
./d/foo
./a/foo
./b/foo
$ while read -r file; do mv --backup=numbered "$file" "./quarantine"; done < "bad-stuff.txt"
$ ls quarantine/
foo foo.~1~ foo.~2~ foo.~3~
$
I'd use this:
for file in $(cat bad-stuff.txt); do mv "$file" "/quarantine/$(basename "$file").$(date -u +%s%N)"; done
You'll get every file with a timestamp appended (in nanoseconds). Note the basename: since the listed paths contain directories, the name must be flattened for the file to land directly in /quarantine.
You can create a new file name composed of the directory and the filename. Thus you can add one more argument to your original code:
for ...; do mv "$file" "/quarantine/$(echo "$file" | sed 's:/:_:g')"; done
Please note that you may want to replace the _ with a character that is special enough not to clash with existing file names.
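For example, with paths like the ones above, the substitution flattens them like this:
$ echo ./a/foo | sed 's:/:_:g'
._a_foo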

Remove all files of a certain type except for one type in linux terminal

On my computer running Ubuntu, I have a folder full of hundreds of files all named "index.html.n" where n starts at one and continues upwards. Some of those files are actual html files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipeline with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find finds the matching files, then file reports their types. sed drops the zip archives and also removes everything but the filenames from the output of file. Lastly, rm deletes what remains:
find -name 'index.html.[0-9]*' \
    | xargs file \
    | sed -n '/Zip archive/!s/\([^:]*\):.*/\1/p' \
    | xargs rm
I would run:
for f in index.html.*
do
    file "$f" | grep -qi zip
    [ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option if you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
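For illustration, the edited cleanup.sh might end up looking something like this (these file names are hypothetical):
#!/bin/sh
rm index.html.1
rm index.html.4
rm index.html.7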
for i in index.html.*
do
    type=$(file "$i")
    if [[ ! $type =~ "Zip" ]]
    then
        rm "$i"
    fi
done
Change the rm to an ls for testing purposes.

Command line tool to search docx file under ms dos or cygwin

Is there a command line tool that is able to search docx files under MS-DOS or Cygwin?
I have tried grep; it doesn't work with docx, while it works fine with txt files.
I know I could always convert the docx to txt first and then search it using grep, but I am wondering:
is there a tool that can search docx files directly from the command line?
Thanks
I wrote a small bash script which should help you:
#!/bin/bash
export DOCKEY="$@"
function searchdoc(){
    VK1=$(cat "$@" | grep -i "$DOCKEY" | wc -c)
    VK2=$(unzip -c "$@" | grep -i "$DOCKEY" | wc -c)
    let NUM=$VK1+$VK2
    if [ "$NUM" -gt 0 ]; then
        echo "$NUM occurrences in $@"
        echo "opening file."
        gnome-open "$@"
    fi
}
export -f searchdoc
echo "searching for $DOCKEY ..."
find . -exec bash -c 'searchdoc "{}" 2>/dev/null' \;
save it as docfind.sh and you can invoke
$ ./docfind.sh searchterm
from any folder you want to scan.
After trying things out, I found the easiest way to do this is to use a Linux utility to batch-convert all docx files into txt files, and then search those txt files easily with grep.
zgrep might work for you? It usually works in OpenOffice documents, and both are compressed archives containing XML:
zgrep "some string" *.xdoc
I have no .xdoc files to test this with, but in theory it should work...
You can use zipgrep, which calls grep on all files of a zip archive (which a docx file is).
You might be disappointed with the result, though, as it returns raw content of XML files containing both the text and XML tags.
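For example (the document name here is hypothetical):
zipgrep "some string" report.docx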
Regarding "save it as docfind.sh and you can invoke":
Newbies like me might need to be told that for the .sh script to be executable from any directory, it needs to have the executable bit set and be located in /usr/bin or elsewhere in your PATH.
I was able to set up the nemo file manager in Linux Mint to open a terminal from any folder's context menu (information here).
