Un-archiving a single file hangs - Linux

I am trying to unarchive a very large directory. Here is what works to untar the entire thing:
$ sudo tar -xjf itunes20140618.tbz --verbose
x itunes20140618/
x itunes20140618/genre_artist
x itunes20140618/imix_type
...etc...
However, if I try to un-archive only a single file, it extracts the file correctly but then hangs indefinitely. In addition, it prints none of the output expected from the --verbose flag. Here is an example:
$ sudo tar -xjf itunes20140618.tbz itunes20140618/imix --verbose
[ nothing prints...it just hangs. But it does un-tar that single file ]

Tar doesn't have a central table of contents; the archive is just the files concatenated one after another, so tar keeps scanning to the end of the archive in case the same name appears again. Extracting one file therefore takes about as long as extracting the whole archive. Per Mark Plotnick's comment below, GNU tar supports --occurrence=1 to stop scanning after it finds your file.
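For example, with the archive from the question, on GNU tar:
$ tar -xjf itunes20140618.tbz --occurrence=1 itunes20140618/imix --verbose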
For OS X and other systems whose tar doesn't support the --occurrence=1 argument, a workaround is a monitoring process that watches for the file to appear and, once its size has stayed constant for a couple of seconds, kills the tar process. Here's a bash function that does that:
untarOneFile () {
    options=$1
    archivename=$2
    filename=$3
    if [[ -f "$archivename" ]]; then
        rm -rf "$filename" 2> /dev/null
        tar "$options" "$archivename" "$filename" &
        pid=$!
        size=0
        quitCount=0
        oldsize=0
        while true; do
            # portable size check; avoids parsing ls -l output by column
            [[ -f "$filename" ]] && size=$(wc -c < "$filename")
            if (( oldsize > 0 )); then
                if (( oldsize == size )); then
                    (( quitCount++ ))
                else
                    quitCount=0
                fi
            fi
            (( quitCount == 4 )) && break    # size stable for ~2 seconds
            oldsize=$size
            sleep 0.5
        done
        kill $pid
    else
        echo "Archive file not found."
    fi
}
Error checking is not robust, but given legitimate arguments it works. The 0.5-second poll with four consecutive identical sizes is a heuristic; raise the threshold for archives that extract slowly. Usage is:
untarOneFile -jxvf tarArchiveFile.tar.bz file/you/want/to/extract


Cannot echo the changes made to the files

I am checking whether the files have been modified, and I need to echo which new lines have been added. When I try this script on a single file it works, but when I iterate through multiple files in a directory it does not work as it should. Any suggestions?
#! /bin/bash
GAP=5
while :
do
    FILES=/home/Desktop/*
    for f in $FILES
    do
        len=`wc -l $f | awk '{ print $1 }'`
        if [ -N $f ]; then
            echo "`date`: New entries in $f:"
            newlen=`wc -l $f | awk '{ print $1 }'`
            newlines=`expr $newlen - $len`
            tail -$newlines $f
            len=$newlen
        fi
        sleep $GAP
    done
done
Continuing from the comments, here is the original solution I envisioned, using inotifywait (from the inotify-tools package) and an associative array. The benefit is that inotifywait blocks instead of wasting resources endlessly re-checking each file's line count on every loop iteration. I'll work on a solution using a temporary file as well, but going that route opens you up to changes occurring between loop iterations. Here is the first solution:
#!/bin/bash
watchdir="${1:-$PWD}"
events="-e modify -e attrib -e close_write -e create -e delete -e move"
declare -A lines
for i in "$watchdir"/*; do          ## seed line counts for existing files
    [ -f "$i" ] && lines[$i]=$(wc -l <"$i")
done
while :; do                         ## watch for changes in chosen dir
    fname="${watchdir}/$(inotifywait -q $events --format '%f' "$watchdir")"
    newlc=$(wc -l <"$fname")        ## get line count for changed file
    if [ "${lines[$fname]}" -ne "$newlc" ]; then    ## if changed, print
        printf " lines changed : %s -> %s (%s)\n" \
            "${lines[$fname]}" "$newlc" "$fname"
        lines[$fname]=$newlc        ## update saved line count for file
    fi
done
Original testfile.txt
$ cat dat/tmp/testfile.txt
1 1.2
2 2.2
Example Use/Output
Script saved in watchdir.sh. Start watchdir.sh so inotifywait is watching the dat/tmp directory
$ ./watchdir.sh dat/tmp
Using a second terminal, modify file in the dat/tmp directory
$ echo "newline" >> ~/scr/tmp/stack/dat/tmp/testfile.txt
$ echo "newline" >> ~/scr/tmp/stack/dat/tmp/testfile.txt
Output of watchdir.sh running in separate terminal (or background)
$ ./watchdir.sh dat/tmp
 lines changed : 2 -> 3 (dat/tmp/testfile.txt)
 lines changed : 3 -> 4 (dat/tmp/testfile.txt)
Resulting testfile.txt
$ cat dat/tmp/testfile.txt
1 1.2
2 2.2
newline
newline
Second Solution Using [ -N file ]
Here is a second solution, a bit closer to your first attempt. It is a less robust approach (it will miss multiple changes between tests, etc.). Look it over and let me know if you have questions.
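For reference, bash's [ -N file ] test is true when the file exists and has been modified since it was last read (its mtime is newer than its atime), so mount options like noatime can interfere with it. A quick illustration, assuming log.txt already exists:
$ cat log.txt > /dev/null              # read the file; atime catches up
$ [ -N log.txt ] || echo "no new modifications"
no new modifications
$ echo hello >> log.txt                # modify it
$ [ -N log.txt ] && echo "modified since last read"
modified since last read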
#!/bin/bash
watchdir="${1:-$PWD}"
gap=5
tmpfile="${TMPDIR:-/tmp}/watchtmp"      ## temp file in system temp dir, /tmp fallback
:>"$tmpfile"
trap 'rm $tmpfile' SIGTERM EXIT         ## remove tmpfile on exit
for i in "$watchdir"/*; do              ## populate tmpfile with line counts
    [ -f "$i" ] && echo "$i,$(wc -l <"$i")" >> "$tmpfile"
done
while :; do                             ## loop every $gap seconds
    for i in "$watchdir"/*; do          ## for each file
        if [ -N "$i" ]; then            ## check changed
            cnt=$(wc -l <"$i")          ## get new line count
            oldcnt=$(grep "^$i," "$tmpfile")    ## get old count (anchored match)
            oldcnt=${oldcnt##*,}
            if [ "$cnt" -ne "$oldcnt" ]; then   ## if not equal, print
                printf " lines changed : %s -> %s (%s)\n" \
                    "$oldcnt" "$cnt" "$i"
                ## update tmpfile with new count
                sed -i "s|^${i}[,][0-9][0-9]*.*$|${i},$cnt|" "$tmpfile"
            fi
        fi
    done
    sleep $gap
done
Use/Output
Start watchdir.sh
$ ./watchdir2.sh dat/tmp
In second terminal modify file
$ echo "newline" >> ~/scr/tmp/stack/dat/tmp/testfile.txt
Wait for $gap to expire (if the file is changed twice within one interval, the second change will not register separately):
$ echo "newline" >> ~/scr/tmp/stack/dat/tmp/testfile.txt
Results
$ ./watchdir2.sh dat/tmp
 lines changed : 10 -> 11 (dat/tmp/testfile.txt)
 lines changed : 11 -> 12 (dat/tmp/testfile.txt)

shell script to wait for gunzip to complete before going to next line

I created a shell script to loop through the files in a directory, unzip them, add in a date field, zip them back up, and then move them to the Hadoop file system. But when I run the script, it seems to go right to the next line without waiting for gunzip to complete. How do I tell it to wait for completion before moving to the next line?
FILENAME="/datatst/toproc/*"
for i in $FILENAME
do
    echo "file name is: " $i
    FILENAMEv2=$(basename "${i}" .gz )
    echo "Stripped file name is: " $FILENAMEv2
    DATEPART=$(echo $i | cut -d"." -f1 | cut -d"-" -f2-)
    echo "Datepart is: " $DATEPART
    FileDir="/datatst/unzip/$FILENAMEv2"
    echo "unzip directory file is: " $FileDir
    echo "unzipping file..."
    gunzip $i > -c $FileDir && trash $i
    echo "unzipping done..."
    echo "sed operation begin..."
    sed -i 's/^/'$DATEPART' /g' $FileDir
    echo "sed operation done..."
    echo "zip operation begin..."
    gzip $FileDir -c > /datatst/tomove/$i && trash $FileDir
    su hadoop fs -put /datatst/tomove/$i /user/hdfs/
done
I think you should try changing ...
gunzip $i > -c $FileDir && trash $i
... to ...
gunzip $i -c > $FileDir && trash $i
... I am not sure what trash is. Note the shell already waits for gunzip to finish before running the next line; the real problem is the misplaced redirection: > -c sends gunzip's output to a file literally named -c, so -c is never seen by gunzip as its write-to-stdout flag.
Updated as per comment from user*
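For reference, a quoting-hardened sketch of the unzip step (paths are from the question; trash is assumed to be whatever delete-to-trash utility the asker has installed):
for i in /datatst/toproc/*.gz; do
    base=$(basename "$i" .gz)
    out="/datatst/unzip/$base"
    # gunzip runs to completion before && fires; the shell is sequential by default
    gunzip -c "$i" > "$out" && trash "$i"
done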

Running diff and have it stop on a difference

I have a script running that is checking multiples directories and comparing them to expanded tarballs of the same directories elsewhere.
I am using diff -r -q, and I would like diff to stop as soon as it finds any difference in the recursive run, instead of continuing through more directories in the same run.
All help appreciated!
Thank you
@bazzargh I did try it as you suggested, like this:
for file in $(find $dir1 -type f); do
    if [[ $(diff -q $file ${file/#$dir1/$dir2}) ]]; then
        echo differs: $file > /tmp/$runid.tmp 2>&1; break
    else
        echo same: $file > /dev/null
    fi
done
But this only works for files that exist in both directories; if a file is missing, I get no information about it. Also, the directories I am working with contain over 300,000 files, so running a find and then a diff for each file seems like a lot of overhead.
I would like something like the following to work, with an elif statement that checks whether $runid.tmp contains data and breaks if it does. I added 2> after the first if statement so stderr is sent to the $runid.tmp file.
for file in $(find $dir1 -type f); do
    if [[ $(diff -q $file ${file/#$dir1/$dir2}) ]] 2> /tmp/$runid.tmp; then
        echo differs: $file > /tmp/$runid.tmp 2>&1; break
    elif [[ -s /tmp/$runid.tmp ]]; then
        echo differs: $file >> /tmp/$runid.tmp 2>&1; break
    else
        echo same: $file > /dev/null
    fi
done
Would this work?
You can do the loop over files with 'find' and break when they differ. eg for dirs foo, bar:
for file in $(find foo -type f); do
    if [[ $(diff -q $file ${file/#foo/bar}) ]]; then
        echo differs: $file; break
    else
        echo same: $file
    fi
done
NB this will not detect if 'bar' has directories that do not exist in 'foo'.
Edited to add: I just realised I overlooked the really obvious solution:
diff -rq foo bar | head -n1
It's not 'diff', but with 'awk' you can compare two files (or more) and then exit when they have a different line.
Try something like this (sorry, it's a little rough)
awk '{ h[$0] = ! h[$0] } END { for (k in h) if (h[k]) exit }' file1 file2
edit: to break out as soon as the two files differ on a line, you may have to do the comparison loop in awk itself.
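A hedged sketch of that idea (file names here are placeholders): read the first file into an array keyed by line number, then bail out at the first mismatching line of the second file. Note it won't notice extra trailing lines in file1:
awk 'NR==FNR { a[FNR]=$0; next }
     $0 != a[FNR] { print "differs at line " FNR; exit 1 }' file1 file2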
You can try the following:
#!/usr/bin/env bash
# Determine directories to compare
d1='./someDir1'
d2='./someDir2'
# Loop over the file lists and diff corresponding files
while IFS= read -r line; do
    # Split the 3-column `comm` output into indiv. variables.
    lineNoTabs=${line//$'\t'}
    numTabs=$(( ${#line} - ${#lineNoTabs} ))
    d1Only='' d2Only='' common=''
    case $numTabs in
        0)
            d1Only=$lineNoTabs
            ;;
        1)
            d2Only=$lineNoTabs
            ;;
        *)
            common=$lineNoTabs
            ;;
    esac
    # If a file exists in both directories, compare the two copies,
    # and exit if they differ; continue otherwise.
    if [[ -n $common ]]; then
        diff -q "$d1/$common" "$d2/$common" || {
            echo "EXITING: Diff found: '$common'" 1>&2;
            exit 1; }
    # Deal with files unique to either directory.
    elif [[ -n $d1Only ]]; then
        echo "File '$d1Only' only in '$d1'."
    else # implies: if [[ -n $d2Only ]]; then
        echo "File '$d2Only' only in '$d2'."
    fi
# Note: The `comm` command below is CASE-SENSITIVE, which means:
# - The input directories must be specified case-exact.
#   To change that, add `I` after the last `|` in _both_ `sed` commands.
# - The paths and names of the files diffed must match in case too.
#   To change that, insert `| tr '[:upper:]' '[:lower:]'` before _both_
#   `sort` commands.
done < <(comm \
    <(find "$d1" -type f | sed 's|'"$d1/"'||' | sort) \
    <(find "$d2" -type f | sed 's|'"$d2/"'||' | sort))
The approach is based on building a list of files (using find) containing relative paths (using sed to remove the root path) for each input directory, sorting the lists, and comparing them with comm, which produces 3-column, tab-separated output indicating which lines (and therefore files) are unique to the first list, which are unique to the second list, and which are common to both.
Thus, the values in the 3rd column can be diffed and action taken if they're not identical.
Also, the 1st and 2nd-column values can be used to take action based on unique files.
The somewhat complicated splitting of comm's 3-column output into individual variables is necessary because (see the illustration after this list):
read will treat multiple tabs in sequence as a single separator
comm outputs a variable number of tabs; e.g., if there's only a 1st-column value, no tab is output at all.
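To illustrate comm's column layout (a quick example: lines unique to the first list get no leading tab, lines unique to the second get one, common lines get two):
$ comm <(printf 'a\nb\n') <(printf 'b\nc\n')
a
		b
	c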
I got a solution to this thanks to @bazzargh.
I use this code in my script and now it works perfectly.
for file in $(find ${intfolder} -type f); do
    if [[ $(diff -q $file ${file/#${intfolder}/${EXPANDEDROOT}/${runid}/$(basename ${intfolder})}) ]] 2> ${resultfile}.tmp; then
        echo differs: $file > ${resultfile}.tmp 2>&1; break
    elif [[ -s ${resultfile}.tmp ]]; then
        echo differs: $file >> ${resultfile}.tmp 2>&1; break
    else
        echo same: $file > /dev/null
    fi
done
thanks!

Create new file but add number if filename already exists in bash

I found similar questions but not in Linux/Bash
I want my script to create a file with a given name (via user input) but add a number at the end if the filename already exists.
Example:
$ create somefile
Created "somefile.ext"
$ create somefile
Created "somefile-2.ext"
The following script can help you. You should not run several copies of the script at the same time, to avoid a race condition.
name=somefile
if [[ -e $name.ext || -L $name.ext ]] ; then
    i=0    # note: numbering starts at 0; start at 2 to match the question's somefile-2.ext
    while [[ -e $name-$i.ext || -L $name-$i.ext ]] ; do
        let i++
    done
    name=$name-$i
fi
touch -- "$name".ext
Easier (though note this just counts the files that currently match, so it can collide with an existing name if any numbered file was deleted):
touch file`ls file* | wc -l`.ext
You'll get:
$ ls file*
file0.ext file1.ext file2.ext file3.ext file4.ext file5.ext file6.ext
To avoid the race conditions:
name=some-file
n=
set -o noclobber
until
    file=$name${n:+-$n}.ext
    { command exec 3> "$file"; } 2> /dev/null
do
    ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
And in addition, you have the file open for writing on fd 3.
With bash-4.4+, you can make it a function like:
create() { # fd base [suffix [max]]
    local fd="$1" base="$2" suffix="${3-}" max="${4-}"
    local n= file
    local -    # ash-style local scoping of options in 4.4+
    set -o noclobber
    REPLY=
    until
        file=$base${n:+-$n}$suffix
        eval 'command exec '"$fd"'> "$file"' 2> /dev/null
    do
        ((n++))
        ((max > 0 && n > max)) && return 1
    done
    REPLY=$file
}
To be used for instance as:
create 3 somefile .ext || exit
printf 'File: "%s"\n' "$REPLY"
echo something >&3
exec 3>&- # close the file
The max value can be used to guard against infinite loops when the files can't be created for other reason than noclobber.
Note that noclobber only applies to the > operator, not >> nor <>.
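A quick illustration of that distinction:
$ set -o noclobber
$ echo hi > file.ext         # first write succeeds
$ echo hi > file.ext
bash: file.ext: cannot overwrite existing file
$ echo hi >> file.ext        # appending is still allowed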
Remaining race condition
Actually, noclobber does not remove the race condition in all cases. It only prevents clobbering regular files (not other types of files, so that cmd > /dev/null for instance doesn't fail) and has a race condition itself in most shells.
The shell first does a stat(2) on the file to check if it's a regular file or not (fifo, directory, device...). Only if the file doesn't exist (yet) or is a regular file does 3> "$file" use the O_EXCL flag to guarantee not clobbering the file.
So if there's a fifo or device file by that name, it will be used (provided it can be open in write-only), and a regular file may be clobbered if it gets created as a replacement for a fifo/device/directory... in between that stat(2) and open(2) without O_EXCL!
Changing the
{ command exec 3> "$file"; } 2> /dev/null
to
[ ! -e "$file" ] && { command exec 3> "$file"; } 2> /dev/null
Would avoid using an already existing non-regular file, but not address the race condition.
Now, that's only really a concern in the face of a malicious adversary that would want to make you overwrite an arbitrary file on the file system. It does remove the race condition in the normal case of two instances of the same script running at the same time. So, in that, it's better than approaches that only check for file existence beforehand with [ -e "$file" ].
For a working version without race condition at all, you could use the zsh shell instead of bash which has a raw interface to open() as the sysopen builtin in the zsh/system module:
zmodload zsh/system
name=some-file
n=
until
    file=$name${n:+-$n}.ext
    sysopen -w -o excl -u 3 -- "$file" 2> /dev/null
do
    ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
Try something like this
name=somefile
path=$(dirname "$name")
filename=$(basename "$name")
extension="${filename##*.}"
filename="${filename%.*}"
if [[ -e $path/$filename.$extension ]] ; then
    i=2
    while [[ -e $path/$filename-$i.$extension ]] ; do
        let i++
    done
    filename=$filename-$i
fi
target=$path/$filename.$extension
Use touch or whatever you want instead of echo:
echo file$((`ls file* | sed -n 's/file\([0-9]*\)/\1/p' | sort -rh | head -n 1`+1))
Parts of expression explained:
list files by pattern: ls file*
take only number part in each line: sed -n 's/file\([0-9]*\)/\1/p'
apply reverse human sort: sort -rh
take only first line (i.e. max value): head -n 1
combine all in pipe and increment (full expression above)
Try something like this (untested, but you get the idea):
filename=$1
# If file doesn't exist, create it
if [[ ! -f $filename ]]; then
    touch $filename
    echo "Created \"$filename\""
    exit 0
fi
# If file already exists, find a similar filename that is not yet taken
digit=1
while true; do
    temp_name=$filename-$digit
    if [[ ! -f $temp_name ]]; then
        touch $temp_name
        echo "Created \"$temp_name\""
        exit 0
    fi
    digit=$(($digit + 1))
done
Depending on what you're doing, replace the calls to touch with whatever code is needed to create the files that you are working with.
This is a much better method I've used for creating directories incrementally. It could be adjusted for filenames too.
LAST_SOLUTION=$(echo $(ls -d SOLUTION_[[:digit:]][[:digit:]][[:digit:]][[:digit:]] 2> /dev/null) | awk '{ print $(NF) }')
if [ -n "$LAST_SOLUTION" ] ; then
    mkdir SOLUTION_$(printf "%04d\n" $(expr ${LAST_SOLUTION: -4} + 1))
else
    mkdir SOLUTION_0001
fi
A simple repackaging of choroba's answer as a generalized function:
autoincr() {
    f="$1"
    ext=""
    # Extract the file extension (if any), with preceding '.'
    [[ "$f" == *.* ]] && ext=".${f##*.}"
    if [[ -e "$f" ]] ; then
        i=1
        f="${f%.*}"
        while [[ -e "${f}_${i}${ext}" ]]; do
            let i++
        done
        f="${f}_${i}${ext}"
    fi
    echo "$f"
}
touch "$(autoincr "somefile.ext")"
Without looping, and without using a regex or shell expr:
last=$(ls $1* | tail -n1)
last_wo_ext=$(basename "$last" .ext)
n=$(echo $last_wo_ext | rev | cut -d - -f 1 | rev)
if [ x$n = x ]; then
    n=2
else
    n=$((n + 1))
fi
echo $1-$n.ext
Simpler still, ignoring extensions and the "-1" special case:
n=$(ls $1* | tail -n1 | rev | cut -d - -f 1 | rev)
n=$((n + 1))
echo $1-$n.ext

Smarter Vim recovery?

When a previous Vim session has crashed, you are greeted with the "Swap file ... already exists!" message for each and every file that was open in the previous session.
Can you make this Vim recovery prompt smarter? (Without switching off recovery!) Specifically, I'm thinking of:
If the swapped version does not contain unsaved changes and the editing process is no longer running, can you make Vim automatically delete the swap file?
Can you automate the suggested process of saving the recovered file under a new name, merging it with the file on disk and then deleting the old swap file, so that minimal interaction is required? Especially when the swap version and the disk version are the same, everything should be automatic.
I discovered the SwapExists autocommand but I don't know if it can help with these tasks.
I have vim store my swap files in a single local directory, by having this in my .vimrc:
set directory=~/.vim/swap,.
Among other benefits, this makes the swap files easy to find all at once.
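A documented refinement, if name collisions are a concern: ending the path in // makes Vim build each swap file's name from the file's full path, so same-named files in different directories don't clash:
set directory=~/.vim/swap//,.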
Now when my laptop loses power or whatever and I start back up with a bunch of swap files lying around, I just run my cleanswap script:
TMPDIR=$(mktemp -d) || exit 1
RECTXT="$TMPDIR/vim.recovery.$USER.txt"
RECFN="$TMPDIR/vim.recovery.$USER.fn"
trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15
for q in ~/.vim/swap/.*sw? ~/.vim/swap/*; do
    [[ -f $q ]] || continue
    rm -f "$RECTXT" "$RECFN"
    vim -X -r "$q" \
        -c "w! $RECTXT" \
        -c "let fn=expand('%')" \
        -c "new $RECFN" \
        -c "exec setline( 1, fn )" \
        -c w\! \
        -c "qa"
    if [[ ! -f $RECFN ]]; then
        echo "nothing to recover from $q"
        rm -f "$q"
        continue
    fi
    CRNT="$(cat $RECFN)"
    if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then
        echo "removing redundant $q"
        echo " for $CRNT"
        rm -f "$q"
    else
        echo $q contains changes
        vim -n -d "$CRNT" "$RECTXT"
        rm -i "$q" || exit
    fi
done
This will remove any swap files that are up-to-date with the real files. Any that don't match are brought up in a vimdiff window so I can merge in my unsaved changes.
--Chouser
I just discovered this:
http://vimdoc.sourceforge.net/htmldoc/diff.html#:DiffOrig
I copied and pasted the DiffOrig command into my .vimrc file and it works like a charm. This greatly eases the recovery of swap files. I have no idea why it isn't included by default in Vim.
Here's the command for those who are in a hurry:
command DiffOrig vert new | set bt=nofile | r # | 0d_ | diffthis
\ | wincmd p | diffthis
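Usage is simple (a sketch): recover the file at the swap prompt, then run the command from within Vim:
$ vim file.txt        (choose Recover at the swap-file prompt)
:DiffOrig             (diffs the recovered buffer against the file on disk)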
The accepted answer is busted for a very important use case: say you create a new buffer and type for two hours without ever saving, then your laptop crashes. If you run the suggested script, it will delete your one and only record, the .swp swap file. I'm not sure what the right fix is, but it looks like the diff command ends up comparing the same file to itself in this case. The edited version below checks for this case and gives the user a chance to save the file somewhere.
#!/bin/bash
SWAP_FILE_DIR=~/temp/vim_swp
IFS=$'\n'
TMPDIR=$(mktemp -d) || exit 1
RECTXT="$TMPDIR/vim.recovery.$USER.txt"
RECFN="$TMPDIR/vim.recovery.$USER.fn"
trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15
for q in $SWAP_FILE_DIR/.*sw? $SWAP_FILE_DIR/*; do
    echo $q
    [[ -f $q ]] || continue
    rm -f "$RECTXT" "$RECFN"
    vim -X -r "$q" \
        -c "w! $RECTXT" \
        -c "let fn=expand('%')" \
        -c "new $RECFN" \
        -c "exec setline( 1, fn )" \
        -c w\! \
        -c "qa"
    if [[ ! -f $RECFN ]]; then
        echo "nothing to recover from $q"
        rm -f "$q"
        continue
    fi
    CRNT="$(cat $RECFN)"
    if [ "$CRNT" = "$RECTXT" ]; then
        echo "Can't find original file. Press enter to open vim so you can save the file. The swap file will be deleted afterward!"
        read
        vim "$CRNT"
        rm -f "$q"
    elif diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then
        echo "Removing redundant $q"
        echo " for $CRNT"
        rm -f "$q"
    else
        echo $q contains changes, or there may be no original saved file
        vim -n -d "$CRNT" "$RECTXT"
        rm -i "$q" || exit
    fi
done
Great tip, DiffOrig is perfect. Here is a bash script I use to run it on each swap file under the current directory:
#!/bin/bash
swap_files=`find . -name "*.swp"`
for s in $swap_files ; do
    orig_file=`echo $s | perl -pe 's!/\.([^/]*).swp$!/$1!' `
    echo "Editing $orig_file"
    sleep 1
    vim -r $orig_file -c "DiffOrig"
    echo -n " Ok to delete swap file? [y/n] "
    read resp
    if [ "$resp" == "y" ] ; then
        echo " Deleting $s"
        rm $s
    fi
done
It could probably use some more error checking and quoting, but it has worked so far.
I prefer not to set my Vim swap directory in the .vimrc. Here's a modification of chouser's script that copies the swap files to the swap path on demand, checks for duplicates, and then reconciles them. This was written in a rush; make sure to evaluate it before putting it to practical use.
#!/bin/bash
if [[ "$1" == "-h" ]] || [[ "$1" == "--help" ]]; then
    echo "Moves VIM swap files under <base-path> to ~/.vim/swap and reconciles differences"
    echo "usage: $0 <base-path>"
    exit 0
fi
if [ -z "$1" ] || [ ! -d "$1" ]; then
    echo "directory path not provided or invalid, see $0 -h"
    exit 1
fi
echo looking for duplicate file names in hierarchy
swaps="$(find $1 -name '.*.swp' | while read file; do echo $(basename $file); done | sort | uniq -c | egrep -v "^[[:space:]]*1")"
if [ -z "$swaps" ]; then
    echo no duplicates found
    files=$(find $1 -name '.*.swp')
    if [ ! -d ~/.vim/swap ]; then mkdir ~/.vim/swap; fi
    echo "moving files to swap space ~/.vim/swap"
    mv $files ~/.vim/swap
    echo "executing reconciliation"
    TMPDIR=$(mktemp -d) || exit 1
    RECTXT="$TMPDIR/vim.recovery.$USER.txt"
    RECFN="$TMPDIR/vim.recovery.$USER.fn"
    trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15
    for q in ~/.vim/swap/.*sw? ~/.vim/swap/*; do
        [[ -f $q ]] || continue
        rm -f "$RECTXT" "$RECFN"
        vim -X -r "$q" \
            -c "w! $RECTXT" \
            -c "let fn=expand('%')" \
            -c "new $RECFN" \
            -c "exec setline( 1, fn )" \
            -c w\! \
            -c "qa"
        if [[ ! -f $RECFN ]]; then
            echo "nothing to recover from $q"
            rm -f "$q"
            continue
        fi
        CRNT="$(cat $RECFN)"
        if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then
            echo "removing redundant $q"
            echo " for $CRNT"
            rm -f "$q"
        else
            echo $q contains changes
            vim -n -d "$CRNT" "$RECTXT"
            rm -i "$q" || exit
        fi
    done
else
    echo duplicates found, please address their swap reconciliation manually:
    find $1 -name '.*.swp' | while read file; do echo $(basename $file); done | sort | uniq -c | egrep '^[[:space:]]*[2-9][0-9]*.*'
fi
I have this in my .bashrc file. I would like to give appropriate credit for part of this code, but I forgot where I got it from.
mswpclean(){
    for i in `find -L -name '*swp'`
    do
        swpf=$i
        aux=${swpf//"/."/"/"}
        orif=${aux//.swp/}
        bakf=${aux//.swp/.sbak}
        vim -r $swpf -c ":wq! $bakf" && rm $swpf
        if cmp "$bakf" "$orif" -s
        then rm $bakf && echo "Swap file was not different: Deleted" $swpf
        else vimdiff $bakf $orif
        fi
    done
    for i in `find -L -name '*sbak'`
    do
        bakf=$i
        orif=${bakf//.sbak/}
        if test $orif -nt $bakf
        then rm $bakf && echo "Backup file deleted:" $bakf
        else echo "Backup file kept as:" $bakf
        fi
    done
}
I just run this from the root of my project and, IF the file is different, it opens vimdiff. Then the last file to be saved is kept. To make it perfect, I would just need to replace the last else:
else echo "Backup file kept as:" $bakf
by something like
else vim $bakf -c ":wq! $orif" && echo "Backup file kept and saved as:" $orif
but I didn't get time to properly test it.
Hope it helps.
find ./ -type f -name ".*sw[klmnop]" -delete
Credit: @Shwaydogg
https://superuser.com/questions/480367/whats-the-easiest-way-to-delete-vim-swapfiles-ive-already-recovered-from
Navigate to the directory first.
