How to configure vim to save extra copies - linux

Is there a way to configure vim so that, instead of creating a temporary .swp file, every time a save is made it automatically creates a copy of the previous save, named with the date and time of the save? For example, while editing name_of_the_file.txt, the backup would be name_of_the_file-05-17-2017-11:20.txt in a folder such as ~/.vim-bckp/name_of_the_file/. Or, better, is there a custom save command that does the above while avoiding flooding the HDD with minor changes?

This script will do the trick. You could name it vimBackup or anything else, as long as it doesn't clash with an existing command name. Then copy it into one of the directories in your $PATH, or append a custom script folder containing it to $PATH, so you can run it without typing the full path.
#!/bin/bash
[[ -z $1 ]] && echo "Command expects a file path" && exit 1
[[ $# -ne 1 ]] && echo "Command expects exactly one parameter" && exit 1
[[ -w $1 ]] && cp "$1" "$1_$(date +%Y-%m-%d_%H:%M)"   # if the file exists and is writable, make a copy
vim "$1"
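Assuming you saved it as vimBackup somewhere on your $PATH (e.g. ~/bin, just an example location) and made it executable, you would then edit a file through it:
chmod +x ~/bin/vimBackup              # ~/bin is only an example location on $PATH
vimBackup name_of_the_file.txt        # copies the previous version aside, then opens the file in vim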

I've written the writebackup plugin for that. That turned into a full plugin suite:
The writebackupVersionControl plugin complements this with additional commands and enhances the :WriteBackup command with more checks, but is not required.
The writebackupToAdjacentDir plugin implements a WriteBackup-dynamic-backupdir configuration that puts the backup files in an adjacent backup directory if one exists. This helps where the backups cannot be placed into the same directory.
The writebackupAutomator plugin automatically writes a backup on a day's first write of a file that was backed up in the past, but not yet today. It can be your safety net when you forget to make a backup.
So, once I have made a backup, I don't need to think about explicitly triggering one. I can view diffs, and even restore previous versions. Of course, if you can use a "full" version control system (like Git or Subversion), use that. But this one is nice for all the little configuration changes distributed throughout the file system.

Related

Recursive Text Substitution and File Extension Rename

I am using an application that creates a text file on a Linux server. I then have the ability to execute a shell script (BASH 3.2.57) in which I need to convert the text file from Unix line endings to DOS and also change the extension of the file from .txt to .log.
I currently have a sed-based command to do this. This command is rewritten by the application at run time to point to the specific folder and file name; wherever you see ABC in these examples, the three capital letters are a variable that can be any three letters.
pushd /rootfolder/parentfolder/ABC/
sed 's/$/\r/' prABC.txt > prABC.log
popd
The problem with this is that if a user runs the application for 2 different groups, say ABC and DEF, at nearly the same time, the script will get overwritten with the DEF variables before ABC has had a chance to fire off and do its thing with the file. Additionally, the .txt is left in the folder regardless, and I would like that to be removed.
A friend of mine came up with the following code that seems to work, if it's determined to be our best solution, but I would think and hope there is a cleaner, more dynamic way to do this. Also, this current method requires that when my users decide to add a GHI directory and file, I have to update the code, which I can program my application to do for me, but I don't want this script to have to be rewritten every time the application wants to use it.
pushd /rootfolder/parentfolder/ABC
if [[ -f prABC.txt ]]
then
    sed 's/$/\r/' prABC.txt > prABC.log
    rm prABC.txt
fi
popd
pushd /rootfolder/parentfolder/DEF
if [[ -f prABC.txt ]]
then
    sed 's/$/\r/' prABC.txt > prABC.log
    rm prABC.txt
fi
popd
I would like to call this script at anytime from my application and it find any file named pr*.txt below the /rootfolder/parentfolder/ directory (if that has to include the parentfolder in its search that won't be a problem) and convert the line endings from LF to CRLF and change the extension of the file from .txt to .log.
I've done a ton of searching and have found near-solutions, but not exactly what I need, and I want this to be as safe as possible (there are known issues with using find with a for loop). I don't know what utilities are installed on this build, so I would like to keep it as basic and supportable as possible. Thanks in advance :)
You should almost never need pushd and popd in scripts. In fact, you rarely need cd, either.
#!/bin/bash
for d in /rootfolder/parentfolder/ABC /rootfolder/parentfolder/DEF
do
    if [[ -f "$d/prABC.txt" ]]
    then
        sed 's/$/\r/' "$d/prABC.txt" > "$d/prABC.log" &&
        rm "$d/prABC.txt"
    fi
done
Recall that a && b is shorthand for
if a; then
b
fi
In other words, if sed fails (because the source file can't be read, or the destination can't be written) we don't rm the source file. There should be an error message already so we don't add another one.
Not only is this more succinct, it is also easier to change if you decide that the old file should be renamed instead of removed, or you want to filter out all lines which contain "beef" in the sed script. Generally you should avoid repeated code; see also the DRY principle on Wikipedia.
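If you would rather not hard-code each group directory at all, the same idea can be driven by find. This is only a sketch, assuming the files you care about really match pr*.txt somewhere below /rootfolder/parentfolder/ and that you are running bash:
#!/bin/bash
# convert every pr*.txt under /rootfolder/parentfolder/ to CRLF and rename it to .log
find /rootfolder/parentfolder/ -type f -name 'pr*.txt' -print0 |
while IFS= read -r -d '' f
do
    sed 's/$/\r/' "$f" > "${f%.txt}.log" &&   # only remove the source if sed succeeded
    rm "$f"
done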
Something is seriously wrong somewhere if you require DOS line endings in your files on Unix.

Linux console equivalent to gui copy/paste file/directory scenario

How can I simply recreate the copy/paste functionality found in GUI environments?
My typical scenario for copying file/directory in Linux console is:
cp source_path target_path
Sometimes the paths are relative, sometimes absolute, but I need to provide both of them. It works, but there are situations where I would like to recreate the GUI scenario, which is:
1. go to source directory
2. copy file/directory
3. go to target directory
4. paste file/directory
I imagine something like
cd source_directory_path
copy_to_stash source_name
cd target_directory_path
paste_from_stash [optional_new_target_name]
I know that there is an xclip app, but its documentation says that it copies the contents of a file, not a file handle. Also, I can use the $OLDPWD variable and expand it when I copy a file, but that is cumbersome rather than a real solution.
Is there some simple, general, keyboard only, not awkward to use equivalent?
I've also asked the same question on Superuser, and the answer that I received is good enough for me.
In short: two small shell functions and a temporary variable to hold the intermediate value.
Below is the code and a link to the original answer.
#!/bin/bash
# source me with one of:
#   source [file]
#   . [file]

# Initialize
sa_file=

sa(){
    # Function to save ("copy") a file from the current PWD
    if [[ -e "$PWD/$1" ]]; then
        sa_file=$PWD/$1
        echo "Saved for later: $sa_file"
    else
        echo "Error: file $PWD/$1 does not exist"
    fi
}

pa(){
    # Paste the saved file if it exists, to $1 if given
    if [[ -e "$sa_file" ]]; then
        if [[ $1 ]]; then
            cp -v "$sa_file" "$1"
        else
            cp -v "$sa_file" .
        fi
    else
        echo "Error: file $sa_file does not exist, could not copy"
    fi
}
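Assuming the functions above are saved in a file such as ~/.copy-stash.sh (a hypothetical name) and sourced from your shell, a session looks like:
. ~/.copy-stash.sh            # or source it once from ~/.bashrc
cd source_directory_path
sa source_name                # "copy": remember the file's absolute path
cd target_directory_path
pa                            # "paste": copy it here under its original name
pa optional_new_target_name   # or paste it under a new name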
https://superuser.com/a/1405953/614464
The way I see it your only option is to write a script to do all of those steps. You could easily implement the clipboard functionality by copying the file to the /tmp directory before copying again from it.
This should work as intended.
Usage: script [from] [to]
#!/bin/bash
filename=$(basename "$1")
tmpfile=/tmp/$filename.$RANDOM
cp "$1" "$tmpfile"                # "copy": stash the source file in /tmp
cp "$tmpfile" "$2"                # "paste": copy the stashed file to the target
One option: copy and paste the filename with the mouse, using the copy/paste feature of your terminal emulator (e.g. Konsole or GNOME Terminal), but this 1) requires a GUI, since the terminal emulator runs in one; 2) well, requires a mouse.
Another option: utilize shell tab completion. You still need to type the filename, but not all of it.
Third option, and this is closer to how you work in a GUI file explorer: use a TUI-based file explorer, e.g. the dual-pane style Midnight Commander. You can use the arrow keys (if you turn on the Lynx-like motion setting, which is highly recommended) to quickly navigate the directory tree. Then select files using the Insert, +, -, or * keys, and copy/move files from one pane to the other. It's very convenient. In fact, half of the time I spend in the CLI, I spend in MC.

Apply bash variables for directories (recursive)

I have a project that requires me to keep a lot of bash files with installation/maintenance/whatever scripts, and most of them need to know where other folders are. Right now that is all done with relative paths, but that forces me to keep the folder structure, which might not be the best idea in the long run.
So, as an example, I have this file (script.sh):
THINGS_DIR=../../things
myprogram "$THINGS_DIR"
But now I would like to have 2 files, one with global variables from these directories (let's call it conf.sh):
THINGS_DIR=./things/
OTHERS_DIR=./things/others/
and, in script.sh, I would somehow use those variables.
The best way I could find is to keep that conf.sh in a fixed place and have all the other scripts source it before they start, but I was hoping for a better solution.
EDIT
I forgot to say that this is in a Git repository, which is a fair assumption to rely on going forward. That being said, and because I wanted to keep this as self-contained as possible, I ended up using this in every script that needs those variables:
. $(git rev-parse --show-toplevel)/my_conf_file.conf
This command sources my_conf_file.conf located in the Git repository root. It's not ideal (nor completely safe), but it does the trick with minimal configuration.
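Put together, a script that relies on the shared configuration might look like this sketch (my_conf_file.conf, THINGS_DIR and myprogram are the names used above):
#!/bin/bash
# load the shared directory definitions from the repository root
. "$(git rev-parse --show-toplevel)/my_conf_file.conf"
myprogram "$THINGS_DIR"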
A common idiom is to have a config file in one (or more) of several directories and have your script check each one of them in order. For instance you could search:
/etc/script.cfg # global
~/.script.cfg # user-specific, hidden with leading dot
You might also search the environment as a last resort. This search strategy is very easy to implement in bash. All you have to do is:
[[ -e /etc/script.cfg ]] && . /etc/script.cfg
[[ -e ~/.script.cfg ]] && . ~/.script.cfg
echo "THINGS_DIR=$THINGS_DIR"
echo "OTHERS_DIR=$OTHERS_DIR"
It sources the two config files if they exist. If the user copy exists it overrides the global settings. If neither exists, the script will naturally use any environment variables. This way the user could override the config settings like so:
THINGS_DIR=/overridden/things/dir script.sh

Avoid having subversion modify Linux file permissions

All of my code base is being stored in a subversion repository that I disperse amongst my load balanced Apache web servers, making it easy to check out code, run updates, and seamlessly get my code in development onto production.
One of the inconveniences that I'm sure there is an easy workaround for (other than executing a script upon every checkout) is getting the Linux permissions set (back) on files that are updated or checked out with Subversion. Our security team requires that the owner and group match those set in the httpd.conf files, that all directories within the DocumentRoot receive permissions of 700, that all non-executable files (e.g. *.php, *.smarty, *.png) receive permissions of 600, and that all executable files (e.g. *.sh, *.pl, *.py) receive 700. All files must have owner and group set to apache:apache in order to be read by the httpd service, since only the file owner is granted access by those permissions.
Every time I run an svn update or svn co, even though the files may not be created anew (i.e. svn update), I'm finding that the ownership of the files is getting set to the account that is running the svn commands, and oftentimes the file permissions are getting set to something other than what they were originally (e.g. a .htm file before an update is 600, but after an svn update it gets set to 755, or even 777).
What is the easiest way to bypass subversion's attempts at updating the file permissions and ownership? Is there something that can be done within the svn client, or on the Linux server to retain the original file permissions? I'm running RHEL5 (and now 6 on a few select instances).
The owner of the files will be set to the user that is running the svn command because of how svn implements the underlying update command: it removes and replaces files that are updated, which causes the ownership to 'change' to the relevant user. The only way to prevent this is to actually perform the svn up as the user that the files are supposed to be owned by. If you want to ensure that they're owned by a particular user, then run the command as that user.
With regards to the permissions, svn is only obeying the umask setting of the account (it's probably something like 066). To ensure that the files are inaccessible to group and other accounts, issue 'umask 077' before performing the svn up; this ensures that the files are only accessible to the user account issuing the command.
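For example, a minimal sketch (the working-copy path is a placeholder):
# run the update in a subshell with a restrictive umask,
# so files written by svn come out as 600 and directories as 700
( umask 077 && svn up /var/www/mysite )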
I'd pay attention to the security issue of deploying the subversion data into the web server unless the .svn directories are secured.
You can store properties on a file in Subversion (see http://svnbook.red-bean.com/en/1.0/ch07s02.html). You're particularly interested in the svn:executable property, which will make sure that the executable permission is stored.
There's no general way to do this for all permissions, though. Subversion doesn't store ownership either - it assumes that, if you check something out, you own it.
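For instance (deploy.sh is only a hypothetical file name):
svn propset svn:executable on deploy.sh
svn commit -m "mark deploy.sh as executable"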
You can solve this. Use setgid.
You have apache:apache running the server.
Set group permissions on all files and directories. The server will read files via its group.
Set the setgid bit on all directories - only on directories: setting it on files has a different function.
Example ('2' is the setgid bit):
chmod 2750
Make apache the group of all directories.
What happens is:
New files and directories created by any account will be owned by the apache group.
New directories will inherit the setgid bit and thus preserve the structure without any effort.
See https://en.wikipedia.org/wiki/Setuid#setuid_and_setgid_on_directories
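A sketch of the setup, assuming the DocumentRoot is /var/www/html (adjust to your layout):
chgrp -R apache /var/www/html                        # make apache the group everywhere
find /var/www/html -type d -exec chmod 2750 {} +     # rwxr-s--- plus setgid on directories
find /var/www/html -type f -exec chmod 640 {} +      # rw-r----- on regular files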
One thing you may consider doing is installing the svn binary outside your path, and putting a replacement script (at, say, /usr/bin/svn) in the path. The script would look something like this:
#!/bin/sh
# set umask, whatever else you need to do before svn commands
/opt/svn/svn "$@"   # pass all arguments to the actual svn binary, stored outside the PATH
# run chmod, whatever else you need to do after svn commands
A definite downside is that you'll probably have to do some amount of parsing of the arguments passed to svn, i.e. so you can pass the same path to your chmod, not run chmod for most svn commands, etc.
There are also probably some security considerations here. I don't know what your deployment environment is like, but you should probably investigate that a bit further.
I wrote a small script that stores permissions and owner, executes your SVN command and restores permissions and owner.
It is probably not hackerproof, but for private use it does the job.
svnupdate.sh:
#!/usr/bin/env bash
if [ $# -eq 0 ]; then
    echo "Syntax: $0 <filename>"
    exit 1
fi
IGNORENEXT=0
COMMANDS=''
FILES=''
for FILENAME in "$@"
do
    if [[ $IGNORENEXT -gt 0 ]]; then
        IGNORENEXT=0
    else
        case $FILENAME in
            # global options, skip the following argument if needed
            --username|--password|--config-dir|--config-option)
                IGNORENEXT=1
                ;;
            --no-auth-cache|--non-interactive|--trust-server-cert)
                ;;
            # update options, skip the following argument if needed
            -r|--revision|--depth|--set-depth|--diff3-cmd|--changelist|--editor-cmd|--accept)
                IGNORENEXT=1
                ;;
            -N|--non-recursive|-q|--quiet|--force|--ignore-externals)
                ;;
            *)
                if [ -f "$FILENAME" ]; then
                    FILES="$FILES $FILENAME"
                    # remember the current permissions and ownership
                    OLDPERM=$(stat -c%a "$FILENAME")
                    OLDOWNER=$(stat -c%U "$FILENAME")
                    OLDGROUP=$(stat -c%G "$FILENAME")
                    FILECOMMANDS="chmod $OLDPERM $FILENAME; chown $OLDOWNER:$OLDGROUP $FILENAME;"
                    COMMANDS="$COMMANDS $FILECOMMANDS"
                    echo "COMMANDS: $FILECOMMANDS"
                else
                    echo "File not found: $FILENAME"
                fi
                ;;
        esac
    fi
done
OUTPUT=$(svn update "$@")
STATUS=$?
echo "$OUTPUT"
if [[ ( $STATUS -eq 0 ) && ( $OUTPUT != Skipped* ) && ( $OUTPUT != "At revision"* ) ]]; then
    # restore the saved permissions and ownership
    bash -c "$COMMANDS"
    ls -l $FILES
fi
I also had a similar problem.
I found a cool script: asvn (Archive SVN).
You can download it here:
https://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/asvn
Description:
Archive SVN (asvn) will allow the recording of file types not
normally handled by svn. Currently this includes devices,
symlinks and file ownership/permissions.
Every file and directory has a 'file:permissions' property set and
every directory has a 'dir:devices' and 'dir:symlinks' for
recording the extra information.
Run this script instead of svn with the normal svn arguments.
This blog entry (which helped me find the script) http://jon.netdork.net/2010/06/28/configuration-management-part-ii-setting-up-svn/ shows a simple usage.
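In other words, you call asvn exactly where you would call svn (the repository URL below is only a placeholder):
asvn checkout http://svn.example.com/repos/mysite/trunk mysite
asvn update
asvn commit -m "record ownership/permission changes as properties"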

Need to monitor directory change, and perform action

First of all: I'm not a programmer, nor a Linux guru; I just have to work with Linux, Oracle, and shell scripts.
My current task is to monitor a table in Oracle (tool: sqlplus), and if it contains a certain row, then watch a Linux directory for a growing tmp file and log its attributes (e.g. ls -l) every 5 seconds.
The most important part is: this tmp file will be deleted if the above record is deleted from the oracle table, and I need the last contents of this tmp file.
I can't control the Oracle data, just got query rights.
The available tools are: bash, awk, sed, some old version of perl, ruby (not 1.9*), and python (2.5). I don't have install rights, so most outside libraries are not accessible. I know I can run some libraries from my $HOME, but I don't have an internet connection on that machine, so I can't download any libraries.
Inotify is not available (older kernel).
Any idea where to start/how to do it? Thanks in advance.
How about creating a hard link in another directory? Then, when the file "disappears" from the original location, the hard link will still give you access to the content.
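A sketch, with placeholder paths (note that a hard link only works within the same filesystem):
mkdir -p ~/stash
ln /some/dir/growing.tmp ~/stash/growing.tmp   # a second name for the same inode
# even after the original name is deleted, the data stays readable:
cat ~/stash/growing.tmp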
This is ugly and naive... but...
#!/bin/bash

WASTHERE=0
MONITORING=/tmp/whatever.dat
LASTBACKUP=/tmp/mybackup.dat
LOGFILE=/tmp/mylog.log

# Just create an empty file to start with
touch "$LASTBACKUP"

while true
do
    if [[ ! -e "$MONITORING" ]]; then
        if [[ $WASTHERE -ne 0 ]]; then
            echo "File is gone! Do something with $LASTBACKUP"
            WASTHERE=0
        fi
    else
        WASTHERE=1
        ls -l "$MONITORING" >> "$LOGFILE"
        cp "$MONITORING" "$LASTBACKUP"
    fi
    sleep 5
done
The unfortunate part about this is that if anything happens to the file being 'monitored' while the script is sleeping (content is written to it, for example) and the file is then deleted before the script wakes up, the newly written content will not be in the 'backup.'
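Assuming the script above is saved as monitor.sh (a hypothetical name), it can be left running in the background so it survives a logout:
chmod +x monitor.sh
nohup ./monitor.sh > /dev/null 2>&1 &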

Resources