How does one create a wrapper around a program? - vim

I want to learn how to create a wrapper around a program in Linux. How does one do this? A tutorial, a reference web page/link, or an example will do. To clarify what I want to learn, I will explain with an example.
I use vim for editing text files, and I use RCS as my simple revision control system. RCS lets you check files in and out. I would like to create a wrapper program named vir which, when I type in the shell:
$ vir temp.txt
will load the file temp.txt into RCS with ci -u temp.txt and then let me edit the file using vim.
When I quit and come back later, it will need to check the file out first, using co -l temp.txt, let me edit the file as one normally does with vim, and then, when I save and exit, check the file back in using ci -u temp.txt; as part of that I should be able to add a version control comment.
Basically, all I want to be doing on the command line is:
$ vir temp.txt
as one would with vim. And the wrapper should take care of the version control for me.
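In other words, a minimal sketch of such a vir wrapper could look like the following (the RCS flags and the first-time registration check are only illustrative, not a worked-out solution):
#!/bin/sh
# vir: edit a file with vim, letting RCS handle check-out/check-in around the edit
file="$1"
# first use: register the file with RCS if no ,v archive exists yet
if [ ! -f "RCS/$file,v" ] && [ ! -f "$file,v" ]; then
    ci -u -t-"initial check-in" "$file"
fi
co -l "$file"    # check out with a lock so the working copy is writable
vim "$file"      # edit as usual
ci -u "$file"    # check back in; ci prompts for a log message and leaves a read-only copy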

Take a look at rcsvers.vim, a vim plugin for automatically saving versions in RCS; you could modify that. There are also other RCS plugins for vim at vim.org

I have a wrapper that enhances the ping command (using zsh); maybe it can help you:
# ping command wrapper - Last Change: Oct 27 2019 18:47
# source: https://www.cyberciti.biz/tips/unix-linux-bash-shell-script-wrapper-examples.html
ping(){
    # Name: ping() wrapper
    # Arg: (url|domain|ip)
    # Purpose: Send ping request to domain by removing urls, protocol, username:pass using system /usr/bin/ping
    local array=( "$@" )          # get all args in an array
    local host=${array[-1]}       # get the last arg
    local args=${array[1,-2]}     # get all args before the last arg in $@
    #local _ping="/usr/bin/ping"
    local _ping="/bin/ping"
    local c=$(_getdomainnameonly "$host")
    [ "$host" != "$c" ] && echo "Sending ICMP ECHO_REQUEST to \"$c\"..."
    # pass args and host
    # $_ping $args $c
    # default args for ping
    $_ping -n -c 2 -i 1 -W1 $c
}
_getdomainnameonly(){
    # Name: _getdomainnameonly
    # Arg: Url/domain/ip
    # Returns: Only domain name
    # Purpose: Get domain name and remove protocol part, username:password and other parts from url
    # get url
    local h="$1"
    # upper to lowercase
    local f="${h:l}"
    # remove protocol part of hostname
    f="${f#http://}"
    f="${f#https://}"
    f="${f#ftp://}"
    f="${f#scp://}"
    f="${f#sftp://}"
    # Remove username and/or username:password part of hostname
    f="${f#*:*@}"
    f="${f#*@}"
    # remove all /foo/xyz.html*
    f=${f%%/*}
    # show domain name only
    echo "$f"
}
What it does is shadow the system ping with a function also called "ping"; because shell functions take precedence over commands found on the PATH, the function is what gets run. Then, inside the function, an internal variable points to the real ping command:
local _ping="/bin/ping"
You can also see that the arguments are stored in an array.
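You can check which one will actually run, for example with the shell's type and command builtins:
type ping                       # reports "ping is a shell function" once the wrapper is defined
command ping -c 1 example.com   # bypasses the function and runs the real binary directly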

Related

Find patterns and rename multiple files

I have a list of machine names and hostnames, for example:
# cat /tmp/machine_list.txt
[one]apple machine #1 myserver1
[two]apple machine #2 myserver2
[three]apple machine #3 myserver3
There is also one directory per server, and each directory contains a tar file and a file with the hostname written in it.
# ls /tmp/sos1/*
sosreport1.tar.gz
hostname_map.txt
# cat /tmp/sos1/hostname_map.txt
myserver1
# ls /tmp/sos2/*
sosreport2.tar.gz
hostname_map.txt
# cat /tmp/sos2/hostname_map.txt
myserver2
# ls /tmp/sos3/*
sosreport3.tar.gz
hostname_map.txt
# cat /tmp/sos3/hostname_map.txt
myserver3
Is it possible to rename each sosreport*.tar.gz by matching the hostname_map.txt in its directory against the /tmp/machine_list.txt file? (like below)
# ls /tmp/sos1/*
[one]apple_machine_#1_myserver1_sosreport1.tar.gz
# ls /tmp/sos2/*
[two]apple_machine_#2_myserver2_sosreport2.tar.gz
# ls /tmp/sos3/*
[three]apple_machine_#3_myserver3_sosreport3.tar.gz
Renaming a single file is easy enough, but what about doing all of them?
Something like this?
srvname () {
    awk -v srv="$(cat "$1")" -F '\t' '$2==srv { print $1; exit }' machine_list.txt
}
for dir in /tmp/sos*/; do
    server=$(srvname "$dir"/hostname_map.txt)
    mv "$dir"/sosreport*.tar.gz "$dir/$server.tar.gz"
done
Demo: https://ideone.com/TS5VyQ
The function assumes your mapping file is tab-delimited. If you want underscores instead of spaces in the server names, change the mapping file.
This should be portable to POSIX sh; the cat could be replaced with a Bash redirection, but I feel that it's not worth giving up portability for such a small change.
If this were my project, I'd probably make the function into a self-contained reusable script (with the input file replaced with a here document in the script itself) since there will probably be more situations where you need to perform the same mapping.
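A minimal sketch of such a self-contained script (the embedded mapping and the '|' separator are purely illustrative; the real machine_list.txt was assumed to be tab-delimited):
#!/bin/sh
# srvname: print the machine label for the hostname given as $1
# The mapping is embedded as a here document; '|' is used as the field separator here
awk -v srv="$1" -F '|' '$2==srv { print $1; exit }' <<'EOF'
[one]apple machine #1|myserver1
[two]apple machine #2|myserver2
[three]apple machine #3|myserver3
EOF
Called as ./srvname myserver2, it would print [two]apple machine #2.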

Can I find out who called a zsh script?

Assume a script master.sh, which is called as
./foo/bar/master.sh
and contains the lines
#!/bin/zsh
. ./x/y/slave.sh
Is it possible to find out from within slave.sh, that the script which is doing the sourcing, is ./foo/bar/master.sh ?
I cannot use $0 here, because this would return ./x/y/slave.sh.
I'm using zsh 5.0.6
One way you can achieve this is for the child script to take the name of the caller as an optional argument. It would then be accessible as $1.
ex:
#!/bin/zsh
# master/leader
. ./x/y/slave.sh $0 # or hardcoded path
#!/bin/zsh
# slave/worker
echo "Here is my master $1"
(You can also use another custom protocol, such as an environment variable set by the master.)
(This solution also works in bash and other shells.)
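A minimal sketch of the environment-variable variant mentioned above (CALLER is just an illustrative name):
#!/bin/zsh
# master/leader
export CALLER="$0"
. ./x/y/slave.sh

#!/bin/zsh
# slave/worker
echo "Here is my master ${CALLER}"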
The information can already be obtained in zsh right now (thanks to Bart Schaefer, who pointed out to me the existence of the variable functrace in the zsh/parameter module):
#!/bin/zsh
# slave/worker
zmodload zsh/parameter
echo "Here is my master ${functrace[$#functrace]%:*}"
The '%:*' is necessary, because the entries in the functrace array also contain the line number of the call.

One-liner to append a file to another file, but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats: the text is going to be appended as a block (>>). Consequently, it should be relatively straightforward to see whether the entire block has been added near the end of a file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example Python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
        else:
            appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you whether the contents of "file1" are present in "file2". It cannot tell you how they got there, e.g. whether it is because you previously concatenated file1 onto the end of file2.
Is that all you need? If not, update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
    local small=$1
    local big=$2
    # slurp each file as a single record (GNU awk's RS='^$'); succeed only if $big contains $small
    gawk -v RS='^$' 'NR==FNR { small=$0; next } { exit !index($0, small) }' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: The OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good old C preprocessor include-guard tactic:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1; else
(of course replacing <filename> with the name of the file) and at the end
fi
this way, you surround the script in each file with a test for something that's only true once.
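For example, a fragment called motd.sh (the name is only illustrative) would end up wrapped like this:
if [ -z "$__motd_sh__" ]; then __motd_sh__=1;
    echo "Welcome back"   # the original contents of motd.sh
fi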
Does this work for you?
sudo bash -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't quite fit, since you wrote that you want to append, but this technique might still help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would go on to wrap this in a function, appendOnce() { ... }, with the content above. If you really want an ugly one-liner (ugly: a pain for the eye and for colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
    for f in ~/.bashrc.d/*; do
        if [[ -f "$f" ]]; then
            source "$f"
        fi
    done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
    if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
        # The following will silently fail if the symlink already
        # exists. If you wanted to report that, you could add || echo...
        ln -s "$FRAGMENT_SKELETON/$1" "$HOME/.bashrc.d/$1" 2>/dev/null
    else
        echo "Not a valid fragment name: '$1'"
        exit 1
    fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straightforward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
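For example, a single rsync invocation along these lines would keep them in sync (paths are illustrative; --delete removes local fragments that no longer exist in the skeleton):
rsync -a --delete "$FRAGMENT_SKELETON"/ ~/.bashrc.d/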

Use shell to load in variables to replace placeholders

I have a problem where my config files' contents are placed within my deployment script, because they get their settings from my setting.sh file. This causes my deployment script to be very large and bloated.
I was wondering if it would be possible in bash to do something like this
setting.sh
USER="Tom"
log.conf
log=/$PLACEHOLDER_USER/full.log
deployment.sh
#!/bin/bash
# Pull in settings file
. ./settings.sh
# Link config to right location
ln -s /home/log.conf /home/logging/log.conf
# Write variables on top of placeholder variables in the file
for $PLACEHOLDER_* in /home/logging/log.conf
do
(Replace $PLACEHOLDER_<VARIABLE> with $VARIABLE)
done
I want this to work for any variable found in the config file that starts with $PLACEHOLDER_.
This process would allow me to move a generic config file from my repository and then add the proper variables from my setting file on top of the placeholder variables in the config.
I'm stuck on how I can get this to actually work using my deployment.sh.
This small script will read all variable lines from settings.sh and, for each one, replace the corresponding $PLACEHOLDER_xxx in the file. Does this help you?
while IFS='=' read -r variable value
do
    sed -i "s/\$PLACEHOLDER_$variable/$value/g" file
done < settings.sh
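Since the values in settings.sh are quoted (USER="Tom"), a slightly extended sketch that also strips the surrounding quotes and targets the config file from the question might look like this:
while IFS='=' read -r variable value
do
    # drop surrounding double quotes from the value, if any
    value="${value%\"}"; value="${value#\"}"
    # use | as the sed delimiter so values may contain slashes (e.g. paths)
    sed -i "s|\$PLACEHOLDER_$variable|$value|g" /home/logging/log.conf
done < settings.sh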
#!/usr/bin/env bash
set -x
ln -s /home/log.conf /home/logging/log.conf
while read -r user
do
    usertmp=$(echo "${user}" | sed 's#USER="##' | sed 's#"$##')
    user="${usertmp}"
    log="${user}"/full.log
done < setting.sh
I don't really understand the rest of what you're trying to do, I will confess, but this will hopefully give you the idea. Use read.

cat multiple files over one ssh connection and get return value for each

As said in the title, I'm trying to cat multiple files (the content needs to be appended to existing files on the host) over one ssh connection and get a return value for each, i.e. whether the cat for that particular file was successful or not.
Up to now, I did this for each file individually, by just repeating the following command for each one and checking the return value.
cat specific_file | ssh user@host -i /root/.ssh/id_rsa "cat >> result/specific_file"
I then just checked the return value for each transfer (automatically) and thereby could determine the status for each file. My question is: is it possible to do this over one single ssh connection, but obtain a return value for every single file?
Thanks in advance!
EDIT:
As you can see in the following, I generate a command for one specific file and then check the return code. The method containing this part of the program is then called for the different types of files.
If the file was successfully logged, it can be removed from the client, and a flag is set so that further logging to that type of file is (still) allowed. If the file exceeds its limits, this bit is unset in another part of the program.
sprintf(command, "/bin/cat /root/%s%s | /usr/bin/ssh log-bot@192.168.5.1 -i"
        " /root/.ssh/id_rsa \"/bin/cat >> result/%s%s\"", apmac,
        ending_str[source], apmac, ending_str[dest]);
rc = system(command);
if (rc != 0)
{
    sprintf(buffer, "[%i] - LOG ERROR from %s to %s CODE %i ", (int)time(0),
            ending_str[source], ending_str[dest], rc);
    sprintf(filename, "%s%s%s", LOCAL, apmac, ending_str[source]);
}
else
{
    sprintf(filename, "%s%s%s", LOCAL, apmac, ending_str[source]);
    remove(filename);
    sprintf(buffer, "[%i] - LOG process from %s to %s ok", (int)time(0),
            ending_str[source], ending_str[dest]);
    switch (source)
    {
        case PROBE_FILE:
            LOG_MASK_SET(globals->log_mask, LOG_MASK_PROB);
            break;
        case LIST_FILE:
            LOG_MASK_SET(globals->log_mask, LOG_MASK_LIST);
            break;
        case SCAN_FILE:
            LOG_MASK_SET(globals->log_mask, LOG_MASK_SCAN);
            break;
        default:
            /* Other cases not YET considered */
            break;
    }
}
SECOND EDIT:
I'm running this code on a router as part of a program. Please note that I'm not allowed to add new libraries or non-basic functions to the system. In addition, the ssh client on the system itself doesn't allow "-M" mode.
Edit in response to the added info (and code):
To the code: I'd strongly consider writing a script/program on the receiving end to talk to the sending process over the ssh pipe. That way you have full flexibility.
The simplest thing that could work, would still appear to be sending an archive over to the receiving host. On the receiving end, filter the archive with a script that
untars each file into a temporary location
tries the appending operation cat >> specific_file
prints a 'result record' to stdout as feedback to the sender
So you'd do:
tar c file1 file2 file3 |
  ssh log-bot@remote /home/log-bot/handle_logappends.sh |
  while read resultcode filename
  do
      echo "$filename resulted in code $resultcode"
  done
To handle the feedback in C/C++ you'd look at popen, which will allow you to read the streaming feedback as if from a file. Simple!
An example of such a handle_logappends.sh script on the receiving end:
#!/bin/bash
set -e # bail on error
TEMPDIR="/tmp/.receiving_$RANDOM"
mkdir "$TEMPDIR"
trap "rm -rf '$TEMPDIR/'" INT ERR EXIT
tar x -v -C "$TEMPDIR/" | while read filename
do
    echo "unpacked file $filename" > /dev/stderr
    ## implement your file append logic here :)
    ## e.g. (?):
    ## the "|| rc=$?" keeps a failed append from aborting the loop under set -e
    cat "$TEMPDIR/$filename" >> "result/$filename" && rc=0 || rc=$?
    ## HERE COMES THE FEEDBACK PART: '<code> <filename>'
    echo "$rc" "$filename"
done
The really neat part of this is, that since everything is in streaming mode, the feedback for the first file(s) may be arriving while the sending tar is still sending the later files to the receiving host. No unnecessary delays!
I included a tiny bit of sane error handling/cleanup but I would suggest
perhaps receiving the whole archive first, then iterating through the files?
doing the appends in atomic fashion (i.e. on a copy, then move the copy into place only if the whole append operation succeeded; this prevents partially appended logs)
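A sketch of that atomic variant, as it could look inside the loop above (paths illustrative):
# copy (or create) a scratch file, append to it, and only move it into place if that worked
cp "result/$filename" "result/$filename.tmp" 2>/dev/null || : > "result/$filename.tmp"
cat "$TEMPDIR/$filename" >> "result/$filename.tmp" &&
    mv "result/$filename.tmp" "result/$filename"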
Hope that helps!
Older answer:
You'd usually employ devious little tricks (not) like:
tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xf - -C result/"
Add a verbose flag to see progress details
tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xvf - -C result/"
If you want, you can substitute cpio for tar. Add options to get more functionality (-p for preserve permissions, e.g.)
To do various separate steps over a single logical connection, you can use an ssh master connection (a control socket given with -S is needed so that the later commands actually reuse it):
ssh user@host -i /root/.ssh/id_rsa -S ~/.ssh/ctl-%r@%h:%p -MNf # login, master, background, without a command
for specific_file in file1 file2 file3
do
    cat "$specific_file" |
      ssh user@host -i /root/.ssh/id_rsa -S ~/.ssh/ctl-%r@%h:%p "cat >> 'result/$specific_file'"
    # check/use error code
done
How about building on libssh2 instead of scripting ssh, and using the sftp subsystem instead of building your own file-transfer system in shell?
There's an example of performing one file append in libssh2/examples/sftp_append.c, just repeat it for the multiple files you want.
If you look at the problem from a different tactical view, you could cat all the files over from another master file. That master file is a shell script with here documents embedding the files' contents. Then execute that master shell script and ls the files, all in one ssh session. It's not pretty or elegant, but it will be successful.
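A rough sketch of how such a master script could be generated and run (file names, the remote host, and the OK/FAIL markers are all illustrative; it assumes each file ends with a newline and never contains a line consisting only of the delimiter):
for f in file1 file2 file3; do
    # emit one 'cat' command whose here-document body is the file's contents,
    # followed by a per-file OK/FAIL marker that comes back over the same session
    printf 'cat >> "result/%s" <<"EOF_MARKER" && echo "OK %s" || echo "FAIL %s"\n' "$f" "$f" "$f"
    cat "$f"
    printf 'EOF_MARKER\n'
done | ssh user@host /bin/sh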
