Apply bash variables for directories (recursive) - linux

I have a project that requires me to keep a lot of bash files with installation/maintenance/whatever scripts, and most of them need to know where other folders are. Right now that is all done with relative paths, but that forces me to keep the folder structure fixed, which might not be the best idea in the long run.
So, as an example, I have this file (script.sh):
THINGS_DIR=../../things
myprogram $THINGS_DIR
But now I would like to have 2 files, one with global variables from these directories (let's call it conf.sh):
THINGS_DIR=./things/
OTHERS_DIR=./things/others/
and, on script.sh, somehow, I would use those variables.
The best way I could find is to keep that conf.sh in a fixed place and have all the other scripts source it before they start, but I was trying to find a better solution.
EDIT
I forgot to say that this is in a Git repository, which is a fair assumption to keep along the way. That being said, and because I wanted to keep this as self-contained as possible, I ended up using this in every script that needs those variables:
. "$(git rev-parse --show-toplevel)/my_conf_file.conf"
This command sources my_conf_file.conf from the root of the git repository. It's not ideal (nor completely safe), but it does the trick with minimum configuration.
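A more defensive variant (an untested sketch; the error handling is an addition) could bail out when a script is accidentally run outside the repository:
# Source the shared configuration from the repository root, or abort.
repo_root=$(git rev-parse --show-toplevel 2>/dev/null) || {
    echo "error: not inside the git repository" >&2
    exit 1
}
. "$repo_root/my_conf_file.conf"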

A common idiom is to have a config file in one (or more) of several directories and have your script check each one of them in order. For instance you could search:
/etc/script.cfg # global
~/.script.cfg # user-specific, hidden with leading dot
You might also search the environment as a last resort. This search strategy is very easy to implement in bash. All you have to do is:
[[ -e /etc/script.cfg ]] && . /etc/script.cfg
[[ -e ~/.script.cfg ]] && . ~/.script.cfg
echo "THINGS_DIR=$THINGS_DIR"
echo "OTHERS_DIR=$OTHERS_DIR"
It sources the two config files if they exist. If the user copy exists it overrides the global settings. If neither exists, the script will naturally use any environment variables. This way the user could override the config settings like so:
THINGS_DIR=/overridden/things/dir script.sh

Related

How to configure vim to save extra copies

Is there a way to configure vim so that, instead of creating a temporary .swp file, every time a save is made it automatically creates a file containing the previous save? Suppose I'm editing name_of_the_file.txt; the copy would be named with the date and time of the save, e.g. name_of_the_file-05-17-2017-11:20.txt, and kept in a folder, let's say ~/.vim-bckp/name_of_the_file/. Or, better, is there a way to have a custom save command that does the above and avoids flooding the HDD with minor changes?
This script will do the trick. You could name it vimBackup or anything else, as long as it doesn't clash with an existing command's name. Then copy it into one of the directories in your $PATH, or append a custom script folder containing it to $PATH, so you can use it without typing the full path.
#!/bin/bash
[[ -z $1 ]] && echo "Command expects a file path" && exit 1
[[ $# -ne 1 ]] && echo "Command expects only one parameter" && exit 1
[[ -w $1 ]] && cp "$1" "${1}_$(date +%Y-%m-%d_%H:%M)" # if the file exists and is writable, make a copy
vim "$1"
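If you want the backups collected under ~/.vim-bckp/<filename>/ as described in the question, a minimal variation of the script above could look like this (the folder layout and timestamp format simply follow the question; adjust to taste):
#!/bin/bash
# vimBackup: copy the file into ~/.vim-bckp/<basename>/ with a timestamp, then edit it
[[ -z $1 ]] && echo "Command expects a file path" && exit 1
[[ $# -ne 1 ]] && echo "Command expects only one parameter" && exit 1
backup_dir="$HOME/.vim-bckp/$(basename "$1")"
if [[ -w $1 ]]; then
    mkdir -p "$backup_dir"                                          # one folder per edited file
    cp "$1" "$backup_dir/$(basename "$1")_$(date +%m-%d-%Y-%H:%M)"  # timestamped copy of the previous save
fi
vim "$1"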
I've written the writebackup plugin for that. It turned into a full plugin suite:
The writebackupVersionControl plugin complements this script with additional commands and enhances the :WriteBackup command with more checks, but is not required.
The writebackupToAdjacentDir plugin implements a WriteBackup-dynamic-backupdir configuration that puts the backup files in an adjacent backup directory if one exists. This helps where the backups cannot be placed into the same directory.
The writebackupAutomator plugin automatically writes a backup on a day's first write of a file that was backed up in the past, but not yet today. It can be your safety net when you forget to make a backup.
So, once I have made a backup, I don't need to think about explicitly triggering one. I can view diffs, and even restore previous versions. Of course, if you can use a "full" version control system (like Git or Subversion), use that. But this one is nice for all the little configuration changes distributed throughout the file system.

Bash script to test if a file contains multiple strings otherwise add missing string?

I'm attempting to make a post-install script for Ubuntu 13.04 so that, when run, it will automatically install my desired programs and apply my desired settings, such as creating aliases so that apt-get=apt-fast, wget=aria2c and sudo="sudo ". At the beginning of the script I have it check for the alias list and expand it for use within the script if it exists, so that the aliases are used throughout the script instead of the original commands.
What I wish for the script to do is test whether ~/.bash_aliases exists and, if so, look for string1, string2, string3. If the file exists and all strings are found, then echo "Aliases already in place"; otherwise add the missing strings, or create the file containing all the aliases if it doesn't exist. After having searched for a while I have a basic layout, but my problem is that it searches as OR, not AND. Once it finds any of the strings it decides they are all there and doesn't add the missing ones.
Here is what I have so far:
if [ -f ~/.bash_aliases ]; then
    if grep -q "# Additional Alias Definitions" "alias apt-get='apt-fast'" "alias wget='aria2c'" "alias sudo='sudo '" ~/.bash_aliases; then
        echo 'Aliases already in place.'
    else
        printf "%s\n" "# Additional Alias Definitions" "alias apt-get='apt-fast'" "alias wget='aria2c'" "alias sudo='sudo '" >> ~/.bash_aliases
    fi
else
    printf "%s\n" "# Additional Alias Definitions" "alias apt-get='apt-fast'" "alias wget='aria2c'" "alias sudo='sudo '" >> ~/.bash_aliases
fi
How do I get it to work how I desire?
Edit: I managed to get it to work, not exactly done the way I wanted but works nonetheless... http://paste.progval.net/show/582/
I think you are taking the wrong approach here: you are trying to alter files provided by other packages. Such a thing is always done with good intentions, but nevertheless it often fails, since no one can predict how those other packages will change.
Instead, a modular concept has proved more stable: add separate files for each package installed and reference those files from a central place. The first distribution to consistently follow this strategy was openSUSE some years ago; these days most other distributions have adopted it or are switching over to it. The only exceptions are probably minimalistic distributions limited in storage space and/or computing capacity. In all other situations the additional file to be opened and processed is an accepted penalty, justified by a more stable and transparent installation and setup scheme.
Please note: your approach certainly is possible; I am just describing an alternative one which has proven more stable, convenient and transparent.
Define a separate file holding your alias definitions. Put that file into a separate package (rpm or deb) as /etc/profile.d/local-alias or the like, the exact path obviously depending on the distribution you use. That package is then installed on any system where those definitions are required. This makes the installation transparent and, above all, a cleanly revertible process, and the origin of the additional definitions is well documented inside the package management.
The setup routines on modern systems scan the profile.d folder and execute any script installed there. This processing is typically defined in a file like /etc/profile; have a look at the details for your distribution. This mechanism enables packages to drop their own requirements and definitions into the system without each having to alter files installed by other packages, a process which can never stay stable in the long run with thousands of packages getting installed and removed, all bringing their own requirements and dependencies. Other examples where this strategy is typically applied are modules for the HTTP server installed on an optional basis, or specific shell environments which require their own set of definition files.
Note that the names and paths I specified are openSUSE specific. I have little experience with Ubuntu, but you will probably find its implementation of this approach easily.
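As a sketch of that approach using the aliases from the question, the drop-in file could be as simple as this (the filename /etc/profile.d/local-alias.sh is an assumption; check what your distribution expects):
# /etc/profile.d/local-alias.sh -- picked up by /etc/profile for login shells
alias apt-get='apt-fast'
alias wget='aria2c'
alias sudo='sudo '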

How can I loop through some files in my script?

I am very much a beginner at this and have searched for answers my question but have not found any that I understand how to implement. Any help would be greatly appreciated.
I have a script:
FILE$=`ls ~/Desktop/File_Converted/`
mkdir /tmp/$FILE
mv ~/Desktop/File_Converted/* /tmp/$FILE/
So I can use AppleScript to say: when a file is dropped into this desktop folder, create a temp directory, move the file there and then do other stuff. I then delete the temp directory. This is fine as far as it goes, but the problem is that if another file is dropped into the File_Converted directory before I am done doing stuff to the file I am currently working with, it will change the value of the $FILE variable before the script has completed operating on the current file.
What I'd like to do is use a variable set-up where the variable is, say, $FILE1. I check to see if $FILE1 is defined and, if not, use it. If it is defined, then try $FILE2, etc. In the end, when I am done, I want to reclaim the variable so $FILE1 gets set back to null again and the next file dropped into the File_Converted folder can use it again.
Any help would be greatly appreciated. I'm new to this so I don't know where to begin.
Thanks!
Dan
Your question is a little difficult to parse, but I think you're not really understanding shell globs or looping constructs. The globs are expanded based on what's there now, not what might be there earlier or later.
DIR=$(mktemp -d)
mv ~/Desktop/File_Converted/* "$DIR"
cd "$DIR"
for file in *; do
    : # whatever you want to do to "$file"
done
You don't need a LIFO -- multiple copies of the script run for different events won't have conflict over their variable names. What they will conflict on is shared temporary directories, and you should use mktemp -d to create a temporary directory with a new, unique, and guaranteed-nonconflicting name every time your script is run.
tempdir=$(mktemp -t -d mytemp.XXXXXX)
mv ~/Desktop/File_Converted/* "$tempdir"
cd "$tempdir"
for f in *; do
    : # ...whatever...
done
What you describe is a classic race condition, in which it is not clear that one operation will finish before a conflicting operation starts. These are not easy to handle, but you will learn so much about scripting and programming by handling them that it is well worth the effort to do so, even just for learning's sake.
I would recommend that you start by reviewing the lockfile or flock manpage. Try some experiments. It looks as though you probably have the right aptitude for this, for you are asking exactly the right questions.
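As a starting point for those experiments, here is a minimal flock-based sketch (the lock file path and the file descriptor number are arbitrary choices) that serializes the move-and-process step so two runs cannot interfere:
#!/bin/bash
# Hold an exclusive lock so concurrent runs of this script do not step on each other.
exec 9>/tmp/file_converted.lock     # open (and create) the lock file on fd 9
flock 9                             # block until this run holds the lock

workdir=$(mktemp -d)                # unique work directory for this run
mv ~/Desktop/File_Converted/* "$workdir" 2>/dev/null
cd "$workdir" || exit 1
for f in *; do
    [ -e "$f" ] || continue         # the glob matched nothing; no files were moved
    : # ...process "$f" here...
done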
By the way, I suspect that you want to kill the $ in
FILE$=`ls ~/Desktop/File_Converted/`
Incidentally, @CharlesDuffy correctly observes that "using ls in scripts is indicative of something being done wrong in and of itself. See mywiki.wooledge.org/ParsingLs and mywiki.wooledge.org/BashPitfalls." One suspects that the suggested lockfile exercise will clear up both points, though it will probably take you several hours to work through it.

how to separate source code and data while minimizing directory changes during working? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
This is a general software engineering problem about working on Linux. Suppose I have source code, mainly scripts, which manipulate text data and take text files as input and output. I am thinking about how to appropriately separate source code and data while minimizing directory changes during work. I see two possibilities:
Mix code and data together. This minimizes directory transitions and eliminates the need to type paths to files while working. Most of the time I just call:
script1 data-in data-out # call script
vi data-out # view result
The problem is that as the number of code and data files grows, it looks messy to face a long list of both code and data files.
Separate code and data into two folders, say "src" and "data". When I am in the "src" folder, doing the above actions would require:
script1 ../data/data-in ../data/data-out # call script
vi ../data/data-out # view result (or: cd ../data; vi data-out)
The extra typing of the parent directory "../data" is a hassle, especially when there is a lot of quick testing of scripts.
You might suggest I do it the other way around, in the data folder. But then similarly I need to call ../src/script1, again with the hassle of typing the "../src" prefix. Yes, we could add "src" to PATH. But what if there are dependencies among scripts across parent-child directories? E.g., suppose under "src" there is "subsrc/script2", and within script1 it calls "./subsrc/script2 ..."? Then calling script1 from the "data" folder would throw an error, because there is no "subsrc" folder under the "data" folder.
Clean separation of code and data, and minimizing directory changes, seem to be conflicting requirements. Do you have any suggestions? Thanks.
I would use the cd - facility of the shell plus setting the PATH to sort this out — possibly with some scripts to help.
I'd ensure that the source directory, where the programs are built, is on my PATH, at the front. I'd cd into either the data directory or the source directory, (maybe capture the directory with d=$PWD for the data directory, or s=$PWD for the source directory), then switch to the other (and capture the directory name again). Now I can switch back and forth between the two directories using cd - to switch.
Depending on whether I'm in 'code work' or 'data work' mode, I'd work primarily in the appropriate directory. I might have a simple script to (cd $source_directory; make "$@") so that if I need to build something, I can do so by running the script. I can edit files in either directory with a minimum of fuss, either with a swift cd - plus vim, or with vim $other_dir/whichever.ext. Because the source directory is on PATH, I don't have to specify full paths to the commands in it.
I use an alias alias r="fc -e -" to repeat a command. For example, to repeat the last vim command, r v; the last make command, r m; and so on.
I do this sort of stuff all the time. The software I work on has about 50 directories for the full build, but I'm usually just working in a couple at a time. I have sets of scripts to rebuild the system based on where I'm working (chk.xyzlib and chk.pqrlib to build the corresponding sets of directories, for example; two directories for each of the libraries). I prefer scripts to aliases; you can interpolate arguments more easily with scripts, whereas with aliases you can only append the arguments. The (cd $somewhere; make "$@") notation doesn't work with aliases.
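A minimal sketch of such a helper script (the source directory path is a placeholder for your own layout):
#!/bin/bash
# build: run make in the source directory, no matter where it is invoked from
src_dir="$HOME/project/src"    # placeholder: point this at your real source directory
cd "$src_dir" && make "$@"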
It's a little more coding, but can you set environment variables from the command line to specify the data directory?
export DATA_INPUT_DIR=/path/to/data
export DATA_OUTPUT_DIR=/path/to/outfiles
Then your script can process files relative to these directories:
# Set variables at the top of your scripts:
in_dir="${DATA_INPUT_DIR:-.}"   # Default to current directory
out_dir="${DATA_OUTPUT_DIR:-.}" # Default to current directory
# 1st arg is the input file. Prepend $in_dir unless the path is absolute.
infile="$1"
[ "${1::1}" = "/" ] || infile="$in_dir/$infile"
# 2nd arg is the output file. Prepend $out_dir unless the path is absolute.
outfile="$2"
[ "${2::1}" = "/" ] || outfile="$out_dir/$outfile"
# Remainder of the script uses $infile and $outfile.
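Usage might then look like this (the paths are placeholders):
export DATA_INPUT_DIR=~/project/data
export DATA_OUTPUT_DIR=~/project/out
script1 data-in data-out # reads ~/project/data/data-in, writes ~/project/out/data-out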
Of course, you could also open several terminal windows: some for working on the code and others for executing it. :-)

Proper way to run a script using cron?

When running a script with cron, any executable called inside must be given with its full path. I discovered this while trying to run wondershaper: many errors showed up when it tried to call tc. So my question is, what's the proper way to overcome this problem?
Possible solutions:
cd to the executable's folder and prepare symbolic links there to any other executable that gets called (not sure if it works; low portability)
use full paths in the script (it works - low portability across different distros)
export a PATH variable with the needed paths inside the script (not sure if it works)
Well, thanks in advance to anyone helping.
If you're on linux/bsd/mac you can set some environment variables like PATH right in the crontab, and with that you're generally good to go.
If you're on Solaris, well, I pray for you. But, I do have an answer too: I generally source .profile before running anything:
0 0 * * 0 . /home/myuser/.profile && cd /path && ./script
Mind you, my .profile loads .bash_profile and .bashrc. Just be sure whatever file you source has what you need.
Declaring variables inside your cron job is more explicit and easier to maintain: everything you have to modify is contained in your cron job, and you don't need to transfer multiple files should you move it to another system.
PATH=/usr/bin:/your/fancy/dir
MYAPPROOT=/var/lib/myapp
*/2 * * * * myappinpath
*/3 * * * * $MYAPPROOT/mylocalapp
Since cron does not run a login shell, .profile and /etc/profile are not sourced. Therefore PATH may not be set to a value you expect. I would either
set and export PATH to an appropriate value
use full paths in the script
Your trick with symlinks assumes . is in the PATH and just does not seem nice.
My recommendation:
Set all variables in an external file. I use a 'process_name.env' file located in /etc/process_name or similar. Imagine you have a backup script. Then you:
Create /etc/backup.env and put in it all the environment variables needed to do the "backup" task (a sketch follows below).
Modify your backup script and add this line after the shebang:
. /etc/backup.env # Note the dot and the space before the full path to the backup environment file.
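For example, /etc/backup.env might look like this (the variable names and paths are purely illustrative):
# /etc/backup.env -- environment for the backup script
PATH=/usr/local/bin:/usr/bin:/bin
BACKUP_SRC=/var/lib/myapp
BACKUP_DEST=/mnt/backups
export PATH BACKUP_SRC BACKUP_DEST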
IMO this approach is better than declaring variables in the cron definitions because:
Easy to maintain. Just edit a file.
Easy to switch configuration/centralized configuration:
You can have multiple .env files for using your script in different situations. For example, if your .env holds backup locations, you can pass the .env location as an argument and run your cron job daily with an .env providing a few locations, and weekly with a different .env providing other locations (just an example).
You can keep your .env files in a VCS like SVN or Git.
It is much easier to test your scripts (there is no need to execute them from cron).
Regards
