I spent some time looking for a solution to my problem, but Google couldn't give me a sufficient answer... I work a lot with the command line in Linux and I simply need a way to navigate my file system quickly. I don't want to type cd [relative or absolute path] all the time. I know there are pushd and popd, but those still seem too complicated for a simple problem like this.
When I'm in ~/Desktop/sampleFile, I simply want to type sampleCommand fileToGo to get to ~/Desktop/anotherFile/anotherFile/fileToGo, no matter where the file is located. Is there an easy command for this?
Thanks in advance!
This can be done with native Bash features without involving a sub-shell fork:
You can insert this into your "$HOME/.bashrc":
cdf() {
  # Query globstar state
  # and save it in the gs variable (gs=0 if set, 1 if not)
  shopt -q globstar
  local gs=$?
  # Need globstar so the glob can find files in sub-directories
  shopt -s globstar
  # Find the file anywhere below the current directory
  # and store the results in the matches array (local, so it
  # doesn't leak into the interactive shell's namespace)
  local matches
  matches=(**/"$1")
  # globstar no longer needed, so restore its previous state
  [ $gs -gt 0 ] && shopt -u globstar
  # Change to the directory containing the first matched file
  cd "${matches[0]%/*}" # cd's exit status is preserved
}
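For example, a hypothetical session (the file and directory names here are made up, just to show the behavior):
$ cd ~/Desktop
$ cdf notes.txt    # first match is ./projects/docs/notes.txt
$ pwd
/home/user/Desktop/projects/docs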
Hmm, you could do something like this:
cd "$(dirname "$(find . -name name-of-your-file | head -n 1)")"
That will search the current directory (use / instead of . to search all directories) for a file called name-of-your-file and cd into the parent directory of the first file with that name that it finds.
If you're in a large directory, typing the path and using cd will probably be faster than this, but it works alright for small directories.
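If you settle on this approach, you can wrap it in a function in your "$HOME/.bashrc" so it's just as quick to type (a sketch; the function name cdff is arbitrary):
cdff() {
  cd "$(dirname "$(find . -name "$1" | head -n 1)")"
}
Then cdff fileToGo behaves like the sampleCommand asked about in the question.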
I am writing an automated bash script that moves some files from one directory to another directory, but the first directory may be empty:
$ mv somedir/* someotherdir/
mv: cannot stat 'somedir/*': No such file or directory
How can I write this command so it doesn't generate an error if the directory is empty? Should I just use rm and cp instead? I could write a conditional check to see if the directory is empty first, but that feels heavyweight.
I'm surprised the command fails if the directory is empty, so I'm trying to find out if I'm missing some simple solution.
Environment:
bash
RHEL
If you really want full control over the process, it might look like:
#!/usr/bin/env bash
#              ^^^^- bash, not sh
restore_nullglob=$(shopt -p nullglob)  # store the initial state of the nullglob setting
shopt -s nullglob                      # unconditionally enable nullglob
source_files=( somedir/* )             # store matching files in an array
if (( ${#source_files[@]} )); then     # if that array isn't empty...
  mv -- "${source_files[@]}" someotherdir/     # ...move the files it contains...
else                                   # otherwise...
  echo "No files to move; doing nothing" >&2   # ...write an error message.
fi
eval "$restore_nullglob"               # restore nullglob to its original setting
Explaining the moving parts:
When nullglob is set, the shell expands *.txt to an empty list if no .txt files exist; by default (with nullglob unset), it expands *.txt to the literal string *.txt when there are no matching files.
source_files is an array above -- bash's native mechanism for storing a list. ${#source_files[@]} expands to the length of that array, whereas ${source_files[@]} on its own expands to its contents.
(( )) creates an arithmetic context, in which expressions are treated as math. In such a context, 0 is falsey, and positive numbers are truthy. Thus, (( ${#source_files[@]} )) is true only if the array source_files contains at least one entry.
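A quick way to see the nullglob difference for yourself (a hypothetical demo, run in a directory containing no .txt files):
$ shopt -u nullglob
$ echo *.txt
*.txt
$ shopt -s nullglob
$ echo *.txt

The second echo prints an empty line because the pattern expanded to nothing.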
BTW, note that saving and restoring nullglob isn't really essential in an independent script; the purpose of showing how to do it is so you can safely use this code in larger scripts that might make assumptions about whether or not nullglob is set, without disrupting other code.
find somedir -type f -exec mv -t someotherdir/. '{}' +
Saves you the check, though it also descends into subdirectories of somedir, which may not be what you want.
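If descending into subdirectories is not wanted, you can bound the search depth; a sketch assuming GNU findutils and GNU coreutils (for mv -t):
find somedir -maxdepth 1 -type f -exec mv -t someotherdir/ '{}' +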
Are you aware of the output stream and the error stream? The output stream has number 1, while the error stream has number 2. In case you don't want to see a result, you can redirect it to the garbage bin.
Excuse me?
Well, let's have a look at this case: when the directory is empty, an error is generated and that error is shown in the error stream (2). You can redirect this, using 2>/dev/null (/dev/null being the UNIX/Linux garbage bin), so your command becomes:
$ mv somedir/* someotherdir/ 2>/dev/null
Following up on Dominique's answer: to report all errors except the "empty directory" one, use:
mv somedir/* someotherdir 2>&1 | grep -v No.such
Does anyone know of a way to (via bash) set up a "versioning" copy of a file? For example: I am copying file into file.bak. If file.bak exists, I am currently overwriting it. What I'd like is for it to create multiple files: file, file.bak, file.bak.1, file.bak.2, etc...
Right now, I'm using:
cp -rf file file.bak
This currently overwrites the file (as expected)
or:
cp --backup=t file1 file2
Repeat it a few times to see the result...
see https://www.gnu.org/software/coreutils/manual/html_node/cp-invocation.html
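For reference, --backup=t means numbered backups; a hypothetical session with GNU cp might look like this:
$ cp --backup=t file1 file2   # file2 created
$ cp --backup=t file1 file2   # old file2 saved as file2.~1~
$ cp --backup=t file1 file2   # old file2 saved as file2.~2~
$ ls
file1  file2  file2.~1~  file2.~2~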
Simply use a test
[ -e file.bak ] && cp -r file file.bak.$(date +%s) || cp -r file file.bak
If file.bak already exists, this will create a unique backup of the form file.bak.1411505497
There are many ways to skin this cat.
Since you're using Linux, it's likely you've got the GNU mv command, which includes a --backup option. You could wrap this in a shell function:
bkp() {
  file="$1"
  if [ -f "$file" ]; then
    /bin/mv -v --backup=numbered "$(mktemp "${file}XXX")" "$file"
    #/bin/rm "$file"
  fi
}
You can put this in your .bashrc, for example. Then you can use this as follows:
$ bkp foo
This will copy foo to numbered backup files. You can uncomment the rm if this is, for example, a log file that you're rotating.
Another option, which is more portable to operating systems that don't use GNU tools (e.g. FreeBSD, OSX), is this quick-and-dirty solution:
bkp() {
  file="$1"
  if [ -f "$file" ]; then
    # increment existing backups, up to 10
    for n in {9..1}; do
      if [ -f "$file.$n" ]; then
        # remove -v if you want less noise.
        mv -v "${file}.$n" "${file}.$((n+1))"
      fi
    done
    # move the original to first backup position
    mv "$file" "$file.1"
  else
    echo "Not found: $file" >&2
  fi
}
It suffers in that it won't compact your list of files (and will throw errors) if some numbers are missing, but that's stuff you can add if it's important. You'd use it pretty much the same way, changing the final mv to a cp if you need to keep the original in place.
The final option I'll mention was also suggested in the comments. Since you've said that you're using this solution to back up "system files" (by which I assume you mean things in /etc/), you should consider using an actual version control system to manage the versions of these files.
Many options exist, but I'd recommend RCS for its simplicity and low overhead. Simply install the package, mkdir /etc/RCS to keep your /etc directory clean, read the man pages for rcs, ci, co, rlog, rcsdiff and perhaps rcsintro, and you're good to go. You'll get better control of diffs and history, opportunity for comments, none of the overhead of a repository for a large VCS like SVN or Git. I've been using this on various servers for years, as RCS is still built in to the base system in FreeBSD. :)
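To sketch what that looks like in practice (the file names here are just examples; see the man pages for the details):
# one-time setup, keeps the ,v files out of the way
mkdir /etc/RCS
# check a file in; -u leaves a working copy in place
ci -u /etc/hosts
# lock, edit, and check in a new revision
co -l /etc/hosts
vi /etc/hosts
ci -u /etc/hosts
# inspect history and diffs
rlog /etc/hosts
rcsdiff -r1.1 /etc/hosts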
I have this question after quite a lot of searching the net; perhaps I'm doing something wrong. Here is my script:
#!/bin/bash
shopt -s extglob
FILE_EXTENSIONS=properties\|xml\|sh\|sql\|ksh
SOURCE_FOLDER=$1
if [ -z "$SOURCE_FOLDER" ]; then
  SOURCE_FOLDER=$(pwd)
fi # Set directory to current working folder if no input parameter.
for file in $SOURCE_FOLDER/**/*.*($FILE_EXTENSIONS)
do
  echo Working with file: $file
done
Basically, I want to recursively get all files matching a list of extensions, starting from a directory that is passed as an argument and including files in that directory itself.
I would like to know if there is a way of doing this and how without the use of the find command.
Imagine I have this file tree:
bin/props.properties
bin/xmls.xml
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
My script, as it is right now and running from /bin, would echo:
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
It leaves out the ones directly in the starting directory.
P.S. I know this can be done with find but I really want to know if there's another way for the sake of learning.
Thanks!
You can use find with grep, just like this:
#!/bin/bash
SOURCE_FOLDER=$1
EXTENSIONS="properties|xml|sh|sql|ksh"
find "$SOURCE_FOLDER" | grep -E "\.(${EXTENSIONS})$"
# or even better
find "$SOURCE_FOLDER" -regextype posix-egrep -regex ".*\.(${EXTENSIONS})"
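And since the question explicitly asks for a find-free way, for the sake of learning: here is a sketch using globstar plus extglob (assumes bash 4+; note the @(...) extglob form, which matches exactly one of the alternatives):
#!/bin/bash
shopt -s globstar extglob nullglob
SOURCE_FOLDER=${1:-$(pwd)}
for file in "$SOURCE_FOLDER"/**/*.@(properties|xml|sh|sql|ksh)
do
  echo "Working with file: $file"
done
Setting globstar makes ** recurse into subdirectories, and nullglob keeps the loop from running on the literal pattern when nothing matches.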
I'm attempting to sync all files from within a large directory structure into a single root directory (i.e. not recreating the subdirectories, but still including all the files from them).
Environment:
Ubuntu 12.04 x86
RSYNC version 3.0.9
GNU bash version 4.2.25(1)
So far I have this command called from a bash script which works fine and provides the basic core functionality required:
shopt -s globstar
rsync -adv /path/to/source/**/. /path/to/dest/. --exclude-from=/myexcludefile
The contents of myexcludefile are:
filename
*/
# the */ prevents all of the directories appearing in /path/to/dest/
# other failed attempts have included:
directory1
directory1/
directory1/*
I now need to exclude files that are located inside certain directories in the source tree. However, due to the globstar approach of looking in all directories, rsync is unable to match the directories to exclude. In other words, with the exception of my */ and filename rules, everything else is completely ignored.
So I'm looking for some assistance on either the excludes syntax or if there's another way of achieving the rsync of many directories into a single destination directory that doesn't use my globstar approach.
Any help or advice would be very gratefully received.
If you want to exclude directories from a globstar match, you can save those to an array, then filter the contents of that array based on a file.
Example:
#!/bin/bash
shopt -s globstar
declare -A X
# read the exclude list, one entry per line
readarray -t XLIST < exclude_file.txt
# index the entries in an associative array for quick lookups
for A in "${XLIST[@]}"; do
  X[$A]=.
done
# expand the globstar pattern into an array
DIRS=(/path/to/source/**/.)
# drop every expanded entry that appears in the exclude list
for I in "${!DIRS[@]}"; do
  D=${DIRS[I]}
  [[ -n ${X[$D]} ]] && unset 'DIRS[I]'
done
rsync -adv "${DIRS[@]}" /path/to/dest/.
Run with:
bash script.sh
Note that the values in exclude_file.txt have to exactly match the expanded values of /path/to/source/**/. for the filtering to work.
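For example, if the glob expands to entries like /path/to/source/foo/. and /path/to/source/foo/bar/., then excluding those two directories requires exclude_file.txt to contain exactly (hypothetical paths):
/path/to/source/foo/.
/path/to/source/foo/bar/.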
I know my topic is a little confusing, but here is what I want to do.
I have a file which I would like to link into ~/bin in my home directory. But when I execute the symlinked file, it requires another file from its own directory, so it fails to run because it cannot find that other file. What can I do?
Well, you have two simple solutions.
Edit the shell script to point to the actual path of the file, not just the basename:
./path/to/file.sh
vs.
file.sh
So something like this should do what you're after: sed -i 's|file.sh|./path/to/file.sh|g' ~/bin/script.sh. It searches your symlinked file (script.sh in this case) and replaces the call to file.sh with ./path/to/file.sh. Note that you often see sed used with /'s as the delimiter, but it can use just about anything; if you wish to use /'s here, you will need to escape them (\/). You may want to consider escaping the . (period) as well, but in this case it's not necessary. If you are new to sed, realize that the -i flag means it will edit the file in place. Lastly, realize it's a simple search-and-replace operation, and you may choose to do it by hand.
The second way is to create a symlink (ln -s) to the required file too, just as you did with the script, so that both files end up next to each other:
ln -s /far/off/script.sh ~/bin/script.sh
and
ln -s /far/off/file.sh ~/bin/file.sh
more on symlinking
I would rather create a script file in ~/bin that calls your executable from the appropriate directory.
Here is an example using /sbin/ifconfig:
$ cat > ~/bin/file
#!/bin/bash
file=/sbin/ifconfig
cd "$(dirname "$file")"
"./$(basename "$file")"
(Ctrl+D)
$ chmod +x ~/bin/file
$ file
Here you should see the output of ifconfig, but the point is: it gets executed from the /sbin directory. So if ifconfig had dependencies next to it, they would be found. Just replace /sbin/ifconfig with your absolute path.
Alternatively, you can wrap the body of your script like this, so it always runs from a known directory:
pushd ~/bin
##### your script here
popd
A combination of readlink and dirname will get the actual directory of the script:
my_dir=$(dirname "$(readlink -f "$0")")
source "$my_dir/other_file"