Linux: Piping output to unique files

I have a folder filled with hundreds of text files on which I want to run a Linux command called mint. This command outputs a text value which I want stored in unique files, one for each file in the folder. Is there a way to run the command using the * character to represent all my input files, while still piping the output to a file that is unique for each input file?
Example:
$ mint * > uniqueFile.krn

Here is a version with the bugs fixed and the caveats closed:
#!/bin/bash
# ^^^^ - bash, not sh, for [[ ]] support
for f in *; do
    [[ $f = *.krn ]] && continue    # skip files already ending in .krn
    mint "$f" >"$f.krn"
done
Or, with a prefix:
for f in *; do
    [[ $f = int_* ]] && continue
    mint "$f" >"int_$f"
done
You can also avoid recreating output files that already exist, unless the source file has changed:
for f in *; do
    # don't process files that are already outputs
    [[ $f = int_* ]] && continue
    # if a non-empty output file exists and is newer than our source file, skip it
    [[ -s "int_$f" && "int_$f" -nt "$f" ]] && continue
    # ...if we got through the above conditions, go ahead and create the output
    mint "$f" >"int_$f"
done
To explain:
test -s filename is true only if a file by the given name exists and is non-empty
test file1 -nt file2 is true only if both files exist, and file1 is newer than file2.
[[ ]] is a ksh-extended shell syntax derived from that of the test command, adding support for pattern-matching tests (i.e. [[ $string = *.txt ]] is true only if $string expands to a value ending in .txt) and relaxing the quoting rules (it's safe to write [[ -s $f ]], but test -s "$f" needs the quotes to work with all possible filenames).
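For instance, a quick illustration of those rules (the filename here is just a made-up example):
f='my file.txt'
[[ -s $f ]] && echo "non-empty"            # safe unquoted inside [[ ]]
test -s "$f" && echo "non-empty"           # test needs the quotes to survive the space
[[ $f = *.txt ]] && echo "ends in .txt"    # pattern match; the right-hand side stays unquoted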

Thanks for all the suggestions! Shiping's solution worked great, I just added a prefix to the file name. Like so:
$ for file in * ; do mint $file > int_$file ; done

Related

Extracting files that don't have a dir with the same name

Sorry for the odd title; I didn't know how to word it the right way.
I'm trying to write a script that splits my wiki files into those that have directories with the same name and those that don't. I'll elaborate further.
here is my file system:
What I need to do is print a list of the files that have a directory with the same name, and another list of those that don't.
So my ultimate goal is getting:
with dirs:
Docs
Eng
Python
RHEL
To_do_list
articals
without dirs:
orphan.txt
orphan2.txt
orphan3.txt
I managed to get those files with dirs. Here is my code:
getname () {
    file=$( basename "$1" )
    file2=${file%%.*}
    echo $file2
}

for d in Mywiki/* ; do
    if [[ -f $d ]]; then
        file=$(getname $d)
        for x in Mywiki/* ; do
            dir=$(getname $x)
            if [[ -d $x ]] && [ $dir == $file ]; then
                echo $dir
            fi
        done
    fi
done
But I'm stuck on getting the files without a matching directory. If this is the wrong way of doing it, please show me the right one.
Any help appreciated. Thanks.
Here's a quick attempt.
for file in Mywiki/*.txt; do
    nodir=${file##*/}
    test -d "${file%.txt}" && printf "%s\n" "$nodir" || printf "%s\n" "$nodir" >&3
done >with 3>without
This shamelessly uses standard output for the non-orphans; a more robust version might open a separate file descriptor for those as well.
Also notice how everything needs to be quoted unless you specifically want the shell to do whitespace tokenization and wildcard expansion on the value of a token.
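For instance (a made-up filename with a space in it):
f="my file.txt"
printf '%s\n' $f      # unquoted: splits into two words, "my" and "file.txt"
printf '%s\n' "$f"    # quoted: one word, as intended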
That may not be the most efficient way of doing it, but you could take all files, remove the extension, and then check whether a directory with that name exists.
Like this (untested code):
# getname is the helper function from the question
for file in Mywiki/* ; do
    if [ -f "$file" ]; then
        dirname=$(getname "$file")
        if [ ! -d "Mywiki/$dirname" ]; then
            echo "$file"
        fi
    fi
done
To list all the files in the current dir:
list1=`ls -p | grep -v /`
To list all the files in the current dir without extension:
list2=`ls -p | grep -v / | sed 's/\.[a-z]*//g'`
To list all the directories in the current dir:
list3=`ls -d */ | sed -e "s/\///g"`
Now you can get the desired directory listing using the intersection of list2 and list3 (see "Intersection of two lists in Bash").
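For example, one possible sketch of that intersection step, assuming the filenames contain no whitespace (which the ls-based lists above already require):
# names that appear in both lists (files that do have a matching directory)
comm -12 <(printf '%s\n' $list2 | sort) <(printf '%s\n' $list3 | sort)
# names only in list2 (files without a matching directory, extensions stripped)
comm -23 <(printf '%s\n' $list2 | sort) <(printf '%s\n' $list3 | sort)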

How can I batch rename multiple images with their path names and reordered sequences in bash?

My pictures are kept in the folder with the picture-date for folder name, for example the original path and file names:
.../Pics/2016_11_13/wedding/DSC0215.jpg
.../Pics/2016_11_13/afterparty/DSC0234.jpg
.../Pics/2016_11_13/afterparty/DSC0322.jpg
How do I rename the pictures into the format below, with continuous sequences and 4-digit padding?
.../Pics/2016_11_13_wedding.0001.jpg
.../Pics/2016_11_13_afterparty.0002.jpg
.../Pics/2016_11_13_afterparty.0003.jpg
I'm using Bash 4.1, so only the mv command is available. Here is what I have now, but it's not working:
#!/bin/bash
p=0
for i in *.jpg;
do
mv "$i" "$dirname.%03d$p.JPG"
((p++))
done
exit 0
Let's say you have something like .../Pics/2016_11_13/wedding/XXXXXX.jpg; go into the directory .../Pics/2016_11_13; from there you should have a bunch of subdirectories like wedding, afterparty, and so on. Launch this script (disclaimer: I didn't test it):
#!/bin/sh
for subdir in *; do                          # scan directory
    [ ! -d "$subdir" ] && continue;          # skip non-directory
    prognum=0;                               # progressive number
    for file in $(ls "$subdir"); do          # scan subdirectory
        (( prognum=$prognum+1 ))             # increment progressive
        newname=$(printf %4.4d $prognum)     # format it
        newname="$subdir.$newname.jpg"       # compose the new name
        if [ -f "$newname" ]; then           # check to not overwrite anything
            echo "error: $newname already exists."
            exit
        fi
        # do the job, move or copy
        cp "$subdir/$file" "$newname"
    done
done
Please note that I skipped the "date" (2016_11_13) part - I am not sure about it. If you have a single date, then it is easy to add these digits at the "# compose the new name" step. If you have several dates, you can add a nested for loop to scan the "date" directories. One more reason I skipped this is to let you develop something by yourself, something you can be proud of...
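For what it's worth, here is one rough, untested way the nested variant could look, run from inside .../Pics (the directory layout is assumed from the question; the counter is kept outside the loops so numbering stays continuous across events, as in the asker's example):
prognum=0
for datedir in */; do                        # e.g. 2016_11_13/
    for subdir in "$datedir"*/; do           # e.g. 2016_11_13/wedding/
        for file in "$subdir"*.jpg; do
            prognum=$((prognum + 1))
            newname=$(printf '%s_%s.%04d.jpg' "${datedir%/}" "$(basename "$subdir")" "$prognum")
            [ -f "$newname" ] && { echo "error: $newname already exists"; exit 1; }
            cp "$file" "$newname"
        done
    done
done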
Using only mv and bash builtins:
#! /bin/bash
shopt -s globstar
cd Pics
p=1
# recursive glob for .jpg files
for i in **/*.jpg
do
    # (date)/(event)/(filename).jpg
    if [[ $i =~ (.*)/(.*)/(.*)\.jpg ]]
    then
        newname=$(printf "%s_%s.%04d.jpg" "${BASH_REMATCH[@]:1:2}" "$p")
        echo mv "$i" "$newname"
        ((p++))
    fi
done
globstar is a bash 4.0 feature, and regex matching is available even in OSX's antique bash.

Merge files to directories based on match of filename to directory name

I am pretty new to scripting, so please go easy on me. I am aware that there is another thread related to this, but it does not exactly cover my issue.
I have a directory containing files and another directory containing the corresponding folders that I need to move each file to. Each file corresponds to the destination directory like:
DS-123.txt
/DS-123_alotofstuffhere/
I would like to automate the move based on a match of the first 6 characters of the filename to the first 6 of the directory.
I have this:
filesdir=$(ls ~/myfilesarehere/)
dir=$(ls ~/thedirectoriesareinthisfolder/)
for i in $filesdir; do
    for j in $dir; do
        if [[${i:6} == ${j:6}]]; then
            cp $i $j
        fi
    done
done
But when I run the script, I get the following error:
es: line 6: [[_DS-123_morefilenametext.fasta: command not found
I am using Linux (not sure what version on the supercomputer, sorry).
It's better to use arrays and globbing to hold the list of files and directories, instead of ls. With that change and a correction to the [[ ... ]] part, your code becomes this:
files=(~/myfilesarehere/*)
dirs=(~/thedirectoriesareinthisfolder/*)
for i in "${files[#]}"; do
[[ -f "$i" ]] || continue # skip if not a regular file
for j in "${dirs[#]}"; do
[[ -d "$j" ]] || continue # skip if not a directory
ii="${i##*/}" # get the basename of file
jj="${j##*/}" # get the basename of dir
if [[ ${ii:0:6} == ${jj:0:6} ]]; then
cp "$i" "$j"
# need to break unless a file has more than one destination directory
fi
done
done
[[ -d "$j" ]] check is necessary because your dirs array could contain some files too. To be safer, I have added a check for $i being a file as well.
Here is the solution that doesn't use arrays, as suggested by @triplee:
for i in ~/myfilesarehere/*; do
    [[ -f "$i" ]] || continue      # skip if not a regular file
    for j in ~/thedirectoriesareinthisfolder/*; do
        [[ -d "$j" ]] || continue  # skip if not a directory
        ii="${i##*/}"              # get the basename of the file
        jj="${j##*/}"              # get the basename of the dir
        if [[ ${ii:0:6} == ${jj:0:6} ]]; then
            cp "$i" "$j"
            # break here unless a file can have more than one destination directory
        fi
    done
done
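If each file is known to have exactly one destination directory, you can add the break mentioned in the comment, so the inner loop stops after the first match:
if [[ ${ii:0:6} == ${jj:0:6} ]]; then
    cp "$i" "$j"
    break    # first match wins; skip the remaining directories
fi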

Renaming a set of files to 001, 002, ...

I originally had a set of images of the form image_001.jpg, image_002.jpg, ...
I went through them and removed several. Now I'd like to rename the leftover files back to image_001.jpg, image_002.jpg, ...
Is there a Linux command that will do this neatly? I'm familiar with rename but can't see anything to order file names like this. I'm thinking that since ls *.jpg lists the files in order (with gaps), the solution would be to pass the output of that into a bash loop or something?
If I understand right, you have e.g. image_001.jpg, image_003.jpg, image_005.jpg, and you want to rename to image_001.jpg, image_002.jpg, image_003.jpg.
EDIT: This is modified to put the temp file in the current directory. As Stephan202 noted, this can make a significant difference if temp is on a different filesystem. To avoid hitting the temp file in the loop, it now goes through image*
i=1
temp=$(mktemp -p .)
for file in image*
do
    mv "$file" $temp;
    mv $temp $(printf "image_%0.3d.jpg" $i)
    i=$((i + 1))
done
A simple loop (test with echo, execute with mv):
I=1
for F in *; do
    echo "$F" `printf image_%03d.jpg $I`
    #mv "$F" `printf image_%03d.jpg $I` 2>/dev/null || true
    I=$((I + 1))
done
(I added 2>/dev/null || true to suppress warnings about identical source and target files. If this is not to your liking, go with Matthew Flaschen's answer.)
Some good answers here already, but some rely on hiding errors, which is not a good idea (that assumes mv will only error because of a condition that is expected - what about all the other reasons mv might error?).
Moreover, it can be done a little shorter and should be better quoted:
for file in *; do
    printf -v sequenceImage 'image_%03d.jpg' "$((++i))"
    [[ -e $sequenceImage ]] || \
        mv "$file" "$sequenceImage"
done
Also note that you shouldn't capitalize your variables in bash scripts.
Try the following script:
numerate.sh
This code snippet should do the job:
./numerate.sh -d <your image folder> -b <start number> -L 3 -p image_ -s .jpg -o numerically -r
This does the reverse of what you are asking (taking files of the form *.jpg.001 and converting them to *.001.jpg), but can easily be modified for your purpose:
for file in *
do
    # note: the regex must be left unquoted for bash 3.2 and later
    if [[ "$file" =~ (.*)\.([[:alpha:]]+)\.([[:digit:]]{3,})$ ]]
    then
        mv "${BASH_REMATCH[0]}" "${BASH_REMATCH[1]}.${BASH_REMATCH[3]}.${BASH_REMATCH[2]}"
    fi
done
I was going to suggest something like the above using a for loop, an iterator, cut -f1 -d "_", then mv i i.iterator. It looks like it's already covered other ways, though.
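For completeness, a rough, untested sketch of that idea (it assumes names of the form prefix_NNN.jpg, such as image_001.jpg):
i=1
for f in *_*.jpg; do
    prefix=$(printf '%s' "$f" | cut -f1 -d '_')    # e.g. "image"
    new=$(printf '%s_%03d.jpg' "$prefix" "$i")
    [ "$f" = "$new" ] || mv "$f" "$new"            # skip files already in place
    i=$((i + 1))
done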

How do you normalize a file path in Bash?

I want to transform /foo/bar/.. to /foo
Is there a bash command which does this?
Edit: in my practical case, the directory does exist.
If you want to chop part of a filename off the path, dirname and basename are your friends, and realpath is handy too.
dirname /foo/bar/baz
# /foo/bar
basename /foo/bar/baz
# baz
dirname $( dirname /foo/bar/baz )
# /foo
realpath ../foo
# ../foo: No such file or directory
realpath /tmp/../tmp/../tmp
# /tmp
realpath alternatives
If realpath is not available on your system, you can try
readlink -f /path/here/..
Also
readlink -m /path/here/../../
works the same as
realpath -s /path/here/../../
in that the path doesn't need to exist to be normalized.
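For example, with GNU readlink none of the path components have to exist:
readlink -m /does/not/exist/../file
# /does/not/file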
I don't know if there is a direct bash command to do this, but I usually do
normalDir="`cd "${dirToNormalize}";pwd`"
echo "${normalDir}"
and it works well.
Try realpath. Below is the source in its entirety, hereby donated to the public domain.
// realpath.c: display the absolute path to a file or directory.
// Adam Liss, August, 2007
// This program is provided "as-is" to the public domain, without express or
// implied warranty, for any non-profit use, provided this notice is maintained.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgen.h>
#include <limits.h>

static char *s_pMyName;
void usage(void);

int main(int argc, char *argv[])
{
    char sPath[PATH_MAX];

    s_pMyName = strdup(basename(argv[0]));

    if (argc < 2)
        usage();

    printf("%s\n", realpath(argv[1], sPath));
    return 0;
}

void usage(void)
{
    fprintf(stderr, "usage: %s PATH\n", s_pMyName);
    exit(1);
}
A portable and reliable solution is to use python, which is preinstalled pretty much everywhere (including Darwin). You have two options:
abspath returns an absolute path but does not resolve symlinks:
python -c "import os,sys; print(os.path.abspath(sys.argv[1]))" path/to/file
realpath returns an absolute path and in doing so resolves symlinks, generating a canonical path:
python -c "import os,sys; print(os.path.realpath(sys.argv[1]))" path/to/file
In each case, path/to/file can be either a relative or absolute path.
Use the readlink utility from the coreutils package.
MY_PATH=$(readlink -f "$0")
Old question, but there is a much simpler way if you are dealing with full path names at the shell level:
abspath="$( cd "$path" && pwd )"
As the cd happens in a subshell it does not impact the main script.
Two variations, supposing your shell built-in commands accept -L and -P, are:
abspath="$( cd -P "$path" && pwd -P )" #physical path with resolved symlinks
abspath="$( cd -L "$path" && pwd -L )" #logical path preserving symlinks
Personally, I rarely need this latter approach unless I'm fascinated with symbolic links for some reason.
FYI: a variation on obtaining the starting directory of a script, which works even if the script changes its current directory later on.
name0="$(basename "$0")"; #base name of script
dir0="$( cd "$( dirname "$0" )" && pwd )"; #absolute starting dir
The use of cd ensures you always have the absolute directory, even if the script is run with a command such as ./script.sh which, without the cd/pwd, would often give just '.', and that is useless if the script does a cd later on.
readlink (from the coreutils package) is the usual tool for obtaining an absolute path. It also has the advantage of returning an empty string if a path doesn't exist (given the flags to do so).
To get the absolute path to a directory that may or may not exist, but whose parents do exist, use:
abspath=$(readlink -f $path)
To get the absolute path to a directory that must exist along with all parents:
abspath=$(readlink -e $path)
To canonicalise the given path and follow symlinks if they happen to exist, but otherwise ignore missing directories and just return the path anyway, it's:
abspath=$(readlink -m $path)
The only downside is that readlink will follow links. If you do not want to follow links, you can use this alternative convention:
abspath=$(cd ${path%/*} && echo $PWD/${path##*/})
That will chdir to the directory part of $path and print the current directory along with the file part of $path. If it fails to chdir, you get an empty string and an error on stderr.
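A quick example of that convention (the paths are made up; bar/ must exist for the cd to succeed):
path=foo/../bar/baz.txt
abspath=$(cd ${path%/*} && echo $PWD/${path##*/})
printf '%s\n' "$abspath"    # e.g. /home/user/bar/baz.txt
# note: if $path contains no slash at all, ${path%/*} is unchanged and the cd will fail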
As Adam Liss noted, realpath is not bundled with every distribution. Which is a shame, because it is the best solution. The provided source code is great, and I will probably start using it now. Here is what I have been using until now, which I share here just for completeness:
get_abs_path() {
    local PARENT_DIR=$(dirname "$1")
    cd "$PARENT_DIR"
    local ABS_PATH="$(pwd)"/"$(basename "$1")"
    cd - >/dev/null
    echo "$ABS_PATH"
}
If you want it to resolve symlinks, just replace pwd with pwd -P.
My recent solution was:
pushd foo/bar/..
dir=`pwd`
popd
Based on the answer of Tim Whitcomb.
Not exactly an answer but perhaps a follow-up question (original question was not explicit):
readlink is fine if you actually want to follow symlinks. But there is also a use case for merely normalizing ./ and ../ and // sequences, which can be done purely syntactically, without canonicalizing symlinks. readlink is no good for this, and neither is realpath.
for f in $paths; do (cd $f; pwd); done
works for existing paths, but breaks for others.
A sed script would seem to be a good bet, except that you cannot iteratively replace sequences (/foo/bar/baz/../.. -> /foo/bar/.. -> /foo) without using something like Perl, which is not safe to assume on all systems, or using some ugly loop to compare the output of sed to its input.
FWIW, a one-liner using Java (JDK 6+):
jrunscript -e 'for (var i = 0; i < arguments.length; i++) {println(new java.io.File(new java.io.File(arguments[i]).toURI().normalize()))}' $paths
I'm late to the party, but this is the solution I've crafted after reading a bunch of threads like this:
resolve_dir() {
(builtin cd `dirname "${1/#~/$HOME}"`'/'`basename "${1/#~/$HOME}"` 2>/dev/null; if [ $? -eq 0 ]; then pwd; fi)
}
This will resolve the absolute path of $1, play nice with ~, keep symlinks in the path where they are, and it won't mess with your directory stack. It returns the full path or nothing if it doesn't exist. It expects $1 to be a directory and will probably fail if it's not, but that's an easy check to do yourself.
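For instance, a hypothetical wrapper that adds that check up front:
resolve_dir_checked() {
    [ -d "${1/#~/$HOME}" ] || { echo "not a directory: $1" >&2; return 1; }
    resolve_dir "$1"
}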
A talkative, and a bit late, answer. I needed to write one since I'm stuck on older RHEL 4/5.
It handles absolute and relative links, and simplifies //, /./ and somedir/../ entries.
test -x /usr/bin/readlink || readlink () {
    echo $(/bin/ls -l $1 | /bin/cut -d'>' -f 2)
}

test -x /usr/bin/realpath || realpath () {
    local PATH=/bin:/usr/bin
    local inputpath=$1
    local changemade=1
    while [ $changemade -ne 0 ]
    do
        changemade=0
        local realpath=""
        local token=
        for token in ${inputpath//\// }
        do
            case $token in
            ""|".")   # noop
                ;;
            "..")     # up one directory
                changemade=1
                realpath=$(dirname $realpath)
                ;;
            *)
                if [ -h $realpath/$token ]
                then
                    changemade=1
                    target=`readlink $realpath/$token`
                    if [ "${target:0:1}" = '/' ]
                    then
                        realpath=$target
                    else
                        realpath="$realpath/$target"
                    fi
                else
                    realpath="$realpath/$token"
                fi
                ;;
            esac
        done
        inputpath=$realpath
    done
    echo $realpath
}
mkdir -p /tmp/bar
(cd /tmp ; ln -s /tmp/bar foo; ln -s ../.././usr /tmp/bar/link2usr)
echo `realpath /tmp/foo`
The problem with realpath is that it is not available on BSD (or OSX for that matter). Here is a simple recipe extracted from a rather old (2009) article from Linux Journal, that is quite portable:
function normpath() {
    # Remove all /./ sequences.
    local path=${1//\/.\//\/}
    # Remove dir/.. sequences.
    while [[ $path =~ ([^/][^/]*/\.\./) ]]; do
        path=${path/${BASH_REMATCH[0]}/}
    done
    echo $path
}
Notice this variant also does not require the path to exist.
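For example:
normpath /foo/bar/../baz/./qux
# /foo/baz/qux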
Try our new Bash library product realpath-lib that we have placed on GitHub for free and unencumbered use. It's thoroughly documented and makes a great learning tool.
It resolves local, relative and absolute paths and doesn't have any dependencies except Bash 4+; so it should work just about anywhere. It's free, clean, simple and instructive.
You can do:
get_realpath <absolute|relative|symlink|local file path>
This function is the core of the library:
function get_realpath() {
    if [[ -f "$1" ]]
    then
        # file *must* exist
        if cd "$(echo "${1%/*}")" &>/dev/null
        then
            # file *may* not be local
            # exception is ./file.ext
            # try 'cd .; cd -;' *works!*
            local tmppwd="$PWD"
            cd - &>/dev/null
        else
            # file *must* be local
            local tmppwd="$PWD"
        fi
    else
        # file *cannot* exist
        return 1 # failure
    fi
    # reassemble realpath
    echo "$tmppwd"/"${1##*/}"
    return 0 # success
}
It also contains the functions get_dirname, get_filename, get_stemname and validate_path. Try it across platforms, and help to improve it.
Based on @Andre's answer, I might have a slightly better version, in case someone is after a loop-free, completely string-manipulation based solution. It is also useful for those who don't want to dereference any symlinks, which is the downside of using realpath or readlink -f.
It works on bash versions 3.2.25 and higher.
shopt -s extglob
normalise_path() {
    local path="$1"
    # get rid of /../ example: /one/../two to /two
    path="${path//\/*([!\/])\/\.\./}"
    # get rid of /./ and //* example: /one/.///two to /one/two
    path="${path//#(\/\.\/|\/+(\/))//}"
    # remove the last '/.'
    echo "${path%%/.}"
}
$ normalise_path /home/codemedic/../codemedic////.config
/home/codemedic/.config
I made a builtin-only function to handle this with a focus on highest possible performance (for fun). It does not resolve symlinks, so it is basically the same as realpath -sm.
## A bash-only mimic of `realpath -sm`.
## Give it path[s] as argument[s] and it will convert them to clean absolute paths
abspath () {
    ${*+false} && { >&2 echo $FUNCNAME: missing operand; return 1; };
    local c s p IFS='/';  ## path chunk, absolute path, input path, IFS for splitting paths into chunks
    local -i r=0;         ## return value
    for p in "$@"; do
        case "$p" in  ## Check for leading slashes, identify relative/absolute path
            '') ((r|=1)); continue;;
            //[!/]*) >&2 echo "paths =~ ^//[^/]* are impl-defined; not my problem"; ((r|=2)); continue;;
            /*) ;;
            *) p="$PWD/$p";;  ## Prepend the current directory to form an absolute path
        esac
        s='';
        for c in $p; do  ## Let IFS split the path at '/'s
            case $c in  ### NOTE: IFS is '/'; so no quotes needed here
                ''|.) ;;           ## Skip duplicate '/'s and '/./'s
                ..) s="${s%/*}";;  ## Trim the previous addition to the absolute path string
                *) s+=/$c;;        ### NOTE: No quotes here intentionally. They make no difference, it seems
            esac;
        done;
        echo "${s:-/}";  ## If xpg_echo is set, use `echo -E` or `printf $'%s\n'` instead
    done
    return $r;
}
Note: This function does not handle paths starting with //, as exactly two leading slashes at the start of a path are implementation-defined behavior. However, it handles /, ///, and so on just fine.
This function seems to handle all edge cases properly, but there might still be some out there that I haven't dealt with.
Performance Note: when called with thousands of arguments, abspath runs about 10x slower than realpath -sm; when called with a single argument, abspath runs >110x faster than realpath -sm on my machine, mostly due to not needing to execute a new program every time.
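A couple of usage examples (assuming the current directory is /home/user):
abspath /a/b/../c ./d
# /a/c
# /home/user/d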
If you just want to normalize a path, whether it exists or not, without touching the file system, without resolving any links, and without external utilities, here is a pure Bash function translated from Python's posixpath.normpath.
#!/usr/bin/env bash
# Normalize path, eliminating double slashes, etc.
# Usage: new_path="$(normpath "${old_path}")"
# Translated from Python's posixpath.normpath:
# https://github.com/python/cpython/blob/master/Lib/posixpath.py#L337
normpath() {
    local IFS=/ initial_slashes='' comp comps=()
    if [[ $1 == /* ]]; then
        initial_slashes='/'
        [[ $1 == //* && $1 != ///* ]] && initial_slashes='//'
    fi
    for comp in $1; do
        [[ -z ${comp} || ${comp} == '.' ]] && continue
        if [[ ${comp} != '..' || (-z ${initial_slashes} && ${#comps[@]} -eq 0) || (\
            ${#comps[@]} -gt 0 && ${comps[-1]} == '..') ]]; then
            comps+=("${comp}")
        elif ((${#comps[@]})); then
            unset 'comps[-1]'
        fi
    done
    comp="${initial_slashes}${comps[*]}"
    printf '%s\n' "${comp:-.}"
}
Examples:
new_path="$(normpath '/foo/bar/..')"
echo "${new_path}"
# /foo
normpath "relative/path/with trailing slashs////"
# relative/path/with trailing slashs
normpath "////a/../lot/././/mess////./here/./../"
# /lot/mess
normpath ""
# .
# (empty path resolved to dot)
Personally, I cannot understand why the shell, a language so often used for manipulating files, doesn't offer basic functions to deal with paths. In Python, we have nice libraries like os.path or pathlib, which offer a whole bunch of tools for extracting the filename, extension, basename, or path segments, splitting or joining paths, getting absolute or normalized paths, and determining relations between paths, all without much effort. They take care of edge cases and they're reliable. In the shell, to do any of this, we either call external executables or have to reinvent wheels with these extremely rudimentary and arcane syntaxes...
I needed a solution that would do all three:
1. Work on a stock Mac (realpath and readlink -f are add-ons)
2. Resolve symlinks
3. Have error handling
None of the answers had both #1 and #2. I added #3 to save others any further yak-shaving.
#!/bin/bash
P="${1?Specify a file path}"
[ -e "$P" ] || { echo "File does not exist: $P"; exit 1; }
while [ -h "$P" ] ; do
ls="$(ls -ld "$P")"
link="$(expr "$ls" : '.*-> \(.*\)$')"
expr "$link" : '/.*' > /dev/null &&
P="$link" ||
P="$(dirname "$P")/$link"
done
echo "$(cd "$(dirname "$P")"; pwd)/$(basename "$P")"
Here is a short test case with some twisted spaces in the paths to fully exercise the quoting
mkdir -p "/tmp/test/ first path "
mkdir -p "/tmp/test/ second path "
echo "hello" > "/tmp/test/ first path / red .txt "
ln -s "/tmp/test/ first path / red .txt " "/tmp/test/ second path / green .txt "
cd "/tmp/test/ second path "
fullpath " green .txt "
cat " green .txt "
Based on loveborg's excellent python snippet, I wrote this:
#!/bin/sh
# Version of readlink that follows links to the end; good for Mac OS X
for file in "$@"; do
    while [ -h "$file" ]; do
        l=`readlink $file`
        case "$l" in
            /*) file="$l";;
            *) file=`dirname "$file"`/"$l";;
        esac
    done
    #echo $file
    python -c "import os,sys; print os.path.abspath(sys.argv[1])" "$file"
done
FILEPATH="file.txt"
echo $(realpath $(dirname $FILEPATH))/$(basename $FILEPATH)
This works even if the file doesn't exist. It does require the directory containing the file to exist.
I know this is an ancient question; I'm still offering an alternative. Recently I ran into the same issue and found no existing portable command to do it. So I wrote the following shell script, which includes a function that can do the trick.
#! /bin/sh
function normalize {
    local rc=0
    local ret
    if [ $# -gt 0 ] ; then
        # invalid
        if [ "x`echo $1 | grep -E '^/\.\.'`" != "x" ] ; then
            echo $1
            return -1
        fi
        # convert to absolute path
        if [ "x`echo $1 | grep -E '^\/'`" == "x" ] ; then
            normalize "`pwd`/$1"
            return $?
        fi
        ret=`echo $1 | sed 's;/\.\($\|/\);/;g' | sed 's;/[^/]*[^/.]\+[^/]*/\.\.\($\|/\);/;g'`
    else
        read line
        normalize "$line"
        return $?
    fi
    if [ "x`echo $ret | grep -E '/\.\.?(/|$)'`" != "x" ] ; then
        ret=`normalize "$ret"`
        rc=$?
    fi
    echo "$ret"
    return $rc
}
https://gist.github.com/bestofsong/8830bdf3e5eb9461d27313c3c282868c
Since none of the presented solutions worked for me in the case where a file does not exist, I implemented my own idea.
The solution of André Anjos had the problem that paths beginning with ../../ were resolved wrongly; for example, ../../a/b/ became a/b/.
function normalize_rel_path(){
    local path=$1
    result=""
    IFS='/' read -r -a array <<< "$path"
    i=0
    for (( idx=${#array[@]}-1 ; idx>=0 ; idx-- )) ; do
        c="${array[idx]}"
        if [ -z "$c" ] || [[ "$c" == "." ]]; then
            continue
        fi
        if [[ "$c" == ".." ]]; then
            i=$((i+1))
        elif [ "$i" -gt "0" ]; then
            i=$((i-1))
        else
            if [ -z "$result" ]; then
                result=$c
            else
                result=$c/$result
            fi
        fi
    done
    while [ "$i" -gt "0" ]; do
        i=$((i-1))
        result="../"$result
    done
    unset IFS
    echo $result
}
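For example, with a relative input like the one described above:
normalize_rel_path "../../a/./b/"
# ../../a/b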
I discovered today that you can use the stat command to resolve paths.
So for a directory like "~/Documents":
You can run this:
stat -f %N ~/Documents
To get the full path:
/Users/me/Documents
For symlinks, you can use the %Y format option:
stat -f %Y example_symlink
Which might return a result like:
/usr/local/sbin/example_symlink
The formatting options might be different on other versions of *NIX but these worked for me on OSX.
A simple solution using node.js:
#!/usr/bin/env node
process.stdout.write(require('path').resolve(process.argv[2]));
