How can I list the path of the output of this script? - linux

How can I list the path of the output of this script?
This is my command:
(ls -d */ ); echo -n $i; ls -R $i | grep "wp-config.php" ;
This is my current output:
/wp-config.php

It seems you want to find the path to a file called "wp-config.php".
Does the following help?
find $PWD -name 'wp-config.php'

Your script is kind of confusing: why does ls -d */ not show any output? What's the value of $i? Your actual problem seems to be that ls -R lists the contents of all subdirectories but doesn't give you full paths for their contents.
Well, find is the best tool for that, but you can simulate it in this case via a script like this:
#!/bin/bash
searchFor=wp-config.php
startDir=${1:-.}
lsSubDir() {
    local actDir="$1"
    for entry in $(ls "$actDir"); do
        if [ -d "$actDir/$entry" ]; then
            # descend into subdirectories
            lsSubDir "$actDir/$entry"
        else
            # print the full path when the name matches
            [ "$entry" = "$searchFor" ] && echo "$actDir/$entry"
        fi
    done
}
lsSubDir "$startDir"
Save it in a file like findSimulator, make it executable, and call it with the directory to start searching from as a parameter.
Be warned: this script is not very efficient and may fail on large directory trees because of the recursion. I would strongly recommend the solution using find.
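For comparison, the find equivalent of the whole script above is essentially a one-liner using the same variables:
find "$startDir" -name "$searchFor"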

Related

extracting files that don't have a dir with the same name

Sorry for the odd title; I didn't know how to word it the right way.
I'm trying to write a script to split my wiki files into those that have directories with the same name and those that don't. I'll elaborate further.
Here is my file system:
What I need to do is print a list of the files that have a directory with the same name, and another list of those that don't.
So my ultimate goal is getting:
with dirs:
Docs
Eng
Python
RHEL
To_do_list
articals
without dirs:
orphan.txt
orphan2.txt
orphan3.txt
I managed to get the files that have dirs. Here is my code:
getname () {
    file=$( basename "$1" )
    file2=${file%%.*}
    echo $file2
}
for d in Mywiki/* ; do
    if [[ -f $d ]]; then
        file=$(getname $d)
        for x in Mywiki/* ; do
            dir=$(getname $x)
            if [[ -d $x ]] && [ $dir == $file ]; then
                echo $dir
            fi
        done
    fi
done
But I'm stuck on getting the ones without. If this is the wrong way of doing this, please point out the right way.
Any help appreciated. Thanks.
Here's a quick attempt.
for file in Mywiki/*.txt; do
    nodir=${file##*/}
    test -d "${file%.txt}" && printf "%s\n" "$nodir" || printf "%s\n" "$nodir" >&3
done >with 3>without
This shamelessly uses standard output for the non-orphans. A more robust version might open a separate file descriptor for those as well.
Also notice how everything needs to be quoted unless you specifically require the shell to do whitespace tokenization and wildcard expansion on the value of a token. Here's the scoop on that.
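A tiny illustration of the quoting point, with a hypothetical file name containing a space:
f="My Notes.txt"
printf '<%s>\n' $f     # unquoted: split into two words, <My> and <Notes.txt>
printf '<%s>\n' "$f"   # quoted: a single word, <My Notes.txt>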
That may not be the most efficient way of doing it, but you could take all files, remove the extension, and then check whether there is a directory with that name.
Like this (untested code):
for file in Mywiki/* ; do
    if [ -f "$file" ]; then
        dirname=$(getname "$file")
        if [ ! -d "Mywiki/$dirname" ]; then
            echo "$file"
        fi
    fi
done
To list all the files in the current dir:
list1=`ls -p | grep -v /`
To list all the files in the current dir without extensions:
list2=`ls -p | grep -v / | sed 's/\.[a-z]*//g'`
To list all the directories in the current dir:
list3=`ls -d */ | sed -e "s/\///g"`
Now you can get the desired listing by taking the intersection of list2 and list3; see "Intersection of two lists in Bash".
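A minimal sketch of that intersection step, assuming the names contain no embedded newlines:
comm -12 <(echo "$list2" | sort) <(echo "$list3" | sort)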

List files greater than 100K in bash

I want to list files recursively in the HOME directory. I'm trying to write my own script, so I should not use the commands find or ls. My script is:
#!/bin/bash
minSize=102400;
printFiles() {
    for x in "$1/"*; do
        if [ -d "$x" ]; then
            printFiles "$x";
        else
            size=$(wc -c "$x");
            if [[ "$size" -gt "$minSize" ]]; then
                echo "$size";
            fi
        fi
    done
}
printFiles "/~";
So, the problem here is that when I run this script, the terminal throws "Line 11: division by 0" and "/home/gandalf/Videos/*: No such file or directory". I have not divided by anything, so why am I getting the first error? And what about the second one?
Also, I can't just use find or ls because I have to display the files one by one, asking the user whether they want to see the next file or not. Is that possible with find or ls, or can it only be done by writing my own function?
Thanks.
size=$(wc -c "$x");
That's the line that is failing. When you run that wc command manually you should be able to see why:
$ wc -c /tmp/out
5 /tmp/out
The output contains not only the file size but also the file name. So you can't use $size with the -gt comparator on the next line. One way to fix that is to change the wc line to use cut (or awk, or sed, etc) to keep just the file size.
size=$(wc -c "$x" | cut -f1 -d " ")
A simpler alternative suggested by @mklement0:
size=$(wc -c < "$x")
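With either fix applied, the size check in the question's loop would read, for example:
size=$(wc -c < "$x")
if [[ "$size" -gt "$minSize" ]]; then
    echo "$x is $size bytes"
fi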

How to pipe files one by one from list into script?

I have a list of files that I need to pipe into a shell script. I can list the files within a directory by using the following:
ls ~/data/2121/*SOMEFILE*
resulting in:
2121.SOMEFILEaa
2121.SOMEFILEab
2121.SOMEFILEac
and so on...
I have another script that performs some processing on a single file (2121.SOMEFILEaa) which I run by using the following command:
bash runscript ../data/2121/2121.SOMEFILEaa
However, I need to make this more efficient by feeding the individual files from the list generated via ls into the script. How can I pass the results of the ls ~/data/2121/*SOMEFILE* command, file by file, into the runscript script?
Another option
ls ~/data/2121/*SOMEFILE* | xargs -L1 bash runscript
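Note that -L1 runs one invocation of runscript per input line. If the file names might contain spaces, a whitespace-safe sketch that avoids parsing ls (assuming GNU findutils) would be:
find ~/data/2121 -maxdepth 1 -name '*SOMEFILE*' -print0 | xargs -0 -n1 bash runscript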
I think you are looking for this:
for file in ~/data/2121/*SOMEFILE*; do
    bash runscript "$file"
done
In this way, you're calling bash runscript for each file.
$ cat pipe.sh
#!/bin/bash
## Store data from the pipe in the variable $PIPE ------#
_read_pipe(){
    while read -t 10 pipe; do
        if [ -n "$pipe" ]; then
            PIPE="$PIPE $pipe"
        fi
    done
}
## your code -----------------------------------#
_read_pipe
for kung_foo in $PIPE; do
    echo $kung_foo
done
$ ls 2121.SOMEFILE* | ./pipe.sh
2121.SOMEFILEaa
2121.SOMEFILEab
2121.SOMEFILEac
and so on...
The -t 10 option gives read a ten-second timeout.
I hope this helps,
cheers Karim

How do I search for a file based on what is output by a command running on that file

I am working on a project for one of my professors, and he asked me to sort a couple hundred .fits images based on their headers (specifically, which star they are images of). I think that grep would be the best way to do this; however, I can't seem to figure out how to use grep on the header.
I am entering:
ls | imhead *.fits | grep -E -r "PG\ 1104+243" *
to just list them out for now, once they are listed I know how to copy them into a directory.
I am new to using grep, so I am unsure where my error lies. Any help would be greatly appreciated! Thanks!
Assuming that imhead extracts the headers of the .fits files as text, you can use a simple shell script to do it:
script.sh
#!/bin/bash
grep "$1" "$2" > /dev/null 2>&1 && echo "$2"
Note that the + is a special character if you use extended regular expressions, i.e. if you pass -E as in the question. A simple grep without any options should do the trick here.
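A quick illustration, using the pattern from the question:
$ echo 'PG 1104+243' | grep 'PG 1104+243'       # plain grep: '+' is literal, matches
PG 1104+243
$ echo 'PG 1104+243' | grep -E 'PG 1104\+243'   # with -E the '+' must be escaped
PG 1104+243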
Use find to exec the script on every *.fits file in the current folder:
find -maxdepth 1 -name '*.fits' -exec ./script.sh 'PG 1104+243' {} \;
If you are going to copy/move/alter or do something with the files you find, you might be better off, in terms of complexity and ease of quoting, using a loop like this:
#!/bin/bash
find . -name \*.fits -print0 | while read -d '' -r file; do
    echo "Checking file: $file"
    imhead "$file" | grep -q 'PG 1104+243'
    if [ $? -eq 0 ]; then
        echo "Object matches: $file"
    fi
done

How to get full path of a file?

Is there an easy way I can print the full path of file.txt ?
file.txt = /nfs/an/disks/jj/home/dir/file.txt
The command, run from within dir like this:
dir> <command> file.txt
should print
/nfs/an/disks/jj/home/dir/file.txt
Use readlink:
readlink -f file.txt
I suppose you are using Linux.
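With the directory layout from the question, that would look like:
dir> readlink -f file.txt
/nfs/an/disks/jj/home/dir/file.txt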
I found a utility called realpath in coreutils 8.15.
realpath -s file.txt
/data/ail_data/transformed_binaries/coreutils/test_folder_realpath/file.txt
Since the question is about how to get the full/absolute path of a file and not about how to get the target of symlinks, use -s or --no-symlinks which means don't expand symlinks.
As per the comments by @styrofoam-fly and @arch-standton, realpath alone doesn't check for file existence; to solve this, add the -e argument: realpath -e file
The following usually does the trick:
echo "$(cd "$(dirname "$1")" && pwd -P)/$(basename "$1")"
I know there's an easier way than this, but darned if I can find it...
jcomeau@intrepid:~$ python -c 'import os; print(os.path.abspath("cat.wav"))'
/home/jcomeau/cat.wav
jcomeau@intrepid:~$ ls $PWD/cat.wav
/home/jcomeau/cat.wav
On Windows:
Holding Shift and right clicking on a file in Windows Explorer gives you an option called Copy as Path.
This will copy the full path of the file to clipboard.
On Linux:
You can use the command realpath yourfile to get the full path of a file as suggested by others.
find $PWD -type f | grep "filename"
or
find $PWD -type f -name "*filename*"
If you are in the same directory as the file:
ls "`pwd`/file.txt"
Replace file.txt with your target filename.
I know that this is an old question now, but just to add to the information here:
The Linux command which can be used to find the path of a command's executable file, e.g.:
$ which ls
/bin/ls
There are some caveats to this; please see https://www.cyberciti.biz/faq/how-do-i-find-the-path-to-a-command-file/.
You could use the fpn (full path name) script:
% pwd
/Users/adamatan/bins/scripts/fpn
% ls
LICENSE README.md fpn.py
% fpn *
/Users/adamatan/bins/scripts/fpn/LICENSE
/Users/adamatan/bins/scripts/fpn/README.md
/Users/adamatan/bins/scripts/fpn/fpn.py
fpn is not a standard Linux package, but it's a free and open github project and you could set it up in a minute.
Works on Mac, Linux, *nix:
This will give you a quoted csv of all files in the current dir:
ls | xargs -I {} echo "$(pwd -P)/{}" | xargs | sed 's/ /","/g'
The output of this can be easily copied into a python list or any similar data structure.
echo $(cd $(dirname "$1") && pwd -P)/$(basename "$1")
This is an explanation of what is going on in @ZeRemz's answer:
This script gets a relative path as the argument "$1"
Then we get the dirname part of that path (you can pass either a dir or a file to this script): dirname "$1"
Then we cd "$(dirname "$1")" into this relative dir
&& pwd -P and get the absolute path for it. The -P option avoids all symlinks
After that we append the basename to the absolute path: $(basename "$1")
As a final step we echo it
You may use this function. If the file name is given without a relative path, then it is assumed to be present in the current working directory:
abspath() { old=`pwd`; new=$(dirname "$1"); if [ "$new" != "." ]; then cd "$new"; fi; file=`pwd`/$(basename "$1"); cd "$old"; echo "$file"; }
Usage:
$ abspath file.txt
/I/am/in/present/dir/file.txt
Usage with relative path:
$ abspath ../../some/dir/some-file.txt
/I/am/in/some/dir/some-file.txt
With spaces in file name:
$ abspath "../../some/dir/another file.txt"
/I/am/in/some/dir/another file.txt
You can save this in your shell rc file or just paste it into the console:
function absolute_path { echo "$PWD/$1"; }
alias ap="absolute_path"
example:
ap somefile.txt
will output
/home/user/somefile.txt
I was surprised no one mentioned locate.
If you have the locate package installed, you don't even need to be in the directory with the file of interest.
Say I am looking for the full pathname of a setenv.sh script. This is how to find it.
$ locate setenv.sh
/home/davis/progs/devpost_aws_disaster_response/python/setenv.sh
/home/davis/progs/devpost_aws_disaster_response/webapp/setenv.sh
/home/davis/progs/eb_testy/setenv.sh
Note, it finds three scripts in this case, but if I wanted just one I would do this:
$ locate *testy*setenv.sh
/home/davis/progs/eb_testy/setenv.sh
This solution uses commands that exist on Ubuntu 22.04 and generally on most other Linux distributions, unless they are just too hardcore for s'mores.
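One caveat: locate searches a prebuilt index rather than the live filesystem, so a file created since the last index update won't show up until the database is refreshed:
sudo updatedb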
The shortest way to get the full path of a file on Linux or Mac is to use the ls command and the PWD environment variable.
<0.o> touch afile
<0.o> pwd
/adir
<0.o> ls $PWD/afile
/adir/afile
You can do the same thing with a directory variable of your own, say d.
<0.o> touch afile
<0.o> d=/adir
<0.o> ls $d/afile
/adir/afile
Notice that without flags ls <FILE> and echo <FILE> are equivalent (for valid names of files in the current directory), so if you're using echo for that, you can use ls instead if you want.
If the situation is reversed, so that you have the full path and want the filename, just use the basename command.
<0.o> touch afile
<0.o> basename $PWD/afile
afile
In a similar scenario, I'm launching a cshell script from some other location. For setting the correct absolute path of the script so that it runs in the designated directory only, I'm using the following code:
set script_dir = `pwd`/`dirname $0`
$0 stores the exact string how the script was executed.
For example, if the script was launched like this: $> ../../test/test.csh,
then $script_dir will contain /home/abc/sandbox/v1/../../test
For Mac OS X, I replaced the utilities that come with the operating system with a newer version of coreutils. This allows you to access tools like readlink -f (for the absolute path to a file) and realpath (the absolute path to a directory) on your Mac.
The Homebrew version prefixes the command names with a 'g' (for GNU tools), so the equivalents become greadlink -f FILE and grealpath DIRECTORY.
Instructions for how to install the coreutils/GNU tools on Mac OS X through Homebrew can be found in this StackExchange article.
NB: The readlink -f and realpath commands should work out of the box for non-Mac Unix users.
I like many of the answers already given, but I have found this really useful, especially within a script to get the full path of a file, including following symlinks and relative references such as . and ..
dirname `readlink -e relative/path/to/file`
This returns the absolute path of the directory containing the file, resolved from the root onwards.
This can be used in a script so that the script knows which path it is running from, which is useful in a repository clone which could be located anywhere on a machine.
basePath=`dirname \`readlink -e $0\``
I can then use the ${basePath} variable in my scripts to directly reference other scripts.
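For example (other-script.sh being a hypothetical sibling script in the same clone):
"${basePath}/other-script.sh"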
Hope this helps,
Dave
This worked pretty well for me. It doesn't rely on the file system (a pro/con depending on need) so it'll be fast; and, it should be portable to most any *NIX. It does assume the passed string is indeed relative to the PWD and not some other directory.
function abspath () {
    echo "$1" | awk '
        # Anchor "../" references to the PWD for the replacement below
        /^\.\.\// { sub("^", "./") }
        # Replace the symbolic PWD reference (the leading dot) with the absolute PWD
        /^\.\//   { sub(/^\./, ENVIRON["PWD"]) }
        # Print absolute paths
        /^\//     { print }
    '
}
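Used interactively, it behaves roughly like this (notes.txt is a hypothetical file; note that a bare name without a leading ./ or ../ prints nothing with this version):
$ cd /tmp
$ abspath ./notes.txt
/tmp/notes.txt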
This is naive, but I had to make it POSIX compliant. It requires permission to cd into the file's directory.
#!/bin/sh
if [ ${#} = 0 ]; then
    echo "Error: 0 args. need 1" >&2
    exit 1
fi
if [ -d ${1} ]; then
    # Directory
    base=$( cd ${1}; echo ${PWD##*/} )
    dir=$( cd ${1}; echo ${PWD%${base}} )
    if [ ${dir} = / ]; then
        parentPath=${dir}
    else
        parentPath=${dir%/}
    fi
    if [ -z ${base} ] || [ -z ${parentPath} ]; then
        if [ -n ${1} ]; then
            fullPath=$( cd ${1}; echo ${PWD} )
        else
            echo "Error: unsupported scenario 1" >&2
            exit 1
        fi
    fi
elif [ ${1%/*} = ${1} ]; then
    if [ -f ./${1} ]; then
        # File in current directory
        base=$( echo ${1##*/} )
        parentPath=$( echo ${PWD} )
    else
        echo "Error: unsupported scenario 2" >&2
        exit 1
    fi
elif [ -f ${1} ] && [ -d ${1%/*} ]; then
    # File in directory
    base=$( echo ${1##*/} )
    parentPath=$( cd ${1%/*}; echo ${PWD} )
else
    echo "Error: not file or directory" >&2
    exit 1
fi
if [ ${parentPath} = / ]; then
    fullPath=${fullPath:-${parentPath}${base}}
fi
fullPath=${fullPath:-${parentPath}/${base}}
if [ ! -e ${fullPath} ]; then
    echo "Error: does not exist" >&2
    exit 1
fi
echo ${fullPath}
This works with both Linux and Mac OSX:
echo $(pwd)/$(ls file.txt)
find / -samefile file.txt -print
This will find all the links to the file, i.e. every path with the same inode number as file.txt.
Adding the -xdev flag will keep find from crossing device boundaries ("mount points"). (But this will probably cause nothing to be found if find does not start at a directory on the same device as file.txt.)
Do note that find can report multiple paths for a single filesystem object, because an inode can be linked by more than one directory entry, possibly even using different names. For instance:
find /bin -samefile /bin/gunzip -ls
Will output:
12845178 4 -rwxr-xr-x 2 root root 2251 feb 9 2012 /bin/uncompress
12845178 4 -rwxr-xr-x 2 root root 2251 feb 9 2012 /bin/gunzip
Usually:
find `pwd` | grep <filename>
Alternatively, just for the current folder:
find `pwd` -maxdepth 1 | grep <filename>
This will work for both file and folder:
getAbsolutePath(){
    [[ -d $1 ]] && { cd "$1"; echo "$(pwd -P)"; } ||
    { cd "$(dirname "$1")" || exit 1; echo "$(pwd -P)/$(basename "$1")"; }
}
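One thing to note: the cd calls run in the current shell, so calling the function also changes your working directory, and the exit 1 would end the calling shell or script. A sketch of a variant (hypothetical name) that avoids both by doing the work in a subshell:
getAbsolutePathSubshell(){
    (
        # the parentheses start a subshell, so the caller's working directory
        # is untouched and a failed cd only aborts the subshell
        [[ -d $1 ]] && { cd "$1" && pwd -P; } ||
        { cd "$(dirname "$1")" && echo "$(pwd -P)/$(basename "$1")"; }
    )
}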
Another Linux utility that does this job:
fname <file>
For Mac OS, if you just want to get the path of a file in the Finder, Control-click the file and scroll down to "Services" at the bottom. You get many choices, including "copy path" and "copy full path". Clicking on one of these puts the path on the clipboard.
fp () {
    PHYS_DIR=`pwd -P`
    RESULT=$PHYS_DIR/$1
    echo $RESULT | pbcopy
    echo $RESULT
}
Copies the text to your clipboard and displays the text on the terminal window.
:)
(I copied some of the code from another stack overflow answer but cannot find that answer anymore)
In Mac OSX, do the following steps:
cd into the directory of the target file.
Type either of the following terminal commands.
Terminal
ls "`pwd`/file.txt"
echo $(pwd)/file.txt
Replace file.txt with your actual file name.
Press Enter
you@you:~/test$ ls
file
you@you:~/test$ path="`pwd`/`ls`"
you@you:~/test$ echo $path
/home/you/test/file
Beside "readlink -f" , another commonly used command:
$find /the/long/path/but/I/can/use/TAB/to/auto/it/to/ -name myfile
/the/long/path/but/I/can/use/TAB/to/auto/it/to/myfile
$
This also gives the full path and file name on the console.
Off-topic: This method just gives relative links, not absolute. The readlink -f command is the right one.
