find -perm finds a file (or something else) that doesn't exist - Linux

Hello, I am new to the world of Linux and I have a simple question:
I created a directory named testfiles with 10 text files inside it, with various permissions.
I tried using find -perm -u=wr | wc -l to count all files with rw permissions for the user, but it counts one more file than it should. I then ran find -perm -u=wr to see the list and noticed an extra line at the beginning that I am not sure about. Can someone help? How do I get rid of it?

The extra line is '.', a reference to the current directory ('..' similarly refers to the parent directory; both exist in every Linux directory). find prints its starting point '.' because the directory itself matches your permission test too. Since it is a directory, you can exclude it with the '-type' option of the find command, where 'f' selects regular files only.
find -perm -u=wr -type f | wc -l
Note that find, as you have it, will recurse into any subdirectories of the directory where you run the command. You can limit it to the current directory only with the -maxdepth option. Run 'man find' in the terminal to see all the other options find offers.
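Putting the two together, something like this should count only regular files in the current directory (a sketch; keep -maxdepth before the other tests so GNU find doesn't warn):
find . -maxdepth 1 -type f -perm -u=rw | wc -l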

Related

How can I count the number of files with a specific octal permission code without them showing in the shell

I tried using the tree command but I didn't know how. (I wanted to use tree because I don't want the files to show up, just the number.)
Let's say c is the permission code.
For example, I want to know how many files there are with the permission 751.
Use find with the -perm flag, which only matches files with the specified permission bits.
For example, if you have the octal in $c, then run
find . -perm $c
The usual find options apply—if you only want to find files at the current level without recursing into directories, run
find . -maxdepth 1 -perm $c
To find the number of matching files, make find print a dot for every match and use wc to count the dots. (wc -l will not work with more exotic filenames containing newlines, as @BenjaminW. has pointed out in the comments. The idea of counting with wc -c comes from this answer.)
find . -maxdepth 1 -perm $c -printf '.' | wc -c
This will show the number of files without showing the files themselves.
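For instance, with the permission 751 from the question (note that -printf is a GNU find extension; on BSD/macOS find, -exec printf '.' \; is a rough equivalent):
find . -maxdepth 1 -perm 751 -printf '.' | wc -c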
If you're using zsh as your shell, you can do it natively without any external programs:
setopt EXTENDED_GLOB # Just in case it's not already set
c=0751
files=( **/*(#qf$c) )
echo "${#files[@]} files found"
will count all files in the current working directory and subdirectories with those permissions (and give you all the names in an array in case you want to do something with them later). Read more about zsh glob qualifiers in the documentation.
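For example, a hypothetical session (the directory and file names are made up for illustration):
mkdir -p testdir/sub
touch testdir/a.sh testdir/sub/b.sh
chmod 751 testdir/a.sh testdir/sub/b.sh
cd testdir
setopt EXTENDED_GLOB
c=0751
files=( **/*(#qf$c) )
echo "${#files[@]} files found"    # should print: 2 files found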

How to search for files ending/starting/containing a certain letter in terminal?

I have been looking all over the internet for help with this. I want to list all files that start/end/contain a certain letter, but the results I found online do not seem to work for me. I need to use the ls command for this (it's an assignment).
I tried this code from another question:
ls abc* # list all files starting with abc---
ls *abc* # list all files containing --abc--
ls *abc # list all files ending with --abc
but whenever I try any of those it comes back with "ls: cannot access '*abc': No such file or directory"
Use find for finding files. (The ls error means nothing matched: when a glob pattern such as *abc matches no file, the shell passes it to ls literally, and ls reports that '*abc' does not exist.)
find /path/to/folder -maxdepth 1 -type f -name 'abc*'
This will give you all regular filenames within /path/to/folder which start with abc.
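The other two cases from the question work the same way; just move the wildcard:
find /path/to/folder -maxdepth 1 -type f -name '*abc'     # names ending with abc
find /path/to/folder -maxdepth 1 -type f -name '*abc*'    # names containing abc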

How to list the full paths to all files of a certain file format in multiple directories?

I am looking for a Bash solution to my dilemma here. I have a directory project, and within that I have project/Batch1, project/Batch2, project/Batch3 and so on.
Within each Batch folder I have a mix of files, but what I am interested in are, for example, the files in .txt format. I am able to recursively list the names of every file in my project directory using the ls -LR command. However, how do I get the full paths, including the file names, and only for .txt files?
Thanks!
Use the find command. Open man 1 find and check the options you need for the task above.
Here is a simple example. Open a terminal and run
find "$PWD" -type f -name "*.txt"
Or
find "$PWD" -name "*.txt"
Or you can do the same from a Bash script, as sketched below.
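A minimal script version might look like this (the script name and default path are placeholders):
#!/bin/bash
# Print full paths of all .txt files under the given directory (default: current directory)
dir="${1:-.}"
find "$dir" -type f -name "*.txt"
Save it as, say, findtxt.sh, make it executable with chmod +x findtxt.sh, and run ./findtxt.sh /path/to/project.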

Parsing ls -R output into a variable in Unix

I am executing ls -R /files/
I got the following output:
./:
nvision

./nvision:
layout

./nvision/layout:
abcd.txt
I am looking to get the paths from the listing, like:
/nvision
/nvision/layout/
/nvision/layout/abcd.txt
and I should be able to copy the required path into a variable.
PS: I am not searching for nvision; I am trying to get the list of folders and files under the files folder.
Can anyone help me with that?
Have you tried using find? (See the find(1) manual.)
It would be as easy as find . to get the list of files and folders inside the current directory. Change the . to any path to obtain the list of files and directories inside that path:
.
./nvision
./nvision/layout
./nvision/layout/abcd.txt
To save it to a variable
var=`find .`
And to add the initial slash to every line (if required)
var=`find . -exec echo /{} \;`
Here var has no special meaning, it's just the variable name.
To later use the variable you can use $var or ${var}. For example, to print it or save it to file:
# Print the variable content
echo $var
# Save the content of var to a file
echo $var > /tmp/file.txt
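Note that unquoted, echo $var collapses the newlines between paths into spaces; quote the variable if you want one path per line:
# Keeps one path per line
echo "$var"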
You should really use find for this kind of thing. Simply use find directory. If you require more specific output formatting, you can use find's -printf option. find is a really powerful tool that also allows all kinds of filtering. Make sure you check the documentation for more information: GNU FindUtils.
To store the results in a variable use one of the following statements:
result=`find ...`
or
result=$(find ...)
You can also use find to directly execute a command for each match using find's -exec option. Again, make sure to check out the documentation. It's really comprehensive.
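As a sketch of the -printf approach (GNU find only): %P prints each path with the starting point stripped, and -mindepth 1 skips the starting point itself, which produces exactly the leading-slash listing asked for:
find . -mindepth 1 -printf '/%P\n'
For the example tree this prints /nvision, /nvision/layout and /nvision/layout/abcd.txt.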
Update (Mac / UNIX users – Linux users are not affected)
BSD find requires a path. Use
find .
instead of just
find
if you require a listing of all files in your working directory.
Well, the answer is all over this page: you should be using find, which lists all files found. In the commands below, . is the current folder; replace . with the path you wish to search otherwise.
find . -type d -print
lists directories only,
find . -type f -print
lists all files only, and if you are looking for both:
find . -print
And if you wish to limit the recursion level, try
find . -maxdepth 1 -print
and here is a script:
#!/bin/bash
for names in $(find . -type f -print); do
    echo "$names"
done
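Note that the $(find ...) loop splits on whitespace, so it breaks on filenames with spaces. A sketch of a more robust variant (using find's -print0 together with bash's read -d ''):
#!/bin/bash
# Iterate over all regular files one at a time, safe against odd characters in names
find . -type f -print0 | while IFS= read -r -d '' names; do
    echo "$names"
done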

sed not working as expected, but only for directory depth greater than 1

I am trying to find all instances of a string in all files on my system, up to a specified directory depth, and then replace them with another string by piping find into sed.
This works when I use a base path like cd /home/../.. or any other directory that isn't "/". It also only works with a directory depth of 1 (so /test.txt is changed, but /home/test.txt isn't). If I change nothing else and use a depth of 2 or 3, neither /test.txt nor /home/test.txt is changed: with the former there are no warnings, and with the latter I get the results below (and no strings are replaced in either file).
Worryingly, it did work once out of the blue, but I have no idea how, and I can't recreate the result. I should say I know the risks of using these commands as root from the base directory; the specific use of the programs below is intentional, so I am not looking for an alternative, just a clue as to why this isn't working and perhaps a suggestion on how to fix it.
cd /;find . -maxdepth 3 -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'
sed: couldn't open temporary file ./sys/kernel/sedoPGqGB: No such file or directory
sed: couldn't open temporary file ./proc/878/sedtqayiq: No such file or directory
As you can see, there are warnings, but nevertheless I would expect it to work; the commands appear fine. Anything I am missing, folks?
This should be:
find / -maxdepth 3 -type f -print -exec sed -i -e 's/teststring123/itworked/g' {} \;
Although changing all files below / strikes me as a very bad idea indeed (I hope you're not running as root!).
The "couldn't open temporary file ./[...]" errors appear because sed -i creates its temporary file alongside the file it is editing, and it cannot do that under ./sys and ./proc, which are virtual filesystems; running as your user, you don't have permission to create files there anyway.
My version runs from your current working directory (I assume your ${HOME}), where you'll be able to create the temporary file, but you're still unlikely to be able to replace those files vital to the continued running of your operating system.
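If you do want to keep the xargs pipeline, one way to avoid those errors is to prune the virtual filesystems so sed never touches them (a sketch, assuming GNU find and GNU sed; try it on dummy data first):
find / -maxdepth 3 \( -path /proc -o -path /sys \) -prune -o -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'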
