Closed 9 years ago.
I was experimenting with something different using find. I just wanted to move all my .c files under my root to my source directory. However, I have no idea how to exclude the directories from which I don't want to pick up files.
The line of code I should improve is the following:
sudo find / -type f -name "*.c" -exec cp {} ~/my/source/directory \;
Since there are already some .c files in my source directory, copying files into the same path they are picked up from raises an error (the copy still works anyway, though). This would happen even if my source directory had started out empty, since the files were copied there by an earlier run.
So, my line should look like the following:
sudo find / -type f -name "*.c" EXCEPT ~/my/source/directory -exec cp {} ~/my/source/directory
Is there any way to do that? I've looked in the man pages as well as in my reference manual (there is something similar there, even if I couldn't reproduce it), but I wasn't able to find anything helpful.
Any help appreciated.
EDIT:
The find man page is clear about it. If interested, you can find the solution to my problem at line 343 (343G in less).
If I got your question correctly, then this might help: Exclude directory from find . command
Check the answer posted by #GetFree at the link above.
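For reference, a minimal sketch of the -prune approach described in the linked answer, applied to the directory from this question (the path is the asker's example and may need adjusting):

# Prune the source directory so find never descends into it, then copy every other .c file.
sudo find / -path ~/my/source/directory -prune -o \
    -type f -name '*.c' -exec cp {} ~/my/source/directory \;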
Closed 3 years ago.
Currently I am using the find command below to locate the cert8.db file:
find / -type f -path '.mozilla/*' -name "cert8.db"
The above command searches for the file everywhere.
I would like to search for the file only under root and the user profiles (unknown, unknown1, etc.).
The most common locations for this file are:
In root: /root/.mozilla/firefox/i636e2gs.default/cert8.db
In profile unknown: /home/unknown/.mozilla/firefox/bz5xry7t.default-beta/cert9.db
In profile unknown1: /home/unknown1/.mozilla/firefox/bz5xry7t.default-beta/cert9.db
Note: I am not sure; this could just be Mozilla's default initialization location.
Thanks,
find accepts zero, one, or more starting paths in which to look for files and apply its expressions. Replace / (the file-system root) with all the locations you want searched:
find /root /home -type f -path '*/.mozilla/*' -name 'cert8.db'
You might even limit the search space further by moving the .mozilla part into the list of starting points if it is a reasonable assumption that the .mozilla directory is always a direct child of the root folder or the home directories (making your shell do the work instead of find):
find /root/.mozilla/firefox /home/*/.mozilla/firefox -type f -name 'cert8.db'
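Since the example paths in the question show cert9.db under the user profiles, a variant that matches either filename may also be useful (a sketch, not part of the original answer):

# Match either cert8.db or cert9.db under the same starting points.
find /root/.mozilla/firefox /home/*/.mozilla/firefox -type f \( -name 'cert8.db' -o -name 'cert9.db' \)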
Closed 5 years ago.
I do not want to display files, only directories.
I found out that I can display all of the directories in my current working directory by entering
ls -d */
But I don't know how to specify another directory.
Also, I do not want to use a relative path; I want to use an absolute path so that I can put it in a script and it will work no matter what location I run it from.
find "$(pwd)" -maxdepth 1 -type d will do that for you.
-type d specifies you want only directories
-maxdepth 1 prevents it from going recursive
"$(pwd)" gives it the absolute path as the reference point, but if you have some other folder, say /batch/operation/home, you can pass that instead and it will still print absolute paths.
If you prefer, you could alias it, or make it a function in your script (sketched below), to keep usage a bit cleaner:
alias lsd='find "$(pwd)" -maxdepth 1 -type d'
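A function form of the same idea, as a sketch (the name lsd and the optional directory argument are just illustrative):

# List only directories, as absolute paths, one level deep inside "$1" (default: the current directory).
lsd() {
    find "$(cd "${1:-.}" && pwd)" -maxdepth 1 -type d
}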
I searched some more forums online and found this solution, which I think is a little more straightforward:
find /home/israel/sales -type d -print
This is my path listing, but anyone could use their own absolute path for whatever folders they have. In my case, this lists all of the folders under /home/israel/sales.
I tried it out and it worked for me :)
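Note that this command recurses into subdirectories; if only the immediate subdirectories are wanted, as in the original question, -maxdepth 1 can be added (a sketch using the same example path):

# Only the directories directly inside /home/israel/sales, printed as absolute paths.
find /home/israel/sales -maxdepth 1 -type d -print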
Closed 7 years ago.
I have a directory called pics_temp that is so big I can't even ls it; it contains millions of files. I would like to delete it. I can't use rm or rmdir, or xargs rm -rf, because it would take far too long and the command terminates on its own, that is how large the directory is. Is there any way I can delete that directory?
Can I write a shell script and put it in a cron job, or is there another way to delete the contents of that directory?
Thanks for your help.
Try find.
find pics_temp -mindepth 1 -print -delete
find invoked like this will not try to buffer large numbers of filenames, and it is not restricted by the maximum length of an argument list or slowed down by forking a separate process millions of times. -mindepth 1 will cause it to delete everything inside pics_temp but not the directory itself.
It also doesn't rely on printf %s\\n * or ls or other such commands, which all share the drawback of requiring all files to be known before they start generating output.
The command given above has -print, which will print files before it deletes them. This can be useful to tell you that something is actually happening, but it will slow down the operation. When you are confident that it is working, you might want to remove the -print so that the command runs faster.
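Once the contents are gone, the now-empty directory itself can be removed in a separate step (a small follow-up sketch, not part of the original answer):

# pics_temp should be empty at this point; rmdir refuses to remove a non-empty directory.
rmdir pics_temp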
Closed 7 years ago.
I am using Ubuntu now but am used to Windows, where I can natively search for specific files inside a folder, select all of the found entries, and copy them directly in order to paste them into another folder.
I have found that this is not possible with Ubuntu out of the box. I dug around on the internet a bit and found several search tools like catfish, gnome-search-tool, or recoll... but all they do is search; there is no selecting or copying of the found files.
Therefore, I'd like to ask you whether you have any idea how to get this functionality.
Why am I asking here? Because the Ubuntu forum is either dead or nobody there has an idea.
Suppose you want to select txt files in sourcedir and move them to targetdir.
cd sourcedir
find . -maxdepth 1 -name "*.txt" -exec mv "{}" targetdir ";"
The exact syntax for your flavor of Linux may vary slightly; in particular, on non-GNU versions of find the -maxdepth option may be spelled differently, and the quoting of "{}" and ";" at the end might differ.
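Since the question asks about copying rather than moving, the same pattern should work with cp; a sketch using the same sourcedir/targetdir placeholders:

# Copy instead of move; sourcedir and targetdir are placeholders for real paths.
find sourcedir -maxdepth 1 -name "*.txt" -exec cp "{}" targetdir ";"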
Closed 9 years ago.
On my Linux system, for some reason, the time is sometimes wrong, and there is a folder where log files are created by an application.
I want to delete a file when its timestamp is later than the system time.
For example, the system time is 20131212 but the file's time is 20140202.
I have already tried to use find . -mtime, stat -c, etc., but I could not get a working shell command.
touch /tmp/currtime
find . -type f -newer /tmp/currtime -exec rm {} \;
touch sets the modification time of /tmp/currtime to now. Then find looks for any files whose modification time is later than that and deletes them.
You can use Barmar's method: touch a file /tmp/currtime; any file newer than /tmp/currtime will be in the future.
Alternatively, you can do it without the temporary file:
find . -newermt "1 second" -exec rm {} \;
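Before deleting anything, it may be worth doing a dry run that only lists the future-dated files (a sketch combining the ideas above; -type f restricts it to regular files):

# Dry run: print regular files whose modification time is in the future, without deleting them.
find . -type f -newermt "1 second" -print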