Listing files and copying them in Unix/Linux

The purpose is to copy files generated in the last 2 hours. Here is the small script:
a=`find . -mmin -120 -ls`
cp $a /tmp/
echo $a
401 1 drwxr-x--- 2 oracle oinstall 1024 Mar 26 11:00 .
61 5953 -rw-r----- 1 oracle oinstall 6095360 Mar 26 11:00 ./file1
5953 -rw-r----- 1 oracle oinstall 6095360 Mar 26 11:00 ./file2
I get the following error:
cp: invalid option -- 'w'
Try `cp --help' for more information.
How can I fix the script?

The -ls is giving you ls-style output. Try dropping it and you should get just the relative path to each file, which should be more like what you want. Or see Biffen's comment on your question; that seems like the approach I would have taken.

One problem is that -ls will print a lot of things besides the filenames, and they will all be passed to cp, which will be confused. So the first thing to do is to stop using -ls. (In the future you can use set -x to see what gets executed; it should help you debug this type of problem.)
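For example, here is a hypothetical session showing what set -x would reveal about the broken invocation (the exact trace depends on your files):
set -x
a=$(find . -mmin -120 -ls)
cp $a /tmp/   # the trace shows cp receiving inode numbers, permission strings and dates as arguments
set +x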
Another problem is that the output of find can contain spaces and other things (imagine a file named $(rm -r *)) that can't simply be passed as arguments to cp; see the sketch after the list below.
I see three different solutions:
Use a single find command with -exec:
find . -mmin -120 -exec cp {} /tmp/ \;
Use xargs:
find . -mmin -120 -print0 | xargs -0 cp -t /tmp/
(Note the use of -t with cp to account for the swapped argument order.)
Iterate over the output of find:
while IFS='' read -r -d '' file
do
    cp "${file}" /tmp/
done < <( find . -mmin -120 -print0 )
(Caveat: I haven't tested any of the above.)
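To see the word-splitting problem from the second paragraph in isolation, here is a minimal hypothetical demonstration:
touch 'file with spaces'
a=$(find . -type f -mmin -1)
cp $a /tmp/   # fails: cp is invoked with ./file, with and spaces as three separate arguments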

All you have to do is extract only the filenames. So, change the find command to the following:
a=`find . -mmin -120 -type f`
cp $a /tmp/
The above find command captures only regular files that were modified in the last 120 minutes. Or do it with a single find command like below:
find . -mmin -120 -type f -exec cp '{}' /tmp/ \;
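If your cp supports -t (GNU coreutils), you can also batch the copies instead of running one cp per file; a sketch:
find . -mmin -120 -type f -exec cp -t /tmp/ {} +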


Command to check for a very large number of .gz files in a directory [duplicate]

Below are the current day's files. The previous day's files are converted to .gz by the system. I want to find the total count of a specific set of the previous day's .gz files. The commands below give me an error; please suggest a fix.
bash-3.2$ ls -lrth|tail
299K Mar 23 2017 N08170323091903766
333K Mar 23 2017 N08170323091903771
328K Mar 23 2017 N09170323091903776
367K Mar 23 2017 N09170323091903782
347K Mar 23 2017 N04170323092003784
368K Mar 23 2017 N08170323092003783
bash-3.2$ ls -lrth N08170322*|wc -l
bash: /usr/bin/ls: Arg list too long
0
bash-3.2$ zcat N08170322*.gz|wc -l
bash: /usr/bin/zcat: Arg list too long
0
This is happening because you have too many files in the directory.
You can easily get around the first issue:
ls | grep -c N08170322
or, to be even more precise:
ls | grep -c '^N08170322'
This would give you the count of matching files. However, a better way to do this is:
find . -name "N08170322*" -exec ls {} + | wc -l
which will address the ls parsing issue mentioned in @hek2mgl's comment.
If you really want to count the lines of all the zipped files in one shot, you can do this:
find . -name "N08170322*" -exec zcat {} + | wc -l
See also:
Argument list too long error for rm, cp, mv commands
Use this:
find . -name "N08170322*" -exec ls {} \; |wc -l
As explained in the answer above, you are getting "argument list too long" because there are too many files in the directory. To get around it, you can combine the command with find and -exec.
Edit: I created a test case to check whether the command works with and without ls.
These are the 3 empty files I created.
$ find -name "file*" -exec ls {} \;
./file1
./file2
./file3
Running wc -l without ls prints the number of lines in each file:
$ find -name "file*" -exec wc -l {} \;
0 ./file1
0 ./file2
0 ./file3
Running it with ls and piping to wc -l gives the count of files, which is what the OP wants:
$ find -name "file*" -exec ls {} \; | wc -l
3

Copy files in Unix generated in the last 24 hours

I am trying to copy files which were generated in the past one day (24 hrs). I was told to use the awk command but I couldn't find the exact command for doing this. My task is to copy files from /source/path --> /destination/path.
find /source/path -type f -mmin -60 -exec ls -al {} \;
I used the above command to find the list of files generated in the past 60 minutes, but my requirement is to copy the files, not just list their names.
Just go ahead and exec cp instead of ls:
find /source/path -type f -mmin -60 -exec cp {} /destination/path \;
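Since the question asks for the past 24 hours rather than 60 minutes, here is a sketch with the matching time window (GNU find, xargs and cp assumed, for -print0, -0 and -t):
find /source/path -type f -mmin -1440 -print0 | xargs -0 cp -t /destination/path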
You are really close! Take the file names and use them for the copy.
find /source/path -type f -mmin -60 |\
while IFS= read -r file
do
    cp -a "${file}" "/destination/path"
done

Find and rename a directory

I am trying to find and rename a directory on a Linux system.
The folder name is something like: thefoldername-23423-431321
The thefoldername part is consistent but the numbers change every time.
I tried this:
find . -type d -name 'thefoldername*' -exec mv {} newfoldername \;
The command actually works and renames the directory, but I get an error on the terminal saying that there is no such file or directory.
How can I fix it?
It's a harmless error which you can get rid of with the -depth option.
find . -depth -type d -name 'thefoldername*' -exec mv {} newfoldername \;
Find's normal behavior is to process directories and then recurse into them. Since you've renamed it find complains when it tries to recurse. The -depth option tells find to recurse first, then process the directory after.
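A hypothetical session reproducing the complaint (exact wording varies by find version):
$ mkdir thefoldername-23423-431321
$ find . -type d -name 'thefoldername*' -exec mv {} newfoldername \;
find: ./thefoldername-23423-431321: No such file or directory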
It's missing the -execdir option! As stated in the man page of find:
-execdir command {} ;
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find.
find . -depth -type d -name 'thefoldername*' -execdir mv {} newfoldername \;
With the previous answer my folder's contents disappeared. This is my solution; it works well:
for i in $(find . -type d -name 'oldFolderName')
do
    dirname=$(dirname "$i")
    mv "$dirname/oldFolderName" "$dirname/newFolderName"
done
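A more robust sketch of the same loop for names containing spaces (assumes GNU find for -print0 and bash for read -d ''):
find . -depth -type d -name 'oldFolderName' -print0 |
while IFS= read -r -d '' i
do
    mv "$i" "$(dirname "$i")/newFolderName"
done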
To rename .../ABC to .../BCD (with -execdir the mv runs inside the parent directory, so no dirname is needed):
find . -depth -type d -name 'ABC' -execdir mv {} BCD \;
Replace 1100 with the old value and 2200 with the new value you want.
Example:
for i in $(find . -type d -iname '1100');do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt ; rm test*.txt ; done
Proof
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 1100
[user@server test]$ for i in $(find . -type d -iname '1100');do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt ; rm test*.txt ; done
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 2200
Here __ in sed is used only as a temporary marker to change the name; it has no other significance.

Find all writable files in the current directory

I want to quickly identify all writable files in the directory. What is the quick way to do it?
find . -maxdepth 1 -type f -writable
The -writable option will find files that are writable by the current user. If you'd like to find files that are writable by anyone (or even other combinations), you can use the -perm option:
find -maxdepth 1 -type f -perm /222
This will find files that are writable by their owner (whoever that may be):
find -maxdepth 1 -type f -perm /200
Various characters can be used to control the meaning of the mode argument:
/ - any permission bit
- - all bits (-222 would mean all - user, group and other)
no prefix - exact specification (222 would mean no permissions other than write)
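Side by side, the three styles look like this (GNU find assumed):
find . -maxdepth 1 -type f -perm /222   # write bit set for user, group OR other
find . -maxdepth 1 -type f -perm -222   # write bit set for user, group AND other
find . -maxdepth 1 -type f -perm 222    # mode is exactly --w--w--w-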
To find writable files regardless of owner, group or others, you can check the w flag in the file permission column of ls:
ls -l | awk '$1 ~ /^.*w.*/'
$1 is the first field (i.e. the permission block of ls -l); the regular expression just says: find the letter "w" in field one. That's all.
If you want to find owner write permission:
ls -l | awk '$1 ~ /^..w/'
If you want to find group write permission:
ls -l | awk '$1 ~ /^.....w/'
If you want to find others write permission:
ls -l | awk '$1 ~ /w.$/'
-f tests for a regular file
-w tests whether it is writable
Example:
$ for f in *; do [ -f "$f" ] && [ -w "$f" ] && echo "$f"; done
If you are in shell use
find . -maxdepth 1 -type f -writable
see man find
You will find you get better answers for this type of question on superuser.com or serverfault.com
If you are writing code not just using shell you may be interested in the access(2) system call.
This question has already been asked on serverfault
EDIT: @ghostdog74 asked whether this would still find the file after its write permissions were removed. The answer: no, this only finds files that are writable.
dwaters@eirene ~/temp
$ cd temp
dwaters@eirene ~/temp/temp
$ ls
dwaters@eirene ~/temp/temp
$ touch newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
-rw-r--r-- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
./newfile
dwaters@eirene ~/temp/temp
$ chmod 000 newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
---------- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
dwaters@eirene ~/temp/temp
for var in *
do
    if [ -f "$var" ] && [ -w "$var" ]
    then
        echo "$var has write permission"
    else
        echo "$var does not have write permission"
    fi
done
The problem with find -writable is that it's not portable and it's not easy to emulate correctly with portable find operators. If your version of find doesn't have it, you can use touch to check if the file can be written to, using -r to make sure you (almost) don't modify the file:
find . -type f | while IFS= read -r f; do touch -r "$f" "$f" && echo "File $f is writable"; done
The -r option for touch is in POSIX, so it can be considered portable. Of course, this will be much less efficient than find -writable.
Note that touch -r will update each file's ctime (time of last change to its meta-data), but one rarely cares about ctime anyway.
Find files writeable by owner:
find ./ -perm /u+w
Find files writeable by group:
find ./ -perm /g+w
Find files writeable by anyone:
find ./ -perm /o+w
Find files or directories with exact permissions:
find ./ -type d -perm 0777
find ./ -type d -perm 0755
find ./ -type f -perm 0666
find ./ -type f -perm 0644
Disable recursion with:
-maxdepth 1
stat -c "%A->%n" * | sed -n '/^.*w.*/p'
I know this is a very old thread, however...
The command below helped me: find . -type f -perm /+w
You can use -maxdepth depending on how many directory levels below you want to search.
I am using Linux 2.6.18-371.4.1.el5.
If you want to find all files that are writable by Apache et al., then you can do this:
sudo su www-data
find . -writable 2>/dev/null
Replace www-data with nobody or apache or whatever your web user is.
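The same check as a one-liner, without starting an interactive shell (assuming your sudo setup allows it):
sudo -u www-data find . -writable 2>/dev/null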

How do I find all the files that were created today in Unix/Linux?

How do I find all the files that were created only today, and not in the past 24-hour period, in Unix/Linux?
On my Fedora 10 system, with findutils-4.4.0-1.fc10.i386:
find <path> -daystart -ctime 0 -print
The -daystart flag tells it to calculate from the start of today instead of from 24 hours ago.
Note however that this will actually list files created or modified in the last day. find has no options that look at the true creation date of the file.
find . -mtime -1 -type f -print
To find all files that are modified today only (since start of day only, i.e. 12 am), in current directory and its sub-directories:
touch -t `date +%m%d0000` /tmp/$$
find . -type f -newer /tmp/$$
rm /tmp/$$
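With GNU find you can skip the temporary file entirely by using the -newermt extension (a sketch; a bare date is interpreted as midnight):
find . -type f -newermt "$(date +%Y-%m-%d)"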
I use this with some frequency:
$ ls -altrh --time-style=+%D | grep $(date +%D)
After going through many posts I found the one that really works:
find $file_path -type f -name "*.txt" -mtime -1 -printf "%f\n"
This prints only the file name, like
abc.txt, not /path/to/folder/abc.txt
You can also play around with or customize -mtime -1.
This worked for me. Lists the files created on May 30 in the current directory.
ls -lt | grep 'May 30'
Use ls or find to list all the files that were created today.
Using ls : ls -ltr | grep "$(date '+%b %e')"
Using find : cd $YOUR_DIRECTORY; find . -ls 2>/dev/null| grep "$(date '+%b %e')"
find ./ -maxdepth 1 -type f -execdir basename '{}' ';' | grep `date +'%Y%m%d'`
You can use find and ls to accomplish with this:
find . -type f -exec ls -l {} \; | egrep "Aug 26";
It will find all files in this directory, display useful information (-l) and filter the lines by the date you want. It may be a little bit slow, but it is still useful in some cases.
Just keep in mind the exact spacing in ls output between the month and the day (single-digit days are padded with an extra space); otherwise your find command will not work.
If you did something like accidentally rsync to the wrong directory, the above suggestions work for finding new files, but for me the easiest was connecting with an SFTP client like Transmit, then ordering by date and deleting.
To get files modified between 24 and 48 hours ago, execute the command below:
find . -type f -mtime 1 -exec ls -l {} \;
To get files modified within the last 24 hours, execute the command below:
find . -type f -mtime -1 -exec ls -l {} \;
To get files modified more than n days ago, use +n; for example, +2 below matches files more than 2 days old:
find . -type f -mtime +2 -exec ls -l {} \;
