Possible to grep and modify/touch the file modification date? - linux

I am using the following to find all instances of a string in a number of files and delete the lines where it is found:
find . -maxdepth 1 -xdev -type f -exec sed -i '/teststring/Id' {} \;
I don't want to change the file's modification date, because that affects the order in which the files are shown in an unrelated application. So I was thinking I could grab the date before executing sed, then touch the file to restore the old modification date at the end of the command. I want to have it all in one command, integrated with the above, if possible.

Try the following command:
find . -maxdepth 1 -xdev -type f -exec sed -i.bak '/teststring/Id' {} \; -exec touch -r {}.bak {} \; -exec rm {}.bak \;
The find command executes three steps for each file found:
sed changes the file and creates a backup of the original file (with a .bak extension)
touch sets the timestamp of the new file to be the same as the backup
rm deletes the backup
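If you would rather not create the temporary .bak files at all, a minimal sketch of the same idea (assuming GNU stat, sed, and touch, as the question's command already does):
find . -maxdepth 1 -xdev -type f -exec sh -c '
  for f in "$@"; do
    t=$(stat --format=%y "$f")     # remember the original modification time (GNU stat)
    sed -i "/teststring/Id" "$f"   # delete matching lines in place, case-insensitively
    touch -d "$t" "$f"             # restore the saved timestamp (GNU touch)
  done
' _ {} +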

# Read file names line by line (file names must not contain newlines).
find . -maxdepth 1 -xdev -type f | while IFS= read -r file; do
mod_time=$(stat --format=%y "$file")               # remember the modification time
perl -i -nle 'print unless /teststring/i' "$file"  # delete lines containing the string
touch -d "$mod_time" "$file"                       # restore the timestamp
done

Related

I want to get the output of the find command in a shell script

I am trying to write a script that finds the files that are older than 10 hours in the subdirectories listed in "HS_client_list", and sends the output to a file, "find.log".
#!/bin/bash
while IFS= read -r line; do
echo Executing cd /moveit/$line
cd /moveit/$line
#Find files more than 600 minutes (10 hours) old.
find $PWD -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';' | xargs ls > /home/infa91punv/find.log
done < HS_client_list
However, while the script is able to cd into the folders from HS_client_list (this file contains the names of the subdirectories), the find command (find $PWD -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';' | xargs ls > /home/infa91punv/find.log) is not working: the output file is empty. When I run the same pipeline directly as a command it works, but from the script it doesn't.
You are overwriting the file in each iteration.
You can use xargs to perform find on multiple directories; but you have to use an alternate delimiter to avoid having xargs populate the {} in the -execdir command.
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -execdir basename {} \; > /home/infa91punv/find.log
The xargs ls did not seem to perform any useful functionality, so I took it out. Generally, don't use ls in scripts.
With GNU find, you could avoid the call to an external utility and use the -printf action to print just the part of the path name that you care about.
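For instance, a sketch of the same pipeline using -printf '%f\n' (GNU find; %f is the file name with leading directories removed):
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -printf '%f\n' > /home/infa91punv/find.log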
For added efficiency, you could invoke a shell to collect the arguments:
sed 's%^%/moveit/%' HS_client_list |
xargs sh -c 'find "$@" -type f -iname "*.enc" -mmin +600 -execdir basename {} \;' _ >/home/infa91punv/find.log
This will run as many directories as possible in a single find invocation.
If you want to keep your loop, the solution is to put the redirection after done. I would still factor out the cd, and take care to quote the variable interpolation.
while IFS= read -r line; do
find /moveit/"$line" -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';'
done < HS_client_list >/home/infa91punv/find.log

linux find: "No such file or directory", but the directory exists

Then I try to use this script:
for line in `cat dirs.txt`;
do
find "$line" -type f \( -name '*good*' -o -exec grep -F "badbad" {} \; \) -exec echo {} \;;
done
I get an error for each of the dirs, even though they exist and match the find criteria:
find: /home/goods/ : No such file or directory
find: /home/bads/ : No such file or directory
find: /home/fill/ : No such file or directory
But when I look manually, these dirs exist and I can read them all. Why does this happen?
Check the file for ^M$ (DOS carriage returns). You can do that with the command cat -vET dirs.txt.
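With DOS line endings, the output will look something like this (illustrative, using the dirs from the question):
/home/goods/^M$
/home/bads/^M$
/home/fill/^M$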
Then you can strip them all with the command cat dirs.txt | tr -d '\r' > 1.txt
The issue is that you have DOS (^M) line endings in the file. Running dos2unix dirs.txt should solve the problem. Ideally, you also shouldn't use for line in $(cat ...), but something like:
while IFS= read -r line; do
find "$line" -type f \( -name '*good*' -o -exec grep -F "badbad" {} \; \) -exec echo {} \;
done < dirs.txt
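If you cannot convert the file, a sketch (bash) that strips a trailing carriage return from each line inside the loop instead:
while IFS= read -r line; do
line=${line%$'\r'}   # strip a trailing carriage return, if present
find "$line" -type f \( -name '*good*' -o -exec grep -F "badbad" {} \; \) -exec echo {} \;
done < dirs.txt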

Recursively prepend text to file names

I want to prepend text to the name of every file of a certain type - in this case .txt files - located in the current directory or a sub-directory.
I have tried:
find -L . -type f -name "*.txt" -exec mv "{}" "PrependedTextHere{}" \;
The problem with this is dealing with the ./ part of the path that comes with the {} reference.
Any help or alternative approaches appreciated.
You can do something like this
find -L . -type f -name "*.txt" -exec bash -c 'echo "$0" "${0%/*}/PrependedTextHere${0##*/}"' {} \;
Where
bash -c '...' executes the command
$0 is the first argument passed in, in this case {} -- the full filename
${0%/*} removes everything including and after the last / in the filename
${0##*/} removes everything before and including the last / in the filename
Replace the echo with a mv once you're satisfied it's working.
Are you just trying to move the files to a new file name that has Prepend before it?
for F in *.txt; do mv "$F" Prepend"$F"; done
Or do you want it to handle subdirectories and prepend between the directory and file name:
dir1/PrependA.txt
dir2/PrependB.txt
Here's a quick shot at it. Let me know if it helps.
for file in $(find -L . -type f -name "*.txt")
do
parent=$(echo "$file" | sed "s=\(.*/\).*=\1=")   # directory part, with trailing slash
name=$(echo "$file" | sed "s=.*/\(.*\)=\1=")     # basename
mv "$file" "${parent}PrependedTextHere${name}"
done
This ought to work, as long as file names do not contain newline characters. If they might, make find use -print0 and read the names with a null delimiter (a variant is sketched below).
#!/bin/sh
IFS='
'
for I in $(find -L . -name '*.txt' -print); do
echo mv "$I" "${I%/*}/prepend-${I##*/}"
done
P.S. Remove the echo to make the script effective; it's there to avoid accidental breakage for people who randomly copy-paste stuff from here into their shell.
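A null-delimited sketch of the same loop, safe for any file name (this one needs bash for read -d ''):
find -L . -name '*.txt' -print0 | while IFS= read -r -d '' I; do
echo mv "$I" "${I%/*}/prepend-${I##*/}"
done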

In Linux terminal, how to delete all files in a directory except one or two

In a Linux terminal, how to delete all files from a folder except one or two?
For example.
I have 100 image files in a directory and one .txt file.
I want to delete all files except that .txt file.
From within the directory, list the files, filter out the name of the file to keep, and remove all files left on the list.
ls | grep -v 'file-to-keep' | xargs rm
To avoid issues with spaces in file names (remember to never use spaces in file names), use find with the -print0 option and xargs -0.
find 'path' -maxdepth 1 -not -name 'file-to-keep' -print0 | xargs -0 rm
Or, mixing both, use grep's -z option to handle the -print0 names from find.
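A sketch of that combination (GNU grep; -z makes grep read and write NUL-delimited records):
find 'path' -maxdepth 1 -type f -print0 | grep -zv 'file-to-keep' | xargs -0 rm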
In general, using an inverted pattern search with grep should do the job. As you didn't define any pattern, I'd just give you a general code example:
ls -1 | grep -v 'name_of_file_to_keep.txt' | xargs rm -f
The ls -1 lists one file per line, so that grep can search line by line. grep -v is the inverted flag. So any pattern matched will NOT be deleted.
For multiple files, you may use grep -E (egrep):
ls -1 | grep -E -v 'not_file1.txt|not_file2.txt' | xargs rm -f
Update after question was updated:
I assume you want to delete all files in the current folder except those that end with .txt. So this should work too:
find . -maxdepth 1 -type f -not -name "*.txt" -exec rm -f {} \;
find supports a -delete action, so you do not need -exec. You can also pass multiple -not -name somefile -not -name otherfile tests:
user@host$ ls
1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt josh.pdf keepme
user@host$ find . -maxdepth 1 -type f -not -name keepme -not -name 8.txt -delete
user@host$ ls
8.txt keepme
Use the -not modifier to exclude the file(s) or pattern(s) you don't want to delete. You can change the 1 passed to -maxdepth to specify how many subdirectories deep you want to delete files from:
find . -maxdepth 1 -not -name "*.txt" -exec rm -f {} \;
You can also do:
find -maxdepth 1 \! -name "*.txt" -exec rm -f {} \;
In bash, you can use:
$ shopt -s extglob # Enable extended pattern matching features
$ rm !(*.txt) # Delete all files except .txt files
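To keep more than one name or pattern, the extended glob accepts alternatives, e.g. (illustrative, reusing file-to-keep from above):
$ rm !(*.txt|file-to-keep)  # Delete everything except .txt files and file-to-keep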

How to copy all the files with the same suffix to another directory? - Unix

I have a directory with an unknown number of subdirectories and unknown levels of sub*directories within them. How do I copy all the files with the same suffix to a new directory?
E.g. from this directory:
> some-dir
>> foo-subdir
>>> bar-sudsubdir
>>>> file-adx.txt
>> foobar-subdir
>>> file-kiv.txt
Copy all the *.txt files to:
> new-dir
>> file-adx.txt
>> file-kiv.txt
One option is to use find:
find some-dir -type f -name "*.txt" -exec cp \{\} new-dir \;
find some-dir -type f -name "*.txt" finds *.txt files in the directory some-dir and its subdirectories. The -exec option runs a command line (e.g. cp some-dir/foobar-subdir/file-kiv.txt new-dir) for every matching file, denoted by {}.
Use find with xargs as shown below:
find some-dir -type f -name "*.txt" -print0 | xargs -0 cp --target-directory=new-dir
For a large number of files, this xargs version is more efficient than using find some-dir -type f -name "*.txt" -exec cp {} new-dir \; because xargs will pass multiple files at a time to cp, instead of calling cp once per file. So there will be fewer fork/exec calls with the xargs version.
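The same batching is also available within find itself via -exec ... {} +; a sketch (assuming GNU cp for --target-directory):
find some-dir -type f -name "*.txt" -exec cp --target-directory=new-dir {} +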
