This question already has answers here:
Rename multiple files based on pattern in Unix
(24 answers)
Closed 2 years ago.
I have a folder containing a sequence of files whose names have the form filename-white.png, e.g.
images
arrow-down-white.png
arrow-down-right-white.png
...
bullets-white.png
...
...
video-white.png
I want to strip out the -white bit so the names are simply filename.png. I have experimented (dry run with -n) with the Linux rename command, but my knowledge of regexes is rather limited, so I have been unable to find the right expression.
If you are in the directory above images, the command is
rename "s/-white\.png$/.png/" images/*
If your current directory is images, run rename "s/-white\.png$/.png/" ./* instead (the dot is escaped and the pattern anchored to the end of the name so only the literal -white.png suffix matches). To do a dry run, just add -n as you said:
rename -n "s/-white\.png$/.png/" images/*
or
rename -n "s/-white\.png$/.png/" ./*
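Note that this is the Perl-based rename (sometimes installed as prename); the util-linux rename shipped by some distributions does not understand s/// expressions. If that is the variant you have, a plain bash loop is a workable fallback; a minimal sketch, assuming the files live in images/:
for f in images/*-white.png
do
    mv -n -- "$f" "${f%-white.png}.png"   # -n: never overwrite an existing file
done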
This question already has answers here:
Rename multiple files while keeping the same extension on Linux
(4 answers)
Closed 3 years ago.
I'd like to copy and rename multiple files within the same folder. For example, I have the files
foo.c foo.h and want to use them as a template for new files named bar.c bar.h.
cp foo.* bar.*
describes what I mean, but it won't work.
Using rename would just rename the originals instead of creating copies.
Is there a simple solution for this, or do I have to write a whole script that creates a folder in /tmp, copies the files there, renames them, and moves them back?
I just found the answer myself, with the wonderful tool mcp (part of the mmv package):
mcp 'foo.*' 'bar.#1'
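The #1 in the target pattern refers back to whatever the * in the source pattern matched, so foo.c is copied to bar.c and foo.h to bar.h.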
You can do it with a simple for loop and some shell parameter expansion:
#!/bin/bash
# for each file matching the pattern "foo.*"
for i in foo.*
do
    # copy the file to "bar." plus the original extension
    cp "$i" "bar.${i#foo.}"
done
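If this comes up often, the same idea generalizes to a small helper function; a sketch with hypothetical names (clone_prefix, old, new):
clone_prefix() {
    # copy every <old>.<ext> to <new>.<ext>, keeping the extension
    local old="$1" new="$2" f
    for f in "$old".*; do
        [ -e "$f" ] || continue          # nothing matched: skip the literal pattern
        cp -- "$f" "$new.${f#"$old".}"
    done
}
# usage: clone_prefix foo bar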
This question already has answers here:
How to loop over files in directory and change path and add suffix to filename
(6 answers)
Closed 4 years ago.
New to Linux here, sorry for the (easy?) question:
I have a script in Linux called script_run that works fine if I run it once and manually designate filenames. The script reads an input file, interpolates the data, and then saves the interpolated data to an output file. I'd like the output file to have the same name, except with a "_interp" added. I want to automate the script to run for all files in the file folder directory. How do I do this? My attempt is below, and I am running the script in the file folder directory, but it fails to loop. Thank you for your help!
FILEFOLDER=$*
for FILES in $FILEFOLDER
do
script_run "
[--input ${FILES}]
[WriterStd --output tmp_output_${FILES}.txt]"
cat tmp_output_${FILES}.txt >> output_${FILES}_interp.txt
done
#!/bin/bash
# loop over the files with a glob rather than parsing the output of ls
for FILE in *.*
do
    script_run "[--input ${FILE}] [WriterStd --output tmp_output_${FILE}.txt]"
    cat "tmp_output_${FILE}.txt" >> "output_${FILE}_interp.txt"
done
By the way, what's with this strange annotation [--input ${FILE}]? Does your script explicitly require a format like that?
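If the goal is to keep the original extension and insert _interp before it (so data.txt becomes data_interp.txt), shell parameter expansion can split the name; a sketch, assuming script_run really does expect the bracketed argument format shown in the question:
#!/bin/bash
for FILE in *.*
do
    base="${FILE%.*}"    # name without the extension
    ext="${FILE##*.}"    # the extension alone
    script_run "[--input ${FILE}] [WriterStd --output tmp_output_${FILE}.txt]"
    cat "tmp_output_${FILE}.txt" >> "${base}_interp.${ext}"
done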
This question already has answers here:
SFTP: return number of files in remote directory?
(4 answers)
Closed 6 years ago.
I am writing a script in bash and I need to count how many files starting with ddd there are in a remote directory, using SFTP. Afterwards the script downloads each file, so that I can compare how many files were in the remote directory with how many were actually downloaded and check that they match.
I was doing something like this:
echo "ls -l" | sftp "user@123.45.67.8:/home/user/datafolder/ddd*" | wc -l
The one above works, but when I run it, it downloads all the files to my local folder, which I do not want.
How can I count the number of files without downloading them? I want to download them in another part of the code.
As was said in the comments, the best way to do this is with ssh. This outputs what I wanted:
ssh user@123.45.67.8 'ls /home/user/datafolder/ddd*' | wc -l
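To tie this back to the comparison described in the question, the count can be captured in a variable and checked after the download step; a minimal sketch, where the local target directory ./datafolder is a hypothetical placeholder:
remote_count=$(ssh user@123.45.67.8 'ls /home/user/datafolder/ddd*' | wc -l)
# ... download the files into ./datafolder elsewhere in the script ...
local_count=$(ls ./datafolder/ddd* 2>/dev/null | wc -l)
if [ "$remote_count" -eq "$local_count" ]; then
    echo "All $remote_count files downloaded."
else
    echo "Mismatch: $remote_count remote vs $local_count local." >&2
fi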
rsync --list-only provides a succinct way to list the files in a remote directory. Simply passing the result to wc -l takes care of the count (excluding the . and .. (dot) files), e.g.
rsync --list-only server:/path/to/dir/ | wc -l
(note the trailing '/' to count the contents rather than the directory itself. Add -r for a recursive count. You have all rsync options available to tailor the files counted, e.g. --exclude="stuff", etc.)
This question already has answers here:
How can I recursively find all files in current and subfolders based on wildcard matching?
(19 answers)
Closed 1 year ago.
I'm on Ubuntu, and I'd like to find all files in the current directory and subdirectories whose name contains the string "John". I know that grep can match the content of the files, but I have no idea how to use it with file names.
Use the find command:
find . -type f -name "*John*"
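If the match should be case-insensitive, find also has -iname:
find . -type f -iname "*john*"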
The find command can take a long time because it scans the actual files in the file system.
The quickest way is the locate command, which returns results immediately:
locate "John"
If the command is not found, install the mlocate package and run updatedb once to build the search database.
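On Ubuntu that typically looks like this (the package is mlocate on older releases and plocate on newer ones):
sudo apt install mlocate   # or: sudo apt install plocate
sudo updatedb              # build the file-name database once
locate John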
More detail here: https://medium.com/@thucnc/the-fastest-way-to-find-files-by-filename-mlocate-locate-commands-55bf40b297ab
This is a very simple solution using the tree command in the directory you want to search. -f prints the full path for each file, and the output is piped to grep to pick out the entries whose name contains the string filename.
tree -f | grep filename
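If the tree drawing characters clutter the result, -i drops the indentation so grep sees plain paths:
tree -fi | grep filename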
Use ack, it's simple: just type ack -g <string to be searched> (-g matches against file names rather than file contents).
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Consider two directories:
/home/user/music/flac
/media/MUSIC/flac
I would like the second directory (destination; a USB drive) to contain the same files and structure as the first directory (master). There are 3600+ files (59G in total). Every file is scanned using unison, which is painfully slow. I would rather it compare based on file name, size, and modification time.
I think rsync might be better but the examples from the man pages are rather cryptic, and Google searches did not reveal any simple, insightful examples. I would rather not accidentally erase files in the master. ;-)
The master list will change over time: directories reorganized, new files added, and existing files updated (e.g., re-tagging). Usually the changes are minor; taking hours to complete a synchronization strikes me as sub-optimal.
What is the exact command to sync the destination directory with the master?
The command should copy new files, reorganize moved files (or delete then copy), and copy changed files (based on date). The destination files should have their timestamp set to the master's timestamp.
You can use rsync this way:
rsync --delete -r -u /home/user/music/flac/* /media/MUSIC/flac
It will delete files in /media/MUSIC/flac (never on master), and update based on file date.
There are more options, but I think this way is sufficient for you. :-)
(I just did simple tests! Please test better!)
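If you also want the destination timestamps to match the master (as the question asks), a common variant, offered as a sketch rather than a drop-in replacement, is archive mode, which preserves modification times so the -u comparison stays reliable on later runs; the trailing slash on the source means "the contents of flac", and -n (--dry-run) shows what would happen without touching anything:
rsync -avun --delete /home/user/music/flac/ /media/MUSIC/flac/   # dry run: list what would change
rsync -avu --delete /home/user/music/flac/ /media/MUSIC/flac/    # then run it for real
On a FAT-formatted USB drive the ownership and permission parts of -a cannot be stored and will generate warnings; -rtvu --delete preserves just the modification times and avoids that.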
You can use plain old cp to copy new & changed files (as long as your filesystems have working timestamps):
cp -dpRuv /home/user/music/flac /media/MUSIC/
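For reference: -d preserves symlinks rather than following them, -p preserves mode, ownership and timestamps, -R recurses into directories, -u copies only when the source is newer than the destination (or the destination file is missing), and -v is verbose.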
To delete files from the destination that don't exist at the source, you'll need to use find. Create a script /home/user/bin/remover.sh like so:
#!/bin/bash
# find -execdir runs this script from the file's own directory, so $PWD plus the basename gives the full path
CANONNAME="$PWD/$(basename "$1")"
# path relative to the destination root
RELPATH=$(echo "$CANONNAME" | sed -e "s#/media/MUSIC/flac/##")
# the corresponding file in the master tree
SOURCENAME="/home/user/music/flac/$RELPATH"
if [ ! -f "$SOURCENAME" ]; then
    echo "Removing $CANONNAME"
    rm "$CANONNAME"
fi
Make it executable, then run it from find:
find /media/MUSIC/flac -type f -execdir /home/user/bin/remover.sh "{}" \;
The only thing this won't do is remove directories from the destination that have been removed in the source - if you want that too you'll have to make a third pass, with a similar find/script combination.
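For completeness, here is a sketch of what that third pass could look like, under the same path assumptions as above (keep the echo and check the output before letting it delete anything):
# remove destination directories whose counterpart no longer exists in the master;
# -depth lists children before parents, so emptied subdirectories are handled too
find /media/MUSIC/flac -mindepth 1 -depth -type d | while read -r dir; do
    rel="${dir#/media/MUSIC/flac/}"
    if [ ! -d "/home/user/music/flac/$rel" ]; then
        echo "Removing directory $dir"
        rm -rf "$dir"
    fi
done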