Change filenames to lowercase in Ubuntu in all subdirectories [closed] - linux

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I know it's been asked before, but nothing I've found has worked so far.
The closest I've come is this: rename -n 'y/A-Z/a-z/' *
which works for the current directory. I'm not very good with the Linux terminal, so what
should I add to this command to apply it to all files in all the sub-directories below the one I'm in? Thanks!

Here's one way using find and tr:
for i in $(find . -type f -name "*[A-Z]*"); do mv "$i" "$(echo "$i" | tr A-Z a-z)"; done
Edit: added -name "*[A-Z]*"
This ensures that only files containing capital letters are found. Otherwise, a file whose name is already all lowercase would be "moved" onto itself, and mv would print an "are the same file" error.

Perl has a locale-aware lc() function which might work better:
find . -type f | perl -n -e 'chomp; system("mv", $_, lc($_))'
Note that this script handles whitespace in filenames, but not newlines. And there's no protection against collisions: if you have "ASDF.txt" and "asdf.txt", one is going to get clobbered.
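To guard against such clobbering, one hedged sketch (assuming GNU or BSD mv, which support the -n no-clobber flag; lowercase_files is a hypothetical helper name) lowercases only the basename, leaves directory names alone, and skips any rename whose target already exists:

```shell
# lowercase_files DIR: rename files (not directories) under DIR to
# lowercase basenames. mv -n refuses to overwrite an existing target,
# so "ASDF.txt" will not clobber an existing "asdf.txt".
lowercase_files() {
    find "$1" -depth -type f -name '*[[:upper:]]*' -print0 |
    while IFS= read -r -d '' f; do
        dir=$(dirname -- "$f")
        base=$(basename -- "$f")
        lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
        [ "$base" = "$lower" ] && continue
        mv -n -- "$f" "$dir/$lower"
    done
}
```

The NUL-separated find output (-print0 with read -d '') also survives whitespace and newlines in filenames, and -depth hands back deeper paths first, which matters if you later extend the helper to rename directories as well.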

Related

Given an array with filenames, how to find and delete all matching files in bash? [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
Given a blacklist.txt file with filenames:
.picasa.ini
Thumbs.db
._.DS_store
How can I best find files with those filenames and delete them? I tried:
readarray -t blacklisted < ./Blacklist.txt
for n in ${blacklisted[#]};do find . -type f -name "${n}" -delete; done
But it doesn't work for me.
Read the file line by line, and launch the rm command on each iteration.
#!/bin/bash
filename='blacklist.txt'
echo Start
while read p; do
echo "removing $p ..."
find . -name "$p" -exec rm {} \;
done < "$filename"
Add the -f flag to the rm command if you feel confident.
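For comparison, the asker's array approach also works once the expansion typo is fixed — ${blacklisted[#]} should be the quoted "${blacklisted[@]}", and the filename's case must match. A minimal sketch, using hypothetical demo files in a scratch directory:

```shell
#!/bin/bash
# Demo setup in a scratch directory (file names are hypothetical).
cd "$(mktemp -d)"
printf 'Thumbs.db\n.picasa.ini\n' > blacklist.txt
mkdir photos
touch photos/Thumbs.db photos/keep.jpg

# The fix: quote the expansion and use @ rather than #.
readarray -t blacklisted < ./blacklist.txt
for n in "${blacklisted[@]}"; do
    find . -type f -name "$n" -delete
done

ls photos    # prints: keep.jpg
```

Quoting "${blacklisted[@]}" keeps each blacklist entry as a single word even if it contains spaces.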

"Find" command: highlight matching literal parts [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I use find a lot, generally using the -name or -iname parameters.
I would like it to highlight the matching part in the filenames it finds (like grep does).
For example: find . -iname "*FOO*" would highlight instances of FOO
I know I could pipe it into grep but I'd rather not write two commands each time.
Is there a simple way to do it?
eg. like this (quoting the pattern so the shell doesn't expand it as a glob):
find /home/ -type f | grep -i --color=always '\.cpp'
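If typing two commands each time is the objection, a small wrapper function folds them into one. A sketch (findhl is a hypothetical name; the highlighting relies on grep's --color=always, and -F treats the pattern as a fixed string):

```shell
# findhl PATTERN [DIR]: case-insensitive find on *PATTERN*, with the
# matching part of each path highlighted by grep.
findhl() {
    find "${2:-.}" -iname "*$1*" | grep -iF --color=always -- "$1"
}
```

Usage: findhl foo /home/user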

Trying to rename .JPG to .jpg in shell CLI [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I'm trying to rename all files in a directory from having the .JPG ext to .jpg but it isn't working.
I have looked around the net and found a few things but I can't seem to get any to work. The latest one I tried was:
rename -n .JPG .jpg *.JPG
I used the -n flag to see what would be modified but I got no response (no files).
What am I doing wrong here!?
If you don't want to use rename (you mention you have tried various things), then with only standard shell utilities you can do this:
for x in `find . -maxdepth 1 -type f -name "*.JPG"` ; do mv "$x" `echo "$x" | sed 's/\.JPG$/.jpg/'`; done
The backticks around find substitute its output into the loop, so x takes each result in turn. There are various switches you can use with find to limit by time, size, etc., if you need more sophisticated searching than just every JPG in the current directory. -maxdepth 1 limits the search to the current directory.
EDIT:
As pointed out by Adrian, using sed is unnecessary and wasteful, as it spawns another subshell; instead, this can all be compressed to:
for x in `find . -maxdepth 1 -type f -name "*.JPG"` ; do mv "$x" "${x%.JPG}.jpg"; done
The proper perl rename expects a regular expression so you would achieve this doing:
$ rename 's#\.JPG$#.jpg#' *.JPG
The shitty util-linux version of rename does not have an -n switch so you would have to do:
$ rename .JPG .jpg *.JPG
Consult the man page to check which implementation is actually installed on your system.
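If neither rename implementation is available, plain bash parameter expansion covers the simple case on its own. A sketch that first sets up hypothetical sample files in a scratch directory:

```shell
# Demo setup in a scratch directory (file names are hypothetical).
cd "$(mktemp -d)"
touch "my photo.JPG" other.JPG notes.txt

shopt -s nullglob                  # *.JPG expands to nothing if no match
for f in *.JPG; do
    mv -- "$f" "${f%.JPG}.jpg"     # strip the .JPG suffix, append .jpg
done

ls
```

Unlike the backtick-over-find loop above, the glob handles spaces in filenames correctly, since each match stays a single word.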

Recursively doing the command ls without -R [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I am trying to find a way to recreate the output of ls -R (Linux) without using the -R option, i.e. without its built-in recursion. Is this at all possible?
There are no other constraints.
shopt -s globstar nullglob
printf "%s\n" **
or
find .
The closest I can think of right now is to recurse through all given directories using find and to perform a listing on each. I used ls -1 because I noticed that ls -R defaults to a single column when redirected into a file; you may choose to omit the -1 option.
for dir in `find . -type d`; do
echo $dir:
ls -1 $dir
done
However, it doesn't work with filenames that contain spaces. I'm still looking for a way around that...
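A space-safe variant NUL-separates find's output instead of word-splitting it; list_recursive is a hypothetical name for the wrapper:

```shell
# list_recursive [DIR]: imitate `ls -R` without -R, handling spaces
# (though still not newlines) in directory names.
list_recursive() {
    find "${1:-.}" -type d -print0 |
    while IFS= read -r -d '' dir; do
        printf '%s:\n' "$dir"
        ls -1 -- "$dir"
        echo
    done
}
```

The -- guards ls against directory names that begin with a dash.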

In Linux, how do I find the directory with the most subdirectories or files? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
How can I find the directory with the largest number of files/subdirectories in it on the system? Obviously the clever answer is /, but that's not what I'm looking for.
I’ve been told the filesystem is out of inodes, so I suspect that somewhere there are a lot of files/directories which are just garbage, and I want to find them.
I’ve tried running this:
$ find /home/user -type d -print | wc -l
to find specific directories.
starting from the current directory, you could try
find . -type d | cut -d/ -f 2 | uniq -c
This lists all directories starting from the current one, splits each line on the "/" character, and selects field number 2 (each line starts with "./", so the first field is "." and field 2 is the top-level directory name). uniq then outputs each distinct adjacent line once, with a count of how often it appeared (the -c parameter).
You could also append a sort -g (or sort -n) to order the output by count.
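To count entries per directory directly — closer to what an inode hunt needs — this sketch tallies the immediate children of every directory and prints the busiest first. busiest_dirs is a hypothetical name:

```shell
# busiest_dirs [DIR]: for each directory under DIR, count its direct
# children (files and subdirectories) and list the ten largest counts.
busiest_dirs() {
    find "${1:-.}" -type d -print0 |
    while IFS= read -r -d '' dir; do
        printf '%s\t%s\n' "$(find "$dir" -mindepth 1 -maxdepth 1 | wc -l)" "$dir"
    done | sort -rn | head -10
}
```

The inner find with -mindepth 1 -maxdepth 1 counts only the directory's own entries, not everything beneath it, so a directory stuffed with garbage files stands out even when its parent looks small.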
