Bash Script: Check if multiple files are executable and output them? [closed]

Can you make a script that iterates through a folder (for example the home directory) and prints the names of all regular files that are executable?

Are you aware that the find command has an -executable switch?
If you want to see all executable files in all subdirectories, you might do:
find ./ -type f -executable
If you want to see all executable files, but just in your directory, you might do:
find ./ -maxdepth 1 -type f -executable
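If you want the full ls -l details rather than just the paths, find can hand the matches to ls directly. A minimal sketch (adjust the starting directory to taste):
find ~/ -maxdepth 1 -type f -executable -exec ls -l {} +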

I can.
#!/bin/bash
# For each directory named on the command line, list its executable regular files.
for d in "$@"; do
    [[ -d "$d" ]] || { printf '\n"%s" not a directory\n' "$d"; continue; }
    for f in "$d"/* "$d"/.*; do
        [[ -f "$f" && -x "$f" ]] && ls -l "$f"
    done
done
But use find as Dominique advised.
Why reinvent the wheel?
Still, there's a lot going on there that could be useful.
Let me know if you have questions.
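If you do want the loop (say, to keep the per-directory error message) but with find doing the file tests, a null-delimited pipeline sidesteps globbing's corner cases. A sketch, assuming GNU find's -print0:
#!/bin/bash
# Print executable regular files in each directory given on the command line.
# find -print0 plus read -d '' keeps filenames with spaces or newlines intact.
for d in "$@"; do
    [[ -d "$d" ]] || { printf '"%s" is not a directory\n' "$d"; continue; }
    while IFS= read -r -d '' f; do
        ls -l "$f"
    done < <(find "$d" -maxdepth 1 -type f -executable -print0)
done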

Related

Given an array with filenames, how to find and delete all matching files in bash? [closed]

Given a blacklist.txt file with filenames:
.picasa.ini
Thumbs.db
._.DS_store
How can I best find files with those filenames and delete them? I tried:
readarray -t blacklisted < ./Blacklist.txt
for n in ${blacklisted[@]};do find . -type f -name "${n}" -delete; done
But it doesn't work for me.
Read the file line by line, and launch the rm command on each iteration.
#!/bin/bash
filename='blacklist.txt'
echo Start
# Read one filename per line; -r keeps backslashes literal, IFS= keeps leading spaces.
while IFS= read -r p; do
    echo "removing $p ..."
    find . -name "$p" -exec rm {} \;
done < "$filename"
Add the -f flag to the rm command if you feel confident.
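For what it's worth, the readarray attempt from the question also works once the array expansion is spelled "${blacklisted[@]}" with quotes, and the filename passed to readarray matches the file's actual case. A sketch:
#!/bin/bash
# Read the blacklist into an array, one filename per line.
readarray -t blacklisted < ./blacklist.txt
for n in "${blacklisted[@]}"; do
    find . -type f -name "$n" -delete
done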

Recursively doing the command ls without -R [closed]

I am trying to find a way to recreate the output of ls -R (Linux) without using the -R option, i.e. without the recursion built into the command. Is this at all possible?
There are no other constraints.
shopt -s globstar nullglob
printf "%s\n" **
or
find .
The closest I can think of right now is to recurse through all given directories using find and to perform a listing on each. I used ls -1 because I noticed that ls -R defaults to a single column when redirected into a file; you may choose to omit the -1 option.
for dir in `find . -type d`; do
    echo $dir:
    ls -1 $dir
done
However, it doesn't work with filenames that contain spaces. I'm still looking for a way around that...
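One whitespace-safe variant is to have find emit null-delimited directory names and read them back in a loop. A sketch, assuming GNU find's -print0:
# List each directory's contents, ls -R style, surviving spaces in names.
find . -type d -print0 | while IFS= read -r -d '' dir; do
    printf '%s:\n' "$dir"
    ls -1 "$dir"
    echo    # blank line between directories, as ls -R prints
done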

Linux: Delay between regular expression executions? / Reading every file in directory? [closed]

I have a directory on a linux machine that has hundreds of files. They are relatively short files and can easily be displayed and read in a command line in a second or two. Is there a way to print all of the files to the command line in an automated way, similar to typing "cat *" but with a one or two second delay between each print so that I can read each of the files?
I have tried making a bash script with:
cat $1
sleep 2
And then calling "bash script.sh *" but it only prints one of the files and then stops.
Any suggestions?
Use a loop in the script:
#!/usr/bin/env bash
# With no "in" list, for iterates over the positional parameters ("$@").
for f
do
    cat "$f"
    sleep 2
done
while [ -e "$1" ]
do
cat "$1"
shift 1
sleep 2
done
for i in $(find /path/to/files -type f -name \*.\*); do
    cat "$i"
    sleep 2
done
This should work fine:
find /path/to/files -type f
If there is a specific extension you wish to focus on, for example, then try something like this:
find /path/to/files -type f -name \*.cfg
And rather than looping, you could do it all within find:
find /path/to/files -type f -name \*.cfg -print -exec cat {} \; -exec sleep 2 \;
Edited to add: Kevin is not off his head. I had commented that cat * might not work in a large folder containing lots of files, and he responded to my comment, which I had removed whilst he was writing his :)
In your shell prompt (provided it's sh-compatible - most likely):
for i in *; do
    cat "$i"
    sleep 2
done
Perhaps? This loops over all the files (*) and cats each one out. You don't really need a separate shell script for something so simple.

exclude directories mv unix [closed]

The command below moves every hidden or normal file ending in string, provided the character immediately before string is not . or _.
mv {.,}*[!._]string /destination
How can I also exclude moving all directories in the above command?
Try
find /WHERE/TO/FIND -name '*STRING' ! -name '*_STRING' ! -name '*.STRING' -type f -exec mv {} /WHERE/TO/MOVE \;
Note: if you want to move files from only the /WHERE/TO/FIND directory itself (not its subdirectories), you should add -maxdepth 1, which goes right after the path, before the tests, as shown below.
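A sketch of that variant, using the same placeholder paths as above:
find /WHERE/TO/FIND -maxdepth 1 -name '*STRING' ! -name '*_STRING' ! -name '*.STRING' -type f -exec mv {} /WHERE/TO/MOVE \;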
How about:
for file in {.,}*[!._]string; do test -f "$file" && mv "$file" /destination; done
In what shell does the [!._] glob actually work when used with {.,}? You would probably be better off avoiding the {} notation and doing:
for file in .*[!._]string *[!._]string; do ... ; done
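Spelled out with the test from the previous answer, that would be something like the following sketch; nullglob stops an unmatched pattern from being passed through literally, and test -f skips directories:
shopt -s nullglob
for file in .*[!._]string *[!._]string; do
    test -f "$file" && mv "$file" /destination
done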

Change filenames to lowercase in Ubuntu in all subdirectories [closed]

I know it's been asked before, but what I've found has not worked out so far.
The closest I've come is this: rename -n 'y/A-Z/a-z/' *
which works for the current directory. I'm not too good with the Linux terminal, so what should I add to this command to apply it to all of the files in all the subdirectories below where I am? Thanks!
Here's one way using find and tr:
for i in $(find . -type f -name "*[A-Z]*"); do mv "$i" "$(echo "$i" | tr A-Z a-z)"; done
Edit: added -name "*[A-Z]*"
This ensures that only files with capital letters in their names are found. Otherwise, for files whose names are already all lowercase, mv would try to move each file onto itself and complain that source and destination are the same file.
Perl has a locale-aware lc() function which might work better:
find . -type f | perl -n -e 'chomp; system("mv", $_, lc($_))'
Note that this script handles whitespace in filenames, but not newlines. And there's no protection against collisions: if you have "ASDF.txt" and "asdf.txt", one is going to get clobbered.
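If clobbering is a concern, one option is to check the target before renaming. A null-delimited sketch, assuming GNU find and coreutils, that lowercases only the basename so directory components are left alone:
find . -type f -name '*[A-Z]*' -print0 |
while IFS= read -r -d '' f; do
    lc="$(dirname "$f")/$(basename "$f" | tr '[:upper:]' '[:lower:]')"
    if [ -e "$lc" ]; then
        printf 'skipping %s: %s already exists\n' "$f" "$lc" >&2
    else
        mv "$f" "$lc"
    fi
done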
