I have a source tree containing both text and binary files. I need to find and collect all the human-unreadable (binary) files in it. How can I do this?
Although Far Had's answer is correct, you don't even need a for loop for this. As you state yourself, all your files are in one directory, so you can simply run:
file *
The output lines containing "text" (be it ASCII, Unicode or something else) indicate human-readable files.
This piece of code returns a list of all files in the current directory tree that file does not classify as text.
Hope this will help:
for i in $(find . -type f); do file "$i"; done | grep -v text | cut -d : -f 1
You can replace the . (dot) after find with any other location in your filesystem. (Note that the loop breaks on file names containing whitespace; find . -type f -exec file {} + | grep -v text | cut -d : -f 1 is a more robust equivalent.)
One way is to use Perl (the File::Find module) like this:
perl -MFile::Find -e '@directories=shift || "."; sub wanted { ! -T && print "$File::Find::name\n"; }; find(\&wanted, @directories);'
NOTE: The above command defaults to searching the current directory.
To search a specific directory, e.g. /tmp, just type the above command followed by a space and /tmp.
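As another sketch (assuming GNU grep is available), grep's -I option can also be used to list binary files without calling file at all:

```shell
# -r: recurse, -I: treat binary files as if they contain no matches,
# -L: list files with no match. '.' matches any non-empty line, so
# only binary (and completely empty) files get printed.
grep -rIL . .
```

Unlike the file-based approaches, this relies on grep's own binary-detection heuristic, so the results can differ slightly.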
I have already edited this question because it will probably be useful for someone else. My problem is the following:
I have hundreds of files, 256 to be exact. These files are .csv,
and I need to generate a directory/folder with 50 copies of each file,
for example:
-file1.csv---->folder_for_file1---->1copy_file1.csv,2copy_file1.csv,3copy_file1.csv........50copy_file1.csv
-file2.csv---->folder_for_file2---->1copy_file2.csv,2copy_file2.csv,3copy_file2.csv........50copy_file2.csv
-file3.csv---->folder_for_file3---->1copy_file3.csv,2copy_file3.csv,3copy_file3.csv........50copy_file3.csv
...
-file256.csv---->folder_for_file256---->1copy_file256.csv,2copy_file256.csv,3copy_file256.csv........50copy_file256.csv
What can I use to do this? Some bash script, or some simple Ubuntu/Linux command like mkdir?
P.S. The answer that was provided works very well, but I had a problem with the name of the generated folder, because its name includes the extension of the file and that's not useful for me.
Thanks in advance.
Your requirements are not very clear. I will answer with some assumptions.
For each file named file_i, suppose you want to create a directory named file_i_folder for it under the same path as these files. You can do this with this command:
ls | xargs -t -n1 -I{} mkdir {}_folder
Then you want to create copies of each file under its corresponding directory. Since two files in one directory cannot share a name, you may want to give the copies prefixes, e.g. copy1_file1. You can do this with this command:
ls -p | grep -v / | xargs -t -n1 -I{} bash -c 'for i in {1..50}; do cp {} "{}_folder/copy${i}_{}" ; done'
You can alter the commands to change the format of the names of files and directories at your own will.
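If the extension-in-the-folder-name problem mentioned in the question matters, a plain bash loop avoids ls/xargs entirely; a sketch (the names follow the question's example):

```shell
# For every .csv file, create folder_for_<basename> (extension
# stripped) and fill it with 50 numbered copies.
for f in *.csv; do
    base=${f%.csv}                       # file1.csv -> file1
    mkdir -p "folder_for_${base}"
    for i in $(seq 1 50); do
        cp "$f" "folder_for_${base}/${i}copy_${f}"
    done
done
```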
What is the purpose of the below command ?
grep -ir nashorn ./ | grep "^[^:]*\.java"
It finds all lines that contain the string nashorn, case-insensitively, in files in the current directory hierarchy whose names contain .java.
The -i option to grep makes it match case-insensitively. The -r option makes it recurse into all directories in the directory arguments and search all the files. So the first part of the pipeline matches nashorn in all files in the current directory, recursively.
The output of that command will be in the format:
filename:matching line
The second grep matches those lines. ^ means the beginning of the line, [^:]* means a sequence of characters that doesn't include :, which restricts the match to the filename part of the line. And \.java matches .java literally. So it only keeps lines where .java appears in the filename part.
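A quick demonstration of the pipeline (hypothetical file names):

```shell
cd "$(mktemp -d)"
printf 'var e = nashorn;\n' > Demo.java
printf 'Nashorn notes\n'    > readme.txt
# The first grep finds matches in both files (case-insensitively);
# the second keeps only lines whose filename part contains .java.
grep -ir nashorn ./ | grep "^[^:]*\.java"
# prints: ./Demo.java:var e = nashorn;
```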
I want to rename files in a folder on UNIX using a script.
The format of the original file is:
abc.txt.temp
and I want to rename it to:
abc.txt
Many files use this format and I want to remove .temp from the original file name.
The answer Ciprian gave is certainly an option but I feel it's limiting.
The solution below is much more flexible as you don't have to actually count anything and you can remove text from any position rather than just the end.
The following command (1 line) will remove any mention of .temp in all the files:
for filename in *; do mv "$filename" "${filename//.temp/}"; done
Note: The "*" means all files in the current folder. You can use *.temp to achieve exactly the same result as Ciprian's method (that is, only removing .temp from files ending with .temp).
I don't know about UNIX, but since the question also has the linux tag, it may just be a UNIX/Linux confusion.
Most GNU/Linux distributions have a rename command. Depending on the rename version, to replace foo with bar in file names the syntax may either be as simple as
rename foo bar files
or follow sed's regexp syntax :
rename 's/foo/bar/' files
In your case, you want to replace .temp with an empty string ('') in all files ending with .temp, so depending on your rename version one of these commands should work :
rename .temp '' *.temp
or
rename 's/\.temp$//' *.temp
Create the following script with a name like 'rename.sh':
#!/bin/bash
TARGET_DIR=$1
for fileName in "$TARGET_DIR"/*.temp
do
newFileName=${fileName::-5}
mv -v "${fileName}" "${newFileName}"
done
note The ${var::-5} (negative length) expansion requires bash version 4.2 or higher.
Give it execution rights:
chmod a+x rename.sh
You need to call it and pass the name of the directory of the .temp files as a parameter. Call it like this:
./rename.sh /path/to/the/temp-files
The script loops over all the *.temp files in the target folder, strips the last 5 chars from each file path ('.temp' is 5 chars) and moves the original file to the new name that doesn't end in .temp.
EDIT: tested on CentOS 7.
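The fixed-length slice can also be replaced with suffix removal, which needs no bash 4 and only strips an actual trailing .temp; a sketch:

```shell
# ${f%.temp} removes a trailing .temp; works in any POSIX-style shell.
for f in /path/to/the/temp-files/*.temp; do
    mv -v -- "$f" "${f%.temp}"
done
```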
I've written this command, but it doesn't work:
cat fich?.txt < hi, adding message for several files
I've got the following files:
fich1.txt
fich2.txt
fich3.txt
fich4.txt
fich5.txt
fich6.txt
for f in fich?.txt; do
cat message.txt >>$f
done
This will add (append) the contents of the file message.txt to all files matching the shell globbing pattern fich?.txt. If you want to be more strict and only match the specific files that you mentioned, use the pattern fich[1-6].txt.
To add something to the beginning of each file, do this:
for f in fich?.txt; do
tmpfile=$( mktemp XXXXX )
cat message.txt "$f" >"$tmpfile"
mv "$tmpfile" "$f"
done
No catching of errors is happening here, so it's not fool-proof or super safe.
Different approach - inserts the new content at the beginning of the file
for f in fich*.txt; do
sed --in-place '1 ihi, adding message for several files' "$f";
done
I have a number of files such as file_022.bmp, file_023.bmp...file_0680.bmp. I need to rename these to something a little bit more convenient such as file_1.bmp, file_2.bmp...file_658.bmp.
Is there a bash script that I could write to do this for me? Thanks for the help and advice.
Luke H
If you're on a Debian-based Linux system, you can use the rename script, which accepts regular expressions to rename files. Some more info, because I find the man page hard to find.
e.g.
harald@Midians_Gate:~$ ls p*.php
parse.php pd.php pgrep.php preg_based.php proc.php
suppose I want to change the extension to .perl and prepend the name with file_
then I use command:
rename -n 's/([a-z]*)\.php/file_$1.perl/' p*.php
would give
parse.php renamed as file_parse.perl
pd.php renamed as file_pd.perl
pgrep.php renamed as file_pgrep.perl
preg_based.php renamed as preg_file_based.perl
proc.php renamed as file_proc.perl
I select and capture the base filename with ([a-z]*), then use it in the substitution as $1, append .perl, and prepend the literal string file_.
the -n option makes it test run without changing anything
As you can see from this example your selecting regexp needs to be correctly thought out or you get cases like the above preg_based.php where you wanted file_preg_based.perl :)
to compensate for that I would've needed to use ([a-z_]*) here
It's one of the many reasons why I keep hanging on to debian, I'd love to find the equivalent for other non-debian systems though :-/
if you have files a.bmp,b.bmp,c.bmp
and you want to end up with file_1.bmp, file_2.bmp, file_3.bmp
using bash:
mkdir result
index=1
for i in *.bmp
do
mv "$i" "result/file_"$((index++)).bmp
done
notes:
using a subdirectory is advised to avoid accidentally overwriting a file that looks like file_xx.bmp
if you have too many files to fit in the command line after expansion you could use something like:
mkdir result
index=1
find . -name "*.bmp" | while read i
do
echo mv "$i" "result/file_"$((index++)).bmp
done
after inspecting the output remove the 'echo'
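If the new names should sort correctly, zero-padding the counter is a small variation on the loop above (a sketch, assuming bash):

```shell
mkdir -p result
index=1
for i in *.bmp; do
    # %03d pads the counter to three digits: file_001.bmp, file_002.bmp, ...
    printf -v name 'file_%03d.bmp' "$((index++))"
    mv "$i" "result/$name"
done
```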