Adding the same message to several files in Linux [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 7 years ago.
I've written this command, but it doesn't work:
cat fich?.txt < hi, adding message for several files
I've got the following files:
fich1.txt
fich2.txt
fich3.txt
fich4.txt
fich5.txt
fich6.txt

for f in fich?.txt; do
cat message.txt >>$f
done
This will add (append) the contents of the file message.txt to all files matching the shell globbing pattern fich?.txt. If you want to be more strict and only match the specific files that you mentioned, use the pattern fich[1-6].txt.
To add something to the beginning of each file, do this:
for f in fich?.txt; do
tmpfile=$( mktemp XXXXX )
cat message.txt $f >$tmpfile
mv $tmpfile $f
done
No catching of errors is happening here, so it's not fool-proof or super safe.
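A slightly more defensive sketch of the same idea, which stops on the first failed command (the demo setup and file contents are hypothetical, added so the snippet is self-contained):

```shell
#!/bin/bash
set -euo pipefail                # abort if any command fails

# Hypothetical demo data, so the loop below has something to work on
cd "$(mktemp -d)"
printf 'hi, adding message for several files\n' > message.txt
printf 'original 1\n' > fich1.txt
printf 'original 2\n' > fich2.txt

# Prepend message.txt to every matching file via a temporary file
for f in fich?.txt; do
    tmpfile=$(mktemp)
    cat message.txt "$f" > "$tmpfile"
    mv "$tmpfile" "$f"
done

head -n1 fich1.txt   # -> hi, adding message for several files
```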

A different approach, which inserts the new content at the beginning of each file:
for f in fich*.txt; do
sed --in-place '1 ihi, adding message for several files' "$f";
done

Related

How to replace a string with a regex in a bash script? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 10 days ago.
I have this working in a terminal (it just removes the date using string replacement):
NAME="/home/me/a_path/2023-04-10 filename"
NEW_NAME=$(echo ${NAME//20[0-9][0-9]-[0-9][0-9]-[0-9][0-9] /})
echo ${NEW_NAME}
>>> Expected output: "/home/me/a_path/filename"
But this is not working in a script (it outputs the unmodified string), and I can't understand why.
I tried different quotation-mark positions and some other things that I found on SO, but nothing has worked for me yet.
I tried using sed; it does not work any better.
Edit: The example I gave is working, so it is probably a typo in my full script.
Here is another approach without regex, based on the inputs provided in the question.
#!/bin/bash
NAME="/home/me/a_path/2023-04-10 filename"
DIR=$(dirname "${NAME}")
FILE=$(basename "${NAME}" | awk '{print $NF}')
echo "${DIR}/${FILE}"
The output:
/home/me/a_path/filename
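Since the full script isn't shown, one guess worth ruling out: `${var//pattern/}` is a bash feature, not POSIX sh, so running the script with `sh script.sh` (or a `#!/bin/sh` shebang) is a common cause of exactly this "unmodified string" symptom. The working example again, with that caveat spelled out:

```shell
#!/bin/bash
# ${var//pattern/replacement} is a bash-ism: under plain sh the
# substitution is not performed and the string comes back unchanged.
NAME="/home/me/a_path/2023-04-10 filename"
NEW_NAME="${NAME//20[0-9][0-9]-[0-9][0-9]-[0-9][0-9] /}"
echo "$NEW_NAME"   # -> /home/me/a_path/filename
```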

How to find out human unreadable file in the directory? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
I have a source tree containing both text and binary files. I have to find and collect all the human-unreadable files present in the source code. How can I do this?
Although the answer of Far Had is correct, you don't even need a for-loop for this. As you state yourself, all your files are within one directory, so you can simply run:
file *
The output lines containing "text" (be it ASCII, Unicode or something else) indicate human-readable files.
This piece of code returns a list of all non-text files in the current directory.
Hope this will help:
for i in $(find . -type f); do file "$i"; done | grep -v text | cut -d : -f 1
You could replace the . (dot) after find with any other location in your filesystem.
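If any filenames contain spaces, a `for i in $(find ...)` loop will split them apart; a sketch that avoids the word splitting by letting find invoke file directly, using `--mime-type` (GNU file) to make the text/non-text distinction explicit (the demo directory and files are hypothetical):

```shell
#!/bin/bash
set -euo pipefail
cd "$(mktemp -d)"                     # hypothetical demo directory
printf 'hello world\n' > readme.txt   # a text file
head -c 8 /dev/zero > blob.bin        # a binary file (NUL bytes)

# List files whose MIME type is not text/* (i.e. not human-readable);
# -exec ... + avoids word splitting, though a filename containing a
# colon would still confuse the cut.
find . -type f -exec file --mime-type {} + | grep -v ': text/' | cut -d: -f1
# -> ./blob.bin
```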
One way is to use perl (File::Find module) like this:
perl -MFile::Find -e '@directories=shift || "."; sub wanted { ! -T && print "$File::Find::name\n"; }; find(\&wanted, @directories);'
NOTE: The above command defaults to searching the current directory.
To search a specific directory e.g. /tmp, just type the above command followed by a space and /tmp

Shell script to compare file names(without extension) in a directory [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
My requirement: I will have xml and pdf files in pairs (e.g., file1.xml/file1.pdf and file2.xml/file2.pdf) in the same folder.
I need to check for xml files which do not have a pdf pair and move them to a different folder (e.g., if file3.xml doesn't have a file3.pdf, I need to move it to a different folder).
Please show me a shell script that gets this functionality done.
You can remove the extension using parameter expansion:
#! /bin/bash
for file in *.xml ; do
if [[ ! -f ${file%.xml}.pdf ]] ; then
mv "$file" folder/
fi
done
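The `${file%.xml}` expansion strips the shortest matching suffix from the value; a quick illustration (the name `report3.xml` is just an example):

```shell
file="report3.xml"
echo "${file%.xml}"       # -> report3
echo "${file%.xml}.pdf"   # -> report3.pdf
```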

Bash script to read a file and run a command based on a specific string [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
I need a bash script which, when a file is saved to a particular directory (the file name could be anything), reads its contents and then, if a particular string is found in the file, runs a specific command or another script.
I have had a look through Bash script: perform actions based on file contents, but that script appears to depend on knowing the file's name. My bash scripting is next to useless, so I'm hoping someone can help :)
If you install the inotify-tools package, you can use inotifywait for this:
#!/bin/bash
DIR_TO_WATCH=/tmp
STRING=foobar
cd "$DIR_TO_WATCH"
inotifywait -qme close_write --format '%f' -r ./ | while read -r changed_file; do
if grep -q "$STRING" "$changed_file"; then
echo "$STRING found on file $changed_file!"
fi
done
I recommend you take a look at inotifywait's manual for more details on the command-line options.

rename multiple files shell in Linux [closed]

Closed. This question is off-topic. It is not currently accepting answers. Closed 10 years ago.
I have a number of files such as file_022.bmp, file_023.bmp...file_0680.bmp. I need to rename these to something a little bit more convenient such as file_1.bmp, file_2.bmp...file_658.bmp.
Is there a bash script that I could write to do this for me? Thanks for the help and advice.
Luke H
If you're on a Debian-based Linux system, you can use the rename script, which accepts regular expressions to rename files. Here is some more info, because I find the man page hard to find.
e.g.
harald@Midians_Gate:~$ ls p*.php
parse.php pd.php pgrep.php preg_based.php proc.php
Suppose I want to change the extension to .perl and prepend the name with file_; then I use the command:
rename -n 's/([a-z]*)\.php/file_$1.perl/' p*.php
would give
parse.php renamed as file_parse.perl
pd.php renamed as file_pd.perl
pgrep.php renamed as file_pgrep.perl
preg_based.php renamed as preg_file_based.perl
proc.php renamed as file_proc.perl
I select and capture the base filename with ([a-z]*), then use it in the substitution as $1, appending .perl and prepending the literal string file_.
the -n option makes it test run without changing anything
As you can see from this example, your selecting regexp needs to be correctly thought out, or you get cases like preg_based.php above, where you wanted file_preg_based.perl :)
To compensate for that, I would have needed to use ([a-z_]*) here.
It's one of the many reasons why I keep hanging on to Debian; I'd love to find the equivalent for other non-Debian systems, though :-/
If you have files a.bmp, b.bmp, c.bmp and you want to end up with file_1.bmp, file_2.bmp, file_3.bmp, using bash:
mkdir result
index=1
for i in *.bmp
do
mv "$i" "result/file_"$((index++)).bmp
done
Notes:
Using a subdirectory is advised, to avoid accidentally overwriting an existing file that already looks like file_xx.bmp.
If you have too many files to fit on the command line after expansion, you could use something like:
mkdir result
index=1
find . -name "*.bmp" | while read -r i
do
echo mv "$i" "result/file_"$((index++)).bmp
done
After inspecting the output, remove the echo.
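One caveat about both loops: the glob expands in plain lexicographic order, so with mixed zero-padding (file_100.bmp vs file_0680.bmp) the new numbers will not follow the old ones. A sketch that sorts the names numerically first (the demo filenames and contents are hypothetical):

```shell
#!/bin/bash
set -euo pipefail
cd "$(mktemp -d)"            # hypothetical demo directory
echo 22  > file_022.bmp
echo 100 > file_100.bmp
echo 680 > file_0680.bmp

mkdir result
index=1
# sort -V (version sort) compares the digit runs numerically, so
# file_100 comes before file_0680 even though a plain glob would not.
mapfile -t files < <(printf '%s\n' *.bmp | sort -V)
for i in "${files[@]}"; do
    mv "$i" "result/file_$((index++)).bmp"
done
ls result   # file_1.bmp file_2.bmp file_3.bmp
```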