I want to change multiple different strings across all files in a folder to one new string.
When the strings in the text files (all within the same directory) look like this:
file1.json: "url/1.png"
file2.json: "url/2.png"
file3.json: "url/3.png"
etc.
I would need to point them all to a single PNG, i.e., "url/static.png", so all three files have the same URL inside pointing to the same PNG.
How can I do that?
You can use the commands find and sed for this. Make sure you are in the folder whose files you want to modify.
find . -name '*.json' -print0 | xargs -0 sed -i 's|"url/[0-9][0-9]*\.png"|"url/static.png"|g'
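Before editing in place, it can help to preview what will change. A small sketch, assuming GNU sed and some hypothetical sample files:

```shell
# Set up a scratch directory with sample files (names and contents assumed).
dir=$(mktemp -d)
printf '{"image": "url/1.png"}\n' > "$dir/file1.json"
printf '{"image": "url/2.png"}\n' > "$dir/file2.json"

# Preview the lines that would change before touching anything.
grep -rn 'url/[0-9][0-9]*\.png' "$dir"

# Apply the replacement in place across all .json files.
find "$dir" -name '*.json' -print0 |
  xargs -0 sed -i 's|url/[0-9][0-9]*\.png|url/static.png|g'

grep -rn 'url/static.png' "$dir"
```

The `-print0`/`-0` pair keeps filenames with spaces from being split by xargs.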
Suggesting this bash script:
#!/bin/bash
# For each file with the extension .json in the current directory
for currFile in *.json; do
    # Extract the file's ordinal from the current filename
    filesOrdinal=$(echo "$currFile" | grep -o '[[:digit:]]\+')
    # Use the ordinal to identify the string and replace it in the current file
    sed -i 's|url/'"$filesOrdinal"'\.png|url/static.png|' "$currFile"
done
I have already followed this question (How to replace a string in multiple files in linux command line).
My question is rather an extension of the same.
I want to check only specific file extensions in the subfolders also but not every file extension.
What I have already tried:
grep -rli 'old-word' * | xargs -I{} sed -i 's/old-word/new-word/g' {}
My problem: It is changing in every other file format as well. I want to search and replace only in one file extension.
Please add another answer where I can change the entire line of a file as well not just one word.
Thanks in advance.
The simplest solution is to use a grep command with --include filters:
grep -rli --include="*.html" --include="*.json" 'old-word' *
The disadvantage of this solution is that you do not have clear control over which files are scanned.
A better approach is to tune a find command to locate your desired files, using the RegExp filtering option -regex to filter file names.
That way you can verify that the correct files are scanned, then feed the find command's results to grep as its scanning list.
Example:
Assuming you are looking for the file extensions txt, pdf, and html, and your search path begins at /home/user/data:
find /home/user/data -regex ".*\.\(html\|txt\|pdf\)$"
Once you have located your files, it is possible to grep-match each file returned by the above find command:
grep -rli 'old-word' $( find /home/user/data -regex ".*\.\(html\|txt\|pdf\)$" )
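The question also asks about replacing an entire line, not just one word. With sed, the pattern can match the whole line around the word. A sketch, where the file name and replacement text are assumptions:

```shell
# Scratch file with a line containing the target word (contents assumed).
dir=$(mktemp -d)
printf 'keep this\nold-word is here\nkeep that\n' > "$dir/page.html"

# Replace every line containing old-word with a fixed new line,
# limiting the search to .html files under the directory.
grep -rli --include='*.html' 'old-word' "$dir" |
  xargs -I{} sed -i 's/.*old-word.*/entire new line/' {}

cat "$dir/page.html"
```

`.*old-word.*` anchors nothing, so it consumes the whole line wherever the word appears.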
I have a folder/subfolders that contain some files with filenames that end with a random numeric extension:
DWH..AUFTRAG.20211123115143.A901.3801176
DWH..AUFTRAGSPOSITION.20211122002147.A901.3798013
I would like to remove everything after A901 from the above filenames.
For example:
DWH..AUFTRAG.20211123115143.A901 (remove this .3801176)
DWH..AUFTRAGSPOSITION.20211122002147.A901 (remove this .3798013) from the filename
How do I use rename or any other Linux command to remove everything after A901 while keeping the rest of the filename as it is?
I can see there are 5 '.' (dots) before the trailing number, so I improvised a quick workaround.
I made some files in a folder, and also created a subfolder with more files inside, following the name pattern that you gave.
I put together a command, and it looks like this:
find "$PWD" | grep A901 | while read -r F; do mv "${F}" "$(echo "${F}" | cut -d . -f 1-5)"; done
When executed, it worked for me; terminal output below.
rexter@rexter:~/Desktop/test$ find $PWD
/home/rexter/Desktop/test
/home/rexter/Desktop/test/test1
/home/rexter/Desktop/test/test1/DWH..AUFTRAG.20211123115143.A901.43214
/home/rexter/Desktop/test/test1/DWH..AUFTRAGSPOSITION.2021112200fsd2147.A901.31244324
/home/rexter/Desktop/test/DWH..AUFTRAG.20211123115143.A901.321423
/home/rexter/Desktop/test/DWH..AUFTRAGSPOSITION.20211122002147.A901.3124325
rexter@rexter:~/Desktop/test$ find "$PWD" | grep A901 | while read -r F; do mv "${F}" "$(echo "${F}" | cut -d . -f 1-5)"; done
rexter@rexter:~/Desktop/test$ find $PWD
/home/rexter/Desktop/test
/home/rexter/Desktop/test/test1
/home/rexter/Desktop/test/test1/DWH..AUFTRAG.20211123115143.A901
/home/rexter/Desktop/test/test1/DWH..AUFTRAGSPOSITION.2021112200fsd2147.A901
/home/rexter/Desktop/test/DWH..AUFTRAG.20211123115143.A901
/home/rexter/Desktop/test/DWH..AUFTRAGSPOSITION.20211122002147.A901
rexter@rexter:~/Desktop/test$
I don't know if this is the proper way to do it, but it makes things work.
Let me know if it is useful to you.
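A variant that does not depend on counting dots is bash parameter expansion: `${f%.A901.*}` strips the shortest `.A901.*` suffix from the end, so the `.A901` part can be re-attached explicitly. A sketch on sample names taken from the question:

```shell
# Recreate the layout from the question in a scratch directory.
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/DWH..AUFTRAG.20211123115143.A901.3801176" \
      "$dir/sub/DWH..AUFTRAGSPOSITION.20211122002147.A901.3798013"

# Strip everything after ".A901" using parameter expansion,
# which works no matter how many dots precede it.
find "$dir" -type f -name '*.A901.*' | while IFS= read -r f; do
  mv "$f" "${f%.A901.*}.A901"
done

find "$dir" -type f
```

This only renames files whose names actually contain `.A901.`, so already-renamed files are left alone on a second run.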
So I have a text file containing the names of ~1000 folder names, and a directory with around ~30,000 folders. What I need to do is to find a bash command that will read the text file for the folder names, and grep those folders from the directory and copy them to a new destination. Is this at all possible?
I am new to coding, my apologies if this isn't worded well.
You can use a bash script like this one:
fileList=$(cat nameFile)
srcDir="/home/ex/src"
destDir="/home/ex/dest"
for name in ${fileList}
do
    cp -r "${srcDir}/${name}" "${destDir}"/
done
Definitely possible - and you don't even need grep, assuming your text file has one folder name per line:
cp -r `cat filenames.txt` path_to_copy_location/
I would write:
xargs cp -t /destination/directory < file.of.dirnames
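If any folder names contain spaces, the word splitting in `$(cat …)` and plain xargs will break both approaches above. A while-read loop handles that; a sketch, with all paths assumed:

```shell
# Scratch source/destination directories and a list file (names assumed).
srcDir=$(mktemp -d); destDir=$(mktemp -d); list=$(mktemp)
mkdir "$srcDir/alpha" "$srcDir/beta two"
printf 'alpha\nbeta two\n' > "$list"

# Read one folder name per line, preserving embedded spaces.
while IFS= read -r name; do
  cp -r "$srcDir/$name" "$destDir/"
done < "$list"

ls "$destDir"
```

`IFS=` plus `read -r` keeps leading whitespace and backslashes in the names intact.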
I have some files located in one directory /home/john
I want to copy all the files with *.text extension from this directory and save them as *.text.bkup, again in the same directory, i.e. /home/john
Is there a single command with which I can do that?
Also, extending the same idea, is it possible to copy all the files with multiple extensions (e.g. *.text and *.doc) as *.text.bkup and *.doc.bkup respectively (again in the same directory)?
This is best accomplished with a Shell loop:
~/tmp$ touch one.text two.text three.doc four.doc
~/tmp$ for FILE in *.text *.doc; do cp "${FILE}" "${FILE}.bkup"; done
~/tmp$ ls -1
four.doc
four.doc.bkup
one.text
one.text.bkup
three.doc
three.doc.bkup
two.text
two.text.bkup
What happens in the code above is the shell gets all .text and .doc files and then loops through each value one by one, assigning the variable FILE to each value. The code block between the "do" and the "done" is executed for every value of FILE, effectively copying each file to filename.bkup.
You can achieve this easily with find:
find /home/john -iname '*.text' -type f -exec cp {} {}.backup \;
No, there is no single/simple command to achieve this with standard tools, but you can write a script like this to do it for you:
for file in *.text
do
    cp -i "${file}" "${file}.bkup"
done
With the -i option you are asked to confirm each overwrite.
I sort of use a roundabout way to achieve this. It involves a Perl script and needs additional steps.
Step 1:
Copy the names of all the text files into a text file.
find . -maxdepth 1 -type f -name '*.text' > file_name1.txt
Step 2:
Make a duplicate of the copied file.
cp file_name1.txt file_name2.txt
Now open file_name2.txt in the vi editor and do a simple string substitution:
:%s/\.text$/.text.backup/
Step 3: Merge the source and destination file names into a single file separated by a comma.
paste -d, file_name1.txt file_name2.txt > file_name.txt
Step 4: Run the below perl script to achieve the desired results
open(FILE1, "<file_name.txt") or die 'file does not exist'; # opens the file that has source and destination names separated beforehand by commas
chomp(@F1_CONTENTS = <FILE1>); # copies the content of the file into an array
close FILE1;
foreach $f1 (@F1_CONTENTS)
{
    @file_name = split(/,/, $f1); # splits the line into source and destination at the comma
    print "cp $file_name[0] $file_name[1]\n";
    system("cp $file_name[0] $file_name[1]"); # performs the actual copy here
}
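The four steps above can also be collapsed into a single shell loop. A sketch, assuming the same `.text` / `.text.backup` naming:

```shell
# Scratch directory with sample .text files (names assumed).
dir=$(mktemp -d) && cd "$dir"
touch one.text two.text

# Steps 1-4 in one pass: find each .text file and copy it to .text.backup.
find . -maxdepth 1 -type f -name '*.text' | while IFS= read -r f; do
  cp "$f" "$f.backup"
done

ls
```

No intermediate list files or editor session are needed; the loop reads the find output directly.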
I have a list of directories within directories and this is what I am trying to attempt:
find a specific file format which is .xml
within all these .xml files, read the contents in the files and remove line 3
For line 3, its string is as follows: dxflib <Name of whatever folder it is in>.dxb
I tried using find -name "*.xml" | xargs grep -v "dxflib" in the terminal (I am using Linux), and while it runs and displays the filtered results, it does not write the changes back to the files.
From what I found online, I would apparently need to redirect the output with >> output.txt, etc.
Is there any way to make it save / overwrite the original file instead?
This removes the third line of a file:
sed -i '3d' file
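Combined with find from the question, the deletion can be applied in place to every .xml file under a directory. A sketch, with the sample file contents assumed:

```shell
# Scratch .xml file whose third line is the dxflib reference (contents assumed).
dir=$(mktemp -d)
printf 'line1\nline2\ndxflib sample.dxb\nline4\n' > "$dir/a.xml"

# Delete line 3 of every .xml file, editing the files in place.
find "$dir" -name '*.xml' -exec sed -i '3d' {} +

cat "$dir/a.xml"
```

The `+` terminator batches many files into one sed invocation; `-i` (GNU sed) writes the result back instead of printing it, which is what the grep -v attempt was missing.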