sed not working as expected, but only for directory depth greater than 1 - linux

I am trying to find all instances of a string in all files on my system up to a specified directory depth. I then want to replace these with another string and I am using 'find' and 'sed' by piping one into the other.
This works when I use a base path such as /home or any other directory which isn't "/". From "/" it only works with a directory depth of 1 (so /test.txt is changed, but /home/test.txt isn't). If I change nothing else and use, say, a depth of 2 or 3, neither /test.txt nor /home/test.txt is changed: with depth 2 no warnings appear, with depth 3 I get the output below, and in neither case are the strings replaced.
Worryingly, it did work once out of the blue, but I have no idea how and I can't recreate the result. I should say I know the risks of running these commands as root from the base directory, and the specific use of the programs below is intentional, so I am not looking for an alternative approach, just a clue as to why this isn't working and perhaps a suggestion on how to fix it.
cd /;find . -maxdepth 3 -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'
sed: couldn't open temporary file ./sys/kernel/sedoPGqGB: No such file or directory
sed: couldn't open temporary file ./proc/878/sedtqayiq: No such file or directory
As you can see, there are warnings, but nevertheless I would expect it to work; the commands appear sound. Is there anything I am missing, folks?

This should be:
find / -maxdepth 3 -type f -print -exec sed -i -e 's/teststring123/itworked/g' {} \;
Although changing all files below / strikes me as a very bad idea indeed (I hope you're not running as root!).
The "couldn't open temporary file ./[...]" errors are likely to be because sed, running as your user, doesn't have permission to create files in /.
My version runs from your current working directory, I assume your ${HOME}, where you'll be able to create the temporary file, but you're still unlikely to be able to replace those files vital to the continued running of your operating system.
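If the goal really is a sweep from "/", the usual fix is to prune the virtual filesystems so sed never touches them. A minimal sketch (the substitution is taken from the question; add other pseudo-filesystems such as /dev or /run as needed):
# skip /proc and /sys entirely, then edit the remaining regular files
find / -maxdepth 3 \( -path /proc -o -path /sys \) -prune -o \
    -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'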

Related

BASH loop through multiple files replacing content and filename

I'm trying to find all files with a .pux extension 1 level below my parent directory. Once the files are identified, I want to add a carriage return to each line inside each file (if possible, only if that line doesn't already have the CR). Finally I want to rename the extension to .pun ensuring there is no .pux left behind.
I've been trying different methods with this and my biggest problem is that I cannot develop or debug this code easily as I cannot access the command line directly. I can't access the Linux server that the script will run on. I can only call it from my application on my windows server (trust me, I'm thinking exactly what you are right now).
The Linux server is running BASH 3.2.57(2). I don't believe the Unix2Dos utility is installed, as I've tried using it in its most basic form with no success. I've confirmed my find command can successfully identify the files I need, as I have run it and checked my log file output.
#!/bin/bash
MYSCRIPTS=${0%/*}
PARENTDIR=/home/clnt/parent/
LOGFILE="$MYSCRIPTS"/PUX2PUN.log
find "$PARENTDIR" -mindepth 2 -maxdepth 2 -type f -name "*.pux" > "$LOGFILE"
Logfile output:
/home/clnt/parent/z3y/prz3y.pux
/home/clnt/parent/wsl/prwsl.pux
However, when I have tried to build on this code and pipe those results into a while-read loop, it doesn't appear to do anything.
#!/bin/bash
MYSCRIPTS=${0%/*}
PARENTDIR=/home/clnt/parent/
LOGFILE="$MYSCRIPTS"/PUX2PUN.log
find "$PARENTDIR" -mindepth 2 -maxdepth 2 -type f -name "*.pux" -print0 | while IFS= read -r file; do
sed -i '/\r/! s/$/\r/' "${file}" &&
mv "${file}" "${file/%pux/pun}" >> "$LOGFILE"
done
I'm open to other methods if they are standard in my BASH version and safe. Below my parent directory there should be anywhere from 1 to 250 folders max, and each of those child folders can have up to one pr*.pux file (* will match the folder name, as shown in my example output earlier). So we're not dealing with a ton of files.
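One likely culprit: -print0 emits NUL-terminated names, but a plain read waits for newline-terminated input, so the loop body may never run. A sketch of the loop reworked to read NUL-delimited names (paths and substitution are from the question; the logging line is a guess at the intent):
#!/bin/bash
MYSCRIPTS=${0%/*}
PARENTDIR=/home/clnt/parent/
LOGFILE="$MYSCRIPTS"/PUX2PUN.log
find "$PARENTDIR" -mindepth 2 -maxdepth 2 -type f -name "*.pux" -print0 |
while IFS= read -r -d '' file; do
    # add a CR only to lines that don't already end with one
    # (an anchored variant of the question's pattern)
    sed -i '/\r$/! s/$/\r/' "$file" &&
    # rename .pux -> .pun and log the new name
    mv "$file" "${file/%pux/pun}" &&
    printf '%s\n' "${file/%pux/pun}" >> "$LOGFILE"
done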

How to find and replace an IP address in many archives in linux

Example:
find /tmp/example -type f -print0 | xargs -0 sed -i 's/10.20.1.110/10.10.1.40/g'
I need to replace 10.20.1.110 with 10.10.1.40 in all archives inside /tmp/example.
But this command does not replace anything inside the archives.
The archive types are *.xml, *.txt, *.py, *.jy.
These are not archives, but ordinary text file extensions; thus, if the sed command doesn't work for you, there must be another reason. It may be that the command is executed with insufficient privileges: sed -i exits as soon as it cannot rename its temporary output file to the input file (as is the case if the containing directory has the sticky bit t set and you don't own the file or the directory). Pay heed to error messages.
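A separate pitfall worth checking: in the sed pattern an unescaped dot matches any character, so 10.20.1.110 would also match strings like 10x20x1x110. A safer sketch escapes the dots in the search pattern (the replacement side is literal and needs no escaping):
find /tmp/example -type f -print0 | xargs -0 sed -i 's/10\.20\.1\.110/10.10.1.40/g'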

Script to zip complete file structure depending on file age

Alright, so I have a web server running CentOS at work that is hosting a few websites internally only. It's our development server and thus has lots [read: tons] of old junk websites and whatnot.
I was trying to put together a command that would find files that haven't been modified for over 6 months, group them all in a tarball, and then delete them. Thus far I have tried many different kinds of find commands with various arguments. Our structure looks like this:
/var/www/joomla/username/fileshere/temp
/var/www/username/fileshere
So I tried something along the lines of:
find /var/www -mtime -900 ! -mtime -180 | xargs tar -cf test4.tar
Only to end up with a 10 MB tar, when the expected result would be over 50 GB.
I tried using gzip instead, but I ended up zipping MY WHOLE SERVER, thus making it unusable; I had to transfer the whole filesystem and reinstall a complete new server, with lots of trouble and... you get the idea. So I want to find the perfect command that won't blow up our server but will find all FILES and DIRECTORIES that haven't been modified for over 6 months.
Be careful with ctime.
ctime is related to changes made to the inode (changing permissions, owner, etc.)
atime is when a file was last accessed (check whether your filesystem is mounted with the noatime or relatime options; in that case atime may not behave the way you expect)
mtime is when the data in a file was last modified.
Depending on what you are trying to do, mtime could be your best option.
Besides, you should check the print0 option. From man find:
-print0
True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.
I do not know exactly what you are trying to do, but this command could be useful for you:
find /var/www -mtime +180 -print0 | xargs -0 tar -czf example.tar.gz
Try this:
find /var/www -ctime +180 | xargs tar cf test.tar
The ctime test compares the current time with each file's status-change time (not its modification time; see the caveat above), and if you use + instead of - it selects "files changed more than x days ago".
Then just pass it to tar with xargs and you should be set.
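One caveat with both of these: when the file list is long, xargs splits it across several tar invocations, and each tar -cf run overwrites the archive written by the previous one, which would neatly explain a suspiciously small result. With GNU tar you can hand the whole list to a single tar process instead; a sketch using mtime, per the advice above (the archive name is just an example):
# tar reads NUL-delimited names from stdin in a single invocation
find /var/www -mtime +180 -print0 | tar --null -czf old-files.tar.gz -T -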

How do I write a bash script to replace words in files and then rename files?

I have a folder structure that includes a directory named "generic".
I need to create a bash script that does 4 things:
1. It searches all the files in the generic directory, finds the string 'generic', and changes it to 'something'
2. As above, but changes "GENERIC" to "SOMETHING"
3. As above, but changes "Generic" to "Something"
4. Renames any filename that has "generic" in it to use "something" instead
Right now I am doing this process manually by using the search and replace in NetBeans. I don't know much about bash scripting, but I'm sure this can be done. I'm thinking of something that I would run and that would take "Something" as its input.
Where would I start? What functions should I use? Overall guidance would be great. Thanks.
I am using Ubuntu 10.5 desktop edition.
Editing
The substitution part is a sed script - call it mapname:
#!/bin/sh
sed -i.bak \
-e 's/generic/something/g' \
-e 's/GENERIC/SOMETHING/g' \
-e 's/Generic/Something/g' "$@"
Note that this will change words in comments and strings too, and it will change 'generic' as part of a word rather than just the whole word. If you want just the whole word, use word-boundary markers around the terms: 's/\<generic\>/something/g'. The -i.bak creates backups.
You apply that with (after making the script executable with chmod +x mapname):
find . -type f -exec ./mapname {} +
That creates a command with a list of files and executes it. Clearly, you can, if you prefer, avoid the intermediate mapname shell/sed script (by writing the sed script in place of the word mapname in the find command). Personally, I prefer to debug things separately.
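For completeness, a sketch of that inline variant (same sed body as mapname above):
find . -type f -exec sed -i.bak \
    -e 's/generic/something/g' \
    -e 's/GENERIC/SOMETHING/g' \
    -e 's/Generic/Something/g' {} +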
Renaming
The renaming of the files is best done with the rename command - of which there are two variants, so you'll need to read your manual. Use one of these two:
find . -name '*generic*' -depth -exec rename generic something {} +
find . -name '*generic*' -depth -exec rename s/generic/something/g {} +
(Thanks to Stephen P for pointing out that I was using a more powerful Perl-based variant of rename with full Perl regexp capacity, and to Zack and Jefromi for pointing out that the Perl one is found in the real world* too.)
Notes:
This renames directories.
It is probably worth keeping the -depth in there so that the contents of the directories are renamed before the directories; you could otherwise get messages because you rename the directory and then can't locate the files in it (because find gave you the old name to work with).
The more basic rename will move ./generic/do_generic.java to ./something/do_generic.java only. You'd need to run the command more than once to get every component of every file name changed; see the sketch after these notes.
* The version of rename that I use is adapted from code in the 1st Edition of the Camel book.
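If you are stuck with the basic rename, a loop along these lines (a hypothetical sketch, assuming GNU find's -quit) repeats the pass until no path containing "generic" remains:
# keep renaming until find no longer reports a match
while [ -n "$(find . -name '*generic*' -print -quit)" ]; do
    find . -name '*generic*' -depth -exec rename generic something {} +
done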
Steps 1-3 can be done like this:
find .../path/to/generic -type f -print0 |
xargs -0 perl -pi~ -e \
's/\bgeneric\b/something/g;
s/\bGENERIC\b/SOMETHING/g;
s/\bGeneric\b/Something/g;'
I don't understand exactly what you want to happen in step 4 so I can't help with that part.

How to build one file that contains other files, selected by mask?

I need to put the contents of all *.as files in some specified folder into one big file.
How can I do it in Linux shell?
You mean cat *.as > onebigfile?
If you need all files in all subdirectories, the most robust way to do this is:
rm onebigfile
find -name '*.as' -print0 | xargs -0 cat >> onebigfile
This:
deletes onebigfile
for each file found, appends it onto onebigfile (this is why we delete it in the previous step -- otherwise you could end up tacking onto some existing file.)
A less robust but simpler solution:
cat `find -name '*.as'` > onebigfile
(The latter version doesn't handle very large numbers of files or files with weird filenames so well.)
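A middle ground is to let find run cat itself, which copes with odd filenames and large file counts without the pipeline; since the shell truncates onebigfile before find starts, the rm step also becomes unnecessary (assuming onebigfile itself doesn't match *.as):
find . -name '*.as' -exec cat {} + > onebigfile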
Not sure what you mean by compile but are you looking for tar?
