#!/usr/bin/env bash
DOCUMENT_ROOT=/var/www/html/
rm index.php
cd modules/mymodules/
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
shows a warning:
-i used with no filenames on the command line, reading from STDIN.
It prevents the rest of the script from running.
Any solutions?
What I actually need is to run
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
inside a shell script.
I am using Ubuntu with Docker.
Let's look at this a step at a time. First, you're running this grep command:
grep -ril y.yy.yy.y *
This recursively (-r) searches all files and directories in your current directory. It looks for files containing the text y.yy.yy.y in any case (-i) and returns a list of the matching filenames (-l). Note that each unescaped dot in the pattern matches any character, so it matches slightly more than the literal string.
This command will return either a list of filenames or nothing.
Whatever is returned from that grep command is then passed as arguments to your Perl command:
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' [list of filenames]
If grep returns a list of files, then the -p option here means that each line in every file in the list is (in turn) run through that substitution and then printed. The -i turns on in-place editing: for each input file the output is written to a new file, which then replaces the original under the same name.
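If you want to keep the originals around while testing, -i accepts an optional backup suffix (standard Perl behaviour; file1 and file2 here are placeholder names):
# keeps file1.bak and file2.bak as untouched copies
perl -p -i.bak -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' file1 file2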
So far so good. But what happens if the grep doesn't return any filenames? In that case, your Perl command doesn't get any filenames and that would trigger the error that you are seeing.
So my guess is that your grep command isn't doing what you want it to and is returning an empty list of filenames. Try running the grep command on its own and see what you get.
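One way to make the script robust against an empty match list is to let xargs decide whether perl runs at all. A sketch, assuming GNU grep and xargs (-Z and -0 make the pipeline safe for filenames containing spaces; -r skips perl entirely when nothing matched):
grep -rilZ 'y.yy.yy.y' . | xargs -0 -r perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g'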
In Linux I use InfluxDB which can make a backup of the database for archival purposes. Each backup comprises a series of files with the same prefix "/tank/Backups/var/Influxdb/20191225T235655Z." and different extensions.
I wanted to write a bash script which first deletes the oldest existing backups, then creates a new one (here I paste only the removal):
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' | \
sort -ru | sed 's/$/.*/' | tail -n +4 | xargs -d '\n' -r rm --
However, when I run the script as "sudo", I get
rm: cannot remove '/tank/Backups/var/Influxdb/20191225T235655Z.*': No such file or directory
When I run the quoted script, except the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct. Also, if I manually write
sudo rm /tank/Backups/var/Influxdb/20190930T215357Z.*
the command succeeds.
Why is the script reporting an error?
I'm using Ubuntu 18.04 and the folder "/tank" is a ZFS volume.
Better:
find /tank/Backups/var/Influxdb/* -mtime +5 -delete
to remove files older than 5 days.
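Before wiring -delete into a script, it can be worth previewing what would match (a dry run of the same find):
find /tank/Backups/var/Influxdb/* -mtime +5 -print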
Then you can run the command that creates the new backup.
Explaining the Error
This answer is only here to explain the error and give a deeper understanding of what is happening. If you are simply looking for an elegant solution, see the other answers.
When I run the quoted script, except the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct.
The listed strings are not what you want. When you pass these paths to rm, it sees them as literal strings, that is, two files whose names end with a literal *. Since you don't have such files, you get an error.
When you type rm * manually into your console bash (not rm!) does globbing. bash searches files and replaces the * with the list of found files. Only after that bash executes rm foundFile1 foundFile2 .... rm never sees the *.
Strings inside a pipeline are not processed by bash, but by the commands in the pipeline, in your case rm. rm does not glob.
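A quick way to see the difference, in a throwaway empty directory:
$ touch a1 a2
$ echo a*                  # bash expands the glob before echo runs
a1 a2
$ echo 'a*' | xargs echo   # xargs hands the literal string to echo
a*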
You could run bash inside your pipeline and let it expand the * you inserted earlier. To this end, replace the last command in your pipeline with xargs -r bash -c 'rm -- $*' --. However, note that your paths are not quoted here. If there are spaces or literal * characters in your filenames, the command will break. The lack of quoting is necessary for globbing, as a quoted "*" is not expanded by bash.
To quote your files you have to insert the * glob inside the bash command:
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' |
sort -ru | tail -n +4 | xargs -d\\n -L1 -r bash -c 'rm -- "$0."*'
The above command is only a simple fix for your pipeline. It is neither elegant nor very robust; using a tool like find instead is strongly recommended.
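For completeness, a plain while read loop avoids the bash -c layer entirely, because the glob is then expanded by the shell running the loop. A sketch under the same assumptions as the original pipeline (timestamped prefixes so lexical sort matches age, no newlines in filenames):
# list backup prefixes newest first, keep everything past the 3 newest
ls -tp /tank/Backups/var/Influxdb/ | grep -v '/$' | sed -E 's/\..+//' | sort -ru | tail -n +4 |
while IFS= read -r prefix; do
    # the glob after the closing quote is expanded by this shell
    rm -- "/tank/Backups/var/Influxdb/$prefix".*
done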
Trying to figure out how to iterate through a .txt file (filemappings.txt) line by line, then split each line using tab (\t) as a delimiter, so that we can create the directory specified to the right of the tab (mkdir -p).
Reading filemappings.txt and then splitting each line by tab
server/ /client/app/
server/a/ /client/app/a/
server/b/ /client/app/b/
Would turn into
mkdir -p /client/app/
mkdir -p /client/app/a/
mkdir -p /client/app/b/
Would xargs be a good option? Why or why not?
cut -f 2 filemappings.txt | tr '\n' '\0' | xargs -0 mkdir -p
xargs -0 is great for vector operations.
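Where GNU xargs is available, the tr step can be folded into xargs itself (same caveats about tabs or newlines inside names; the -- guards against names starting with a dash):
cut -f 2 filemappings.txt | xargs -d '\n' mkdir -p --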
You already have an answer telling you how to use xargs. In my experience, xargs is useful when you want to run a simple command on a list of arguments that are easy to retrieve. In your example, xargs will do nicely. However, if you want to do something more complicated than run a simple command, you may want to use a while loop:
while IFS=$'\t' read -r a b
do
mkdir -p "$b"
done <filemappings.txt
In this special case, read a b will read two arguments separated by the defined IFS and put each in a different variable. If you are a one-liner lover, you may also do:
while IFS=$'\t' read -r a b; do mkdir -p "$b"; done <filemappings.txt
In this way you may read multiple arguments to apply to any series of commands; something that xargs is not well suited to do.
Using read -r will read a line literally regardless of any backslashes in it, in case you need to read a line with backslashes.
Also note that some operating systems may allow tabs as part of a file or directory name. That would break the use of the tab as the separator of arguments.
As others have pointed out, the \t character could also be part of a file or directory name, in which case any tab-splitting command may fail. Assuming the question represents the true form of the input file, one can use:
$ grep -o -P '(?<=\t).*' filemappings.txt | xargs -d'\n' mkdir -p
It uses -P (perl-style regex) with a lookbehind to match everything after the \t (TAB) character, then -d'\n' makes xargs treat each matched line as a single argument, passing them all to one mkdir -p invocation.
sed -n '/\t/{s:^.*\t\t*:mkdir -p ":;s:$:":;p}' filemappings.txt | bash
sed -n: print nothing by default; the /\t/ address restricts the commands to lines containing a tab (the delimiter)
s:^.*\t\t*:mkdir -p ":: replaces everything from the beginning of the line through the tab(s) with mkdir -p " (note the opening quote); s:$:": appends the closing quote; p prints the generated command
| bash: pipes the generated commands to bash, which creates the folders
With GNU Parallel it looks like this:
parallel --colsep '\t' mkdir -p {2} < filemappings.txt
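GNU Parallel can also print the commands instead of running them, which is handy for checking the column split first:
parallel --dry-run --colsep '\t' mkdir -p {2} < filemappings.txt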
I am working on the definition of regular expressions. With the command
file=`echo $2 | sed -e "s/\&/\&amp;/g" \
                    -e "s/</\&lt;/g" \
                    -e "s/>/\&gt;/g" \
                    -e "s/'/\&#39;/g"`
a shell script accesses a file in a file system and then continues editing the file. That works pretty well. However, it cannot handle files whose file path contains two spaces in succession.
Is it possible to adapt this command so that such special cases in the file path are handled?
The easiest thing to do is put the filename in quotes on the command line. For example:
$ script.sh arg "file name"
The other thing you can do, if the file name is the last argument the script receives, is to take all of the remaining command line args. E.g.:
shift # shifts off the first argument, so what was $2 is now $1
file=`echo $* | sed ....`
I already pass the file name, including the file path, to the command.
The program "set_attributes" sets with option -w the value 1 to the specified file "path to file"
./set_attributes -u user -p password -w 1 server_2 "path to file"
./set_attributes -u user -p password -w 1 server_2 "/example/folder
1/filename.jpg
The program "set_attributes" does not handle this file, because there are two consecutive blank characters in the file path.
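The double space is actually lost one step earlier, inside the unquoted echo $2: the shell word-splits the value and echo rejoins the words with single spaces. Quoting the expansion preserves the run of spaces. A sketch based on the command from the question (printf is used instead of echo to avoid echo's backslash quirks):
# quoting "$2" keeps consecutive spaces intact through word splitting
file=$(printf '%s\n' "$2" | sed -e "s/\&/\&amp;/g" \
                                -e "s/</\&lt;/g" \
                                -e "s/>/\&gt;/g" \
                                -e "s/'/\&#39;/g")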
I am writing a bash script to copy some config files. I run the file using sudo bash configure.sh.
#!/bin/bash
cp config/ocr_pattern /usr/share/tesseract-ocr/tessdata/ocr_pattern
cp config/ocr_config /usr/share/tesseract-ocr/tessdata/tessconfigs/ocr_config
However, when I view the changes made, ocr_config is copied correctly but ocr_pattern is copied with ocr_pattern? as the filename instead of ocr_pattern. There is an additional ? character appended to the filename. What is the issue here?
cat -A configure.sh shows:
#!/bin/bash^M
cp config/ocr_pattern /usr/share/tesseract-ocr/tessdata/ocr_pattern^M
cp config/ocr_config /usr/share/tesseract-ocr/tessdata/tessconfigs/ocr_config
As shown by the output of cat -A, you have carriage returns (\r, rendered as ^M) at the end of some lines. The trailing \r becomes part of the target filename, and ls displays that unprintable character as ?, which is the extra character you are seeing.
Remove those:
sed -i 's/\r$//' configure.sh
or just use dos2unix:
dos2unix configure.sh
I have a sed file that contains a few substitutions; it is executed on a file using the following syntax:
sed -f mysedfile file.txt > fixed_file.txt
I would like to test a system variable and depending what that variable contains, execute different sed operations on file.txt.
Would it be possible to put this logic into mysedfile?
Thank you for the help.
Perl was explicitly created to get around limitations of sed and awk. The -p mode runs a script for each line in the file. You can put it on the commandline:
perl -p -e "s/foo/\$ENV{'HOME'}/e" < file.txt
Or move the script to a file (you can remove the '\' before the $)
perl -p file.pl < file.txt
Or make the first line of your script like this so you can run it directly.
#!/usr/bin/perl -p
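Building on that, a minimal sketch of such a script that branches on an environment variable (MODE and the foo patterns are hypothetical placeholders; the // operator needs Perl 5.10+):
#!/usr/bin/perl -p
# pick a substitution based on the MODE environment variable
if (($ENV{MODE} // '') eq 'prod') {
    s/foo/live-value/g;
} else {
    s/foo/test-value/g;
}
Invoked, for example, as MODE=prod ./file.pl file.txt > fixed_file.txt.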