Bash script to mkdir on each line of a file that has been split by a delimiter? - linux

Trying to figure out how to iterate through a .txt file (filemappings.txt) line by line, then split each line on the tab (\t) delimiter, so that we can create the directory specified to the right of the tab with mkdir -p.
Reading filemappings.txt and then splitting each line by tab
server/ /client/app/
server/a/ /client/app/a/
server/b/ /client/app/b/
Would turn into
mkdir -p /client/app/
mkdir -p /client/app/a/
mkdir -p /client/app/b/
Would xargs be a good option? Why or why not?

cut -f 2 filemappings.txt | tr '\n' '\0' | xargs -0 mkdir -p
xargs -0 is great for bulk operations like this: the NUL delimiter copes with any characters in the paths, and xargs passes many directories to a single mkdir -p invocation.
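A quick way to try it out (a sketch using a scratch path under /tmp/demo instead of the real /client, so it needs no special privileges):
# Build a small tab-separated mapping file, then run the pipeline.
printf 'server/\t/tmp/demo/client/app/\nserver/a/\t/tmp/demo/client/app/a/\n' > filemappings.txt
cut -f 2 filemappings.txt | tr '\n' '\0' | xargs -0 mkdir -p
ls -d /tmp/demo/client/app/a/   # confirms the directories were created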

You already have an answer telling you how to use xargs. In my experience, xargs is useful when you want to run a simple command on a list of arguments that are easy to retrieve. In your example, xargs will do nicely. However, if you want to do something more complicated than run a simple command, you may want to use a while loop:
while IFS=$'\t' read -r a b
do
mkdir -p "$b"
done <filemappings.txt
In this special case, read -r a b will read two fields separated by the defined IFS and put each one in a different variable. If you are a one-liner lover, you may also do:
while IFS=$'\t' read -r a b; do mkdir -p "$b"; done <filemappings.txt
In this way you may read multiple arguments to apply to any series of commands; something that xargs is not well suited to do.
Using read -r will read a line literally regardless of any backslashes in it, in case you need to read a line with backslashes.
Also note that some operating systems may allow tabs as part of a file or directory name. That would break the use of the tab as the separator of arguments.
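As a sketch of that flexibility, the same loop can act on both columns at once; the copy step here is illustrative, not part of the original answer:
while IFS=$'\t' read -r src dst; do
  mkdir -p "$dst"          # create the target directory (right column)
  cp -r "${src}." "$dst"   # hypothetical extra step: copy the source contents over
done <filemappings.txt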

As others have pointed out, the \t character could also be part of a file or directory name, in which case the following command may fail. Assuming the question represents the true form of the input file, one can use:
$ grep -o -P '(?<=\t).*' filemappings.txt | xargs -d'\n' mkdir -p
It uses a -P Perl-style regex (here, a lookbehind) to capture everything after the \t (tab) character, then -d'\n' makes xargs treat each line as a single argument and pass them all to one mkdir -p invocation.

sed -n '/\t/{s:^.*\t\t*:mkdir -p ":;s:$:":;p}' filemappings.txt | bash
sed -n: only operate on lines that contain a tab (the delimiter)
s:^.*\t\t*:mkdir -p ":: replace everything from the beginning of the line through the tab(s) with mkdir -p " (the second substitution appends the closing quote)
| bash: have bash run the generated mkdir commands
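It is worth previewing the generated commands before executing them; simply drop the final | bash:
sed -n '/\t/{s:^.*\t\t*:mkdir -p ":;s:$:":;p}' filemappings.txt
# mkdir -p "/client/app/"
# mkdir -p "/client/app/a/"
# mkdir -p "/client/app/b/"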

With GNU Parallel it looks like this:
parallel --colsep '\t' mkdir -p {2} < filemappings.txt
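parallel also has a --dry-run flag that prints the commands instead of running them, which is handy for checking the column split first:
parallel --dry-run --colsep '\t' mkdir -p {2} < filemappings.txt
# mkdir -p /client/app/
# mkdir -p /client/app/a/
# mkdir -p /client/app/b/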

Related

How can I delete the oldest n group of files with the same prefix?

In Linux I use InfluxDB which can make a backup of the database for archival purposes. Each backup comprises a series of files with the same prefix "/tank/Backups/var/Influxdb/20191225T235655Z." and different extensions.
I wanted to write a bash script which first deletes the oldest existing backups, then creates a new one (here I paste only the removal):
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' | \
sort -ru | sed 's/$/.*/' | tail -n +4 | xargs -d '\n' -r rm --
However, when I run the script as "sudo", I get
rm: cannot remove '/tank/Backups/var/Influxdb/20191225T235655Z.*': No such file or directory
When I run the quoted script without its last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct. Also, if I manually write
sudo rm /tank/Backups/var/Influxdb/20190930T215357Z.*
the command succeeds.
Why is the script reporting an error?
I'm using Ubuntu 18.04 and the folder "/tank" is a ZFS volume.
Better to do:
find /tank/Backups/var/Influxdb/* -mtime +5 -delete
to remove files older than 5 days.
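A sketch restricting this to the backup naming pattern from the question (-maxdepth 1 avoids descending into subdirectories; the 5-day cutoff is an assumption):
find /tank/Backups/var/Influxdb -maxdepth 1 -type f \
     -name '20*Z.*' -mtime +5 -delete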
Then you can go on and create the new backup.
Explaining the Error
This answer is only here to explain the error and give a deeper understanding of what is happening. If you are simply looking for an elegant solution, see the other answers.
When I run the quoted script without its last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct
The listed strings are not what you want. When you pass these paths to rm it sees them just as literal strings, that is, two files whose names end with a literal *. Since you don't have such files you get an error.
When you type rm * manually into your console, bash (not rm!) does the globbing. bash searches for files and replaces the * with the list of found files. Only after that does bash execute rm foundFile1 foundFile2 .... rm never sees the *.
Strings inside a pipeline are not processed by bash, but by the commands in the pipeline, in your case rm. rm does not glob.
You could run bash inside your pipeline and let it expand the * you inserted earlier. To this end, replace the last command in your pipeline with xargs -r bash -c 'rm -- $*' --. However, note that your paths are not quoted here. If there are spaces or literal * characters in your filenames, the command will break. This is necessary for globbing, as a quoted "*" is not expanded by bash.
To quote your files you have to insert the * glob inside the bash command:
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' |
sort -ru | tail -n +4 | xargs -d\\n -L1 -r bash -c 'rm -- "$0."*'
The above command is only a simple fix for your pipeline. It is neither elegant nor very robust. Using a tool like find is strongly recommended.

Generating a script to delete a list of files

I have a file containing a list of paths I want to delete.
Adding rm in front of each path (to generate a script that will run these deletions) seems like the obvious approach. How can I do this?
Changing a list of filenames into a shell script by prepending rm to the beginning of each line is dangerous practice: Filenames may not map to themselves when interpreted by a shell, and may even have side effects that include running arbitrary commands. Don't do that.
If you want to delete all files named in a file, just use xargs to directly invoke rm with the filenames passed:
xargs rm -f -- <input-file
Note that this will have xargs attempt to interpret escape characters, quotes, etc. inside the names; if you don't want this, and have GNU xargs:
xargs -d $'\n' rm -f -- <input-file
Better yet, if you have control over your input file's format, use a NUL-delimited stream of filenames rather than a newline-delimited list (POSIX filesystems allow literal newlines inside filenames). If your input file is NUL-delimited, you can use:
xargs -0 rm -f -- <null-delimited-input-file
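A common way to produce such a NUL-delimited list is find -print0; a sketch, with a made-up path and pattern for illustration:
# Collect candidate files NUL-delimited, then delete them safely.
find /var/tmp/scratch -name '*.tmp' -print0 > null-delimited-input-file
xargs -0 rm -f -- < null-delimited-input-file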
If you really want to generate a shell script that will delete a listed set of names, by the way, you can do this in bash, like so:
while IFS= read -r filename; do
printf 'rm -f -- %q\n' "$filename"
done <input-list >output-script
Using printf %q escapes content in such a way that when reread by bash, it will be parsed as its literal contents (thus, putting backslashes before characters like * or $ which might otherwise be interpreted).
That said, because this invokes rm once per file, it will be less efficient than xargs (which passes multiple filenames to each rm invocation).
There is actually a middle ground, though: you can have xargs invoke bash and generate a safely quoted list in the latter, with only a minimal number of invocations:
{
echo "#!/bin/bash"
xargs bash -c 'printf "rm -f -- "; printf "%q " "$@"; printf "\n"' _   # the _ fills $0 so no filename is consumed
} <input-file >output-script
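For example, with two hypothetical names, one containing a $ that %q must escape (note that plain xargs still word-splits on whitespace and interprets quotes, so this sketch avoids spaces in the names):
printf '%s\n' 'plain.txt' 'dollar$name.txt' > input-file
{
echo "#!/bin/bash"
xargs bash -c 'printf "rm -f -- "; printf "%q " "$@"; printf "\n"' _
} <input-file >output-script
cat output-script
# #!/bin/bash
# rm -f -- plain.txt dollar\$name.txt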
You can use sed:
sed 's/^/rm /' foo.sh > foo2.sh
^ matches the beginning of a line, so the substitution prepends "rm " (with a trailing space) to each line.

Bash loop through directory including hidden file

I am looking for a way to make a simple loop in bash over everything my directory contains, i.e. files, directories and links including hidden ones.
I would prefer it to be specifically in bash, but the solution should be as general as possible. Of course, file names (and directory names) can contain whitespace, newlines and other symbols. Everything but "/" and ASCII NUL (0x00) is allowed, even as the first character. Also, the result should exclude the '.' and '..' directories.
Here is a generator of the files the loop has to deal with:
#!/bin/bash
mkdir -p test
cd test
touch A 1 ! "hello world" \$\"sym.dat .hidden " start with space" $'\n start with a newline'
mkdir -p ". hidden with space" $'My Personal\nDirectory'
So my loop should look like (but has to deal with the tricky stuff above):
for i in *; do
echo ">$i<"
done
My closest try, using ls and a bash array, is not working:
IFS=$(echo -en "\n\b")
l=( $(ls -A .) )
for i in ${l[@]} ; do
echo ">$i<"
done
unset IFS
Or using bash glob patterns in an array, but the '..' directory is not excluded:
IFS=$(echo -en "\n\b")
l=( [[:print:]]* .[[:print:]]* )
for i in ${l[@]} ; do
echo ">$i<"
done
unset IFS
* doesn't match files beginning with ., so you just need to be explicit:
for i in * .[^.]*; do
echo ">$i<"
done
.[^.]* will match all files and directories starting with ., followed by a non-. character, followed by zero or more characters. In other words, it's like the simpler .*, but excludes . and ... If you need to match something like ..foo, then you might add ..?* to the list of patterns.
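In bash specifically, another option is the dotglob shell option, which makes * match hidden names too while still excluding . and .. (a sketch, not from the original answer):
shopt -s dotglob nullglob   # '*' now matches dotfiles; nullglob drops patterns with no matches
for i in *; do
  printf '>%s<\n' "$i"
done
shopt -u dotglob nullglob   # restore the default globbing behaviour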
As chepner noted in the comments, this solution assumes you're running GNU bash along with GNU find and GNU sort...
GNU find can be prevented from recursing into subdirectories with the -maxdepth option. Then use -print0 to end every filename with a 0x00 byte instead of the newline you'd usually get from -print.
The sort -z sorts the filenames between the 0x00 bytes.
Then, you can use sed to get rid of the dot and dot-dot directory entries (although GNU find seems to exclude the .. already).
I also used sed to get rid of the ./ in front of every filename. basename could do that too, but older systems didn't have basename, and you might not trust it to handle the funky characters right.
(These sed commands each required two cases: one for a pattern at the start of the string, and one for the pattern between 0x00 bytes. These were so ugly I split them out into separate functions.)
The read command doesn't have a -z or -0 option like some commands, but you can fake it with -d "" and blanking the IFS environment variable.
The additional -r option prevents a backslash-newline combo from being interpreted as a line continuation. (A file called backslash\\nnewline would otherwise be mangled to backslashnewline.) It might be worth seeing if other backslash-combos get interpreted as escape sequences.
remove_dot_and_dotdot_dirs()
{
sed \
-e 's/^[.]\{1,2\}\x00//' \
-e 's/\x00[.]\{1,2\}\x00/\x00/g'
}
remove_leading_dotslash()
{
sed \
-e 's/^[.]\///' \
-e 's/\x00[.]\//\x00/g'
}
IFS=""
find . -maxdepth 1 -print0 |
sort -z |
remove_dot_and_dotdot_dirs |
remove_leading_dotslash |
while read -r -d "" filename
do
echo "Doing something with file '${filename}'..."
done
It may not be the most favorable way, but I tried the following:
while read -r line; do echo "$line"; done <<< "$(ls -a | grep -v -w ".")"
Try the find command, something like:
find .
That will list all the files in all subdirectories, recursively.
To output only files, excluding the . and .. entries and the leading ./ prefix, try:
find . -type f -printf '%P\n'

Using variable with cp in bash

The file returned will have spaces in the file name, so I run the file name through sed to add quotes at the beginning and end. However, when I use $CF with cp it fails. If I manually echo $CF and use the resulting file name in place of $CF, it works just fine. What's the problem?
CF=`ls -tr /mypath/CHS1*.xlsx | tail -1 | sed -e 's/^/"/g' -e 's/$/"/g'`
cp $CF "/mydest/myfile.xlsx"
You don't need to add the quotes like that (in fact, it probably won't work). Instead, just use them in the cp line:
CF=$(ls -tr /mypath/CHS1*.xlsx | tail -1)
cp "$CF" "/mydest/myfile.xlsx"
I changed it from using backticks to the newer (and preferred) $() syntax.
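A quick illustration of why embedding quote characters in the variable fails: quotes stored in data are literal characters and are never re-parsed as shell syntax (hypothetical path):
CF='"/mypath/my file.xlsx"'
cp $CF /mydest/     # word-splits into '"/mypath/my' and 'file.xlsx"' - both wrong
cp "$CF" /mydest/   # one argument, but with literal double quotes in the name - also wrong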

Unix: How to delete files listed in a file

I have a long text file with a list of file masks I want to delete
Example:
/tmp/aaa.jpg
/var/www1/*
/var/www/qwerty.php
I need to delete them. I tried rm `cat 1.txt` and it says the argument list is too long.
Found this command, but when I check folders from the list, some of them still have files:
xargs rm <1.txt
A manual rm call removes files from such folders, so it is not a permissions issue.
This is not very efficient, but will work if you need glob patterns (as in /var/www/*)
for f in $(cat 1.txt) ; do
rm "$f"
done
If you don't have any patterns and are sure your paths in the file do not contain whitespaces or other weird things, you can use xargs like so:
xargs rm < 1.txt
Assuming that the list of files is in the file 1.txt, then do:
xargs rm -r <1.txt
The -r option causes recursion into any directories named in 1.txt.
If any files are read-only, use the -f option to force the deletion:
xargs rm -rf <1.txt
Be cautious with input to any tool that does programmatic deletions. Make certain that the files named in the input file are really to be deleted. Be especially careful about seemingly simple typos. For example, if you enter a space between a file and its suffix, it will appear to be two separate file names:
file .txt
is actually two separate files: file and .txt.
This may not seem so dangerous, but if the typo is something like this:
myoldfiles *
Then instead of deleting all files that begin with myoldfiles, you'll end up deleting myoldfiles and all non-dot-files and directories in the current directory. Probably not what you wanted.
Use this:
while IFS= read -r file ; do rm -- "$file" ; done < delete.list
If you need glob expansion you can omit quoting $file:
IFS=""
while read -r file ; do rm -- $file ; done < delete.list
But be warned that file names can contain "problematic" content, so I would not use the unquoted version. Imagine this pattern in the file:
*
*/*
*/*/*
This would delete quite a lot from the current directory! I would encourage you to prepare the delete list in a way that glob patterns aren't required anymore, and then use quoting like in my first example.
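A sketch of that safer setup: generate the list NUL-delimited so no glob patterns or newline tricks are possible, then consume it with a NUL-aware loop (path and pattern are made up for illustration):
# Produce a NUL-delimited list, then delete each entry literally.
find /path/to/clean -name '*.bak' -print0 > delete.list
while IFS= read -r -d '' file; do
  rm -- "$file"
done < delete.list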
You could use '\n' to define the newline character as the delimiter:
xargs -d '\n' rm < 1.txt
Be careful with -rf because it can delete what you don't want if 1.txt contains paths with spaces. That's why the newline delimiter is a bit safer.
BSD xargs has no -d option, but you can get the same effect with -0, which expects NUL-delimited input; convert the newlines first:
tr '\n' '\0' < 1.txt | xargs -0 rm
xargs -I{} sh -c 'rm "{}"' < 1.txt should do what you want. Be careful with this command as one incorrect entry in that file could cause a lot of trouble.
This answer was edited after @tdavies pointed out that the original did not do shell expansion.
You can use this one-liner:
cat 1.txt | xargs echo rm | sh
Which does shell expansion but executes rm the minimum number of times.
Just to provide an another way, you can also simply use the following command
$ cat to_remove
/tmp/file1
/tmp/file2
/tmp/file3
$ rm $( cat to_remove )
In this particular case, due to the dangers cited in other answers, I would
Edit in e.g. Vim and run :%s/\s/\\\0/g, escaping all whitespace characters with a backslash.
Then :%s/^/rm -rf /, prepending the command (see the example below). With -r you don't have to worry about whether directories are listed before the files they contain, and with -f it won't complain about missing files or duplicate entries.
Run all the commands: $ source 1.txt
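For instance, a hypothetical entry /tmp/old file.log would come out of the two substitutions above as:
rm -rf /tmp/old\ file.log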
cat 1.txt | xargs rm -f will remove the listed files only.
cat 1.txt | xargs rm -rf will also remove directories, recursively.
Here's another looping example. This one also contains an 'if-statement' as an example of checking to see if the entry is a 'file' (or a 'directory' for example):
for f in $(cat 1.txt); do if [ -f "$f" ]; then rm "$f"; fi; done
Here (in csh) you can use the set of folders from deletelist.txt while also excluding some patterns:
foreach f (`cat deletelist.txt`)
rm -rf `ls | egrep -v "needthisfile|*.cpp|*.h"`
end
This will allow file names to have spaces (reproducible example).
# Select files of interest, here, only text files for ex.
find -type f -exec file {} \; > findresult.txt
grep ": ASCII text$" findresult.txt > textfiles.txt
# leave only the path to the file removing suffix and prefix
sed -i -e 's/:.*$//' textfiles.txt
sed -i -e 's/\.\///' textfiles.txt
#write a script that deletes the files in textfiles.txt
IFS_backup=$IFS
IFS=$(echo -en "\n\b")
for f in $(cat textfiles.txt);
do
rm "$f";
done
IFS=$IFS_backup
# save script as "some.sh" and run: sh some.sh
In case somebody prefers sed and removing without wildcard expansion:
sed -e "s/^\(.*\)$/rm -f -- '\1'/" deletelist.txt | /bin/sh
Reminder: use absolute pathnames in the file or make sure you are in the right directory.
And for completeness the same with awk:
awk '{printf "rm -f -- '\''%s'\''\n",$1}' deletelist.txt | /bin/sh
Wildcard expansion will work if the single quotes are removed, but this is dangerous if a filename contains spaces; you would then need to add quotes around everything except the wildcards.
