Why does touch $(ls).bla create only one file - linux

Well, in one directory I have 24 files, but when I run the above command it creates only one file. Yet if I type echo $(ls), it lists all the files.

I guess you mixed up the filename expansion. Try this:
echo $(ls *.bla)
touch $(ls *.bla)

Here is an example, run in a folder containing a single file foo, that illustrates what happens with touch "$(ls).bla":
$ ls
foo
$ touch "$(ls).bla"
$ ls
foo foo.bla
$ touch "$(ls).bla"
$ ls
foo 'foo'$'\n''foo.bla.bla' foo.bla
As you can see, the first time touch is called it creates a single file named foo.bla. Because the command substitution $(ls) is quoted, touch receives its entire output, newlines included, as a single command-line argument rather than as multiple ones. That is why the second run of touch "$(ls).bla" above yields a single file with a strange name. Without the quotes, the output is instead split on whitespace and .bla attaches only to the last word, so touch merely updates the timestamps of the existing files and creates just one new file; that is why your original command produced only one file.
If what you want is to create empty files named after the existing files in a folder, pipe the output of ls to xargs like this:
$ ls | xargs -i touch "{}.bla"
The -i flag to xargs causes it to replace {} with each file name it reads from ls (on GNU xargs, -i is deprecated in favour of the equivalent -I{}).
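If the filenames might contain spaces or newlines, parsing the output of ls is fragile; a plain glob loop avoids that entirely (a minimal sketch):
for f in *; do
    touch "$f.bla"    # quoting keeps names with spaces or newlines intact
done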

Related

Remove part of filename with common delimiter

I have a number of files with the following naming:
name1.name2.s01.ep01.RANDOMWORD.mp4
name1.name2.s01.ep02.RANDOMWORD.mp4
name1.name2.s01.ep03.RANDOMWORD.mp4
I need to remove everything between ep# and the last . from the file names, leaving only the following (sometimes the extension can be different):
name1.name2.s01.ep01.mp4
name1.name2.s01.ep02.mp4
name1.name2.s01.ep03.mp4
This is a simpler version of Jesse's [answer]:
for file in /path/to/base_folder/*   # globbing to get the files
do
    epno=${file#*.ep}
    # rebuild the name without the RANDOMWORD part; see the note below
    mv "$file" "${file%.ep*}.ep${epno%%.*}.${file##*.}"
done
Note: not yet comfortable with shell parameter expansion? Check [this].
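For example, with file=/path/to/base_folder/name1.name2.s01.ep01.RANDOMWORD.mp4, the expansions used above evaluate like this:
file=/path/to/base_folder/name1.name2.s01.ep01.RANDOMWORD.mp4
epno=${file#*.ep}      # 01.RANDOMWORD.mp4 (shortest prefix matching *.ep removed)
echo "${epno%%.*}"     # 01 (longest suffix matching .* removed)
echo "${file%.ep*}"    # /path/to/base_folder/name1.name2.s01 (suffix matching .ep* removed)
echo "${file##*.}"     # mp4 (longest prefix matching *. removed)
so the mv target becomes /path/to/base_folder/name1.name2.s01.ep01.mp4.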
Using shell string manipulation (refer: http://www.tldp.org/LDP/abs/html/string-manipulation.html) you could achieve it like so. You need to run this once per file-extension type.
for file in <directory>/*
do
    name=${file}
    firstchar="${name:0:1}"
    extension=${name##${firstchar}*.}      # everything after the last dot
    lastchar=$(echo ${name} | tail -c 2)
    strip1=${name%.*$lastchar}             # drop the extension
    lastchar=$(echo ${strip1} | tail -c 2)
    strip2=${strip1%.*$lastchar}           # drop the RANDOMWORD part
    mv "$name" "${strip2}.${extension}"
done
You can use rename (you may need to install it; this is the Perl rename, not the util-linux one). It works like sed on filenames.
As an example
$ for i in `seq 3`; do touch "name1.name2.s01.ep0$i.RANDOMWORD.txt"; done
$ ls -l
name1.name2.s01.ep01.RANDOMWORD.txt
name1.name2.s01.ep02.RANDOMWORD.txt
name1.name2.s01.ep03.RANDOMWORD.txt
$ rename 's/(name1.name2.s01.ep\d{2})\..*(.txt)$/$1$2/' name1.name2.s01.ep0*
$ ls -l
name1.name2.s01.ep01.txt
name1.name2.s01.ep02.txt
name1.name2.s01.ep03.txt
This expression matches your filenames and uses two capture groups, so that $1$2 in the replacement keeps the parts outside the RANDOMWORD:
(name1.name2.s01.ep\d{2})\..*(.txt)$

perl -p -i -e inside a shell script

#!/usr/bin/env bash
DOCUMENT_ROOT=/var/www/html/
rm index.php
cd modules/mymodules/
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
shows a warning:
-i used with no filenames on the command line, reading from STDIN.
It prevents the rest of the script from running.
Any solutions?
Actually I need to run
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
inside a shell script.
I am using Ubuntu with Docker.
Let's look at this a step at a time. First, you're running this grep command:
grep -ril y.yy.yy.y *
This recursively searches all files and directories in your current directory. It looks for files containing the string "y.yy.yy.y" in any case and returns a list of the files which contain this text.
This command will return either a list of filenames or nothing.
Whatever is returned from that grep command is then passed as arguments to your Perl command:
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' [list of filenames]
If grep returns a list of files, then the -p option here means that each line in every file in the list is (in turn) run through that substitution and then printed to a new file. The -i means there's one new file for each old file and the new files are given the same names as the old files (the old files are deleted once the command has run).
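A quick demonstration of that part on a throwaway file (sample.txt is just an illustrative name):
$ echo 'x.xx.xx.y here' > sample.txt
$ perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' sample.txt
$ cat sample.txt
dd.dd.ddd.dd here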
So far so good. But what happens if the grep doesn't return any filenames? In that case, your Perl command doesn't get any filenames and that would trigger the error that you are seeing.
So my guess is that your grep command isn't doing what you want it to and is returning an empty list of filenames. Try running the grep command on its own and see what you get.
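One way to guard against the empty case (assuming GNU xargs, whose -r/--no-run-if-empty flag skips the command when it receives no input):
grep -ril y.yy.yy.y . | xargs -r perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g'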

How to copy a file to multiple subdirectories - linux

I have a file that needs to be copied to every unique directory called test.
The directory structure is as below:
/contentroot/path/a/x/test
/contentroot/path/a/y/test
/contentroot/path/a/z/test
As above, I have more than 250 such test directories.
I have tried the command below (using an asterisk), but it only copies to one test directory and complains about the others (cp: omitting directory):
cp myfile.txt /contentroot/path/a/*/test
Any help?
Perhaps a for loop?
for FOLDER in /contentroot/path/a/*/test; do
    cp myfile.txt "$FOLDER"
done
You can feed the output of an echo to xargs as input. xargs will then run the cp command three times, each time appending the next directory path piped to it from the echo.
The -n 1 option on the xargs command is so it only appends one of those arguments at a time to the cp each time it runs.
echo /contentroot/path/a/x/test /contentroot/path/a/y/test /contentroot/path/a/z/test | xargs -n 1 cp myfile.txt
Warnings! Firstly, this will overwrite files (if they exist), and secondly, any bash command should be tested and used at the runner's risk! ;)
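With 250+ combinations, find can locate the test directories for you instead of listing them by hand (a sketch, assuming they all live under /contentroot/path/a as shown):
find /contentroot/path/a -type d -name test -exec cp myfile.txt {} \;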

Launching program several times

I am using macOS. This is the command-line code to launch my program (two parts):
nucmer --mum file1.txt file2.txt
show-snps -Clr -x 2 out.delta > out_file1.snps
The first part of the program creates the file out.delta. My file2.txt is always the same, but I want to run both parts 35000 times, each time with a different file1.txt. All the file1 inputs are located in the same directory.
Is it possible to do this using Bash?
Keep all the input files in one directory. Create a wrapper script that accepts the path to that directory as input, iterates over all the files in it, and calls your two commands for each one.
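A minimal sketch of such a wrapper (run_all.sh is an illustrative name; it assumes file2.txt sits in the current directory and the input directory holds only the file1 variants):
#!/usr/bin/env bash
# Usage: ./run_all.sh /path/to/inputs
indir=$1
for f in "$indir"/*; do
    nucmer --mum "$f" file2.txt
    show-snps -Clr -x 2 out.delta > "out_$(basename "$f").snps"
done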
You could do something along these lines:
find . -maxdepth 1 -type f -print | grep -v './out_' | while read f
do
    b=$(basename "${f}")
    nucmer --mum "${f}" file2.txt
    show-snps -Clr -x 2 out.delta > "out_${b}.snps"
done
The find bit finds all files in the current directory. The grep filters out any previous output files, in case you've run this before. The basename line strips off the leading ./, and then your two programs get run with the input file name and an output filename built from basename's output.
If you don't get an argument list too long error, you could just use for:
for f in file*.txt; do nucmer --mum "$f" second.txt; show-snps -Clr -x 2 out.delta > "out_${f%.txt}.snps"; done

Remove all files of a certain type except for one type in linux terminal

On my computer running Ubuntu, I have a folder full of hundreds of files, all named "index.html.n" where n starts at one and counts upwards. Some of those files are actual HTML files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If everything fits into your argument list and no filenames contain a colon, a simple pipeline with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find to find the matching files, then file to get their types. sed drops the zip archives from the list and also removes everything but the filenames from the output of file; lastly, rm deletes what remains:
find -name 'index.html.[0-9]*' | \
    xargs file | \
    sed -n '/: Zip archive/!s/\([^:]*\):.*/\1/p' | \
    xargs rm
I would run:
for f in index.html.*
do
    file "$f" | grep -qi zip
    [ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option once you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space (in vim, :%s/^/rm /)
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
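After those edits, cleanup.sh might look something like this (the filenames are illustrative):
#!/bin/sh
rm index.html.1
rm index.html.2
rm index.html.5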
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
for i in index.html.*
do
    type=$(file "$i")
    if [[ ! $type =~ "Zip" ]]
    then
        rm "$i"
    fi
done
Change the rm to an ls for testing purposes.
