Create multiple files in multiple directories [closed] - linux

I've got a tree of folders like:
00 -- 0
   -- 1
   ...
   -- 9
...
99 -- 0
   -- 1
   ...
   -- 9
What is the simplest way to create in every subfolder a file like:
/00/0/00_0.txt
and save some kind of data to every file?
I tried with touch and with a loop, but without success.
Any ideas how to make it very simple?

List all directories using globs. Modify the listed paths with sed so that 37/4 becomes 37/4/37_4.txt. Use touch to create empty files for all modified paths.
touch $(printf %s\\n */*/ | sed -E 's|(.*)/(.*)/|&\1_\2.txt|')
This works even if 12/3 was just a placeholder and your actual paths are something like abcdef/123. However, it will fail when your paths contain special symbols like whitespace, *, or ?.
To handle arbitrary path names, use the following commands. They even support linebreaks in path names.
mapfile -td '' a < <(printf %s\\0 */*/ | sed -Ez 's|(.*)/(.*)/|&\1_\2.txt|')
touch "${a[@]}"
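If GNU sed is not available, a plain loop over the same glob works too. A minimal sketch, assuming the two-level layout from the question; the sample data written into each file is a placeholder:
for d in */*/; do
    d=${d%/}                          # strip the trailing slash: 00/0
    f="$d/${d%/*}_${d##*/}.txt"       # build 00/0/00_0.txt
    printf 'some data\n' > "$f"       # create the file and write data into it
done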

You may use find and then run commands using -exec
find . -mindepth 2 -maxdepth 2 -type d -exec bash -c 'f="$1"; touch "${f}/${f%/*}_${f##*/}.txt"' _ {} \;
The bash substitution ${f%/*}_${f##*/} replaces the last / with _.
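To see what those expansions do, here is a quick illustration with a hypothetical path 00/0:
f=00/0
echo "${f%/*}"     # 00 -- everything before the last /
echo "${f##*/}"    # 0  -- everything after the last /
echo "${f}/${f%/*}_${f##*/}.txt"    # 00/0/00_0.txt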

Related

How to rename folders with name DD.MM.YYYY-X to YYMMDDIX on bash? [closed]

I have folders inside a directory with names having the pattern DD.MM.YYYY-X, where X is an index from 0-9 used to distinguish folders with the same date.
How could I use regex in bash to replace this pattern with YYMMDDIX, where
I is a literal I signalling that what follows is the index of the folder,
YY is the last two digits of YYYY,
DD and MM are the same as in the original name.
Running this script in the directory containing folders named with the pattern DD.MM.YYYY-X will copy those folders into the same directory under the naming syntax you requested.
script2.sh contains the following:
#!/bin/bash
for dir in $(find . -maxdepth 1 -name "*-*" -type d -exec basename {} \;); do
    dd=$(cut -d'.' -f1 <<< "${dir}")
    mm=$(cut -d'.' -f2 <<< "${dir}")
    yyyy=$(cut -d'.' -f3 <<< "${dir}" | cut -d'-' -f1)
    yy="${yyyy: -2}"
    x="${dir: -1}"
    cp -rvi "${dir}" "${yy}${mm}${dd}I${x}"
done
exit 0
Script Output
'22.12.1983-1' -> '831222I1'
'22.12.1982-1' -> '821222I1'
'22.12.1983-0' -> '831222I0'
'22.12.1982-2' -> '821222I2'
ls output after running script
22.12.1982-1 22.12.1982-2 22.12.1983-0 22.12.1983-1 821222I1 821222I2 831222I0 831222I1 script2.sh
Recommendation (Update #1)
It is recommended to use unique variable names, for example with a _ prefix.
Here the dd variable can be renamed _dd to avoid confusing it with the dd command.
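Since the question explicitly asks for regex, here is a minimal sketch using bash's own [[ =~ ]] matching instead of cut; BASH_REMATCH holds the captured groups (this assumes the same DD.MM.YYYY-X layout, and uses day/mon instead of dd to sidestep the naming issue above):
#!/bin/bash
for dir in *-*/; do
    dir=${dir%/}
    if [[ $dir =~ ^([0-9]{2})\.([0-9]{2})\.[0-9]{2}([0-9]{2})-([0-9])$ ]]; then
        day=${BASH_REMATCH[1]}    # DD
        mon=${BASH_REMATCH[2]}    # MM
        yy=${BASH_REMATCH[3]}     # last two digits of YYYY
        x=${BASH_REMATCH[4]}      # index X
        cp -rvi "$dir" "${yy}${mon}${day}I${x}"
    fi
done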

How to print output twice in Linux? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Want to improve this question? Add details and clarify the problem by editing this post.
Closed 2 years ago.
Improve this question
Which command is used to print the file name twice in the output?
I want to write a pipe that lists all the files beginning with the character 'P' on the screen twice in succession.
Something like:
ls -1 P* | while read -r i ; do echo "$i" "$i" ; done
ā€¦ should do the trick.
ls | sed -nE 's/^(P.*)/\1 \1/p'
ls, when used in a pipe, prints one file per line.
We use sed with extended RE support (-E); -n suppresses the default output and the p flag prints only the matching lines.
We capture the name of any file beginning with P: ^(P.*)
and replace it with itself, a space, followed by itself; \1 is a back-reference to what is captured in the parentheses ( ... ).
I suggest using the find utility:
find . -maxdepth 1 -type f -name 'P*' -print -print
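A shell-only variant that avoids parsing ls output entirely, sketched with a glob (prints each matching name twice on separate lines):
for f in P*; do
    printf '%s\n%s\n' "$f" "$f"
done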

Get a subset of several alphabetically ordered files in Bash [closed]

Let's assume I have a group of files "aab.md", "aac.md", "aad.md" ... "csdw.md". Content/filenames are actually in (non-latin) utf-8. They can be sorted alphabetically.
How can I get, in Bash, a subset of those files starting from e.g. "aad.md" upwards?
declare -a files
while IFS= read -r -d $'\0' file; do
    filename=${file##*/}
    if [[ ! "$filename" < "aad.md" ]]; then
        files=("${files[@]}" "$file")
    fi
done < <(find . -name "*.md" -print0)
The array "${files[@]}" should now contain paths to files whose basename is greater than or equal to aad.md.
This uses a number of less well-known techniques in bash: arrays, prefix substitution, zero-terminated records (and their reading), and process substitution; so don't hesitate to ask if something is unclear.
Note that bash's [[...]] construct doesn't know about a >= operator, so we need to improvise with ! ... < ....
This is almost pure bash, with no external commands except find. If you accept external commands, $(basename "$file") is more obvious than ${file##*/}, but at that point you might as well use awk... and if you can use awk, why not Ruby?
ruby -e "puts Dir['**/*.md'].select{|x| File.basename(x) >= 'aad.md'}"
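For completeness, a pure-glob sketch that skips find as well, assuming the files sit in the current directory:
for f in *.md; do
    [[ $f < "aad.md" ]] || printf '%s\n' "$f"    # keep names not less than aad.md
done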

How do I create a bash file that creates a symbolic link (linux) for each file moved from Folder A to Folder B [closed]

How do I create a bash script that creates a symbolic link (Linux) for each file moved from Folder A to Folder B?
However, this should be done for the 150 biggest files currently in Folder A.
You can probably write it as a one-liner, but it's easier as a simple bash script like:
#!/bin/bash
FolderA="/some/folder/to/copy/from/"
FolderB="/some/folder/to/copy/to/"
while read -r size file; do
    mv -iv "$file" "$FolderB"
    ln -s "${FolderB}${file##*/}" "$file"
done < <(find "$FolderA" -maxdepth 1 -type f -printf '%s %p\n' | sort -rn | head -n150)
Note ${file##*/} removes everything before the last /, per
${parameter##word}
Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.
Also, it may seem like a good idea to just do for file in $(command), but process substitution with while/read works better in general, avoiding word-splitting issues like splitting up file names with spaces, etc...
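A quick demonstration of that word-splitting problem, using a hypothetical file name with a space in it:
touch 'two words.txt'
for f in $(find . -name '*.txt'); do echo "$f"; done              # prints ./two and words.txt separately
while read -r f; do echo "$f"; done < <(find . -name '*.txt')     # prints ./two words.txt intact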
As with any task, break it up into smaller pieces, and things will fall into place.
Select the biggest 150 files from FolderA
This can be done with du, sort, and awk, the output of which you stuff into an array:
mapfile -t myarray < <(du -h /path/to/FolderA/* | sort -rh | head -n150 | awk '{print $2}')
Move files from FolderA into FolderB
Take the list from the last command, and iterate through it:
for file in "${myarray[@]}"; do mv "$file" /path/to/FolderB/; done
Make a symlink to the new location
Again, just iterate through the list:
for file in "${myarray[@]}"; do ln -s "/path/to/FolderB/${file##*/}" "$file"; done
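Putting the three pieces together, a runnable sketch; it assumes file names without whitespace, since du's output is split on it, and the hypothetical paths /path/to/FolderA and /path/to/FolderB:
#!/bin/bash
src="/path/to/FolderA"
dst="/path/to/FolderB"
mapfile -t myarray < <(du -h "$src"/* | sort -rh | head -n150 | awk '{print $2}')
for file in "${myarray[@]}"; do
    mv "$file" "$dst"/                  # move the file to FolderB
    ln -s "$dst/${file##*/}" "$file"    # leave a symlink at the old location
done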

how to delete a line that contains a word in all text files of a folder? [closed]

So, in linux, I have a folder with lots of big text files.
I want to delete all the lines of these files that contain a specific keyword.
Is there any easy way to do that across all files?
There are already many similar answers. I'd like to add that if you want to match "this is a line containing a keyword" but not "this is a line containing someoneelseskeyword", then you had better add word-boundary brackets around the word:
sed -i '/\<keyword\>/d' *.txt
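For example, with a file containing both variants, only the exact word is matched (demo.txt is a hypothetical file name):
printf '%s\n' 'a keyword here' 'a someoneelseskeyword here' > demo.txt
sed '/\<keyword\>/d' demo.txt    # prints only: a someoneelseskeyword here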
I cannot test this right now, but it should get you started:
find /path/to/folder -type f -exec sed -i '/foo/d' {} ';'
find files in the directory /path/to/folder
find lines in these files containing foo
delete those lines from those files
Sure:
for x in files*
do
    grep -v your_pattern "$x" > "$x.tmp" && mv "$x.tmp" "$x"
done
try this:
find your_path_filename | xargs sed -i '/key_word/d'
sed -i '/keyword/d' *.txt -- run this in your directory.
sed - the stream editor, used here to delete lines in individual files
-i option: makes the changes permanent in the input files
'/keyword/': specifies the pattern or key to be searched for in the files
option d: tells sed that matching lines need to be deleted.
*.txt: tells sed to use all the text files in the directory as input for processing; you can specify an individual file or an extension glob like *.txt the way I did.
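If you want a safety net before bulk-editing, GNU sed's -i flag accepts a backup suffix:
sed -i.bak '/keyword/d' *.txt    # edits in place, keeping each original as file.txt.bak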
