LINUX: How can I recursively zip files in sub-folders?

I have the following directory structure:
/Data
  - file 1
  - file 2
  /Folder1
    - file 3
    - file 4
    /Folder2
      - file 5
      - file 6
      /Folder3
        - file 7
        - file 8
In Linux I want to zip the files (excluding folders) in every directory and create a 7z (or zip) archive in each folder, resulting in the following:
/Data
  Data.7z (note: this should contain only file 1 & 2, not any sub-directories)
  /Folder1
    Folder1.7z (this should contain only file 3 & 4, not any sub-directories)
    /Folder2
      Folder2.7z (this should contain only file 5 & 6, no Folder3)
      /Folder3
        Folder3.7z (should contain only file 7 & 8)
The following script works on the first directory but not on the sub-directories:
for i in */ ; do base=$(basename "$i") ; cd $base ; 7za a -t7z -r $base * ; cd .. ; done
How can I achieve this? Thank you.
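One way to approach this (a sketch, assuming bash and that 7za is on the PATH): walk every directory with find, and in each one archive only the regular files directly inside it. The -r flag in the original command is what drags sub-directories into each archive; dropping it and passing an explicit file list keeps each archive flat.

find /Data -type d -print0 | while IFS= read -r -d '' dir; do
    base=$(basename "$dir")
    (
        cd "$dir" || exit
        # collect only the regular files directly in this directory
        files=()
        for f in *; do
            [ -f "$f" ] && files+=("$f")
        done
        # no -r here, so sub-directories stay out of the archive
        [ "${#files[@]}" -gt 0 ] && 7za a -t7z "$base.7z" "${files[@]}"
    )
done

The null-delimited find/read pairing keeps the loop safe for directory names containing spaces.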

Related

How to loop through text files (each containing paths to other files, say a, b, c) and copy the files listed inside them

Please help me with a Linux command to loop through and copy the files a, b, c. Inside a, b, c there are further file paths listed (say file a lists 1, 2, 3; file b lists 4, 5, 6; file c lists 7, 8, 9). I need to copy all of the files, a, b, c, 1, 2, 3, 4, 5, 6, 7, 8, 9, into my local directory.
example: content of file a is shown:
/path/to/file/1
/path/to/file/2
/path/to/file/3
content of file b is shown:
/path/to/file/4
/path/to/file/5
/path/to/file/6
content of file c is shown:
/path/to/file/7 and so on.
I need to copy all the files whose paths are listed.
Using Tcl:
set destdir [pwd]
foreach filename {a b c} {
    # read the list of paths, dropping the trailing newline
    # (a plain [read $f] would leave an empty element after the split)
    set f [open $filename r]
    set files [split [read -nonewline $f] \n]
    close $f
    file copy -- {*}$files $destdir
}
Or bash:
for filename in a b c; do
    readarray -t files <"$filename"
    cp -- "${files[@]}" .
done
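Note that a trailing blank line in a, b, or c would leave an empty element in the array and make cp fail; if that can happen, readarray -t files < <(grep . "$filename") is one defensive variant.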

Trying to get file name and folder name with spaces

I'm trying to get the folder name and the file name (without the extension; I've already worked out how to strip the extension).
If I can get them into an array that would be great, but I'm still stuck on the space problem.
I want only the parent folder name and the file name, even when the folder name contains a space.
For example:
bin/data/ex/1.json
bin/data/2.json
bin/data fix/3.json (there is a space between data and fix, 'data fix')
bin/4.json
Expected result:
ex_1
data_2
data fix_3 or datafix_3 (with space or no space is fine)
bin_4
My actual data:
./123, 1/2.json
./ex/1.json
Wanted result:
123, 1_2 or 123,1_2
ex_1
Here is my code:
for i in $(find . -iname "*.json")
# find *.json files in the current folder and its sub-folders
do
    echo "i(.json) is $i"
    # test print of the .json file
    dr="$(basename "$(dirname "$i")")"
    echo "dr is $dr"
    # test print of $dr
    name=$(basename "$i" .json)
    # test print of the file name
    echo "name is $name"
    echo "foldername_filename is $dr _$name"
    echo "foldername_filename2 is $(basename "$(dirname "$i")")_$(basename "$i" .json)"
done
but I got
i(.json) is ./123,
dr is .
name is 123,
foldername_filename is . _123,
foldername_filename2 is ._123,
i(.json) is 1/2.json
dr is 1
name is 2
foldername_filename is 1 _2
foldername_filename2 is 1_2
i(.json) is ./ex/1.json
dr is ex
name is 1
foldername_filename is ex _1
foldername_filename2 is ex_1
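The underlying problem is the for i in $(find ...) loop: the shell word-splits on whitespace, so ./123, 1/2.json arrives in two pieces. A whitespace-safe sketch of the same logic, using find -print0 and a null-delimited read (no sudo needed):

find . -iname "*.json" -print0 | while IFS= read -r -d '' i; do
    dr=$(basename "$(dirname "$i")")   # parent folder name, spaces intact
    name=$(basename "$i" .json)        # file name without the extension
    echo "${dr}_${name}"
done

With the example data this prints 123, 1_2 and ex_1.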

How to run a bash script for different directories as input automatically

I have a bash script which takes 5 inputs:
Input1 = file1
Input2 = file2
Input3 = directory1
Input4 = func
Input5 = 50
Inputs 4 and 5 are always the same; they never change.
file1 and file2 are located inside directory1, and directory1 is located inside a code directory:
/code/directory1/file1
/code/directory1/file2
and there are many directories with the same structure, directory1 through directory70, inside the code folder:
/code/directory1/*
/code/directory2/*
/code/directory3/*
...
/code/directory70/*
In order to run the bash script, I have to run the command from the terminal 70 times. :<
Is there a way to run it against all these folders automatically?
UPDATE: the directories (1-70) each have a different name, e.g. bug1, test, 4-A, and so on. Even the files are different, e.g. bug1.c and hash.c:
/code/bug1/bug1.c
/code/bug1/hash.c
Try this:
for dir in /code/*/; do
    # grab the files inside this directory, in glob order
    files=( "$dir"* )
    <ShellScript>.sh "${files[0]}" "${files[1]}" "${dir%/}" func 50
done
Globbing with /code/*/ avoids parsing ls output, which breaks on names containing spaces, and makes the eval unnecessary.

How to rename all files in a folder and create a renaming map

Note: I have access to both Linux and Windows platform so answers for any of these platforms are fine.
I have a folder which contains less than 10K .png files. I would like to:
1. rename all files as follows:
<some_filename>.png to 0001.png
<some_other_name>.png to 0002.png
<another_name>.png to 0003.png
and so on...
2. keep this name mapping in a file (see 1 for mapping)
In Windows: this sorts the list alphabetically and renames the files with numbers padded to 4 digits.
It writes the bat file that does the renaming, so you can examine it before running it, and it doubles as a map of the filenames.
Filenames containing ! characters will probably be an issue.
@echo off
setlocal enabledelayedexpansion
set c=0
for %%a in (*.png) do (
    set /a c=c+1
    set num=0000!c!
    set num=!num:~-4!
    >>renfile.bat.txt echo ren "%%a" "!num!%%~xa"
)
To rename all .png files in the current directory and to save the renaming map to renaming-map.txt file:
$ perl -E'while (<*.png>) { $new = sprintf q(%04d.png), ++$i; say qq($_ $new);
rename($_, $new) }' > renaming-map.txt
For example, given the following directory content:
$ ls
a.png b.png c.png d.png e.png f.png g.png h.png i.png j.png
It produces:
$ perl -E'while (<*.png>) { $new = sprintf q(%04d.png), ++$i; say qq($_ $new);
rename($_, $new) }'
a.png 0001.png
b.png 0002.png
c.png 0003.png
d.png 0004.png
e.png 0005.png
f.png 0006.png
g.png 0007.png
h.png 0008.png
i.png 0009.png
j.png 0010.png
Result:
$ ls
0001.png 0003.png 0005.png 0007.png 0009.png
0002.png 0004.png 0006.png 0008.png 0010.png
It should work on both Windows and Linux if perl is available; on Windows, replace the single quotes in perl -E'...' with double quotes, i.e. perl -E "...".
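For the Linux side without perl, a plain-bash sketch of the same idea (assumptions: globs sort alphabetically in your locale, there are fewer than 10000 files, and no 0001.png-style names exist yet):

c=0
for f in *.png; do
    printf -v num '%04d' "$((++c))"     # zero-pad the counter to 4 digits
    echo "$f $num.png" >> renaming-map.txt
    mv -- "$f" "$num.png"
done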

Bash script - iterate through folders and move into folders of 1000

I have 1.2 million files split out into folders, like so:
Everything
    Folder 1
        File 1
        File 2
        File 3
        File 4
        File 5 etc.
    Folder 2
        File 1
        File 2
        File 3
        File 4
        File 5 etc.
If I cd into Folder 1 I can run the following script to organize the files there into folders called 1, 2, 3, etc. of 1000 files each:
dir="${1-.}"
x="${2-1000}"
let n=0
let sub=0
while IFS= read -r file ; do
    # start a new numbered sub-folder every $x files
    if [ $(bc <<< "$n % $x") -eq 0 ] ; then
        let sub+=1
        mkdir -p "$sub"
        n=0
    fi
    mv "$file" "$sub"
    let n+=1
done < <(find "$dir" -maxdepth 1 -type f)
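With the defaults shown, running the script with no arguments is the same as passing . and 1000: it splits the regular files of the current directory into numbered sub-folders of 1000 files each.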
However I really would like to run it once on the Everything folder at the top level. From there it would consider the child folders, and do the by-1000 sorting so I could move everything out of Folder 1, Folder 2, etc. and into folders of 1000 items each called 1, 2, 3, etc.
Any ideas?
Edit: here's how I would like the files to end up (as per the comments):
Everything
    Folder1
        file1 (these filenames can be anything; they shouldn't be renamed)
        (every file in between, so file2 through file999)
        file1000
    Folder2
        file1001
        (every file in between, so file1002 through file1999)
        file2000
Every single possible file that is in the original folder structure is grouped into folders of 1000 items under the top level.
Let's assume your script is called organize.sh, and the Everything folder contains only directories. Try the following:
cd Everything
for d in *; do
    pushd "$d"
    bash ~/temp/organize.sh
    popd
done
Update
To answer Tom's question in the comment: you only need one copy of organize.sh. If you put it in ~/temp, you can invoke it as shown above.
Pseudo-algorithm:
1) Run ls for all your directories and store the output in a file.
2) cd into each directory listed in the file.
3) Sort all your files.
4) cd .. back out.
5) Repeat steps 2-4 in a for loop.
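If you would rather skip the per-folder scripting and do everything in one pass from the top level, a sketch (assuming bash, and that filenames are unique across the source folders and contain no newlines):

n=0; sub=0
find Everything -mindepth 2 -type f -print0 | while IFS= read -r -d '' f; do
    # open a new numbered folder every 1000 files
    if (( n % 1000 == 0 )); then
        sub=$((sub + 1))
        mkdir -p "Everything/$sub"
    fi
    mv -- "$f" "Everything/$sub/"   # beware: duplicate names across folders would collide
    n=$((n + 1))
done

The emptied source folders can then be removed with rmdir, which only deletes directories that are empty.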
