I have this structure:
release/folder1/file1
release/folder2/file2
...
release/folderN/fileN
I want to include all those folders (folder1, folder2 ... folderN) in a tar file.
The key is that I want these folders to end up in the final tar inside another directory named MYAPP, so when you open the tar you see this:
MYAPP/folder1/file1
MYAPP/folder2/file2
...
MYAPP/folderN/fileN
How can I achieve this without renaming the original "release" directory and/or creating new directories?
Is this possible to achieve just in the tar process?
Thanks
Add
--transform=s#^release/#MYAPP/#
to your tar command line.
The argument of --transform is a sed-style replacement expression that is applied to each file path before it is stored in the archive (use tar -tf to list the names of the files stored in the archive).
The expression s#^release/#MYAPP/# says: substitute (s) release/ at the beginning of the path (^) with MYAPP/.
The / at the end of the search and replacement strings ensures that the whole component name is release (so that e.g. release.txt is not replaced). The # character is just the expression delimiter. Usually / is used as the delimiter, but a different one is chosen here to avoid having to escape the / that appears in the search and replacement strings.
Read more in the documentation of tar and sed.
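A minimal sketch of the complete command line, assuming GNU tar, that you run it from the directory containing release/, and that the archive should be called MYAPP.tar.gz (the name is just an example):
# create the archive, rewriting the leading release/ to MYAPP/ in every stored path
tar -czf MYAPP.tar.gz --transform='s#^release/#MYAPP/#' release/
# verify: every entry listed should now start with MYAPP/
tar -tzf MYAPP.tar.gz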
I am trying to rename all files located in a directory (recursively) with a specific metadata field appended to the end of the png file name.
The metadata field name is "aesthetic_score", with a value range from 1.0-9.0.
when I type:
exiftool -Aesthetic_score -G1 -s testn.png
the result is:
[PNG] Aesthetic_score : 7.0
This is the value I would like to append to the png file names, recursively within a directory.
Note: I would like to swap out the word aesthetic for the word chad in the appended text, and not all files will have this data field:
input file:
filename001.png (metadata aesthetic_score:7.0)
output:
filename001-chad-score-70.png
I tried to use Digikam and JExifToolGui-2.01, without success.
I am trying to perform this task in the cmd line, although other solutions are welcome. Thank you for your help.
So, this might work for you; I can't really test it. Note that you would need to get rid of the echo before the mv for it to actually do something (rename rather than just show what it would do).
while read -r name
do
    # Parse the exiftool output with awk (gawk is needed for gensub): remember the FileName,
    # strip the dot from the score, and build the new basename from it
    newname=$(exiftool -G1 -s "$name" | awk '$2~/FileName/{name=$4}; $2~/Aesthetic_score/{basename=gensub(/^(.+)\....$/,"\\1","1",name); ext=gensub(/^.*\.(...)$/,"\\1","1",name); gsub(/\./,"",$4); print basename"-chad-score-"$4"."ext}')
    # Skip files without the score; drop the echo to actually rename, keeping each file in its own directory
    [ -n "$newname" ] && echo mv "$name" "$(dirname "$name")/$newname"
done <<< "$(find . -iname '*.png')"
Basically the find at the very end finds all the pngs.
The while loop takes every name find throws at it, passes each file through exiftool (using your options), and parses the output with awk, which prints the new name; that name is captured in the shell variable newname.
And finally the mv (without the echo) renames the files.
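A dry run of the loop above just prints the mv commands, so you can inspect them before removing the echo; with the example file from the question the output would look roughly like this (the path and score are illustrative):
mv ./filename001.png ./filename001-chad-score-70.png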
I have a folder /home/user/Document/filepath where I have three files, namely file1-1.1.0.txt, file2-1.1.1.txt and file3-1.1.2.txt,
and another folder named /home/user/Document/backuppath into which I have to move files from /home/user/Document/folderpath, which has file1-1.0.0.txt, file2-1.0.1.txt and file3-1.0.2.txt.
The task is to copy the specific files from the folder path to the backup path.
To summarize:
Below is the files.txt where I listed the files that have to be copied:
file1-*.txt
file2-*.txt
Below is the move.sh script that executes the copy:
for file in `cat files.txt`; do cp "/home/user/Document/folderpath/$file" "/home/user/Documents/backuppath/" ; done
For the above script I am getting errors like:
cp: cannot stat '/home/user/Document/folderpath/file1-*.txt': No such file or directory found
cp: cannot stat '/home/user/Document/folderpath/file2-*.txt': No such file or directory found
What I would like to accomplish is to use the script to copy specific files with * in place of the version numbers, since the version numbers may vary in the future.
You have wildcard characters in your files.txt. In your cp command, you are using quotes. These quotes prevent the wildcards from being expanded, as you can see from the error messages.
One obvious possibility is to not use quotes:
cp /home/user/Document/folderpath/$file /home/user/Documents/backuppath/
Or not use a loop at all:
cp $(<files.txt) /home/user/Documents/backuppath/
However, this would of course break if one line in your files.txt is a filename pattern which contains white spaces. Therefore, I would recommend a second loop over the expanded pattern:
while read file                 # Puts the next line into 'file'
do
    for f in $file              # This expands the pattern in 'file'
    do
        cp "/home/user/Document/folderpath/$f" /home/user/Documents/backuppath
    done
done < files.txt
I have the following folder,
(Project) [Usr#hpc FOB]$ ls
exec_train.sh FOB_RE2250_BS4ES025.py network_checkpoint_FOB_RE2250_BS2ES05
FOB_RE1150.py FOB_RE2250_BS4ES05.py network_checkpoint_FOB_RE2250_BS2ES1
FOB_RE1200.py FOB_RE2250_BS4ES1.py network_checkpoint_FOB_RE2250_BS4ES025
FOB_RE2250_BS05ES1.py FOB_RE2250.py network_checkpoint_FOB_RE2250_BS4ES05
FOB_RE2250_BS05ES2.py FOB_RE50.py network_checkpoint_FOB_RE2250_BS4ES1
FOB_RE2250_BS1ES1.py network_checkpoint_FOB_RE2250_BS05ES1
FOB_RE2250_BS2ES05.py network_checkpoint_FOB_RE2250_BS05ES2
FOB_RE2250_BS2ES1.py network_checkpoint_FOB_RE2250_BS1ES1
How do I compress all the network_checkpoint_FOB... directories into one .tar.gz archive?
I know I could manually use $ tar -czf FOB.tar.gz network_checkpoint_FOB_RE2250_BS1ES1 network_checkpoint_FOB_RE2250_BS05ES1 ... but this seems cumbersome. I think there should be a way to use string matching, but I haven't been able to find a clear, concise solution.
You can use the wildcard character * in Bash:
$ tar -czf FOB.tar.gz network_checkpoint_FOB*
Bash automatically expands the network_checkpoint_FOB* pattern into the space-separated names of all matching files/folders.
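If you want to see the expansion before creating the archive, and check the result afterwards, a small sketch reusing the FOB.tar.gz name from above:
echo network_checkpoint_FOB*    # preview the names Bash will pass to tar
tar -czf FOB.tar.gz network_checkpoint_FOB*
tar -tzf FOB.tar.gz             # list the archive contents to verify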
In Colab, I have unzipped a file, but now there are too many files in the directory according to Colab's setup. Is there a command line to remove the last x files of a directory?
I know I can remove all the files from this directory with rm -rf *, but I just want to remove, for instance, the last 100 files.
Try globbing or, better, a regex.
The easiest way is with globbing: you use the star * and some differentiating text, for example rm *.txt (deletes all files that end with .txt) or rm document*.local (deletes all files which start with document and end with .local).
The better way is to search for files by attribute and execute a command on the results, but that is a bit more complex to explain, so check this out:
https://www.cyberciti.biz/faq/linux-unix-how-to-find-and-remove-files/
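As a minimal sketch of that find approach, assuming (purely as an example) that the files to delete are .png files older than 7 days in the current directory; adjust the tests to whatever attribute actually identifies your files:
find . -maxdepth 1 -name '*.png' -mtime +7 -print      # preview the matches first
# find . -maxdepth 1 -name '*.png' -mtime +7 -delete   # uncomment to delete them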
Using a shell array and parameter expansion:
all_files=(*)
printf '%s\n' "${all_files[@]: -100}" | nl
#rm "${all_files[@]: -100}"
Uncomment the last line if it looks like the correct list of files to delete.
The space between the colon and the minus sign is required to disambiguate from another form of parameter expansion.
Ref: 3.5.3 Shell Parameter Expansion
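A quick way to see what the negative slice does, using a throwaway array (the element names are made up):
demo=(one two three four five)
printf '%s\n' "${demo[@]: -2}"   # prints the last two elements: four, five
Note that * expands in (locale-dependent) alphabetical order, so "the last 100 files" here means the last 100 by name, not by modification time.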
I am aware there isn't a special bash function to do this and we will have to build this with available tools -- e.g. sed, awk, grep, etc.
We dump files into a directory and while their filename looks random, they can be mapped to their full description. For example:
/tmp/abcxyz.csv
/tmp/efgwaz.csv
/tmp/mnostu.csv
In filemapping.dat, we have:
abcxyz, customer_records_abcxyz
efgwaz, routernodes_logs_efgwaz
mnostu, products_campaign
We need to go through each of them in the directory recursively and rename each file to its full description. Final outcome:
/tmp/customer_records_abcxyz.csv
/tmp/routernodes_logs_efgwaz.csv
/tmp/products_campaign_mnostu.csv
I found something similar here, but I am not sure how to work it out at the directory level with only one file as the lookup/reference file. Please help. Thanks!
I would try something like this:
sed 's/,/.csv/;s/$/.csv/' filemapping.dat | xargs -n2 mv
Either cd to /tmp beforehand, or modify the sed command to include the path name.
The sed commands simply replace the comma and the line end with the string ".csv".
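With the filemapping.dat shown above, the pipeline hands xargs pairs like the following (run from /tmp), and xargs -n2 mv turns each pair into one mv command:
abcxyz.csv customer_records_abcxyz.csv
efgwaz.csv routernodes_logs_efgwaz.csv
mnostu.csv products_campaign.csv
Note that the new names come straight from the mapping file, so the last file becomes products_campaign.csv rather than products_campaign_mnostu.csv unless that entry in filemapping.dat is adjusted.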