I am working in Linux and have to extract archives that were already inside another archive. Could anyone explain how to extract these using loops?

#!/bin/bash
tar -xvf assignment_UA_InleidingProgrammeren_Huistaak1-HelloWorld_2019-11-11.tgz
a=$(echo assignment_UA_InleidingProgrammeren_Huistaak1-HelloWorld_2019-11-11.tgz | cut -b 15-35)
b=$(echo assignment_UA_InleidingProgrammeren_Huistaak1-HelloWorld_2019-11-11.tgz | cut -b 37-56)
# cut command from geeksforgeeks.org
mkdir -p "$a"/"$b"
mv assignment_UA_InleidingProgrammeren_Huistaak1-HelloWorld_2019-11-11/*.tgz InleidingProgrammeren/Huistaak1-HelloWorld
rmdir assignment_UA_InleidingProgrammeren_Huistaak1-HelloWorld_2019-11-11
for x in InleidingProgrammeren/Huistaak1-HelloWorld
I have already extracted the first archive, but I have to extract the .tgz archives that are inside it without hardcoding their names.
I have tried using different loops, but it doesn't work and I don't know if I am using them correctly.

Assumptions:
you have an archive, a.tgz, containing some files.
you have an archive, b.tgz, containing some files.
both a.tgz and b.tgz are themselves contained in another archive, top.tgz.
both a.tgz and b.tgz do not exist outside top.tgz when the script starts.
top.tgz
    - a.tgz
         - some files
    - b.tgz
         - some files
Script:
#!/bin/bash
tar -xzf top.tgz
rm -f top.tgz
for F in *.tgz
do
    tar -xzf "$F"
    rm -f "$F"
done
Extract the top archive first.
Delete that top archive (or move it somewhere else) so the for F in *.tgz does not process it again.
Then loop on the new archives and extract them.
Final result: all files from a.tgz and b.tgz are available.
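Applied to the question's archive, the directory names can be derived from the filename itself instead of from hardcoded cut offsets. A sketch, assuming the naming pattern assignment_UA_<course>_<task>_<date>.tgz and that the top archive unpacks into a directory with the same base name (both taken from the question):
#!/bin/bash
for top in assignment_*.tgz; do
    base=${top%.tgz}
    # split the name on underscores; fields 3 and 4 are course and task
    IFS=_ read -r _ _ course task _ <<< "$base"
    mkdir -p "$course/$task"
    tar -xzf "$top"
    # extract every nested archive into the derived directory
    for inner in "$base"/*.tgz; do
        tar -xzf "$inner" -C "$course/$task"
        rm -f "$inner"
    done
    rmdir "$base"
done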

Related

Download, extract and copy file from a folder that has a version in its name

I'm writing a bash script that downloads a compressed archive from a universal URL (at which new releases of the software will automatically be published), extracts it, and copies a file called wimboot to a folder.
This is what I currently have:
sudo curl http://git.ipxe.org/releases/wimboot/wimboot-latest.tar.gz -o ./wimboot.tar.gz
sudo tar -zxvf ./wimboot.tar.gz #Extracts into a folder called "wimboot-2.5.2-signed", in it is the file I need ("wimboot").
cd ./wimboot*/
sudo cp wimboot /my-folder/
But this doesn't work. Is there a method that will allow me to do this?
You can ask tar for a file listing (-t option), which you can then grep for wimboot -- that should give you the relative path to the file also. A naive first try would be something like:
src_file=$(tar tf wimboot.tar.gz | grep wimboot)
cp "$src_file" my_folder/
But you will probably want to add some error checking, and a more specific grep expression to make sure you match only the file you're after.
There's also no need to extract the entire archive. You can just ask tar to extract the file you're interested in:
tar zxf wimboot.tar.gz "$src_file"
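Putting both pieces together with the error checking mentioned above, a sketch (my_folder stands in for your target directory; the grep anchors on the file name to avoid partial matches):
src_file=$(tar tzf wimboot.tar.gz | grep '/wimboot$' | head -n1)
if [ -z "$src_file" ]; then
    echo "wimboot not found in archive" >&2
    exit 1
fi
# extract only that one file, then copy it to the target
tar zxf wimboot.tar.gz "$src_file"
cp "$src_file" my_folder/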
I've built on top of the other answers and this worked for me:
#Create a temporary folder, this folder must be empty!
sudo mkdir temp
#Download the archive and save it in the temporary folder.
sudo curl http://git.ipxe.org/releases/wimboot/wimboot-latest.tar.gz -o ./temp/wimboot-latest.tar.gz
#Extract the downloaded archive to the temporary folder.
sudo tar xvf ./temp/wimboot-latest.tar.gz -C ./temp
#Search for and copy files with the name "wimboot" to the web directory.
sudo find ./temp/ -name 'wimboot' -exec cp {} /var/www/ \;
#Delete the temporary folder.
sudo rm -Rf temp
I do not recommend this for large archives.
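If you do use this approach, one small refinement: mktemp -d creates a guaranteed-empty, uniquely named directory, so you don't have to worry about an existing temp folder. A sketch of the same steps (the /var/www/ target is carried over from the answer above):
tmpdir=$(mktemp -d)
curl http://git.ipxe.org/releases/wimboot/wimboot-latest.tar.gz -o "$tmpdir/wimboot-latest.tar.gz"
tar xvf "$tmpdir/wimboot-latest.tar.gz" -C "$tmpdir"
# copy any file named "wimboot" to the web directory
sudo find "$tmpdir" -name 'wimboot' -exec cp {} /var/www/ \;
rm -rf "$tmpdir"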

Extract a file using bash from one tar file composed of many tar files inside

I have created the script below, and when I execute it I face a problem: instead of extracting 5 times, it only extracts once. How can I get this issue resolved?
i=0
for tarfile in *.tar.gz; do
    ((i++))
    [ $i = 5 ] && break
    tar -xzvf "$tarfile"
done
rm -rvf "$tarfile"
Help is greatly appreciated. I want to extract a tar.gz file whose only content is another tar.gz file. Note: it is nested 5 levels deep, and I want the last tar.gz file decompressed too. Please help me.
If and only if the filenames for each of the nested tar do not collide, you can do the following.
for i in $(seq 1 4); do name=$(ls *.tar.gz); tar xvfz "$name"; rm "$name"; done
If they are all the same, like foo.tar.gz inside foo.tar.gz inside foo.tar.gz, you can simply delete the rm "$name" from the above (otherwise the rm would delete the freshly extracted inner archive, since it has the same name).
If there are only some collisions you will have to play more clever tricks, such as moving intermediate files into another directory.
Update: if you just want to move the final tar to a new directory, use mv after the loop above.
mv *.tar.gz OtherDirectoryName
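If the nesting depth is not known in advance, the loop can instead run until no archive is left. A sketch, under the same assumption that the nested names do not collide (with identical names the rm would delete the freshly extracted layer):
#!/bin/bash
shopt -s nullglob              # make *.tar.gz expand to nothing when no archive is left
set -- *.tar.gz
while [ $# -gt 0 ]; do
    tar xzf "$1" && rm "$1"    # extract one layer, then remove it
    set -- *.tar.gz            # re-scan for the archive the extraction produced
done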

Extracting a .tar file and creating a filelist

I have this little code here:
for file in *.tar.gz;
do tar xzvf "${file}" && rm "${file}";
done
It extracts a tar.gz and deletes it. Now I have to create a filelist file (.fl) named after a substring of the .tar.gz name. For example, I have to delete the first 5 letters and the last 5 (the extension) from the name of the .tar.gz, and do that for every .tar.gz that I extract.
Example:
I have a ABC_A.tar.gz with a ABC_A.xml in it.
I have to make a A.fl
and in that A.fl i have to write ABC_A.xml
Thanks in advance.
In your loop, you can do the following for each file:
# delete the first five characters
name=${file:5}
# delete the .tar.gz suffix
name=${name%.tar.gz}
Use
for file in *.tar.gz
do
    # this creates the file list in a .fl file:
    tar tfz "${file}" > "${file:5:-5}.fl"
    # this extracts and afterwards removes the tar archive:
    tar xzvf "${file}" && rm "${file}"
done
You can also combine the two into one step:
for file in *.tar.gz
do
    tar xzvf "${file}" > "${file:5:-5}.fl" && rm "${file}"
done
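Note that the 5 and -5 offsets have to match your actual names: the .tar.gz extension alone is already seven characters. For the ABC_A.tar.gz example from the question, a sketch using prefix and suffix patterns instead of counted offsets (the ABC_ prefix is taken from the example):
file=ABC_A.tar.gz
name=${file#ABC_}              # strip the example's prefix   -> A.tar.gz
name=${name%.tar.gz}           # strip the extension          -> A
tar tfz "$file" > "$name.fl"   # A.fl now lists ABC_A.xml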

Having trouble compressing a file in a different directory

Okay, so essentially what I'm doing is taking all the directories inside the /servers/ folder and moving them to a secondary hard drive mounted at /media/backupdrive/. This script is run once a day, so it makes a directory named after the date and should copy the folders directly over there. (The reason I have to do it this way is that my client has limited disk space on his main hard drive and his worlds are upwards of 6-7 GB each.) Anyway, I can get it to copy the folders to /media/backupdrive/currentdate, but when I try to compress them, it says it can't compress an empty directory, or something along those lines.
Here's the code:
#!/bin/bash
folderName=$(date +"%m-%d-%y")
mkdir "/media/backupdrive/$folderName"
for i in servers/*; do
    cp -rf $i /media/backupdrive/$folderName/
    cd /media/backupdrive/$folderName/
    tar -C ${i:8} -czvf "${i:8}.tar.gz"
    cd /root/multicraft/
done
Sorry for the image; it was on a virtual machine and I had to re-type it because I couldn't copy and paste.
It looks to me like your tar command is missing its input (e.g., a final "."), and therefore says, "tar: Cowardly refusing to create an empty archive".
Your script appears to work for me with this tar command:
tar -C ${i#servers/} -czvf "${i#servers/}.tar.gz" .
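Put together, the question's loop with that one fix (plus quoting) would look like the following; a sketch assuming the same directory layout as in the question:
#!/bin/bash
folderName=$(date +"%m-%d-%y")
mkdir "/media/backupdrive/$folderName"
for i in servers/*; do
    cp -rf "$i" "/media/backupdrive/$folderName/"
    cd "/media/backupdrive/$folderName/" || exit 1
    tar -C "${i#servers/}" -czvf "${i#servers/}.tar.gz" .   # the final "." gives tar its input
    cd /root/multicraft/ || exit 1
done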
I'd try a slightly different approach. tar by itself doesn't use temporary files, so you could tar the sources directly to the destination and compress them with gzip in a second step.
#!/bin/bash
dst="/media/backupdrive/$(date +"%m-%d-%y")"
mkdir -p "$dst"    # make sure the dated target directory exists
for d in servers/*; do
    tarfile="$dst/${d#servers/}.tar"
    tar -C "$d" -cvf "$tarfile" .
    gzip -9 "$tarfile"
done
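If the default compression level is acceptable, the two steps can also be collapsed into one, since tar's -z option streams through gzip as well; gzip -9 in the two-step version just trades extra CPU time for a somewhat smaller file:
tar -C "$d" -czvf "$dst/${d#servers/}.tar.gz" .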

uncompressing a large number of files on the fly

I have a script that I need to run on a large number of files with the extension .tar.gz.
Instead of uncompressing them and then running the script, I want to uncompress them as I run the command and then work on the uncompressed folder, all with a single command.
I think a pipe is a good solution for this, but I haven't used one before. How would I do this?
The -v flag orders tar to print each filename as it extracts it:
tar -xzvf file.tar.gz | xargs -I {} -d\\n myscript "{}"
This way your script only has to contain commands to deal with a single file, which xargs passes to it as a parameter ($1 in the script context).
Edit: the -I {} -d\\n part will make it work with spaces in filenames.
The following three lines of bash...
for archive in *.tar.gz; do
    tar zxvf "${archive}" 2>&1 | sed -e 's!x \([^/]*\)/.*!\1!' | sort -u | xargs some_script.sh
done
...will iterate over each gzipped tarball in the current directory, decompress it, grab the top-most directories of the decompressed contents, and pass those as arguments to some_script.sh. This probably uses more pipes than you were expecting, but it seems to do what you are asking for.
N.B.: tar xf can only take one archive per invocation.
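One caveat: the sed pattern above expects BSD tar's verbose output ("x path", printed to stderr, hence the 2>&1). GNU tar prints bare paths on stdout, so an equivalent sketch for GNU tar would be:
for archive in *.tar.gz; do
    # keep only the part before the first slash, i.e. the top-level names
    tar zxvf "${archive}" | sed -e 's!/.*!!' | sort -u | xargs some_script.sh
done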
You can use a for loop:
for file in *.tar.gz; do tar -xf "$file"; your commands here; done
Or expanded:
for file in *.tar.gz; do
    tar -xf "$file"
    # your commands here
done
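As a concrete sketch of that placeholder, assuming each tarball unpacks into a directory with the same base name (myscript and that naming assumption are illustrative, not from the question):
for file in *.tar.gz; do
    dir=${file%.tar.gz}        # assumes the archive unpacks into a matching directory
    tar -xf "$file"
    myscript "$dir"            # placeholder for your own command
done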
