Copy files from a list to a folder - linux

I have a text file abc.txt and its contents are:
/lag/cnn/org/one.txt
/lag/cnn/org/two.txt
/lag/cnn/org/three.txt
If I use
tar -cvf allfiles.tar -T abc.txt
I get a tar of the files in the list. Similarly, is it possible to copy the files listed in abc.txt to a folder?
I tried
cp --files-from test1.txt ./Folder
but it is not working. Please help.

You could use xargs:
cat test1.txt | xargs -I{} cp {} ./Folder
In order to avoid Useless use of cat, you could say:
xargs -a test1.txt -I{} cp {} ./Folder
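If your cp is GNU cp, a variant worth knowing (a sketch, assuming the paths in test1.txt contain no spaces or quotes) lets xargs pass many files to a single cp invocation via -t:
xargs -a test1.txt cp -t ./Folder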

You can use xargs, for example, or do it in a loop:
while read FILE; do cp "$FILE" ./Folder; done <test1.txt

You can write:
while IFS= read -r FILENAME ; do
cp -- "$FILENAME" ./Folder || break
done < test1.txt
which loops over the lines of the file, reading each line into the variable FILENAME, and running cp for each one.

You could use tar cvf - to write the tar to stdout and pipe it right into a tar xvfC - Folder.
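A minimal sketch of that pipe, assuming the same abc.txt list and an existing destination directory Folder:
tar cvf - -T abc.txt | tar xvf - -C Folder
Note that this recreates the full paths from the list under Folder rather than copying the files flat into it.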

Related

Copy the newest two files and append the date

I'm looking for a way to copy the newest .gz files I have in Dir A to Dir B, and append the date to those files.
Example:
Dir A
cinema.gz
radio.gz
cars.txt
camera.gz
Dir B
cinema.gz.20200310
radio.gz.20200310
Using the following command I can copy the newest .gz files to dirb:
cp $(ls -1t /dira/*gz | head -2) "dirb"
However, I can't find a way to append the date to the filename.
I was trying something like this:
cp $(ls -1t /dira/*gz | head -2) "dirb/*.$(date +%F_%R)"
But it doesn't work at all.
Your help please :)
for TO_MOVE in `ls -t *.gz | head -n2`; do
cp $TO_MOVE dirb/;
mv dirb/$TO_MOVE dirb/$TO_MOVE.`date +%Y%m%d`;
done
You can't compact this into one cp command, sadly. However, you can do this:
for f in $(ls -1t /dira/*gz | head -2); do
cp "$f" "dirb/$f.$(date +%F_%R | sed 's;/;\\/;g')"
done
Edit: I pipe the date through sed to escape the '/' characters in the date string (otherwise cp would interpret them as directory names).
Try the code below from the parent directory of dira and dirb:
ls -1t dira/*.gz | head -2 | while read -r line; do cp "$line" "dirb/${line#*/}.$(date +%F_%R)"; done
I'm using while to loop over the files and ${line#*/} trims the directory name. Let me know if you have any query.
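For example, given one of the paths produced by the ls above, the expansion strips everything up to and including the first slash:
line=dira/cinema.gz
echo "${line#*/}"   # prints: cinema.gz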

Copy a few files from a directory that are specified in a text file (Linux)

I have a directory called images and it contains many images.
For example:
images/
imag001.png
imag002.png
imag003.png
imag004.png
And I have a text file that has the files that I want to copy somewhere else. Say the test.txt file has
img001.png
img003.png
How do I copy the files specified in test.txt from the images folder to some other place?
Try this one-liner under your images directory:
awk '{print "cp "$0" /target/path"}' test.txt | sh
There are probably many solutions to this problem. I would do it by using xargs:
cd images/
cat path/to/test.txt | xargs -I FILES cp FILES path/to/dest/
I think in the bash shell you can do:
for image in $(cat copylist.txt); do
cp $image destination
done
You can write:
while IFS= read -r image_filename ; do
cp images/"$image_filename" some_other_place/
done < test.txt
cat copylist.txt | xargs -n1 -I % echo cp % destination/.
# remove echo after testing

Extract and delete all .gz in a directory - Linux

I have a directory. It has about 500K .gz files.
How can I extract all .gz in that directory and delete the .gz files?
This should do it:
gunzip *.gz
#techedemic is correct but is missing '.' to specify the current directory, and this command goes through all subdirectories.
find . -name '*.gz' -exec gunzip '{}' \;
There's more than one way to do this obviously.
# This will find files recursively (you can limit it by using some 'find' parameters;
# see the man pages)
# Final backslash required for exec example to work
find . -name '*.gz' -exec gunzip '{}' \;
# This will do it only in the current directory
for a in *.gz; do gunzip "$a"; done
I'm sure there's other ways as well, but this is probably the simplest.
And to remove them, just do a rm -rf *.gz in the applicable directory.
Extract all gz files in current directory and its subdirectories:
find . -name "*.gz" | xargs gunzip
If you want to extract a single file use:
gunzip file.gz
It will extract the file and remove the .gz file.
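If you want to keep the compressed copy as well, recent gzip releases also accept -k (--keep), for example:
gunzip -k file.gz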
for foo in *.gz
do
tar xf "$foo"
rm "$foo"
done
Try:
ls -1 | grep -E "\.tar\.gz$" | xargs -n 1 tar xvfz
Then Try:
ls -1 | grep -E "\.tar\.gz$" | xargs -n 1 rm
This will untar all .tar.gz files in the current directory and then delete all the .tar.gz files. If you want an explanation, the "|" takes the stdout of the command before it, and uses that as the stdin of the command after it. Use "man command" w/o the quotes to figure out what those commands and arguments do. Or, you can research online.
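A hedged alternative that combines both steps in one loop and only deletes each archive if its extraction succeeded:
for f in *.tar.gz; do tar xvzf "$f" && rm -- "$f"; done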

Use grep -lr output to add files to tar

On Ubuntu and CentOS.
I have some files I want to tar based on their contents.
$ grep -rl "123.45" .
returns a list of about 10 files in this kind of format:
./somefolder/someotherfolder/somefile.txt
./anotherfolder/anotherfile.txt
etc...
I want to tar.gz all of them.
I tried:
$ grep -rl "123.45" . | tar -czf files.tar.gz
Doesn't work. That's why I'm here. Any ideas? Thanks.
Just tried this, and it worked in Ubuntu, but in CentOS I get "tar: 02: Cannot stat: No such file or directory".
$ tar -czf test.tar.gz `grep -rl "123.45" .`
If anyone else has a better way, let me know. That above one works great in Ubuntu, at least.
Like this:
... | tar -T - -czf files.tar.gz
"-T -" causes tar to read filenames from stdin. Second minus stands for stdin. –
grep -rl "123.45" . | xargs tar -czf files.tar.gz
Tar wants to be told what files to process, not given the names of the files via stdin.
It does however have a -T / --files-from option. So I'd suggest using that. Output your list of selected files to a temp file and then have tar read that, like this:
T=$(mktemp)
grep -rl "123.45" . > $T
tar cfz files.tar.gz -T $T
rm -f $T
If you want, you can also use shell expansion to do it like this:
tar cfz files.tar.gz -- $(grep -rl "123.45" .)
But that will fail if you have too many files or if any of the files have strange names (like spaces, etc.).
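If both your grep and tar are the GNU versions, a null-delimited pipeline (a sketch combining grep's -Z with the -T - idea above) sidesteps both problems:
grep -rlZ "123.45" . | tar --null -T - -czf files.tar.gz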

How to delete all files that were recently created in a directory in linux?

I untarred something into a directory that already contained a lot of things. I wanted to untar into a separate directory instead. Now there are too many files to distinguish between. However, the files that I have untarred have been created just now (right?) and the original files haven't been modified for long (at least a day). Is there a way to delete just these untarred files based on their creation information?
Tar usually restores file timestamps, so filtering by time is not likely to work.
If you still have the tar file, you can use it to delete what you unpacked with something like:
tar tf file.tar --quoting-style=shell-always | xargs rm -i
The above will work in most cases, but not all (filenames that have a carriage return in them will break it), so be careful.
You could remove the directories by adding -r to that, but it's probably safer to just remove the toplevel directories manually.
find . -mtime -1 -type f | xargs rm
but test first with
find . -mtime -1 -type f | xargs echo
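One refinement, assuming GNU find: since tar restores modification times (as noted in another answer), matching on the inode change time, which is set when the files are written during extraction, is more reliable than -mtime; for example, for files extracted within the last hour:
find . -cmin -60 -type f | xargs echo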
There are several different answers to this question in order of increasing complexity.
First, if this is a one off, and in this particular instance you are absolutely sure that there are no weird characters in your filenames (spaces are OK, but not tabs, newlines or other control characters, nor unicode characters) this will work:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | egrep -v '^\./$' | tr '\n' '\0' | xargs -0 rm -r
All that egrepping is to keep only the top-level entries, skipping everything nested in subdirectories.
Another way to do this that works with funky filenames is this:
mkdir foodir
cd foodir
tar -xf ../file.tar
for file in *; do rm -rf ../"$file"; done
That will create a directory in which your archive has been expanded, but it sounds like you wanted that already anyway. It also will not handle any files whose names start with a dot (.).
To make that method work with files that start with ., do this:
mkdir foodir
cd foodir
tar -xf ../file.tar
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 sh -c 'for file in "$@"; do rm -rf ../"$file"; done' junk
Lastly, taking from Mat's answer, you can do this and it will work for any filename and not require you to untar the directory again:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | grep -v '^\./$' | tr '\n' '\0' | xargs -0 bash -c 'for fname in "$@"; do fname="$(echo -ne "$fname")"; echo -n "$fname"; echo -ne "\0"; done' junk | xargs -0 rm -r
You can handle files and directories in one pass with:
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort | bash
To see what is going to happen, leave off the pipe to 'bash':
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort
To handle filenames with linefeeds you need more processing.
