Currently I am in this directory:
/data/real/test
When I run ls -lt at the command prompt, I get something like this:
REALTIME_235000.dat.gz
REALTIME_234800.dat.gz
REALTIME_234600.dat.gz
REALTIME_234400.dat.gz
REALTIME_234200.dat.gz
How can I consolidate the above five .dat.gz files into one .dat.gz file in Unix without any data loss? I am new to Unix and not sure how to approach this. Can anyone help?
Update:
I am not sure which is the better way: should I unzip each of the five files and then combine them into one, or combine the five .dat.gz files directly into one .dat.gz?
If it's OK to concatenate the files' content in arbitrary order, then the following command will do the trick:
zcat REALTIME*.dat.gz | gzip > out.dat.gz
Update
This should solve the ordering problem:
zcat $(ls -t REALTIME*.dat.gz) | gzip > out.dat.gz
What do you want to happen when you gunzip the result? If you want the five files to reappear as separate files, then you need something other than the gzip (.gz) format: use tar (.tar.gz) or zip (.zip) instead.
If you want the result of the gunzip to be the concatenation of the gunzip of the original files, then you can simply cat (not zcat or gzcat) the files together. gunzip will then decompress them to a single file.
cat [files in whatever order you like] > combined.gz
Then:
gunzip combined.gz
will produce an output that is the concatenation of the gunzip of the original files.
The suggestion to decompress them all and then recompress them as one stream is completely unnecessary.
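As a quick sanity check (a sketch, not part of the answer above), you can compare uncompressed byte counts before you gunzip anything; zcat reads multi-member gzip files in full:
zcat REALTIME_*.dat.gz | wc -c   # total uncompressed bytes across the originals
zcat combined.gz | wc -c         # should print the same number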
After an HD problem and some work, I have a bunch of files with names like "f1234", "f1235", etc.
My goal is to sort these files according to their file type. For example, I want to move all the PDF files into the "pdfs" directory.
For one file, I can run "file f1234", and if it's a PDF, "mv f1234 pdfs/". But I have thousands of files... Can you help me with a bash or zsh command to sort all the PDFs in one pass? Thanks
The hard part here is reliably turning the output of file into a directory name. Probably the best candidate for that is the file's MIME type rather than the human-readable output of file. I'd use something like:
mkdir sorted
for f in f*
do
    # -b drops the filename prefix; --mime-type gives e.g. "application/pdf".
    # tr turns the "/" into "-" so the result is a valid directory name.
    d=$(file -b --mime-type "$f" | tr / -)
    mkdir -p "sorted/$d"
    mv "$f" "sorted/$d/"
done
Obviously I'd test that out a bit before running it on your files, but something pretty close to that should work.
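If you only care about the PDFs, a minimal variant of the same idea (assuming file reports them with the standard application/pdf MIME type):
mkdir -p pdfs
for f in f*
do
    [ "$(file -b --mime-type "$f")" = application/pdf ] && mv "$f" pdfs/
done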
Another newbie question... please bear with me.
I have multiple .tar.gz files that contain the same XX.log file (named the same in each .tar.gz file).
I need to extract only that specific XX.log file from each .tar.gz file and then append them all into one list file named DataByDate.csv.
I've tried multiple ways to accomplish this in one line:
zcat /tmp/jhoney/DATA.2015-10-09* | tar --extract --file=XX.log | perl -lne '/.{0,0}2015-10-09.{0,30}/ $$ print $&' >/tmp/jhoney/DataByDate.csv
This returns the error:
tar: XX.log: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now.
Any ideas?
You need to read man tar. I think you need something more like this:
for t in /tmp/jhoney/DATA.2015-10-09*; do
    tar -zxOf "$t" XX.log |
    perl -lne '/.{0,0}2015-10-09.{0,30}/ && print $&'
done > /tmp/jhoney/DataByDate.csv
Also, .{0,0} matches the empty string, so it doesn't seem to make sense. And if you really meant "append", the redirect should probably be >> instead of just >.
I have some zip files that are really large, and I want to print their contents without extracting them first. I am using zcat and zless for that and then piping the output to a different application. When my zip file contains more than one text file, I receive the following error:
zcat tweets.zip >a
gzip: tweets.zip has more than one entry--rest ignored
How can I do what I want with zip files that contain more than one text file?
You can do this to output a file without extracting:
$ unzip -p <zip_file> <file_to_print>
For example:
$ unzip -p MyEar.ear META-INF/MANIFEST.MF
As cur4so mentioned you can also list all files using:
$ unzip -l <zip_file>
Use the -p option of unzip to pipe the output. Multiple files are concatenated. The -c option does the same thing, but includes the file name in front of each file.
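If you want the whole archive concatenated to stdout, as in the original zcat pipeline, omit the file argument (someapp stands in for whatever consumes the output):
$ unzip -p tweets.zip | someapp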
If you just want to see a list of files in your zip archive use:
unzip -l tweets.zip
If you want to extract just one file:
unzip tweets.zip file-of-interest-as-it-is-named-in-the-archive
If you want something else, could you clarify your question?
I have a tar file which has a lot of CSV files in it.
How can I get the first few lines of each CSV file without extracting it?
I tried:
$(tar -Oxf $tarfile $file | head -n "$NL") >> cdn.log
But I got an error saying:
time(http:index: command not found
(That is a line from one of the CSV files; similar errors are reported for all the CSV files...)
Any ideas?
Using -O you can tell tar to extract a file to standard output instead of to a file. So you can first use tar tf <YOUR_FILE> to list the files in the archive, filter that with grep to find the CSV files, and then for each file run tar xf <YOUR_FILE> <NAME_OF_CSV> -O | head to send the file's beginning to stdout. This may be a bit inefficient, since you unpack the archive as many times as there are CSV files, but it should work.
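A minimal sketch of that two-step approach, reusing $tarfile, $NL, and cdn.log from the question (all of them assumptions about your setup):
tar tf "$tarfile" | grep '\.csv$' | while read -r f
do
    # -O sends the extracted member to stdout instead of creating a file
    tar -xOf "$tarfile" "$f" | head -n "$NL" >> cdn.log
done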
You can use perl and its Archive::Tar module. Here is a one-liner that extracts the first two lines of each file:
perl -MArchive::Tar -E '
    for (Archive::Tar->new(shift)->get_files) {    # iterate over every entry in the archive
        say join qq|\n|, (split /\n/, $_->get_content, 3)[0..1]    # print the first two lines
    }
' file.tar
It assumes that the tar file contains only text files and that they are all CSV. Otherwise you will have to grep the file list to filter the ones you want.
I have all my Apache access log files as access.log, access.log.1, access.log.1.gz, etc. What I want is to zcat all the files, whether gzipped or not, and pipe them into an X program.
I know I can do zcat /var/log/apache2/access.log.*.gz | someapp..., but that only works for the *.gz files and not for the first two logs.
Any ideas will be appreciated.
Use zcat -f; it will pass uncompressed files through as-is.
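A minimal sketch of the resulting pipeline, reusing the paths from the question (someapp stands in for your consumer):
zcat -f /var/log/apache2/access.log* | someapp    # decompresses the .gz files, passes plain ones through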
For the specific use case of HTTP server log files, consider the zmergelog command (from the mergelog package). It additionally sorts the merged result chronologically.