Ok, so I can print a PDF doing:
pdf2ps file.pdf - | lp -s
But now I want to use convert to merge several PDF files, which I can do with:
convert file1.pdf file2.pdf merged.pdf
which merges file1.pdf and file2.pdf into merged.pdf; the target can be replaced with '-' to write to standard output.
Question
How could I pipe convert into pdf2ps and then into lp though?
convert file1.pdf file2.pdf - | pdf2ps - - | lp -s
should do the job.
You send the output of the convert command to pdf2ps, which in turn feeds its output to lp.
You can use /dev/stdout like a file:
convert file1.pdf file2.pdf /dev/stdout | ...
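So the full print pipeline could read like this (a sketch; the pdf: prefix tells convert the output format explicitly, since /dev/stdout has no file extension to infer it from):
convert file1.pdf file2.pdf pdf:/dev/stdout | pdf2ps - - | lp -s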
I use gs for merging PDFs, like:
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=/dev/stdout -f ...
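Filled in with the file names from the question and hooked into the printing pipeline, that could look like this (a sketch):
gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=/dev/stdout \
   -f file1.pdf file2.pdf | pdf2ps - - | lp -s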
Since hidden behind your pdf2ps command there is a Ghostscript command running (which accomplishes the PDF -> PS conversion), you could also run Ghostscript directly to generate the PostScript:
gs -o output.ps \
-sDEVICE=ps2write \
file1.pdf \
file2.pdf \
file3.pdf ...
Note that older GS releases didn't include the ps2write device (which generates PostScript Level 2), but only pswrite (which generates the much larger PostScript Level 1), so change the above parameter accordingly if need be.
With older Ghostscript versions you may also need to replace the modern abbreviation -o - with the more verbose -dNOPAUSE -dBATCH -sOutputFile=/dev/stdout; only newer GS releases (all after April 2006) know about the -o parameter.
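For such an older release, the merge command above might look like this (a sketch, untested against a genuinely old GS):
gs -dNOPAUSE -dBATCH \
   -sDEVICE=pswrite \
   -sOutputFile=output.ps \
   file1.pdf file2.pdf file3.pdf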
Now, to directly pipe the PostScript output to the lp command, you would have to do this:
gs -o - \
-sDEVICE=ps2write \
file1.pdf \
file2.pdf \
file3.pdf ... \
| lp -s <other-lp-options>
This may be considerably faster than running pdftk first (though that also depends on your input files).
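For comparison, a pdftk-based variant of the same pipeline could look like this (a sketch; it assumes a pdftk that accepts - as the output target):
pdftk file1.pdf file2.pdf cat output - | pdf2ps - - | lp -s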
Or do it in two steps, via a temporary file:
convert file1.pdf file2.pdf merged.pdf
pdf2ps merged.pdf - | lp -s
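If you do not want the intermediate file lying around afterwards, the two steps can be chained and cleaned up in one line:
convert file1.pdf file2.pdf merged.pdf && pdf2ps merged.pdf - | lp -s && rm merged.pdf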
Related
I am using the SoX command-line tool on Linux inside a Makefile to interleave two raw (32-bit float) input audio files into one file:
make_combine:
sox \
--bits 32 --channels 1 --rate 48000 signal_1.f32 \
--bits 32 --channels 1 --rate 48000 signal_2.f32 \
--type raw --channels 2 --combine merge signal_mixed.f32
I ran into problems when signal_1 and signal_2 are different lengths. How would I limit the mixed output to the shorter of the two inputs?
Use soxi -s to find the shortest file, e.g.:
samps=$(soxi -s signal_1.f32 signal_2.f32 | sort -n | head -n1)
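To see which input is the shorter one, you can also print the per-file counts first (a small sketch around the same soxi call):
for f in signal_1.f32 signal_2.f32; do
    printf '%s: %s samples\n' "$f" "$(soxi -s "$f")"
done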
Then use the trim effect to shorten the files, e.g. (untested; since the .f32 inputs are headerless, the rate/channels/bits options from your command are repeated for each inner sox):
sox --combine merge \
    "| sox --bits 32 --channels 1 --rate 48000 signal_1.f32 -p trim 0 ${samps}s" \
    "| sox --bits 32 --channels 1 --rate 48000 signal_2.f32 -p trim 0 ${samps}s" \
    signal_mixed.f32
Note: If you want me to test it, provide some sample data.
I have a 250GB gzipped file on Linux and I want to split it into 250 1GB files and compress the generated part files on the fly (as soon as one file is generated, it should be compressed).
I tried using this -
zcat file.gz | split -b 1G - file.gz.part
But this generates uncompressed part files, and rightly so. I modified it to look like this, but got an error:
zcat file.gz | split -b 1G - file.gz.part | gzip
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
I also tried this, and it did not throw any error, but it did not compress the part files as soon as they were generated. I assume that it will compress each file when the whole split is done (or it may pack all part files and create a single gz file once the split completes; I am not sure).
zcat file.gz | split -b 1G - file.gz.part && gzip
I read here that there is a filter option, but my version of split is (GNU coreutils) 8.4, so the filter option is not supported.
$ split --version
split (GNU coreutils) 8.4
Please advise a suitable way to achieve this, preferably a one-liner if possible; a shell (bash/ksh) script will also work.
Newer versions of split do support filter commands (your coreutils 8.4 predates this, so you would need a newer coreutils). Use this:
zcat file.gz | split -b 1G --filter='gzip > $FILE.gz' - file.part.
This compresses each part as soon as it is written, producing file.part.aa.gz, file.part.ab.gz, and so on.
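To verify the result afterwards, you can compare the decompressed byte counts of the parts and of the original; both commands should print the same number:
zcat file.part.*.gz | wc -c
zcat file.gz | wc -c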
It's definitely suboptimal, but I tried to write it in bash just for fun (I haven't actually tested it, so there may be some minor mistakes). Note that each pass re-reads the whole stream from the start, since you cannot seek in a pipe:
GB_IN_BLOCKS=`expr 2048 \* 1024`        # 1 GiB expressed in 512-byte blocks
GB=`expr $GB_IN_BLOCKS \* 512`          # 1 GiB in bytes
COMPLETE_SIZE=`zcat asdf.gz | wc -c`    # size of the uncompressed data
PARTS=`expr $COMPLETE_SIZE \/ $GB`      # number of whole 1 GiB parts

for i in `seq 0 $PARTS`
do
    zcat asdf.gz | dd bs=512 skip=`expr $i \* $GB_IN_BLOCKS` count=$GB_IN_BLOCKS | gzip > asdf.gz.part$i
done
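If the re-reading bothers you, here is a single-pass sketch along the same lines (equally untested): it cuts the decompressed stream into 1 GiB chunks with head and compresses each chunk as soon as it is complete.
zcat asdf.gz | {
    i=0
    while :; do
        head -c 1G > asdf.gz.part$i             # take the next GiB of the stream
        [ -s asdf.gz.part$i ] || { rm -f asdf.gz.part$i; break; }
        gzip asdf.gz.part$i &                   # compress the finished part right away
        i=`expr $i + 1`
    done
    wait                                        # let the background gzips finish
}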
Easy way to count a key
my way:
cat \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
| grep 18310680207 | wc -l
I need an easy way to count this. In fact, my question is how cat and grep work together here.
File list:
public.log.2015050723
public.log.2015050800
public.log.2015050801
public.log.2015050802
public.log.2015050803
This is easier because it uses one fewer process:
cat public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803 | # Note pipe or backslash needed here!
grep -c 18310680207
Note that the pipe symbol needs to appear after the last file name, or you need a backslash after the last file name.
If you need the occurrences per file, then you can lose the cat too (which is what anubhava suggested):
grep -c 18310680207 \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
With your sample file names, you can shorten the list of file names by using a glob:
cat public.log.2015050723 public.log.201505080[0-3] |
grep -c 18310680207
or:
grep -c 18310680207 public.log.2015050723 public.log.201505080[0-3]
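If you want a single total across the files without cat, you can sum the per-file counts (a sketch using awk on grep's file:count output):
grep -c 18310680207 public.log.2015050723 public.log.201505080[0-3] |
awk -F: '{ total += $NF } END { print total }'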
I want to convert files in a specific order. For conversion I am using this command:
convert *.jpg output.pdf
The order of the image files in the output PDF should be the order produced by:
ls -v
How can I combine these 2 commands?
Probably you mean this:
convert $(ls -v *.jpg) output.pdf
Using $() you can place the output of one command as part of an outer command.
PICS=`ls -v *.jpg`
convert $PICS output.pdf
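Both variants rely on the shell word-splitting the output of ls, which breaks if any file name contains whitespace. A whitespace-safe sketch (assuming bash 4.4+ and GNU sort, whose -V option gives the same version ordering as ls -v):
mapfile -d '' -t sorted < <(printf '%s\0' *.jpg | sort -zV)
convert "${sorted[@]}" output.pdf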
I'm trying to tar up some files and pass them along to the user through PHP's passthru command.
The problem is that even though the tar file should only be about 2k, it is always 10240 bytes. Funny number, right?
So I have broken it down to:
-sh-4.1# tar czf - test | wc -c
10240
VS:
-sh-4.1# tar czf test.tar.gz test && wc -c test.tar.gz
2052 test.tar.gz
So tar is clearly padding out the file with NULs.
How can I make tar stop doing that? Alternatively, how can I strip the trailing NULs?
I'm running tar (GNU tar) 1.15.1 and cannot reproduce this on my workstation, which runs tar (GNU tar) 1.23; since this is an embedded project, upgrading is not the answer I'm looking for (yet).
Edit: I am hoping for a workaround that does not need to write to the file system, maybe a way to stop the padding, or to pipe the output through sed or something to strip it out.
You can attenuate the padding effect by using a smaller block size; try passing -b1 to tar.
You can minimise the padding by setting the block size to the minimum possible value - on my system this is 512.
$ cat test
a few bytes
$ tar -c test | wc -c
10240
$ tar -b 1 -c test | wc -c
2048
$ tar --record-size=512 -c test | wc -c
2048
$
This keeps the padding to at most 511 bytes. Short of piping through a program that removes the padding, rewrites the block header, and recreates the end-of-archive signature, I think this is the best you can do. At that point you might consider using a scripting language and its native tar implementation directly, e.g.:
PHP's PharData (http://php.net/manual/en/class.phardata.php)
Perl's Archive::Tar (https://perldoc.perl.org/Archive/Tar.html)
Python's tarfile (https://docs.python.org/2/library/tarfile.html)