I need to convert 750 JPGs into a single A4-size PDF serially - Linux

I need to convert 750 JPG images into a single A4-size PDF. My problem is that page no. 10 comes right after page no. 1 :). I tried various combinations of find, ls, and grep found on the net, but the PDF comes out all mixed up. Is there a command that does this?

OK. Here is what I did:
Step 1: Convert each JPG file into its own PDF:
for img in *.jpg; do
    filename=${img%.*}                       # file name without the .jpg extension
    convert "$filename.jpg" "$filename.pdf"
done
Step 2: Install pdfchain from the Ubuntu repos and start it. It did the job like a hot knife through butter :)
No jumbling of page numbers...
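For the record, the jumbling happens because a plain glob or ls sorts names lexicographically (1, 10, 100, ..., 2). If you'd rather stay on the command line, a version sort avoids it. A minimal sketch, assuming GNU ls (for -v), ImageMagick, and filenames without spaces:
# Merge all JPGs into one PDF, in natural numeric order, on A4 pages
convert -page A4 $(ls -v *.jpg) output.pdf
Zero-padding the numbers in the filenames (001.jpg, 002.jpg, ...) also sidesteps the ordering problem entirely.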

Related

How to merge PDFs as a table with pdftk or convert

How can one use convert or pdftk to merge several pdfs organized as a table?
For example, given 4 files: file1.pdf, file2.pdf, file3.pdf, file4.pdf, each of a single page, I would like to have a single-page pdf like
file1.pdf file2.pdf
file3.pdf file4.pdf
That is, the files are arranged like an array.
By far the easiest way to convert 4 PDF pages to 1 page on any OS is by N-Up imposition/printing with output to a virtual PDF printer such as Ghostscript. For the most basic 4-Up command line usage see https://stackoverflow.com/a/72850245/10802527
Thus, to combine 4 pages (other counts such as 2, 6, 9, or 16 are possible), a GUI print dialog lets you very easily set the order.
On Linux or MacOS you can use, along with other options, the CUPS command
lp -o number-up=4 filename
see https://www.cups.org/doc/options.html
The major advantage over tools such as PDFtk combined with convert is that it handles both scaling and the preservation of most PDF structures, without degrading to inferior down-scaled imagery, by NOT passing in and out of image formats before calling Ghostscript.
If you have single-page PDFs, you can merge them before printing using PDFtk or poppler's pdfunite. Note that with either, the original PDF format is preserved.
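For instance, a minimal sketch of that merge-then-print flow, using the file names from the question:
# Merge the four single-page PDFs, then 4-up them via CUPS printing
pdftk file1.pdf file2.pdf file3.pdf file4.pdf cat output merged.pdf
lp -o number-up=4 merged.pdf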
If you want to convert the pages to half-size images and stitch them together, then reprint them to one PDF page, that can easily be done using ImageMagick convert and other commands that call Ghostscript directly, to suit your requirements. However, the result will in many ways be degraded by the translation to image output.
Since all of the above pass through GS it makes sense, where possible, to install GS as a PDF printer driver.
If you want to avoid installing Ghostscript for printing, you can use the cross-platform Coherent cpdf (it only uses GS if the files need repairs).
Note these examples use "Windows double-quoted names"; adjust as required. They assume the 4 sequential pages are in one file and are placed 4 at a time onto each new page, so this can be used with any number of pages in input.pdf:
cpdf -twoup "input.pdf" -o "in-2-Up-tmp.pdf"
cpdf "in-2-Up-tmp.pdf" -rotate 90 -o "out-2-Up.pdf"
cpdf -twoup "out-2-Up.pdf" -o "out-4-Up-tmp.pdf"
cpdf "out-4-Up-tmp.pdf" -rotate 90 -o "out-4-Up.pdf"

Tesseract Batch Convert Images to Searchable PDF And Multiple Corresponding Text Files

I’m using tesseract to batch-convert a list of images to both a searchable PDF and a TXT file containing the OCR'd text.
tesseract infile outfile -l eng myconfig
infile contains a list of image paths to process
myconfig contains tesseract preferences to specify the output types (tessedit_create_text 1 and tessedit_create_pdf 1)
This leaves me with outfile.pdf and outfile.txt, the latter of which contains page separators for delimiting text between images.
What I’m really looking to do, however, is to output multiple TXT files on a per-image basis, using the same corresponding image name. For example, Image1.jpg.txt, Image2.jpg.txt, Image3.jpg.txt...
Does tesseract have the option to support this behavior natively? I realize that I can loop through the image file list and execute tesseract on a per-image basis, but this is not ideal as I’d also have to run tesseract a second time to generate the merged PDF. Instead, I’d like to run both options at the same time, with less overall execution time.
I also realize that I can split the merged TXT file on the page separator into multiple text files, but then I have to introduce less elegant code to map and rename all of those split files to correspond to their original image names: Rename 0001.txt to Image1.jpg.txt...
I’m working with both Python 3 and Linux commands at my disposal.
You can prepare a batch file that loops through the input images and outputs both TXT and PDF at the same time -- more efficient, a single OCR operation per image instead of two. You can then split the output .txt file into pages.
tesseract inimagefile outfile txt pdf
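A minimal sketch of such a loop (assuming .jpg inputs and poppler-utils' pdfunite for the final merge; run it before merged.pdf exists so the glob doesn't pick it up):
# One OCR pass per image, producing both <name>.txt and <name>.pdf
for img in *.jpg; do
    tesseract "$img" "${img%.jpg}" txt pdf
done
# Merge the per-image PDFs into the single searchable document
pdfunite *.pdf merged.pdf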
Converting multiple images to a single PDF file.
On Linux, you can list all images and then pipe them to tesseract
ls *.jpg | tesseract - yourFileName txt pdf
Where:
yourFileName: is the name of the output file.
txt pdf: are the output formats, you can also use only one of them.
Converting images to individual text files
On Linux, you can use the for loop to go through files and execute an action for every file.
for FILE in *.jpg; do tesseract "$FILE" "${FILE::-4}"; done
Where:
for FILE in *.jpg : loop through all JPG files (you can change the extension based on your format)
$FILE: is the name of the image file, e.g. 001.jpg
${FILE::-4}: is the name of the image but without the extension, e.g. 001.jpg will be 001 because we removed the last 4 characters.
We need this to name the text files to the corresponding names, e.g.
001.jpg will be converted to 001.txt
002.jpg will be converted to 002.txt
Since Tesseract doesn't seem to handle this natively, I've just written a function to split the merged TXT file on the page separator into multiple text files. Although, from my observations, I'm not sure that Tesseract runs any faster by simultaneously converting batch images to both PDF and TXT (versus running it twice - once for PDF and once for TXT).
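For reference, a minimal sketch of that split, assuming tesseract's form-feed page separator and a filelist.txt listing the image paths in processing order (names here are illustrative):
# Pass 1 remembers the image names; pass 2 reads merged.txt with the record
# separator switched to form feed, writing each page to <image name>.txt,
# e.g. Image1.jpg.txt
awk 'FNR == NR { name[FNR] = $0; next }
     FNR in name { print > (name[FNR] ".txt") }' filelist.txt RS='\f' merged.txt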
Thank you!
BTW I'm using 4.1.1.
And I discovered another traineddata file for the Spanish language that does a better job than the standard one. It actually recognizes the "o" character well. The only problem is the processing time, but I let the PC work overnight.
Honestly, I don't know why the new traineddata file does the job better. I downloaded it at:
https://github.com/tesseract-ocr/tessdata_best

How to generate a PDF file of text and an image in Linux?

I am generating a logfile on one of my servers.
It stores a lot of data, which I then send to my mail once a month as a PDF file.
The process I am using is to 'cat' a lot of command output into a text file, then convert it and send it.
Are there any Linux programs or some easy way to do something similar, and also add an image I have stored on the server to the PDF file?
This answer assumes that you just want to put the image at the end of the PDF.
You could first convert the image to a PDF using ImageMagick like this (it will also work with other file types):
convert image.jpg image.pdf
Then, you can use a tool like stapler or pdftk to combine your generated text PDF and the image.pdf (you can add multiple images):
stapler cat text.pdf image.pdf combined.pdf
pdftk text.pdf image.pdf output combined.pdf
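Putting it together, a minimal sketch of the whole monthly pipeline, assuming enscript and ps2pdf handle the text-to-PDF step (any text-to-PDF route works; the file names are illustrative):
# Render the text log to PDF, convert the image, then append it at the end
enscript -o - logfile.txt | ps2pdf - text.pdf
convert image.jpg image.pdf
pdftk text.pdf image.pdf output combined.pdf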

How can I convert E01 image file to dd image file?

I'm working with forensics tools and I have an Encase E01-type image file. I would like to analyze this image using other tools. However, tools such as tsk_recover don't accept the E01 file type as input. So I need to convert the E01 image file to dd format without any alteration.
FTK Imager from Access Data (http://accessdata.com/product-download) is a free tool that can do many things with several evidence file formats (E01, DD, and AD1), including mounting them logically and converting them to different formats.
You can use it to convert an E01 image to a DD image by:
Opening the E01 with FTK Imager
Right-clicking on the E01 file in the left 'Evidence Tree'
Selecting 'Export Disk Image'
Adding an Image Destination ('Add')
Selecting 'Raw (dd)' in the popup box and finishing the wizard
Hitting Start and waiting for it to finish; then you'll have your DD image
tsk_recover (and all of The Sleuth Kit and Autopsy tools) supports E01 if you compile it with libewf (http://sourceforge.net/projects/libewf/). If you want the raw image, though, libewf has tools to do the conversion, and you can also use 'img_cat' in TSK to do it (but that likewise requires TSK to have been compiled with libewf).
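For example, a minimal sketch of both routes (file names are illustrative; ewfexport ships with libewf, often packaged as ewf-tools):
# Export the E01 evidence file to a raw (dd-style) image named image.raw
ewfexport -f raw -t image evidence.E01
# Or stream the raw contents out with The Sleuth Kit's img_cat
img_cat evidence.E01 > image.dd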
I personally prefer using the winpmem tool for this.
Syntax is very simple:
"winpmem_v3.3.rc3.exe -i $Source -o $Target --volume_format aff4"
-i=input;
-o=output;
--volume_format=output format
You can convert images into as many as different available memory formats.
While merging files can also be performed:
"winpmem_v3.3.rc3.exe -i $Source1[whatever format raw, dd, etc] -i $Source2 -o $Target --format raw"

Ghostscript /CropBox not printing correctly in Linux

I'm using the Domestic Shipping Label API in USPS to generate domestic shipping labels in PDF format. I managed to crop the top section of the PDF file, which is the label needed by USPS, and ignored the bottom section, which is the receipt and is not needed for shipping.
I used Ghostscript's /CropBox to crop the section that I want, which was successful, but when I try to print the cropped PDF file with Linux CUPS I get the whole uncropped PDF printed instead of the cropped PDF file. Why is it still printing the whole file instead of just the cropped section?
Here's the script I'm using to crop the USPS shipping label:
gs -o cropped.pdf -sDEVICE=pdfwrite -c "[/CropBox [50.4 460.5 484.4 750.5] /PAGES pdfmark" -f uncropped.pdf
Then, to change its orientation to portrait, I use pdftk:
pdftk cropped.pdf cat 1L output cropped_portrait.pdf
To print it in Linux CUPS I'm using the command:
lp cropped_portrait.pdf
But when I print it, the uncropped.pdf file is printed instead of cropped_portrait.pdf.
Why is it doing that? I even deleted uncropped.pdf and tried printing again, but it still prints the uncropped version.
Here are the two files, the uncropped and cropped USPS shipping labels:
Uncropped PDF file
Cropped PDF file
Hope you can help me on this one,
Thank you
Presumably the reduced PDF file displays correctly, so there is no problem with Ghostscript producing the PDF file.
As to why the printing process doesn't respect the CropBox: there is no reason, really, why it should. There are many boxes in PDF and no real way for a print application to know which one you want to use. As a result, printing applications often default to the MediaBox, which you haven't altered. (Note that altering the CropBox doesn't change the content of the PDF file, just what is displayed.)
Now, if your CUPS chain is using Ghostscript to render the PDF file, or to convert it to PostScript, then this can be solved: you need to add -dUseCropBox to the command line. However, I'm not a CUPS expert, so I can't tell you how to do that. If CUPS isn't using Ghostscript, it's probably still possible to instruct whatever is doing the conversion to use the CropBox, but you're going to have to find out what application is involved and alter the command appropriately for that application.
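Alternatively, you can sidestep the print chain entirely by rewriting the file so the page size itself matches the crop. A minimal sketch (file names are from the question; print_ready.pdf is illustrative):
# Re-run pdfwrite with -dUseCropBox so the output MediaBox becomes the old
# CropBox; the printed page is then physically just the cropped label
gs -o print_ready.pdf -sDEVICE=pdfwrite -dUseCropBox -f cropped_portrait.pdf
lp print_ready.pdf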
