I'm trying to render a .pdf to a .png file using multithreaded Ghostscript 9.07, installed from the .exe file.
For this I call the following command:
gswin64c.exe -dNumRenderingThreads=4 -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r300 -sOutputFile=Graphic1.png Graphic1.pdf
My system is Windows 8 x64 running on a quad-core AMD Phenom II processor, and my test graphic is a single-page, 109 MB PDF.
The command takes the same amount of time (about 32 s at 300 dpi) regardless of whether -dNumRenderingThreads is set or not. What's more, Windows Task Manager shows that the gs process uses only 2 threads (one for parsing and one for rendering, as far as I know).
What am I doing wrong that the rendering is not spread across multiple threads?
I have a utility I wrote years ago in C++ which takes all the files in all the subdirectories of a given directory and moves them to new numbered subdirectories based on a count of the files. It has worked without error for several years.
Yesterday it failed for the first time. It always fails on a 2.7 GB video file, perhaps the largest this utility has ever encountered. The file itself is not corrupt. It will play in a video player. I can move it with the command line or file manager apps without a problem.
I use nftw() to walk the directory subtree. On this file, nftw() returns -1 on encountering the file, before calling my callback function. Since (I thought) the code is only dealing with filenames and not actually opening or reading the file, I don't understand why the file size should be an issue.
The number of open file descriptors is not the problem. Nor the number of files. It was in a subtree of over 5,000 files, but when moving it to one of only 50 it still fails, while the original subtree is processed without error. File permissions are not the problem. This file has the same as all the others. This includes ACL permissions.
The question is: Is file size the issue? Why?
The file system is ext4.
ldd --version /usr/lib/i386-linux-gnu/libc.so
ldd (Ubuntu GLIBC 2.27-3ubuntu1.4) 2.27
Linux version 4.15.0-161-generic (buildd@lgw01-amd64-050)
(gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04))
#169-Ubuntu SMP Fri Oct 15 13:39:59 UTC 2021
Since you're building a 32-bit application, to work properly with files larger than 2 GB you should compile with -D_FILE_OFFSET_BITS=64, so that the 64-bit file-handling syscalls and types are used.
In particular, nftw() calls stat(), which fails with EOVERFLOW if the size of the file exceeds 2 GB: https://man7.org/linux/man-pages/man2/stat.2.html
Also, regarding mmap() (which it seems you're not using, but a comment mentioned it): you can't map all of 4 GB, since some of the address space is reserved for the kernel (typically 1 GB on Linux), and more is used by the stack(s), shared libraries, etc. Maybe you'll be able to map 2 GB at a time, if you're lucky.
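As a minimal sketch (not the original utility; the file name and directory argument are placeholders), the effect of the flag can be seen with a tiny nftw() walker. Compiled 32-bit without -D_FILE_OFFSET_BITS=64, stat() fails with EOVERFLOW on a file over 2 GB and nftw() returns -1, as described in the question; compiled with the flag, the walk completes.

/* walk.c - compile with: gcc -m32 -D_FILE_OFFSET_BITS=64 walk.c -o walk */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>

static int visit(const char *path, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf)
{
    if (typeflag == FTW_F)                        /* regular file */
        printf("%lld\t%s\n", (long long)sb->st_size, path);
    return 0;                                     /* keep walking */
}

int main(int argc, char **argv)
{
    /* walk the given directory (or the current one), up to 20 open fds */
    if (nftw(argc > 1 ? argv[1] : ".", visit, 20, FTW_PHYS) == -1) {
        perror("nftw");                           /* EOVERFLOW without the flag */
        return 1;
    }
    return 0;
}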
I have a Fortran code which writes an unformatted direct-access file. The problem is that the size and the contents of the file change when I switch to a different platform:
The first platform is Windows (32-bit version of the program, built with the Intel compiler, version around 2009).
The second platform is Linux (64-bit version of the program, built with gfortran 4.9.0).
Unfortunately the file produced on Linux cannot be read on Windows. The Linux file is 5-6 times smaller. However, the total number of records written seems to be the same. I opened both files with a hex editor, and the main difference is that the Windows version of the file contains a lot of zeros.
Is there any way to produce exactly the same file on Linux?
If it helps, you can find both files here: https://www.dropbox.com/sh/erjlf5sps40in0e/AAC4XEi-p4nnTNzhyai_ZCZVa?dl=0
I open the file with the command: OPEN(IAST,FILE=ASTFILR,ACCESS='DIRECT',FORM='UNFORMATTED',RECL=80)
I write with the command:
WRITE(IAST,REC=IRC) (SNGL(PHI(I)-REF), I=IBR,IER)
I read with the command: READ(IAST,REC=IRC,ERR=999) (PHIS(I), I=1,ISTEP)
where PHIS is a REAL*4 array
The issue is that by default Intel Fortran specifies that RECL= is in units of words, whereas GFortran uses bytes. There's an Intel Fortran compiler option that you can use to make it use byte units. On Linux that option is
-assume byterecl
For Windows I'm not sure of the exact syntax; it's probably something like
/assume:byterecl
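That unit difference also explains the size gap: with Intel's default, RECL=80 means 80 4-byte words (320 bytes) per record, while gfortran reads RECL=80 as 80 bytes, which would explain why the Windows file is several times larger and padded with zeros. As a sketch of a compiler-independent alternative (assuming one record really is the ISTEP default reals read into PHIS), you can let the compiler compute the record length in its own units with INQUIRE(IOLENGTH=...) instead of hard-coding 80:

INTEGER LRECL
INQUIRE(IOLENGTH=LRECL) (PHIS(I), I=1,ISTEP)
OPEN(IAST,FILE=ASTFILR,ACCESS='DIRECT',FORM='UNFORMATTED',RECL=LRECL)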
If I create 6 graphicsmagick batch files for converting 35k images, this is what I see in htop:
Why aren't more threads being used? I'm guessing that both of those threads are even on the same core (4-core Intel with Hyper-Threading). I can't find a GraphicsMagick config option for this online. Do I blame my OS for poor scheduling?
The only related option in the gm man page, -limit <type> <value>, is a resource limit per image, while I am looking for a way to increase the number of threads used for multiple images, not for a single image.
It is true that the only thing GraphicsMagick says about parallelism concerns OpenMP (which seems to be about multi-threaded processing of a single image). So maybe there is no support for what I am trying to do. My question might be more general, then: "if I launch multiple instances of gm, why do they all run on the same thread?" I'm not sure if this is an OS question or a gm question.
Each line in the batch files is:
convert in/file1.jpeg -fuzz 10% -trim -scale 112x112^ -gravity center -extent 112x112 -quality 100 out/file2.jpeg
I run the batch file with: gm batch -echo on -feedback on data/convert/simple_crop_batchfile_2.txt
I am on GraphicsMagick 1.3.18 2013-03-10 Q8 and Ubuntu 14.10, which, when I try to upgrade with apt-get, tells me: Calculating upgrade... graphicsmagick is already the newest version
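For reference, a minimal sketch of another way to run several gm processes at once, assuming GNU xargs is available and that the input and output directories are the in/ and out/ used above:

cd in && ls *.jpeg | xargs -P 4 -I{} gm convert {} -fuzz 10% -trim -scale '112x112^' -gravity center -extent 112x112 -quality 100 ../out/{}

Here -P 4 keeps 4 gm convert processes running in parallel, one per input file.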
My story here does show the pointlessness of using multiple batch files (although there is a 30% speedup in overall processing time using 2 batch files concurrently over 1)
Turns out I can blame this on the CPU: the picture in the question of core utilization comes from an Intel Xeon X5365 @ 3.00GHz processor. Here is a picture of just 4 concurrent processes on an Intel Xeon E5-2620 v2 @ 2.10GHz:
The OS and software versions are the same on the two machines (as well as the exact same task with the exact same data); the only difference is the CPU. In this case the latter CPU is over 2x as fast (for the case of 4 batch files).
So I have a report system built using Java and iText.
PDF templates are created using Scribus. The Java code merges the data into the document using iText. The files are then copied over to an NFS share, and a Bash script prints them.
I use acroread to convert them to PS, then lpr the PS.
The FOSS application pdftops is horribly inefficient.
My main problem is that the PDF's generated using iText/Scribus are very large.
And I've recently run into the problem where acroread pukes because it hits 4 GB of memory usage on large (300+ page) documents.
(Adobe is painfully slow at updating stuff to 64 bit).
Now I can use Adobe Reader on Windows and use the 'Reduce File Size' option (or whatever it's called), and it greatly (>10x) reduces the size of the PDF (it appears to remove a lot of metadata about form fields and such) and produces a PDF that is basically a print image.
My question is: does anyone know of a good solution/program for doing something similar on Linux? Ideally, it would optimize the PDF, reduce its size, and reduce the PS complexity so the printer could print faster, as it currently takes about 15-20 seconds per page.
To reduce the size of a PDF file, use pdfsizeopt, the software I am developing. pdfsizeopt runs on Linux, Mac OS X, and Windows (and possibly on other systems as well).
pdfsizeopt has lots of dependencies, so it might be a bit cumbersome to install (about 10 minutes of your time). I'm working on making installation easier.
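Once installed, the basic invocation is a single command, something like the following (the exact executable name can differ depending on how you installed it):

pdfsizeopt input.pdf output.pdf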
If you need something quickly, you can try one of its dependencies: Multivalent tool.pdf.Compress, which is a pure Java tool.
Get Multivalent20060102.jar, install Java and run
java -cp Multivalent20060102.jar tool.pdf.Compress input.pdf
There are limitations on what gs -sDEVICE=pdfwrite can do:
it can't generate xref streams (so the PDF will be larger than necessary)
it can't generate object streams (so the PDF will be larger than necessary)
it doesn't deduplicate images or other objects (i.e., if the same image appears multiple times in the input PDF, gs makes a copy in the output for each occurrence)
it emits images suboptimally
it re-samples images to low resolution
it sometimes omits hyperlinks in the PDF
it can't convert some constructs (so the output PDF may be visually different from the input)
Neither pdfsizeopt nor Multivalent's tool.pdf.Compress suffers from these limitations.
For comparison, the Ghostscript command usually suggested for shrinking a PDF looks like this:
gs \
-dCompatibilityLevel=1.4 \
-dPDFSETTINGS=/screen \
-dNOPAUSE \
-dBATCH \
-sDEVICE=pdfwrite \
-sOutputFile=output.pdf \
input.pdf
Ghostscript seems to work for the most part for this issue. I'm having a different problem now with Ghostscript garbling the embedded fonts, but I'll open a new question for that.
I compiled XTLTest as 64-bit and attempted to test some XTLs under Windows 7 x64.
All these tests were done using an XTL with one clip from the WMV showcase, with a timeline sized at 1440x1080.
Buffering set to 300: plays back fine.
Buffering set to 600: got a 'can't run graph' error. Recompiled as large-address-aware (which should be set by default on 64-bit apps); same thing.
Tested at 310 and it worked fine.
Tried playing out 2 different instances of 64-bit XTLTest at the same time with 310 buffering, and the second one fails with 'can't run graph'.
Buffering set to 80: I was able to play 4 instances of XTLTest using a combined 4 GB of memory. Launch any more instances and I get 'can't run graph'.
Compiled a .NET application targeting Any CPU using DirectShowLib, and confirmed it's running as a native 64-bit app. I was able to load 4 XTLs at 80 buffering until I got
System.Runtime.InteropServices.COMException (0x8007000E): Not enough storage is available to complete this operation.
So I can only conclude that the DES subsystem has a 4 GB memory limit for all applications combined.
Is this true? If so, is this a DES limit or a DirectShow limit, and is there any way to work around it?
best,
Tuviah Snyder
Lead programmer, MediaWan
Solid State Logic, Inc
I haven't worked with DES directly before, but my impression has always been that it was deprecated quite a long time ago. The COM objects that it is made up of are likely 32-bit.