Convert PDF to TIFF with same quality - Linux

We are using the following shell script to convert PDF attachments to TIFF, but we are having some quality issues. Can you please check the shell script below and let us know where we can improve the quality, and also compress the file size as much as possible during conversion?
shell_exec('/usr/bin/gs -q -sDEVICE=tiffg4 -r204x392 -dBATCH -dPDFFitPage -dNOPAUSE -sOutputFile=america_out7.tif america_test.pdf');
We have tried the following command and the quality seems better:
gs -q -r1233x1754 -dFitPage -sPAPERSIZE=a4 -dFIXEDMEDIA -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -sDEVICE=tiffg4 -sOutputFile=america_out7.tif america_test.pdf
But when we send the fax via FreeSWITCH, we get the error below:
"Fax processing not successful - result (11) Far end cannot receive at the resolution of the image. "
So we need your help to resolve this issue; please suggest any other approach.
Awaiting your response.

Fax machines only support certain resolutions. If you create a Group 3 or Group 4 CCITT compressed TIFF, many fax applications can read that, extract the compressed image data, and send it directly to a compatible fax machine.
"Standard" is 204x98, "fine" is 204x96, "superfine" is 400x391.
You've chosen a resolution of 1233x1754. That's 3 times higher than any fax specification I know of supports, so of course your receiving fax machine can't cope with it. Note that there is no fax standard (unless there's been a new one, which seems unlikely) that supports 600x600 either, though it's entirely possible that specific manufacturers support such a thing between their own equipment.
Naturally, the higher the resolution, the better quality your rendered output will be, which is why your second, higher-resolution command looks better.
Everyone wants the magical goal of "better quality and lower filesize" but there is no such thing. This is always a tradeoff.
You will probably find that using superfine resolution (400x391) will give you better quality, at the cost of larger file sizes. You can't go higher than that with ordinary fax.
Note that the PDFSETTINGS switch has no effect except with the pdfwrite device, which is used to create PDF files, not read them.
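For example, something along these lines (a sketch combining the options from your two commands, with the resolution dropped to fine and the ineffective PDFSETTINGS switch removed; use -r400x391 for superfine if the receiving end supports it) should stay within what fax equipment accepts:
gs -q -sDEVICE=tiffg4 -r204x196 -dPDFFitPage -dFIXEDMEDIA -sPAPERSIZE=a4 -dNOPAUSE -dBATCH -sOutputFile=america_out7.tif america_test.pdf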
This is also off-topic for Stack Overflow, since this is not a programming question. Not even a little bit, and Stack Overflow is not a generalised support forum.

Related

Ghostscript Combine PDFs and Multithread/Core

I know there are a couple of questions and threads out there about similar stuff, but none of them worked for me.
I'm trying to combine ~1000 PDF files into one. I tried a couple of tools, but only gs (Ghostscript) does proper compression.
My problem is that multithreading is not working. I've got 24 cores and would like to use e.g. 8 for the task, but top shows me that it still uses only one. My command:
gs -q -dNOPAUSE -dNOPROMPT -q -dBATCH -dNumRenderingThreads=8 -dBandHeight=100 -dBandBufferSpace=500000000 -sBandListStorage=memory -dBufferSpace=1000000000 -sDEVICE=pdfwrite -sOutputFile=combined_gs.pdf sourcefiles/*.pdf
I have to speed this up a bit, as it takes around 60 seconds and I need this on the fly.
Any suggestions?
The pdfwrite device doesn't use threading (it would be difficult for it to do so). The clue is in the name 'NumRenderingThreads': pdfwrite doesn't render.
Since it isn't rendering, BandHeight, BandBufferSpace, BandListStorage and BufferSpace will also have no effect. (You've also specified -q twice.)
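Stripped of the switches that have no effect with pdfwrite, the command reduces to something like this (a sketch; it should produce the same output as your original):
gs -q -dNOPAUSE -dNOPROMPT -dBATCH -sDEVICE=pdfwrite -sOutputFile=combined_gs.pdf sourcefiles/*.pdf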
Please be aware that Ghostscript and the pdfwrite device do not 'manipulate' PDF input; they do not combine, concatenate or anything similar. What they do is interpret all the input, creating a set of graphics primitives; these primitives are then re-assembled into a brand new PDF output file. The new output file has nothing in common with any of the inputs, and our goal is that the visual appearance should be the same. While we do handle a number of non-marking objects from the input, these are of secondary importance.
As will be obvious, this is a much more complex process than treating the contents of a PDF file as a series of building blocks which can be rearranged, which is why it's slower. To be honest, reading, interpreting and rewriting 1000 files in 1 minute seems pretty fast to me.

Converting Audio From Unknown Format

I would like to create a utility in either PHP or Perl to convert an audio file created by Nortel's CallPilot voice mail system into a wave file. The problem is that the format, which has the .vbk file extension, is unknown to virtually any audio player. To date, I have not found one that will play a .vbk file. I've looked at audio file conversion libraries in CPAN and tried many of them; they don't recognize the file. I was not successful with PHP's audio format manipulation either. Nortel does provide a converter, but it does not suit my needs. I would like to have this run via cron on a CentOS system. I don't know how to reverse engineer this format, and there seem to be just scraps of info on it on the web. This page indicates that it is "based on the H.232 format":
https://www.odesk.com/o/jobs/job/Reverse-Engineer-Nortel-VBK-Audio-Format_~~f501f11679f3f6bb/
I know this is a very old thread, but I've recently been looking into converting Nortel's vbk format as well. I was able to import the vbk files into Audacity with the raw data option, using Encoding: U-Law, Byte order: little-endian, Channels: 1 Channel (Mono), Sample rate: 8000 Hz. Not sure if they have multiple formats for their vbk files, but mine were from a BCM50 phone system.
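If that import works for your files too, the same conversion can be scripted with sox (a sketch assuming the same raw parameters: 8-bit µ-law, mono, 8000 Hz; if your files turn out to carry a header, you may need to strip it first):
sox -t raw -e mu-law -b 8 -r 8000 -c 1 voicemail.vbk voicemail.wav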
Well, this is the joy of closed proprietary systems. But there is a chance they could play nice. Try to contact Callpilot and see if they'll give you the format specs. It's worth a shot.
As for reverse engineering, you need to be able to generate known content: a constant tone at 60Hz for exactly 1 second, then at 50Hz, then for 10 seconds. Compare them. Isolate the data from the metadata. There is going to be compression involved, so try a handful of common compression schemes; research into Nortel's practices will probably tell you more. If you can feed that into a player and get a tone back out, you're on your way.
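Generating the reference tones is the easy part; sox can synthesize them (a sketch: record this same tone into the voicemail system, save the captured .vbk, and compare its bytes against the known signal):
sox -n -r 8000 -c 1 tone60.wav synth 1 sine 60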
There are probably more informed and structured ways to go about reverse engineering, but in my experience it's a lot of trial and error.

Secure streaming video with dynamic watermark

What are some scalable and secure ways to provide a streaming video to a recipient with their name overlayed as a watermark?
Some of the comments here are very good. Using libavfilter is probably a good place to start. Watermarking every frame is going to be very expensive because it requires decoding and re-encoding the entire video for each viewer.
One idea I'd like to expand upon is watermarking only portions of the video. I will assume you're working with H.264 video, which requires far more CPU cycles to decode and encode than older codecs. I think you could mark 1 or 2 streams per CPU core in real time. If you can reduce your requirements to 10 seconds marked out of every 100, then you're talking about 10-20 per core, so about 100 per server. It's probably not the performance you're looking for.
I think some companies sell watermarking hardware for TV operators, but I doubt it's any cheaper than a rack of servers, and it's far less flexible.
I think you want to use ffmpeg's libavfilter library. Basically it allows you to overlay an image on top of a video. There is an example showing how to insert a transparent PNG logo in the bottom left corner of the input. You can interface with the library from C++ or drive it from a shell on the command line.
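With a recent ffmpeg build the overlay case is a one-liner (a sketch: input.mp4, name.png and the 10-pixel margins are placeholders, and a per-recipient PNG of the viewer's name would be generated beforehand):
ffmpeg -i input.mp4 -i name.png -filter_complex "overlay=10:main_h-overlay_h-10" output.mp4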
In older versions of ffmpeg you will need to use an extension library called watermark.so, often located in /usr/lib/vhook/watermark.so.
Depending on what your content is, you may want to consider using invisible digital watermarking as well. It embeds a digital sequence into your video which is not visually detectable. Even if someone were to remove the visible watermark, the invisible watermark would still remain. If a user were to redistribute your video, invisible watermarking would indicate the source of the redistribution.
Of course there are also companies which provide video content management, but I get the sense you want to do this yourself. Doing the watermarking in real time is going to be very resource intensive, especially as you scale up. I would look to do some type of predictive watermarking.

Framebuffer Documentation

Is there any documentation on how to write software that uses the framebuffer device in Linux? I've seen a couple of simple examples that basically say: "open it, mmap it, write pixels to the mapped area." But there is no comprehensive documentation on how to use the different ioctls for it or anything. I've seen references to "panning" and other capabilities, but "googling it" gives way too many hits of useless information.
Edit:
Is the code the only documentation from a programming standpoint, as opposed to "user's howto on configuring your system to use the fb" documentation?
You could have a look at the source code of fbi, an image viewer which uses the Linux framebuffer. You can get it here: http://linux.bytesex.org/fbida/
-- It appears there might not be too many options for programming the fb from user space on a desktop beyond what you mentioned. This might be one reason why some of the docs are so old. Look at this howto for device driver writers, which is referenced from some official Linux docs: www.linux-fbdev.org/HOWTO/index.html . It does not reference too many interfaces, although looking at the Linux source tree does offer larger code examples.
-- opentom.org/Hardware_Framebuffer is not for a desktop environment. It reinforces the main methodology, but it does seem to avoid explaining all the ingredients necessary for the "fast" double-buffer switching it mentions. Another page, for a different device, which also leaves some key buffering details out, is wiki.gp2x.org/wiki/Writing_to_the_framebuffer_device; it does at least suggest you might be able to use fb1 and fb0 to engage double buffering (on that device; on a desktop, fb1 may not be available or may access different hardware), that using the volatile keyword might be appropriate, and that we should pay attention to vsync.
-- asm.sourceforge.net/articles/fb.html has assembly language routines that appear to just do the basics: querying, opening, setting a few parameters, mmap, drawing pixel values to storage, and copying over to the fb memory (making sure to use a short stosb loop, I suppose, rather than some longer approach).
-- Beware of 16 bpp comments when googling the Linux frame buffer: I used fbgrab and fb2png during an X session to no avail. Each rendered an image that suggested a snapshot of my desktop screen as if the picture had been taken with a very bad camera, underwater, and then overexposed in a dark room. The image was completely broken in color and size, and missing much detail (dotted all over with pixel colors that didn't belong). It seems that /proc and /sys on the computer I used (a new kernel with at most minor modifications, from a PCLOS derivative) claim that fb0 uses 16 bpp, and most things I googled stated something along those lines, but experiments led me to a very different conclusion. Besides the failures of these two standard frame buffer grab utilities (for the versions held by this distro), which may have assumed 16 bits, I had a successful test result treating the frame buffer pixel data as 32 bits. I created a file from data pulled in via cat /dev/fb0. The file's size ended up being 1920000. I then wrote a small C program to try to manipulate that data (under the assumption it was pixel data in some encoding or other). I nailed it eventually, and the pixel format matched exactly what I had gotten from X when queried (TrueColor RGB 8 bits, no alpha, but padded to 32 bits). Notice another clue: my screen resolution of 800x600 times 4 bytes gives exactly 1920000. The 16-bit approaches I tried initially all produced an image as broken as fbgrab's, so it's not as if I was simply looking at the wrong data. [Let me know if you want the code I used to test the data. Basically I just read in the entire fb0 dump and then spat it back out to a file, after adding a header "P6\n800 600\n255\n" to create a suitable ppm file, looping over all the pixels to manipulate their order or expand them: the end result for me was to drop every 4th byte and switch the first and third bytes in every 4-byte unit. In short, I turned the apparent BGRA fb0 dump into an RGB ppm file, which can be viewed with many picture viewers on Linux. A one-line alternative using ImageMagick is sketched after this list.]
-- You may want to reconsider your reasons for wanting to program using fb0. You may not achieve any worthwhile performance gains over X (this was my, if limited, experience) while giving up the benefits of using X. This might also account for why few code examples exist.
-- Note that DirectFB is not fb. DirectFB has of late gotten more love than the older fb, as it is more focused on the sexier 3d hw accel. If you want to render to a desktop screen as fast as possible without leveraging 3d hardware accel (or even 2d hw accel), then fb might be fine but won't give you anything much that X doesn't give you. X apparently uses fb, and the overhead is likely negligible compared to other costs your program will likely have (don't call X in any tight loop, but instead at the end once you have set up all the pixels for the frame). On the other hand, it can be neat to play around with fb as covered in this comment: Paint Pixels to Screen via Linux FrameBuffer
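As promised above, the raw-dump-to-image conversion from the 16/32 bpp experiment can also be done in one step with ImageMagick (a sketch assuming the 800x600, 32-bit BGRA layout found there; adjust to whatever your fb0 actually uses):
cat /dev/fb0 > fb.raw
convert -size 800x600 -depth 8 bgra:fb.raw fb0.png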
Check the MPlayer sources.
Under the /libvo directory there are a lot of video output plugins used by MPlayer to display multimedia. There you can find the fbdev plugin (the vo_fbdev* sources), which uses the Linux frame buffer.
There are a lot of ioctl calls, with the following codes:
FBIOGET_VSCREENINFO
FBIOPUT_VSCREENINFO
FBIOGET_FSCREENINFO
FBIOGETCMAP
FBIOPUTCMAP
FBIOPAN_DISPLAY
It's not documentation as such, but it is surely a good reference implementation.
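If you want to poke at those ioctls from the command line before writing code, the fbset utility (which talks to the same FBIOGET_VSCREENINFO/FBIOPUT_VSCREENINFO interface) will print the current mode, e.g.:
fbset -fb /dev/fb0 -i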
Look at the source code of any of: fbcat, fbida, fbterm, fbtv, the DirectFB library, libxineliboutput-fbe, ppmtofb, xserver-fbdev; all are Debian packages. Just fetch them with apt-get source from the Debian repositories. There are many others...
Hint: search for "framebuffer" in the package descriptions using your favorite package manager.
OK, even if reading the code is sometimes called "guru documentation", it can be a bit too much to actually do.
The source to any splash screen (i.e. during booting) should give you a good start.

Writing Color Calibration Data to a TIFF or PNG file

My custom homebrew photography processing software, running on 64-bit GNU/Linux, writes out PNG and TIFF files. These are to be sent to a quality printing shop to be made into fine art. Working with interior designers, it's important to get the colors just right!
The print shops usually have no trouble with TIFFs and PNGs made by commercial software such as Photoshop. Even though I have the TIFF 6.0 specs, PNG specs, and other info in hand, it is not clear how to include color calibration data or implement a color management system on Linux. My files are often rejected as faulty, without sufficient error reports to make fixes.
This has been a nasty problem for many for a while. Even my contacts at the Hollywood postproduction studios are struggling with this issue. One studio even wanted to hire me to take care of their color calibration, thinking I was the expert; but no, I am just as blind and lost as everyone!
Does anyone know of good code examples, detailed technical information, or have any other enlightenment? Or time to switch to pure Apple?
Take a look at LittleCMS
http://www.littlecms.com/
This page has the code for applying it to TIFF
http://www.littlecms.com/newutils.htm
The basic thing you need to know is that color profile data is something you need to store in the metadata of the file itself.
There is a consultant called Charles Poynton who specialises in this area. I work for one of the post production studios you mention (albeit in London, not Hollywood), and have seen him speak on the subject a couple of times. His website contains a lot of the material he presents, and you might find something of use there. He also has a book called Digital Video and HDTV Algorithms and Interfaces, which is not as heavy as the title might suggest! While these resources might not answer your question directly, they might provide a springboard to other solutions.
More specifically, which libraries are you using to write the PNG and TIFF files? You mention they are homebrew, but how custom are they exactly? Postprocessing the images in an image manipulation program (such as ImageMagick or dcraw) might allow you to inject this information into the header more successfully.
Sorry, I don't have any specific answers, but maybe something that will point you a bit further in the right direction...
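To expand on the ImageMagick suggestion above: something like this embeds an ICC profile into an existing file (a sketch; printer.icc stands in for whatever profile your print shop expects):
convert input.tif -profile printer.icc output.tif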
As a GNU/Linux user, you’ll want to consider DispcalGUI – http://dispcalgui.hoech.net/ – a GNOME-based GUI that centralizes color management, ICC profile management, and (crucially for your case) device calibration. It can talk to well-known pro- and mid-level hardware, e.g., i1, X-Rite, Spyder, etc.
But before you get into that – you say you are generating your files to spec; are you validating your output using a test suite specific to the format in question? If not, here are three to get you started:
imagetestsuite supports the well-known formats: https://code.google.com/p/imagetestsuite/w/list?can=1&q=
The Luminous* test suite is a JIRA plugin, if that’s your thing: https://marketplace.atlassian.com/plugins/com.luminouslead.plugin.jira.testsuite.LuminousTestSuite
FLOSS decoder implementations often have one you can use, e.g. OpenJPEG – https://code.google.com/p/openjpeg/wiki/TestSuiteDocumentation
But even barring all of those, it seems like your problem is with embedded ICC data – which is two specs in one. First, there’s the host image-file format, and they all handle embedding differently (meaning the ICC data will likely look totally different when embedded in a TIFF than, say, a JPEG or WebP file). Second, there is the ICC spec itself. It is documented here: http://color.org/v4spec.xalter – and you may also want to look at the source for the aforementioned dispcalGUI, which includes a very legible and hackable ICC profile class in Python: http://sourceforge.net/p/dispcalgui/code/HEAD/tree/trunk/dispcalGUI/ICCProfile.py
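As a quick check on the first of those two specs, a tool like exiftool will show whether (and which) ICC data actually ended up embedded in a given file; it works the same on TIFF and PNG:
exiftool -ICC_Profile:all image.tif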
Full disclosure: I have contributed to that very ICC profile class, to which I just linked in that last ¶
That’s the basics (many of which you have no doubt covered)... beyond that, if you post more information about what exactly is going wrong, I’d be interested to look it over. Good luck with it either way.
* NB. This project is unrelated to the long-standing photography website, “the Luminous Landscape”
