How to extract metadata and structure from Fireworks PNG files for conversion?

Since Adobe has declared Fireworks dead, it is apparent that many of us are effectively locked into Fireworks by the proprietary layered PNG format it uses. I have been hunting for options to extract or convert my documents to either PSDs or some other layered SVG structure (perhaps similar to what Sketch does on OS X).
Does anyone have ideas for extracting the contents of Fireworks PNGs programmatically? So far I haven't found any libraries that make this feasible, nor any discussion of how the metadata in a Fireworks PNG is structured that would help reverse engineer the format for proper conversion and extraction.
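A Fireworks PNG is still a valid PNG at the container level, so the proprietary data presumably lives in extra (non-standard) chunks alongside the usual IHDR/IDAT/IEND. A first reverse-engineering step is simply to list the chunk types and sizes and see what exists beyond the standard set. Below is a minimal sketch of a chunk walker in Haskell (the file name is a placeholder, and it assumes a well-formed PNG); any language with binary I/O works the same way.

import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BC
import Data.Word (Word32)

-- Decode a 4-byte big-endian unsigned integer.
be32 :: BS.ByteString -> Word32
be32 = BS.foldl' (\acc w -> acc * 256 + fromIntegral w) 0 . BS.take 4

-- Walk the chunk list of a PNG: each chunk is a 4-byte big-endian length,
-- a 4-byte ASCII type, the data, and a 4-byte CRC.
chunks :: BS.ByteString -> [(BS.ByteString, BS.ByteString)]
chunks bs
  | BS.length bs < 12 = []
  | otherwise =
      let len   = fromIntegral (be32 bs)
          ctype = BS.take 4 (BS.drop 4 bs)
          cdata = BS.take len (BS.drop 8 bs)
      in (ctype, cdata) : chunks (BS.drop (12 + len) bs)

main :: IO ()
main = do
  png <- BS.readFile "design.fw.png"   -- placeholder file name
  let body = BS.drop 8 png             -- skip the 8-byte PNG signature
  mapM_ (\(t, d) -> putStrLn (BC.unpack t ++ "  " ++ show (BS.length d) ++ " bytes"))
        (chunks body)

Any chunk types that are not part of the PNG specification are where the Fireworks-specific data would live; decoding their payloads is the real reverse-engineering work.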

Related

How to convert an Enhanced Windows Metafile (EMF) to a JPG or PNG without losing quality?

I am using Tableau to do some data representations, and the only good-quality image export Tableau allows is *.emf.
Unfortunately, the online tool I use to put the report together (Canva) does not support the EMF format.
When I convert the file to JPG or PNG, the quality is drastically reduced :(
How can I overcome this? I tried many things, such as opening the EMF in Illustrator and saving it back with CMYK colors and 300 dpi, but nothing seems to keep the crisp quality of the original EMF file.
User-friendly solution:
Inkscape opens Enhanced Windows Metafiles, and many other vector graphics file formats.
It exports to PNG and lets you choose the output resolution.
It is open source and available for Linux, Windows, and Mac OS X.
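If you have many EMF files, Inkscape can also be driven from the command line, so the conversion can be scripted. A rough batch sketch, assuming Inkscape 1.x (which accepts --export-type, --export-filename, and --export-dpi; 0.92 used --export-png instead) and placeholder file names:

import System.FilePath (replaceExtension)
import System.Process (callProcess)

-- Convert one EMF file to a 300 dpi PNG via the Inkscape command line.
emfToPng :: FilePath -> IO ()
emfToPng emf =
  callProcess "inkscape"
    [ emf
    , "--export-type=png"
    , "--export-filename=" ++ replaceExtension emf "png"
    , "--export-dpi=300"
    ]

main :: IO ()
main = mapM_ emfToPng ["chart1.emf", "chart2.emf"]   -- placeholder file names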
Tableau's image export feature does not provide many options. When I need high-quality images, I use one of the methods below, depending on the quality I need and the tools available to me at the time:
Screenshot method: If you have a large screen, taking a screenshot directly from Tableau yields better images than the exported ones. If my viz is published to the web, I sometimes enlarge the graphic in my web browser and then take the screenshot.
Converting from PDF: Since PDF can contain vector objects, Tableau's PDF exports are usually high quality. If you cannot use the PDF files directly, you can try converting them to PNG or JPG with online or desktop tools, but be careful about your confidential files when using online services :)
There are more ways to convert from PDF, but they are usually more complicated since they involve some Photoshop steps. I am not sure how practical they are for a large number of files, but you may still want to check one of them: https://community.tableau.com/thread/120134

How can I find and extract an image from inside a proprietary file format?

I have cached preview files from Capture One (a photo processing program, similar to Lightroom) where I have lost the originals. Capture One saves previews in their proprietary .cop format and I'm not sure how to go about identifying what's what in there.
The strings ETIFFTagInteropIFD and JPEG Embedded TIFF Tags appear in a hex view, which suggests that a TIFF is somehow embedded in there.
I do have original JPEG files with their corresponding COP files, but when comparing them there isn't much that's similar, which makes sense I guess, since the preview COP file is roughly half the size of the original.
What conclusions can I draw from this and what are some good tools for going further?
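Given those strings, a blunt first step is to scan the file for JPEG start-of-image (FF D8 FF) and end-of-image (FF D9) markers and dump each byte range between them as a candidate .jpg. A rough sketch (the file name is a placeholder, and it assumes the previews are stored as plain JPEG streams rather than re-compressed by the container):

import qualified Data.ByteString as BS

-- All offsets at which a byte pattern occurs.
offsetsOf :: BS.ByteString -> BS.ByteString -> [Int]
offsetsOf pat = go 0
  where
    go base hay = case BS.breakSubstring pat hay of
      (pre, suf)
        | BS.null suf -> []
        | otherwise   ->
            let off = base + BS.length pre
            in off : go (off + 1) (BS.drop (BS.length pre + 1) hay)

main :: IO ()
main = do
  cop <- BS.readFile "preview.cop"                         -- placeholder file name
  let starts = offsetsOf (BS.pack [0xFF, 0xD8, 0xFF]) cop  -- JPEG start-of-image
      ends   = offsetsOf (BS.pack [0xFF, 0xD9]) cop        -- JPEG end-of-image
  -- Pair each start with the next end marker and write the slice out.
  sequence_
    [ BS.writeFile ("candidate-" ++ show i ++ ".jpg")
                   (BS.take (e + 2 - s) (BS.drop s cop))
    | (i, s) <- zip [0 :: Int ..] starts
    , e <- take 1 (dropWhile (<= s) ends)
    ]

Some of the candidates may be garbage or thumbnails, but anything that opens in an image viewer tells you where a preview sits and how it is wrapped.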

How to convert Esri ASCII (.asc) to use on a leaflet map?

As part of my software I have to somehow convert and display Esri ASCII (.asc) files on a Leaflet map.
The files are in the filesystem and the backend is in nodeJS.
.asc is a raster format, and gdal_translate manages to translate the files the way I like. Unfortunately I can't use GDAL directly in Node, and as far as I can see, gdal-node is not able to convert the files. Manually converting files is not an option.
My data is always georeferenced, so if I get an image, it has to be placed at the right spot on the world map.
Help would be highly appreciated, because I feel kinda overwhelmed by all this geo stuff.
If you want the normal set of features Leaflet offers, like zooming, what you need is called a map tile service.
There are a few map tile standards that Leaflet supports. Generally though, making map tiles is complicated, particularly if your ASCII grids are in lat/long but your background layer from Google, Bing, or OSM is in web Mercator, so you probably don't want to write a tile service in Node yourself.
Look at GeoServer and see if that will fit into your setup.
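If tiles turn out to be overkill (for example, if each grid is small enough to render as a single image), note that the .asc header already carries the georeferencing needed for Leaflet's L.imageOverlay: the bounds run from (yllcorner, xllcorner) to (yllcorner + nrows * cellsize, xllcorner + ncols * cellsize). A sketch of reading the header, assuming the common ncols/nrows/xllcorner/yllcorner/cellsize keys (the xllcenter/yllcenter variants would need a half-cell shift) and a placeholder file name:

import Data.Char (toLower)
import qualified Data.Map.Strict as M

-- Parse the "key value" header lines of an Esri ASCII grid.
headerMap :: String -> M.Map String Double
headerMap src = M.fromList
  [ (map toLower k, read v)
  | [k, v] <- map words (take 6 (lines src))
  ]

-- South-west and north-east corners as (lat, lng) pairs for L.imageOverlay.
gridBounds :: M.Map String Double -> ((Double, Double), (Double, Double))
gridBounds h = ((yll, xll), (yll + rows * cs, xll + cols * cs))
  where
    cols = h M.! "ncols"
    rows = h M.! "nrows"
    xll  = h M.! "xllcorner"
    yll  = h M.! "yllcorner"
    cs   = h M.! "cellsize"

main :: IO ()
main = do
  src <- readFile "grid.asc"   -- placeholder file name
  print (gridBounds (headerMap src))

The same header arithmetic is only a few lines in Node: render the grid values to a PNG however you like, then hand the image URL and those corners to L.imageOverlay.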

Retrieve the pixel values of an image with Haskell

Is there a way, or a library available, that can load an image (JPEG, PNG, etc.) and read the pixel values of that image into a list or matrix? I'd like to do some experiments with image and pattern recognition.
A little nudge in the right direction would be appreciated.
You can use JuicyPixels, a native Haskell library for image loading. The result is rather easy to convert to Repa as well (manually or with JuicyPixels-repa).
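For the original question (getting pixel values into a list or matrix), a minimal JuicyPixels sketch looks roughly like this (the file name is a placeholder):

import Codec.Picture

-- Load an image, normalise it to 8-bit RGB, and build a list of rows of
-- (r, g, b) triples.
pixelMatrix :: FilePath -> IO (Either String [[(Int, Int, Int)]])
pixelMatrix path = do
  result <- readImage path
  pure (fmap (rows . convertRGB8) result)
  where
    rows img =
      [ [ let PixelRGB8 r g b = pixelAt img x y
          in (fromIntegral r, fromIntegral g, fromIntegral b)
        | x <- [0 .. imageWidth img - 1] ]
      | y <- [0 .. imageHeight img - 1] ]

main :: IO ()
main = do
  m <- pixelMatrix "photo.png"          -- placeholder file name
  either putStrLn (print . take 1) m    -- print the first row, or the error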
I've used the repa-devil package for this in the past. It lets you work with a bunch of formats using Developer's Image Library (DevIL). You can read and write all the formats you are likely to care about.
The actual image data is given as a Repa array. Repa is a great library for array operations and makes it very easy to write parallel code.
Try the repa library. There is also a small tutorial here.
Here is a new Haskell Image Processing library, which uses JuicyPixels for encoding, provides an interface for you to read and write all of the supported formats very easily, and lets you manipulate images in any way you can imagine. Just as a simple example of how easy it is:
>>> img <- readImageRGB "image.jpg"
>>> writeImage "image90.png" $ rotate90 img
The above will read a JPEG image in the RGB color space, rotate it 90 degrees clockwise, and save it as a PNG image.
Oh yeah, it can also use Repa, so you get parallel processing for free as well.
GTK supports loading and saving JPEG and PNG. [AFAIK, no other formats though.] There is a Haskell binding named Gtk2hs. It supports vector graphics very well, but bitmap graphics, while supported, aren't especially easy to figure out. So I wrote AC-EasyRaster-GTK, which wraps GTK in a more friendly interface. (It still needs Gtk2hs though.) The only real downside is that Gtk2hs is a bit fiddly to set up on Windows. (And it's arguably overkill to install an entire GUI toolkit just to load and save image files.)
I gather the "GD" library supports writing several image formats, and is quite small and simple. I believe Hackage has Haskell bindings for GD as well. I haven't tried this personally.
There is a file format called PPM which is deliberately designed to be ridiculously easy to implement (it's a tiny header and then an array of pixels), and consequently there are at least a dozen packages on Hackage which implement it (including my own AC-PPM). There are also lots of programs out there which can display and/or convert images in this format.
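To make that concrete, here is roughly what a writer for the plain-text "P3" flavour of PPM looks like; apart from the binary "P6" variant, this is essentially the whole format:

-- Write rows of (r, g, b) values (each 0-255) as a plain-text PPM ("P3")
-- file: a tiny header followed by the pixel values.
writePPM :: FilePath -> [[(Int, Int, Int)]] -> IO ()
writePPM path rows = writeFile path (unlines (header ++ body))
  where
    height = length rows
    width  = if null rows then 0 else length (head rows)
    header = ["P3", show width ++ " " ++ show height, "255"]
    body   = [unwords [show c | (r, g, b) <- row, c <- [r, g, b]] | row <- rows]

main :: IO ()
main = writePPM "gradient.ppm"   -- placeholder file name
  [ [ (x * 255 `div` 63, y * 255 `div` 63, 128) | x <- [0 .. 63] ]
  | y <- [0 .. 63] ]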

visualization of compressed (deflated, gzipped) content structures

I have some ideas I would like to experiment with relating to data compression, but I am finding it difficult to decipher some parts of how the standards are applied "in real life". I would like to look at some sample compressed files to observe how the blocks are arranged and how the Huffman tree(s) are structured.
Are there any tools in existence which can help visualize this for a given compressed file (zip/gzip/deflate, etc.)? I'm picturing something like a tree view or some form of graph visualizer.
You might be interested in this (if you are still interested that is :-P)
http://jvns.ca/blog/2013/10/24/day-16-gzip-plus-poetry-equals-awesome/
I made an "entropy image" tool. The entropy_image tool replaces each pixel with the (estimated) number of bits necessary to encode that pixel using range coding or Huffman compression.
I hope this isn't the only compression visualization tool in the world.
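As a starting point for your own experiments, the framing is not hard to read by hand: RFC 1952 defines the gzip header, and RFC 1951 defines the deflate block header (1 bit BFINAL, then 2 bits BTYPE: 00 = stored, 01 = fixed Huffman, 10 = dynamic Huffman, packed LSB-first). Here is a sketch that skips the gzip header and reports the first block's header bits; walking every block and dumping the Huffman trees requires decoding the streams themselves, which is where a deflate disassembler like Mark Adler's infgen comes in. The file name is a placeholder.

import qualified Data.ByteString as BS
import Data.Bits (shiftR, testBit, (.&.))
import Data.Word (Word8)

-- Skip the RFC 1952 gzip header: 10 fixed bytes plus optional
-- FEXTRA / FNAME / FCOMMENT / FHCRC fields selected by the FLG byte.
skipGzipHeader :: BS.ByteString -> Maybe BS.ByteString
skipGzipHeader bs
  | BS.length bs < 10                              = Nothing
  | BS.index bs 0 /= 0x1f || BS.index bs 1 /= 0x8b = Nothing  -- not gzip
  | otherwise = Just
      . skipIf 0x02 (BS.drop 2)     -- FHCRC: two-byte header CRC
      . skipIf 0x10 skipZString     -- FCOMMENT: zero-terminated string
      . skipIf 0x08 skipZString     -- FNAME: zero-terminated string
      . skipIf 0x04 skipExtra       -- FEXTRA: two-byte length plus payload
      $ BS.drop 10 bs
  where
    flg = BS.index bs 3
    skipIf flag f rest = if flg .&. flag /= 0 then f rest else rest
    skipZString = BS.drop 1 . BS.dropWhile (/= 0)
    skipExtra r =
      let xlen = fromIntegral (BS.index r 0) + 256 * fromIntegral (BS.index r 1)
      in BS.drop (2 + xlen) r

-- BFINAL and BTYPE of the first deflate block (bits are packed LSB-first).
firstBlockHeader :: Word8 -> (Bool, Word8)
firstBlockHeader b = (testBit b 0, (b `shiftR` 1) .&. 0x03)

main :: IO ()
main = do
  gz <- BS.readFile "sample.gz"   -- placeholder file name
  case skipGzipHeader gz of
    Nothing      -> putStrLn "not a gzip stream"
    Just deflate -> do
      let (final, btype) = firstBlockHeader (BS.head deflate)
      putStrLn ("BFINAL = " ++ show final)
      putStrLn ("BTYPE  = " ++ show btype
                ++ " (0 = stored, 1 = fixed Huffman, 2 = dynamic Huffman)")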

Resources