What image format can generate mipmaps in OpenGL ES?

As the title says: I used to use .dds files and they worked, but now I use .png files. Can mipmaps be generated from those? The function I am using is glTexImage2D(…). Or would gluBuild2DMipmaps(…) be a better choice?

DDS is an image format that contains precalculated mipmaps. As far as quality goes, precalculated mipmaps offer the best quality, since they can be downsampled offline with advanced filter kernels like Lanczos, without having to care about runtime efficiency.
PNG does not contain additional mipmap levels, so you have to compute the mipmaps at runtime. You should, however, not use gluBuild2DMipmaps for this. For one, this function is known to exhibit buggy behavior under certain conditions, and furthermore it unconditionally resamples all images to power-of-two dimensions, even though non-power-of-two dimensions have been perfectly fine for texture images since OpenGL-2.
Instead you should load the base level image with glTexImage2D(…) and use glGenerateMipmap(…) (available since OpenGL-3) to build the mipmap image pyramid from there. If you're not using OpenGL-3, you can fall back to the SGIS_generate_mipmap extension, if available.
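For illustration, a minimal sketch of that path (assuming a current OpenGL 3.0+ context, with width, height and the RGBA pixel data coming from whatever PNG decoder you use; the function name here is mine):

/* Upload the decoded PNG as the base level, then let the GL
   build the rest of the mipmap pyramid. */
GLuint create_mipmapped_texture(const void *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Base level only; no need to supply the other levels yourself. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Derive all remaining levels from the base image. */
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    return tex;
}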
However, be advised that online mipmap generation may not yield results as good as offline generation.
Another possible approach would be the use of JPEG2000 images; the nature of JPEG2000 image encoding results in an image pyramid being readily available. OTOH JPEG2000 is very costly to encode and decode.

Is there a binary kind of SVG?

It just seems to me that when writing code for dynamic data visualization, I end up doing the same things over and over in different languages/platforms. Now if I had a cross-platform language (which I do) and something like a binary version of SVG, I could make my code target that and use/create interpreters for whatever platform I currently need to use it on.
The reason I don't want SVG is that the plaintext part makes it too slow for my purposes. I could of course just create my own intermediary format, but if there is something already out there that's widely implemented, that's less work for me!
Depending on what you mean by “too slow”, the answer varies:
Filesize too large
Officially, the closest thing SVG has to a binary format is SVGZ, which is a gzipped SVG file with the .svgz extension. All conforming SVG viewers should be able to open it. Making one is simple on *nix systems:
gzip yourfile.svg && mv yourfile.svg.gz yourfile.svgz
You could also try Brotli compression, which tends to produce smaller files at the cost of more compression time.
Including other assets is inefficient
SVG can only bundle bitmaps and other binary data through base64 encoding, which inflates the embedded data by roughly a third.
PDF can include “streams” of raw binary data, and is surprisingly efficient when programmatically generated.
Parsing the text data takes too long
This is tricky. PDF and its sibling, Encapsulated PostScript, are also old, well-supported vector graphics formats. Unfortunately, they too are text at their core, with optional compression.
You could try Computer Graphics Metafiles, which can be compiled ahead of time. But I’m unsure how well-supported they are across consumer devices.
From a comment:
"Almost nothing about the performance of SVG other than the transmission cost of sending it over a network is down to it being plaintext"
No, that's completely wrong. I worked at CSIRO using XML for massive 3D models. Geoscience Australia did a formal study into the parsing speed - parsing floating point numbers from text is relatively expensive for big data sets, compared to reading a 4- or 8-byte binary representation.
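To see the difference in miniature (a hypothetical sketch; the names are mine, and the exact costs vary by platform and libc):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Text path: every coordinate runs through a full decimal-float parser. */
double read_text_coord(const char *s) {
    return strtod(s, NULL);      /* sign/digits/exponent state machine */
}

/* Binary path: the same coordinate is one fixed-size copy. */
double read_binary_coord(const unsigned char *buf) {
    double d;
    memcpy(&d, buf, sizeof d);   /* one 8-byte move, no parsing */
    return d;
}

int main(void) {
    unsigned char buf[sizeof(double)];
    double x = read_text_coord("123.456");
    memcpy(buf, &x, sizeof x);
    printf("%f %f\n", x, read_binary_coord(buf));
    return 0;
}

Multiplied across millions of points, the text path dominates load time.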
I've spent a lot of time optimising my internal binary formats for Touchgram and am now looking at vector art.
One of the techniques you can use is a combination of
variable-length integer coding and
normalising your points to a scale represented by integers, then storing paths as sequences of deltas
That can yield paths where often only 1 or 2 bytes are used per step, as opposed to the typical 12.
Consider a basic line
<polyline class="Connect" points="100,200 100,100" />
I could represent that with 4 bytes instead of 53.
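A sketch of that scheme (hypothetical helpers, using the zigzag-plus-varint encoding popularised by Protocol Buffers; the caller must size the output buffer for the worst case of 10 bytes per point):

#include <stdint.h>
#include <stddef.h>

/* Zigzag-map a signed delta so small magnitudes stay small:
   0, -1, 1, -2, 2  ->  0, 1, 2, 3, 4 */
static uint32_t zigzag(int32_t v) {
    return ((uint32_t)v << 1) ^ (uint32_t)(v >> 31);
}

/* Varint: 7 payload bits per byte, high bit set means "more follows".
   Returns the number of bytes written (1-5). */
static size_t put_varint(uint8_t *out, uint32_t v) {
    size_t n = 0;
    while (v >= 0x80) { out[n++] = (uint8_t)(v | 0x80); v >>= 7; }
    out[n++] = (uint8_t)v;
    return n;
}

/* Store each point as a delta from the previous one. For the polyline
   above, the second point is the step (0, -100): 1 + 2 = 3 bytes. */
size_t encode_path(uint8_t *out, const int32_t (*pts)[2], size_t count) {
    size_t n = 0;
    int32_t px = 0, py = 0;
    for (size_t i = 0; i < count; i++) {
        n += put_varint(out + n, zigzag(pts[i][0] - px));
        n += put_varint(out + n, zigzag(pts[i][1] - py));
        px = pts[i][0];
        py = pts[i][1];
    }
    return n;
}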
So far, all I've been able to find on binary SVG is a post about a Go project, linking to the project description and repo.
Adobe Flash SWF files may work. Due to Flash's former ubiquity, 'players' and libraries were written for many platforms. The specification was open and the licensing permissive. For simple 2D graphics, the earlier, more compatible versions would do fine.
The files are binary and extraordinarily small.

What is the equivalent of the DirectDraw Surface (DDS) format for OpenGL on Linux?

The DDS format was made for DirectX, right? So it should be optimized for DirectX and not for OpenGL, I guess.
So, is there another format? If yes, which format is a good choice, and for what reasons?
Also, since I'm working on Linux, I'm concerned with creating textures on Linux, so I need a format that can be imported/exported by GIMP.
The DDS format is useful for storing compressed textures. If you store the file with the same compression it will have in GPU memory, you don't need to decode and re-encode it for GPU storage; you can just move it directly to memory.
The DDS format is basically used to store S3 Texture Compression data. The internal DDS formats DXT3 and DXT5, for example, are S3TC formats that are also supported by OpenGL:
http://www.opengl.org/wiki/S3_Texture_Compression
DDS also can store pre-calculated mipmaps. Again this is a trade-off (larger file size) for reducing loading times, as the mipmaps could also be calculated at loading time.
As you can see, if you have the right code to parse the DDS file, so that the payload is taken in its compressed form and not decoded on the host machine, then it is perfectly fine to use DDS.
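For illustration, a minimal sketch of that path, assuming the DDS header has already been parsed into width, height and a mip level count, with payload pointing at the raw DXT5 data (the names here are mine, not from any particular DDS loader):

/* Requires the EXT_texture_compression_s3tc extension. */
#ifndef GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
#define GL_COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3
#endif

/* DXT5 packs each 4x4 pixel block into 16 bytes. */
static GLsizei dxt5_level_size(GLsizei w, GLsizei h) {
    return ((w + 3) / 4) * ((h + 3) / 4) * 16;
}

/* Hand every precomputed mip level to the GL exactly as stored in the
   file; nothing is decompressed on the host. */
void upload_dds_dxt5(const unsigned char *payload,
                     GLsizei width, GLsizei height, int mip_count)
{
    for (int level = 0; level < mip_count; level++) {
        GLsizei size = dxt5_level_size(width, height);
        glCompressedTexImage2D(GL_TEXTURE_2D, level,
                               GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                               width, height, 0, size, payload);
        payload += size;
        width  = width  > 1 ? width  / 2 : 1;
        height = height > 1 ? height / 2 : 1;
    }
}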
For an alternative, #CoffeeandCode pointed out the KTX format in his answer. These files use a different compression algorithm (see here). The advantage is that this compression is mandatory in newer OpenGL versions, while S3TC compression was always only available as an extension (and has patent problems). I don't know how they compare in quality and if you can expect OpenGL 4.3 on your target platforms.
My Take: If you are targeting recent hardware and OpenGL support (OpenGL ES 3 or OpenGL 4.3), you should use the KTX format and respective texture formats (libktx will generate the texture objects for you). If you need to be able to run on older hardware or happen to already have a lot of DDS data, you should probably stick with DDS for the time being.
There is nothing particularly D3D-"optimized" about DDS.
Once you read the header correctly, the (optionally) pre-compressed data is binary compatible with S3TC. Both DDS and OpenGL's equivalent (KTX) can store cubemap arrays and mipmaps in a single image file; that is their primary appeal.
With other formats, you may wind up using the driver to compress and/or generate mipmaps, and the driver usually does a poor job quality-wise. Drivers are designed to do this quickly, because they have a lot of textures to compress/mipmap at load time; an offline tool can use a higher-quality mipmap downsample filter and compression implementation, since the time required to complete is rather unimportant.
The key benefits of DDS / KTX are:
1. Offline computation of mipmaps
2. Offline compression
3. Storage of all layers of a texture array/cubemap
Doing (1) and (2) offline can both improve image quality and reduce the overhead of loading textures at run-time. (3) is mostly for convenience, but a welcome one.
I think the closest equivalent to DDS for OpenGL is KTX, but even DDS works fine under OpenGL once parsed.

Retrieve the pixel values of an image with Haskell

Is there a way or a library available that can load an image (jpeg, png, etc) and assign the pixel values of that image into a list or matrix? I'd like to do some experiments with image and pattern recognition.
A little nudge in the right direction would be appreciated.
You can use JuicyPixels, a native Haskell library for image loading. It is rather easy to convert the result to a REPA array as well (manually or with JuicyPixels-repa).
I've used the repa-devil package for this in the past. It lets you work with a bunch of formats using Developer's Image Library (DevIL). You can read and write all the formats you are likely to care about.
The actual image data is given as a Repa array. This is a great library for array operations and makes it very easy to write parallel code.
Try the repa library. There is also a small tutorial here.
Here is a new Haskell Image Processing library. It uses JuicyPixels for encoding, provides an interface for reading and writing all of the supported formats in a very easy manner, and lets you manipulate images in any way you can imagine. Just as a simple example of how easy it is:
>>> img <- readImageRGB VU "image.jpg"
>>> writeImage "image90.png" $ rotate90 img
Above will read a JPG image in RGB color space, rotate it 90 degrees clockwise and save it as a PNG image.
Oh yeah, it also can use Repa, so you will get parallel processing for free as well.
GTK supports loading and saving JPEG and PNG. [AFAIK, no other formats though.] There is a Haskell binding named Gtk2hs. It supports vector graphics very well, but bitmap graphics, while supported, isn't especially easy to figure out. So I wrote AC-EasyRaster-GTK, which wraps GTK in a more friendly interface. (It still needs Gtk2hs though.) The only real downside is that Gtk2hs is a bit fiddly to set up on Windows. (And it's arguably overkill to install an entire GUI toolkit just to load and save image files.)
I gather the "GD" library supports writing several image formats, and is quite small and simple. I believe Hackage has Haskell bindings for GD as well. I haven't tried this personally.
There is a file format called PPM which is deliberately designed to be ridiculously easy to implement (it's a tiny header and then an array of pixels), and consequently there's at least a dozen packages on Hackage which implement it (including my own AC-PPM). There are also lots of programs out there which can display and/or convert images in this format.
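To show how little there is to implement, here is a complete P6 writer (shown in C purely for illustration; the Haskell packages on Hackage are the idiomatic route):

#include <stdio.h>

/* Write a 24-bit RGB buffer as binary PPM (P6). `pixels` holds
   width * height * 3 bytes, row-major, RGB order. */
int write_ppm(const char *path, const unsigned char *pixels,
              int width, int height)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fprintf(f, "P6\n%d %d\n255\n", width, height);  /* the entire header */
    fwrite(pixels, 3, (size_t)width * height, f);   /* then raw pixels */
    return fclose(f);
}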

How do doctored-image exploits for image viewers work? Can they be defeated with managed code and random pixel alterations?

I have read that some image viewers were hacked by appropriately doctored images (in a format particularly suited for that? not sure about the details).
So, how could this threat be completely eliminated? For instance, suppose we make a viewer for the affected format written in managed code and have it convert images to pure BMP (or something else so simple that its viewers cannot be hacked); would the problem go away? How about first converting to BMP and then introducing some pervasive minor random pixel alterations to better disrupt the possible hack?
Let's say this image sanitization converter would be incorporated into the firewall so that only "safe" sanitized images would, by default, end up loaded during regular browsing. Would this solve the problem? Or is my reasoning incorrect due to flawed understanding of the nature of image based exploits?
Your question, I think, boils down to whether the attack lives in the data or in the image. If it's in the data, then merely decoding the image and re-encoding it with a known-good encoder will ensure safety. If it's in the image, then you may need to alter the pixels.
The famous example of this was mishandling of JPEG comment fields with bad lengths, originally a bug in Netscape, but later independently introduced into Windows. This was very much a data rather than an image problem; the part of the data in question isn't even image data, it's metadata. If you decoded the image with a sandboxed decoder, perhaps detecting and recovering from the corrupt comment field, then re-encoded it with a friendly encoder, the result would be safe even for vulnerable decoders. Doing this naively would lead to a loss of quality; it is possible to losslessly transcode JPEGs, but it requires code to specifically do so.
My gut feeling is that image-level attacks are not possible, only data-level attacks. Image formats are well-enough specified, and fundamentally simple enough, that the content of the image really shouldn't affect the decoding process. I certainly can't prove that, though, nor really even argue for it.

Which 3D Model format should I be using?

I'm writing a game engine and I'm wondering what 3D model format I should use/load/export. OBJ seems universal and easy, but it also appears to be unreliable in that most models out there contain errors, and it doesn't store anywhere near as much as other formats.
There appear to be formats specifically for games, such as MD2/3/5, but I'm not sure. I use Wings3D if I model, and I don't know what I'd have to implement beyond purely loading what I need and what the format supports. For instance, would I need to implement IK? And can I use scripted per-piece animation rather than inverse kinematics and bone rigging?
Collada is an open XML-based format for 3D models, owned by the Khronos Group (the OpenGL standards body).
From the Collada.org FAQ:
The COLLADA 1.4.x feature set includes:
Mesh geometry
Transform hierarchy (rotation, translation, shear, scale, matrix)
Effects
Shaders (Cg, GLSL, GLES)
Materials
Textures
Lights
Cameras
Skinning
Animation
Physics (rigid bodies, constraints, rag dolls, collision, volumes)
Instantiation
Techniques
Multirepresentations
Assets
User data
Before worrying about what 3D formats you want to support, I think you should really focus on what features you are planning to implement in your engine. Write those down as requirements, and pick the format that supports the most features from the list... as you'll want to showcase your engine (I am assuming you are planning for your engine to be publicly available). You might even want to roll your own format, if your engine has specific features (which is always a good thing to have for a game engine).
After that, support as many of the popular formats as you can (.X, .3DS, .OBJ, .B3D)... the more accessible your engine is, the more people will want to work with it!
Collada is a nice and generic format, but like Nils mentions, it is not an ideal format for final deployment.
I use my own binary format. I've tried to use existing formats but always ran into limitations. Some could be worked around; others were showstoppers.
Collada may be worth a look. I don't think that it's that good as a format to be read by a 3D engine. It's fine as a general data-exchange format though.
http://www.collada.org/mediawiki/index.php/Main_Page
+1 for Collada. You may also want a custom native binary format for really fast loading (usually just a binary dump of vertex/index buffer data, plus material and skeleton data, and collision data if appropriate).
One trend in the games industry is to support loading a format like collada in the developer build of the engine, but also have a toolchain that exports an optimized version for release. The developer version can update the mesh dynamically, so as artists save changes, the file is automatically reloaded allowing them an (almost) instant WYSIWYG view of their model, but still providing a fully optimised release format.
Support Collada well, and then supply good converters to/from the other formats (this might be the hard part). This will give you maximum flexibility. Take a look at the C4 Engine.
Collada is great, but it lives more on the 3D app side of things, i.e. it's best used for transferring 3D data between applications, not for loading 3D data within a game engine. Have you looked into Lua? It's widely used in games because it's a scripting language that's both ridiculously quick (perfect for games) and very flexible (it can be used to represent whatever data your engine needs).
