Get PNG image file dimension - visual-c++

I would like to get the dimensions of a PNG image file inside a local folder on Windows. How can I achieve this using Visual C++?

Should be easy: a PNG file starts with an 8-byte signature, followed by the IHDR header chunk. The header chunk begins with its length (4 bytes) and type (4 bytes), followed immediately by the width and height (4 bytes each, big-endian).
So basically, the width is the 4-byte big-endian number at offset 8+8=16 in the file, and the height is at 8+8+4=20. Just read them!

Aside from the well-known GDI APIs (I get the feeling you are trying to avoid these), it is worth giving http://msdn.microsoft.com/en-us/library/bb776499%28v=VS.85%29.aspx a shot. Never used it myself, though :/

Related

Decoding incomplete audio file

I was given an uncompressed .wav audio file (360 MB) which seems to be broken. The file was recorded using a small USB recorder (I don't have more information about the recorder at the moment). It was unreadable by any player, and I tried GSpot (https://www.headbands.com/gspot/) to detect whether it was perhaps a different format than WAV, but to no avail. The file is big, which hints at it being in some uncompressed format. It is missing the RIFF-WAVE characters at the start of the file, though, which can be an indication that this is some other format or perhaps (more likely in this case) that the header is missing.
I've tried converting the bytes of the file directly to audio, and this creates a VERY noisy audio file, though voices can be made out. I was able to determine that the sample rate was probably 22050 Hz (given a sample size of 8 bits) and the file length about 4 hours and 45 minutes. Running it through some filters in Audition resulted in a file that was understandable in some places, but still way too noisy in others.
Next I ran the data through some Java code that produces an image out of the bytes, and it showed me lots of noise, but also 3-byte separators every 1024 bytes: first a byte close to either 0 or 255 (but not exactly), then a byte representing a number distributed somewhere around 25 (with some variation), and then always 00000000. The first 'chunk header' (as I suppose these are) is located 513 bytes into the file, again close to a power of two, like the chunk size. That seems a bit too perfect for coincidence, so I'm mentioning it as it could be important. See https://imgur.com/a/sgZ0JFS: the first image shows a 1024x1024 image of the first 1 MB of the file (row-wise), and the second image shows the distribution of the 3 'chunk header' bytes.
Besides these headers, the file also has areas that clearly show structure, almost wave-like structures. I suppose this is the actual audio I'm after, but it's riddled with noise: https://imgur.com/a/sgZ0JFS, third image, showing a region of the file with audio structures.
I also created a histogram for the entire file (ignoring the 3-byte 'chunk headers'): https://imgur.com/a/sgZ0JFS, fourth image. I've flipped the lower half of the range as I think audio data should be centered around some mean value, but correct me if I'm wrong. Maybe the non-symmetric nature of the histogram has something to do with signed/unsigned data or two's-complement. Perhaps the data representation is in 8-bit floats or something similar, I don't know.
I've run into a wall now. I have no idea what else I can try. Is there anyone out there who sees something I missed? Perhaps someone can give me some pointers on what else to try. I would really like to extract the audio data from this file, as it contains some important information.
Sorry for the bother. I've been able to track down the owner of the voice recorder, had him record a minute of audio with it, and send me that file. I was able to determine the audio was IMA 4-bit ADPCM encoded, 16-bit audio at 48000 Hz. Looking at the structure of the file, I realized that simply placing the header of the good file in front of the data of the bad file should work, and lo and behold, I had a working file again :)
I'm still very much interested in how that ADPCM works and whether I can write my own decoder, but that's for another day when I'm strolling through Wikipedia again. Have a great day everyone!
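For anyone landing here with the same curiosity: the core of an IMA ADPCM decoder is quite small. Here's a sketch using the standard IMA step and index tables; the recorder's exact block framing (e.g. the per-block predictor headers observed above) may differ:

```cpp
#include <cstdint>

// Standard IMA ADPCM step-size table (89 entries) and index-adjust table.
static const int16_t kStep[89] = {
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
    34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143,
    157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449, 494, 544,
    598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878,
    2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358, 5894,
    6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899, 15289, 16818,
    18500, 20350, 22385, 24623, 27086, 29794, 32767 };
static const int kIndexAdj[8] = { -1, -1, -1, -1, 2, 4, 6, 8 };

// Decode one 4-bit nibble into a 16-bit sample, updating the predictor
// and step index in place.
int16_t decodeNibble(uint8_t nibble, int& predictor, int& stepIndex) {
    int step = kStep[stepIndex];
    int diff = step >> 3;                 // rounding term
    if (nibble & 1) diff += step >> 2;
    if (nibble & 2) diff += step >> 1;
    if (nibble & 4) diff += step;
    if (nibble & 8) predictor -= diff; else predictor += diff;
    if (predictor > 32767)  predictor = 32767;    // clamp to int16 range
    if (predictor < -32768) predictor = -32768;
    stepIndex += kIndexAdj[nibble & 7];
    if (stepIndex < 0)  stepIndex = 0;            // clamp table index
    if (stepIndex > 88) stepIndex = 88;
    return static_cast<int16_t>(predictor);
}
```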

Node.js readUIntBE arbitrary size restriction?

Background
I am reading buffers using Node.js's native Buffer API. This API has two functions, readUIntBE and readUIntLE, for big-endian and little-endian reads respectively.
https://nodejs.org/api/buffer.html#buffer_buf_readuintbe_offset_bytelength_noassert
Problem
By reading the docs, I stumbled upon the following lines:
byteLength Number of bytes to read. Must satisfy: 0 < byteLength <= 6.
If I understand correctly, this means I can only read 6 bytes at a time using this function, which makes it useless for my use case, as I need to read a timestamp comprising 8 bytes.
Questions
Is this a documentation typo?
If not, what is the reason for such an arbitrary limitation?
How do I read 8 bytes in a row (or, more generally, sequences longer than 6 bytes)?
Answer
After asking in the official Node.js repo, I got the following response from one of the members:
No it is not a typo
The byteLength corresponds to e.g. 8bit, 16bit, 24bit, 32bit, 40bit and 48bit. More is not possible since JS numbers are only safe up to Number.MAX_SAFE_INTEGER.
If you want to read 8 bytes, you can read multiple entries by adding the offset.
Source: https://github.com/nodejs/node/issues/20249#issuecomment-383899009
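Concretely, the workaround described there looks like this. Note that readUInt64BE is a hand-rolled helper, not part of the Buffer API, and the result is only exact while the value stays below Number.MAX_SAFE_INTEGER; newer Node versions also ship buf.readBigUInt64BE, which returns a BigInt instead:

```javascript
// Assemble a 64-bit big-endian value from two 32-bit reads.
// The high half is scaled by 2**32; exact only while the combined
// value stays below Number.MAX_SAFE_INTEGER (2**53 - 1).
function readUInt64BE(buf, offset = 0) {
  const hi = buf.readUInt32BE(offset);
  const lo = buf.readUInt32BE(offset + 4);
  return hi * 0x100000000 + lo;
}

const buf = Buffer.from([0, 0, 0, 1, 0, 0, 0, 2]);
console.log(readUInt64BE(buf)); // 2**32 + 2 = 4294967298
```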

How to check png file if it's a decompression bomb

I am playing with image uploads to a website, and I found out about the decompression bomb attacks that can occur when users are allowed to upload PNG files (among other formats). Since I am going to transform the uploaded images, I want to make sure I don't become a victim of this attack. When it comes to checking whether a PNG file is a bomb, can I just read the file's header and make sure the width and height do not exceed a set limit, like 4000x4000 or whatever? Is that a valid method, or is there a better way?
Besides large width and height, decompression bombs can also have excessively large iCCP, zTXt, and iTXt chunks. By default, libpng defends against those to some degree.
Your "imagemagick" tag indicates that you are asking how to do it with ImageMagick. ImageMagick's default width and height limits are very large: "convert -list resource" says
Resource limits: Width: 214.7MP Height: 214.7MP Area: 8.135GP
Image width and height limits in ImageMagick are set with the command-line "-limit" option, which I suppose can also be conveyed via an equivalent directive in the various ImageMagick APIs. ImageMagick inherits the limits on iCCP chunks, etc., from libpng.
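For instance, a conversion command with explicit limits might look like this (the values are illustrative; pick limits that fit your upload policy):

```shell
# Cap the resources a single conversion may consume while
# re-encoding an untrusted upload.
convert -limit width 4000 -limit height 4000 -limit area 128MB \
        upload.png safe.jpg
```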
Forged smaller width and height values in the IHDR chunk don't fool either libpng or ImageMagick. They just issue an "Extra compressed data" warning and skip the remainder of the IDAT data without decompressing it.

Why is the data subchunk of some wav files 2 bytes off?

I have been trying out a WAV joiner program in VB.NET to join WAV files, and although it works fine some of the time, often the resulting WAV file doesn't play. After peeking into the original WAV files, I noticed that the data sub-chunk, where the word 'data' appears, was starting at offset 38 instead of 36. That is what's tripping up the joiner, which assumes offset 36. When I re-exported such a WAV file from Audacity, it fixed things, and the data sub-chunk starts at 36. All programs play the original file fine, so I guess it's valid. Why are there two extra 00 bytes right before the word 'data' in those WAV files?
This is a guess, but have you looked at the four-byte number at offset 16 in the files where 'data' starts at offset 38?
The fmt sub-chunk is of variable size, and its size is specified in a dword at offset 16 from the start of the file (that is, immediately after the "fmt " chunk ID at offset 12). That dword value is the size of the remainder of the sub-chunk, exclusive of the ID field and of the size field itself. My guess is that if you look there, the files with the two extra bytes will say their fmt sub-chunk is 18 bytes long rather than 16 (thanks ooga for catching my error on that).
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
When there's a size field, always use it. There's no need to jump to fixed offsets in the file on faith if the file format will tell you how big things are. And if it is telling you the size of things, take that as a warning that the size may change.

How to estimate size of a Jpeg file before saving it with a certain quality factor?

I've got a 24-bit bitmap, and I am writing an application in C++ with MFC.
I am using libjpeg to encode the bitmap into a 24-bit JPEG file.
The bitmap's width is M and its height is N.
How can I estimate the JPEG file size before saving it with a certain quality factor Q (0-100)?
Is it possible to do this?
For example:
I want to implement a slider that represents saving the current bitmap with quality factor Q.
A label beside it shows the approximate file size that encoding the bitmap at this quality would produce.
As the user moves the slider, they get an approximate preview of the file size of the to-be-saved JPEG file.
In libjpeg, you can write a custom destination manager that doesn't actually call fwrite, but just counts the number of bytes written.
Start with the stdio destination manager in jdatadst.c, and have a look at the documentation in libjpeg.doc.
Your init_destination and term_destination methods will be very minimal (just alloc/dealloc), and your empty_output_buffer method will do the actual counting. Once you have completed the JPEG writing, you'll have to read the count value out of your custom structure. Make sure you do this before term_destination is called.
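The skeleton of such a counting destination manager looks roughly like this. The typedefs here are simplified stand-ins so the sketch is self-contained; real code includes <jpeglib.h> and wires these three callbacks into jpeg_destination_mgr's function pointers:

```cpp
#include <cstddef>

// Simplified stand-ins for libjpeg types; real code uses <jpeglib.h>.
typedef unsigned char JOCTET;
struct jpeg_destination_mgr {
    JOCTET* next_output_byte;
    std::size_t free_in_buffer;
};

// A destination that discards output and only counts bytes.
struct counting_dest {
    jpeg_destination_mgr pub;   // must be the first member
    JOCTET buffer[4096];
    std::size_t total = 0;      // running byte count
};

// init_destination: hand libjpeg an empty buffer.
void init_destination(counting_dest* d) {
    d->pub.next_output_byte = d->buffer;
    d->pub.free_in_buffer = sizeof d->buffer;
}

// empty_output_buffer: the buffer is full; count it and reset.
bool empty_output_buffer(counting_dest* d) {
    d->total += sizeof d->buffer;
    init_destination(d);
    return true;
}

// term_destination: count whatever is left in the final partial buffer.
// Read d->total after this and before tearing the structure down.
void term_destination(counting_dest* d) {
    d->total += sizeof d->buffer - d->pub.free_in_buffer;
}
```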
It also depends on the compression you are using, and more specifically on how many bits per color pixel it produces.
The quality factor alone won't help you here, as a quality factor of 100 can range (in most cases) from 6 bits per color pixel to ~10 bits per color pixel, maybe even more (not sure).
Once you know that figure, it's really straightforward from there.
If you know the subsampling factors, this can be estimated. That information comes from the start-of-frame (SOF) marker, which also carries the bit depth, right before the width and height.
If you let
int subSampleFactorH = 2, subSampleFactorV = 1;
then
// Approximate coded size, assuming about one byte per sample that
// survives chroma subsampling.
int totalImageBytes = (Image.Width / subSampleFactorH) * (Image.Height / subSampleFactorV);
Then you can also optionally add more bytes to account for container data:
int totalBytes = totalImageBytes + someConstantOverhead;
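Putting the two answers together, a rough estimator might look like this. The names and the bits-per-pixel figure are illustrative; calibrate it by encoding a few representative images at each slider position:

```cpp
// Rough JPEG size estimate: pixel count times an assumed bits-per-pixel
// for the chosen quality factor, plus a constant container overhead for
// headers and tables. bitsPerPixel is a guess to be calibrated against
// real encodes at each quality setting.
long estimateJpegBytes(int width, int height,
                       double bitsPerPixel, long overheadBytes) {
    long pixels = static_cast<long>(width) * height;
    return static_cast<long>(pixels * bitsPerPixel / 8.0) + overheadBytes;
}
```

Updating the label from the slider is then just a call with the current quality's calibrated bits-per-pixel value.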
