https://stackoverflow.com/a/2574798/159072
public static Bitmap BitmapTo1Bpp(Bitmap img)
{
    int w = img.Width;
    int h = img.Height;
    Bitmap bmp = new Bitmap(w, h, PixelFormat.Format1bppIndexed);
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadWrite, PixelFormat.Format1bppIndexed);
    // Why this addition and division?
    byte[] scan = new byte[(w + 7) / 8];
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            // Why this condition check?
            if (x % 8 == 0)
                // Why divide by 8?
                scan[x / 8] = 0;
            Color c = img.GetPixel(x, y);
            // Why this condition check?
            if (c.GetBrightness() >= 0.5)
            {
                // What is going on here?
                scan[x / 8] |= (byte)(0x80 >> (x % 8));
            }
        }
        // Why is Marshal.Copy() called here?
        Marshal.Copy(scan, 0, (IntPtr)((long)data.Scan0 + data.Stride * y), scan.Length);
    }
    bmp.UnlockBits(data);
    return bmp;
}
The code uses some basic bit-hacking techniques, required because it needs to set bits and the minimum storage element you can address in C# is a byte. I intentionally avoided using the BitArray class.
int w = img.Width;
I copy the Width and Height properties of the bitmap into local variables to speed up the code; the property accessors are relatively expensive. Keep in mind that w is the number of pixels across the bitmap; it represents the number of bits in one scan line of the final image.
byte[] scan = new byte[(w + 7) / 8];
The scan variable stores the pixels in one scan line of the bitmap. The 1bpp format uses 1 bit per pixel, so the total number of bytes in a scan line is w / 8. I add 7 to ensure the value is rounded up, necessary because integer division always truncates: w = 1..8 requires 1 byte, w = 9..16 requires 2 bytes, etcetera.
if (x % 8 == 0) scan[x / 8] = 0;
The x % 8 expression represents the bit number, x / 8 is the byte number. This code sets all the pixels to Black when it progresses to the next byte in the scan line. Another way to do it would be re-allocating the byte[] in the outer loop or resetting it back to 0 with a for-loop.
if (c.GetBrightness() >= 0.5)
The pixel should be set to White when the source pixel is bright enough; otherwise it is left at Black. Using Color.GetBrightness() is a simple way to avoid dealing with the human eye's non-linear perception of brightness (luminance ~= 0.299 * red + 0.587 * green + 0.114 * blue).
scan[x / 8] |= (byte)(0x80 >> (x % 8));
Sets a bit to White in the scan line. As noted, x % 8 is the bit number; 0x80 is shifted to the right by the bit number because the bits are stored in reverse order (most significant bit first) in this pixel format.
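To make the bit-packing concrete, here is a minimal sketch of the same idea in Java (the helper name and the boolean-array input are invented for the example):

// Hypothetical helper: pack one row of boolean pixels (true = White) into
// a 1bpp scan line, most significant bit first, as in the answer above.
static byte[] packScanLine(boolean[] pixels) {
    byte[] scan = new byte[(pixels.length + 7) / 8]; // round up to whole bytes
    for (int x = 0; x < pixels.length; x++) {
        if (pixels[x]) {
            scan[x / 8] |= (byte) (0x80 >> (x % 8)); // set bit (x % 8) in byte (x / 8)
        }
    }
    return scan;
}

Because this sketch allocates a fresh, zero-initialized array per row, it does not need the explicit per-byte reset that the reused scan buffer in the C# code requires.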
I am trying to get more comfortable with the math behind fractal coloring and to understand the coloring algorithms much better. I am reading the following paper:
http://jussiharkonen.com/files/on_fractal_coloring_techniques%28lo-res%29.pdf
The paper gives specific parameters for each of the functions; however, when I use the same values, my results are not quite right, and I have no idea what could be going on.
I am using the iteration count coloring algorithm to start and using the following julia set:
c = 0.5 + 0.25i and p = 2
with the coloring algorithm:
The coloring function simply returns the number of elements in the truncated orbit divided by 20
And the palette function:
I(u) = k(u − u0), where k = 2.5 and u0 = 0, was used.
And with a palette being white at 0 and 1, and interpolating to black in-between.
and following this algorithm:
Set z0 to correspond to the position of the pixel in the complex plane.
Calculate the truncated orbit by iterating the formula zn = f(zn−1) starting from z0 until either
• |zn| > M, or
• n = Nmax,
where Nmax is the maximum number of iterations.
Using the coloring and color index functions, map the resulting truncated orbit to a color index value.
Determine an RGB color of the pixel by using the palette function.
Using this my code looks like the following:
float izoom = pow(1.001, zoom);
vec2 z = focusPoint + (uv * 4.0 - 2.0) * 1.0 / izoom;
vec2 c = vec2(0.5f, 0.25f);
const float B = 2.0;
float l = 0.0;
for( int i=0; i<100; i++ )
{
    z = vec2( z.x*z.x - z.y*z.y, 2.0*z.x*z.y ) + c;
    if( length(z)>10.0 ) break;
    l++;
}
float ind = basicindex(l);
vec4 col = color(ind);
and have the following index and coloring functions:
float basicindex(float val){
    return val / 20.0;
}
vec4 color(float index){
    float r = 2.5 * index;
    float g = r;
    float b = g;
    vec3 v = 0.5 - 0.5 * sin(3.14/2.0 + 3.14 * vec3(r, g, b));
    return vec4(1.0 - v, 1.0);
}
The paper provides the following image:
https://imgur.com/YIZMhaa
While my code produces:
https://imgur.com/OrxdMsN
I get the correct results by using k = 1.0 instead of 2.5, however I would prefer to understand why my results are incorrect. When extending this to the smooth coloring algorithms, my results are still incorrect so I would like to figure this out first.
Let me know if this isn't the correct place for this kind of question and I can move it to the math stack exchange. I wasn't sure which place was more appropriate.
Your image is a correct implementation of Figure 3.3 in the paper. The other image you posted is produced by a different routine.
Your figure also seems to have a bit of zoom/perspective code at the top, but remove that and they should be the same.
If your objection is to the color extremes, you set those with the "0.5 - 0.5 * ..." part of your code. That makes the darkest black come out at 0.5, when in the example image you are trying to duplicate the darkest black should be 1 and the lightest white should be 0.
You're making the whiteness equal to the distance from 0.5.
If you ignore the fractal altogether, you are getting a bunch of values that can be normalized between 0 and 1, and you're coloring those in some particular way. Clearly the image you are duplicating is linear between 0 and 1, so putting black at 0.5 cannot be correct.
o = {
    length : 500,
    width : 500,
    c : [.5, .25], // c = x + iy will be [x, y]
    maxIterate : 100,
    canvas : null
}
function point(pos, color){
    var c = 255 - Math.round((1 + Math.log(color)/Math.log(o.maxIterate)) * 255);
    c = c.toString(16);
    if (c.length == 1) c = '0'+c;
    o.canvas.fillStyle="#"+c+c+c;
    o.canvas.fillRect(pos[0], pos[1], 1, 1);
}
function conversion(x, y, R){
    var m = R / o.width;
    var x1 = m * (2 * x - o.width);
    var y2 = m * (o.width - 2 * y);
    return [x1, y2];
}
function f(z, c){
    return [z[0]*z[0] - z[1] * z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}
function abs(z){
    return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}
function init(){
    var R = (1 + Math.sqrt(1+4*abs(o.c))) / 2,
        z, x, y, i;
    o.canvas = document.getElementById('a').getContext("2d");
    for (x = 0; x < o.width; x++){
        for (y = 0; y < o.length; y++){
            i = 0;
            z = conversion(x, y, R);
            while (i < o.maxIterate && abs(z) < R){
                z = f(z, o.c);
                if (abs(z) > R) break;
                i++;
            }
            if (i) point([x, y], i / o.maxIterate);
        }
    }
}
init();
<canvas id="a" width="500" height="500"></canvas>
via: http://jsfiddle.net/3fnB6/29/
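To make the point about the mapping concrete, here is a minimal sketch (in Java, with an invented helper name) of a linear grayscale palette where an index of 0 comes out white and an index of 1 comes out black, as described above:

// Hypothetical helper: map a color index in [0, 1] linearly to a grayscale
// value, white at index 0 and black at index 1, clamping out-of-range input.
static int indexToGray(double index) {
    double clamped = Math.max(0.0, Math.min(1.0, index));
    return (int) Math.round(255.0 * (1.0 - clamped)); // 255 = white, 0 = black
}

With a mapping like this (or its JavaScript equivalent above), brightness varies linearly with the color index instead of with the distance from 0.5.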
I have a virtual scanner that generates a 2.5-D view of a point cloud (i.e. a 2D-projection of a 3D point cloud) depending on camera position. I'm using the vtkCamera.GetProjectionTransformMatrix() to get transformation matrix from world/global to camera coordinates.
However, if the input point cloud has color information for points I would like to preserve it.
Here are the relevant lines:
boost::shared_ptr<pcl::visualization::PCLVisualizer> vis; // camera location, viewpoint and up direction for vis were already defined before
vtkSmartPointer<vtkRendererCollection> rens = vis->getRendererCollection();
vtkSmartPointer<vtkRenderWindow> win = vis->getRenderWindow();
win->SetSize(xres, yres); // xres and yres are predefined resolutions
win->Render();
float dwidth = 2.0f / float(xres),
dheight = 2.0f / float(yres);
float *depth = new float[xres * yres];
win->GetZbufferData(0, 0, xres - 1, yres - 1, &(depth[0]));
vtkRenderer *ren = rens->GetFirstRenderer();
vtkCamera *camera = ren->GetActiveCamera();
vtkSmartPointer<vtkMatrix4x4> projection_transform = camera->GetProjectionTransformMatrix(ren->GetTiledAspectRatio(), 0, 1);
Eigen::Matrix4f mat1;
for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
        mat1(i, j) = static_cast<float> (projection_transform->Element[i][j]);
mat1 = mat1.inverse().eval();
Now, mat1 is used to transform coordinates to camera-view:
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud; // assumed to be already allocated with xres * yres points
int ptr = 0;
for (int y = 0; y < yres; ++y)
{
    for (int x = 0; x < xres; ++x, ++ptr)
    {
        pcl::PointXYZ &pt = (*cloud)[ptr];
        if (depth[ptr] == 1.0)
        {
            pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
            continue;
        }
        Eigen::Vector4f world_coords(dwidth * float(x) - 1.0f,
                                     dheight * float(y) - 1.0f,
                                     depth[ptr],
                                     1.0f);
        world_coords = mat1 * world_coords;
        float w3 = 1.0f / world_coords[3];
        world_coords[0] *= w3;
        world_coords[1] *= w3;
        world_coords[2] *= w3;
        pt.x = static_cast<float> (world_coords[0]);
        pt.y = static_cast<float> (world_coords[1]);
        pt.z = static_cast<float> (world_coords[2]);
    }
}
I want the virtual scanner to return pcl::PointXYZRGB point cloud with color information.
Any help on how to implement this from someone experienced in VTK would save some of my time.
It's possible that I missed a relevant question already asked here - in that case, please point me to it. Thanks.
If I understand correctly that you want to get the color in which the point was rendered into the win RenderWindow, you should be able to get the data from the rendering buffer by calling
float* pixels = win->GetRGBAPixelData(0, 0, xres - 1, yres - 1, 0 /* or 1 */);
This should give you each pixel of the rendering buffer as an array in the format [R0, G0, B0, A0, R1, G1, B1, A1, R2, ...]. The last parameter, which I wrote as 0 /* or 1 */, is whether the data should be taken from the front or the back OpenGL buffer. I presume double buffering is on by default, so then you want to read from the back buffer (use 1), but I am not sure.
Once you have that, you can get the color in your second loop for all pixels that belong to points (depth[ptr] != 1.0) as:
pt.r = static_cast<uint8_t>(pixels[4*ptr] * 255);     // the float pixel data is in [0, 1];
pt.g = static_cast<uint8_t>(pixels[4*ptr + 1] * 255); // pcl::PointXYZRGB stores r/g/b as uint8_t
pt.b = static_cast<uint8_t>(pixels[4*ptr + 2] * 255);
You should call win->ReleaseRGBAPixelData(pixels) once you're done with it.
I am parsing an obj file which contains the texture coordinates (vt) values. From what I understand, vt values are a mapping into the texture image corresponding to this obj.
Assume I have an image im of 400x300 pixels
and I have a vt value
vt .33345 .8998
The mapping says: in the image, go to the coordinate
imageWidth x .33345, imageHeight x .8998 and use the value there.
I have loaded the image values in a 2-d array.
The problem is, these mapping coordinates are floating-point values. How am I supposed to map them to the integer pixel coordinates? I can always truncate the decimal part, round off, etc. But does the standard define which of these options is to be used?
UV coordinates to Pixel coordinates :
pix.x = (uv.x * texture.width) -0.5
pix.y = ((1-uv.y) * texture.height) -0.5
The y axis of uv coordinates is opposite to the Pixel coordinates on an image.
For nearest neighbor interpolation, just round off the pixel coordinates.
For bilinear interpolation, calculate the participation percentage from the four neighbouring pixels and do a weighted average.
When UV coordinates go outside of the [0, 1] range, there is a choice of how to handle the "texture wrapping", e.g. repeat, mirrored repeat, or clamp to edge.
Here's some java code for bilinear interpolation with "repeat" texture wrapping:
private static int billinearInterpolation(Point2D uv, BufferedImage texture) {
    // Point2D is assumed to be a simple holder with public double fields x and y.
    uv.x = uv.x > 0 ? uv.x % 1 : 1 + (uv.x % 1);
    uv.y = uv.y > 0 ? uv.y % 1 : 1 + (uv.y % 1);
    double pixelXCoordinate = uv.x * texture.getWidth() - 0.5;
    double pixelYCoordinate = (1 - uv.y) * texture.getHeight() - 0.5;
    // wrap negative coordinates back into the valid range
    pixelXCoordinate = pixelXCoordinate < 0 ? texture.getWidth() + pixelXCoordinate : pixelXCoordinate;
    pixelYCoordinate = pixelYCoordinate < 0 ? texture.getHeight() + pixelYCoordinate : pixelYCoordinate;
    int x = (int) Math.floor(pixelXCoordinate);
    int y = (int) Math.floor(pixelYCoordinate);
    double pX = pixelXCoordinate - x;
    double pY = pixelYCoordinate - y;
    float[] px = new float[]{(float) (1 - pX), (float) pX};
    float[] py = new float[]{(float) (1 - pY), (float) pY};
    float red = 0;
    float green = 0;
    float blue = 0;
    float alpha = 0;
    for (int i = 0; i < px.length; i++) {
        for (int j = 0; j < py.length; j++) {
            float p = px[i] * py[j];
            if (p != 0) {
                int rgb = texture.getRGB((x + i) % texture.getWidth(), (y + j) % texture.getHeight());
                alpha += (float) ((rgb >> 24) & 0xFF) * p;
                red   += (float) ((rgb >> 16) & 0xFF) * p;
                green += (float) ((rgb >> 8) & 0xFF) * p;
                blue  += (float) ((rgb >> 0) & 0xFF) * p;
            }
        }
    }
    return (((int) alpha & 0xFF) << 24) |
           (((int) red & 0xFF) << 16) |
           (((int) green & 0xFF) << 8) |
           (((int) blue & 0xFF) << 0);
}
UV coordinates are always in the range [0, 1]. This means you will get the actual pixel coordinates by multiplying them with the image size:
texel_coord = uv_coord * [width, height]
Note that even here one gets floating-point values, and there are several ways to deal with them. The most primitive one is to simply round to the nearest integer to get the nearest texel. A more sophisticated method would be bilinear filtering.
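As a rough illustration of the nearest-texel option, here is a sketch in Java; it assumes the half-texel offset and repeat wrapping used in the other answer, which are choices rather than something the obj format mandates:

// Hypothetical nearest-neighbor texel lookup for UV coordinates,
// using the pix = uv * size - 0.5 convention and repeat wrapping.
static int sampleNearest(java.awt.image.BufferedImage texture, double u, double v) {
    double px = u * texture.getWidth() - 0.5;
    double py = (1.0 - v) * texture.getHeight() - 0.5; // flip V: image y grows downwards
    int x = (int) Math.round(px);
    int y = (int) Math.round(py);
    // Wrap into the valid range (repeat wrapping).
    x = Math.floorMod(x, texture.getWidth());
    y = Math.floorMod(y, texture.getHeight());
    return texture.getRGB(x, y);
}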
I am trying to resize (scale down) an image which comes in YUV420sp format. Is it possible to do such image resizing without converting it into RGB, so directly manipulating the YUV420sp pixel array? Where can I find such algorithm?
Thanks
YUV 4:2:0 planar looks like this:
----------------------
| Y | Cb|Cr |
----------------------
where:
Y = width x height pixels
Cb = Y / 4 pixels
Cr = Y / 4 pixels
Total number of bytes = width * height * 3 / 2 (for 8-bit samples)
The subsampling works so that each chroma pixel value is shared between 4 luma pixels.
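To make the layout above concrete, here is a small sketch (Java, helper name invented) that computes the plane sizes and offsets for a planar 4:2:0 buffer; it assumes width and height are even, as 4:2:0 requires:

// Plane sizes and offsets for a planar YUV 4:2:0 buffer of the layout above.
static void printPlaneLayout(int width, int height) {
    int ySize  = width * height;         // full-resolution luma
    int cbSize = ySize / 4;               // chroma is subsampled 2x in both directions
    int crSize = ySize / 4;
    int cbOffset = ySize;                  // Cb follows Y
    int crOffset = ySize + cbSize;         // Cr follows Cb
    int total = ySize + cbSize + crSize;   // = width * height * 3 / 2
    System.out.printf("Y@0 (%d), Cb@%d (%d), Cr@%d (%d), total %d bytes%n",
            ySize, cbOffset, cbSize, crOffset, crSize, total);
}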
One approach is just to remove pixels, making sure that the corresponding Y-Cb-Cr relationships are kept/recalculated.
Something close to nearest-neighbor interpolation, but in reverse.
Another approach is to first convert the 4:2:0 subsampling to 4:4:4.
Here you have a 1-to-1 mapping between luma and chroma data.
This is the correct way to interpolate chroma between 4:2:0 and 4:2:2 (the luma is already at the correct resolution):
The code below is in Python; follow the HTML link for the C version.
It is not very Pythonic, just a direct translation of the C version.
def __conv420to422(self, src, dst):
    """
    420 to 422 - vertical 1:2 interpolation filter
    Bit-exact with
    http://www.mpeg.org/MPEG/video/mssg-free-mpeg-software.html
    """
    w = self.width >> 1
    h = self.height >> 1
    for i in xrange(w):
        for j in xrange(h):
            j2 = j << 1
            jm3 = 0 if (j<3) else j-3
            jm2 = 0 if (j<2) else j-2
            jm1 = 0 if (j<1) else j-1
            jp1 = j+1 if (j<h-1) else h-1
            jp2 = j+2 if (j<h-2) else h-1
            jp3 = j+3 if (j<h-3) else h-1
            pel = (3*src[i+w*jm3]
                   -16*src[i+w*jm2]
                   +67*src[i+w*jm1]
                   +227*src[i+w*j]
                   -32*src[i+w*jp1]
                   +7*src[i+w*jp2]+128)>>8
            # clamp to the valid 8-bit range
            dst[i+w*j2] = min(255, max(0, pel))
            pel = (3*src[i+w*jp3]
                   -16*src[i+w*jp2]
                   +67*src[i+w*jp1]
                   +227*src[i+w*j]
                   -32*src[i+w*jm1]
                   +7*src[i+w*jm2]+128)>>8
            # clamp to the valid 8-bit range
            dst[i+w*(j2+1)] = min(255, max(0, pel))
    return dst
Run this twice to get 4:4:4.
Then it's just a matter of removing rows and columns.
Or you can just quadruple the chroma-pixels to go from 4:2:0 to 4:4:4, remove rows and columns and then average 4 Cb/Cr values into 1 to get back to 4:2:0 again, it all depends on how strict you need to be :-)
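As a sketch of the "quadruple the chroma" step (Java, names invented), upsampling one chroma plane by pixel replication looks like this:

// Nearest-neighbor upsample of one chroma plane from (w/2 x h/2) to (w x h):
// every chroma value is copied to a 2x2 block of the output.
static byte[] quadrupleChroma(byte[] chroma, int chromaWidth, int chromaHeight) {
    byte[] out = new byte[chromaWidth * 2 * chromaHeight * 2];
    for (int y = 0; y < chromaHeight * 2; y++) {
        for (int x = 0; x < chromaWidth * 2; x++) {
            out[y * chromaWidth * 2 + x] = chroma[(y / 2) * chromaWidth + (x / 2)];
        }
    }
    return out;
}

Run it on both the Cb and Cr planes; averaging 2x2 blocks back down is the reverse of the same idea.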
Here is a Java function I use to scale down a YUV 420 (or NV21) image by a factor of two.
The function takes the image in a byte array, along with the width and height of the original image, as input and returns an image in a byte array whose width and height are both half of the original.
As a basis for my code I used this: Rotate an YUV byte array on Android
public static byte[] halveYUV420(byte[] data, int imageWidth, int imageHeight) {
    byte[] yuv = new byte[imageWidth/2 * imageHeight/2 * 3 / 2];
    // halve luma
    int i = 0;
    for (int y = 0; y < imageHeight; y += 2) {
        for (int x = 0; x < imageWidth; x += 2) {
            yuv[i] = data[y * imageWidth + x];
            i++;
        }
    }
    // halve U and V color components
    for (int y = 0; y < imageHeight / 2; y += 2) {
        for (int x = 0; x < imageWidth; x += 4) {
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
            i++;
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x + 1)];
            i++;
        }
    }
    return yuv;
}
YUV420sp has the Y in one plane and the U & V interleaved in another. If you split the U & V into separate planes, you can then perform the same scaling operation on each of the 3 planes in turn, without first having to go from 4:2:0 to 4:4:4.
Have a look at the source code for libyuv; it just scales the planes:
https://code.google.com/p/libyuv/source/browse/trunk/source/scale.cc
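For the semi-planar case, here is a minimal sketch in Java of splitting the interleaved chroma plane into separate planes, so each plane can then be scaled on its own. It assumes the V-before-U ordering of NV21; swap the two assignments for NV12:

// Split the interleaved VU plane of an NV21 buffer into separate V and U planes.
// The Y plane occupies the first width*height bytes; chroma pairs follow it.
static byte[][] splitNV21Chroma(byte[] nv21, int width, int height) {
    int chromaPixels = (width / 2) * (height / 2);
    byte[] v = new byte[chromaPixels];
    byte[] u = new byte[chromaPixels];
    int src = width * height; // start of the interleaved chroma plane
    for (int i = 0; i < chromaPixels; i++) {
        v[i] = nv21[src++]; // NV21 stores V first...
        u[i] = nv21[src++]; // ...then U, for each chroma sample
    }
    return new byte[][] { v, u };
}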
I need to process the first "Original" image to get something similar to the second "Enhanced" one. I applied some naive calculations and the new image has more contrast and stronger colors, but in the higher color regions a color hole appears. I have no idea about image processing; it would be great if you could suggest which concepts and/or algorithms I could apply to get the result without this problem.
Convert the image to the HSB (Hue, Saturation, Brightness) color space.
Multiply the saturation by some amount. Use a cutoff value if your platform requires it.
Example in Mathematica:
satMult = 4; (* saturation multiplier *)
imgHSB = ColorConvert[Import["http://i.imgur.com/8XkxR.jpg"], "HSB"];
cs = ColorSeparate[imgHSB]; (* separate into H, S and B *)
newSat = Image[ImageData[cs[[2]]] * satMult]; (* cs[[2]] is the saturation *)
ColorCombine[{cs[[1]], newSat, cs[[3]]}, "HSB"] (* rebuild the image *)
A table increasing the saturation value:
The "holes" that you see in the processed picture are the darker areas of the original picture, which went to negative values with your darkening algorithm. I suspect these out of range values are then written to the new image as positive numbers, so they end up in the higher part of the brightness scale. For example, let's say a pixel value is 10, and you are substracting 12 from all pixels to darken them a bit. This pixel will underflow and become -2. When you write it back to the file, -2 gets represented as 0xfe in hex, and this is 254 if you take it as an unsigned number.
You should use an algorithm that keeps the pixel values within the valid range, or at least you should "clamp" the values to the valid range. A typical clamp function defined as a C macro would be:
#define clamp(p) (p < 0 ? 0 : (p > 255 ? 255 : p))
If you add the above macro to your processing function it will take care of the "holes", but instead you will now have dark colors in those places.
If you are ready for something a bit more advanced, here on Wikipedia they have the brightness and contrast formulas that are used by the GIMP. These will do a pretty good job with your image if you choose the proper coefficients.
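As a hedged illustration (Java; a generic linear adjustment, not necessarily the exact formulas GIMP uses), a brightness/contrast tweak that clamps the result to the valid range could look like this:

// Generic linear brightness/contrast adjustment for one 8-bit channel.
// contrast = 1 and brightness = 0 leave the value unchanged; the result is clamped.
static int adjustChannel(int value, double contrast, double brightness) {
    double adjusted = (value - 127.5) * contrast + 127.5 + brightness;
    return (int) Math.max(0, Math.min(255, Math.round(adjusted)));
}

The clamp at the end is what prevents the underflow "holes" described above.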
This wikipedia article does a good job of explaining histogram equalization for contrast enhancement.
Code for grayscale images:
#include <math.h>
#include <stdlib.h>

unsigned char* EnhanceContrast(unsigned char* data, int width, int height)
{
    int* cdf = (int*) calloc(256, sizeof(int));
    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            int val = data[width*y + x];
            cdf[val]++;
        }
    }
    int cdf_min = cdf[0];
    for(int i = 1; i < 256; i++) {
        cdf[i] += cdf[i-1];
        if(cdf[i] < cdf_min) {
            cdf_min = cdf[i];
        }
    }
    unsigned char* enhanced_data = (unsigned char*) malloc(width*height);
    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            enhanced_data[width*y + x] =
                (unsigned char) round((cdf[data[width*y + x]] - cdf_min) * 255.0 / (width*height - cdf_min));
        }
    }
    free(cdf);
    return enhanced_data;
}