what are offset pixels? - graphics

I am calculating the depth of an image. In that calculation I came across the term "pixel offset", but I don't know what a pixel offset actually is.
What are offset pixels?

Related

How to calculate overlapping area of a pixel and a line going through it?

I have the two endpoints of a line segment. I calculated the pixels to be colored using the Midpoint Line algorithm. Now I want to apply unweighted antialiasing to those pixels, which sets the intensity of a pixel based on the overlapping area of that pixel and the line.
How can I find that percentage of overlapping area?
The pixels do not have to be modeled as rectangles only.
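One pragmatic way to estimate that overlap, without committing to a particular pixel shape model, is supersampling: treat the line as a strip of some width and count how many sub-pixel sample points land inside it. A minimal sketch under those assumptions (the unit pixel grid, the strip width and the sample count are illustrative choices, not part of the midpoint algorithm):

```python
# Estimate the fraction of a unit pixel covered by a line of width `w`
# by regular-grid supersampling. The "line" here is assumed to be a
# strip of width w centred on the segment (x0, y0)-(x1, y1).
import math

def coverage(px, py, x0, y0, x1, y1, w, n=16):
    """Fraction of pixel (px, py)..(px+1, py+1) within w/2 of the segment."""
    dx, dy = x1 - x0, y1 - y0
    length2 = dx * dx + dy * dy
    inside = 0
    for i in range(n):
        for j in range(n):
            # sample point at the centre of each sub-cell
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            # closest point on the segment to the sample
            t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((sx - x0) * dx + (sy - y0) * dy) / length2))
            ex, ey = x0 + t * dx, y0 + t * dy
            if math.hypot(sx - ex, sy - ey) <= w / 2:
                inside += 1
    return inside / (n * n)
```

Raising `n` trades speed for accuracy; an analytic clipping approach would be exact but considerably more involved.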

Color brightness changes (RGB, HSV, ITU BT.709)

The ITU BT.709 standard calculates relative (apparent) luminance. The HSV space, in turn, provides the ability to change the absolute value of the brightness. The question is how to relate these values. Say the brightness of a color x in HSV space is 90%, while its relative brightness by the formula is 70%. How do I calculate the amount of change in brightness in HSV space so that the relative brightness is reduced to, say, 60%?
I could only come up with the trivial brute-force option, which is too expensive; I want to find a more efficient solution.
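One observation that avoids searching entirely: for fixed H and S, the RGB values produced from HSV scale linearly with V, so a weighted sum such as the BT.709 one scales linearly with V as well. To hit a target relative brightness you can simply scale V by target/current. A minimal sketch under that assumption (the function names are mine; strictly, BT.709 defines its coefficients for linear-light components, so applied to gamma-encoded RGB this computes luma rather than true luminance):

```python
# Because HSV -> RGB is linear in V for fixed H and S, the weighted sum
# Y = 0.2126 R + 0.7152 G + 0.0722 B also scales linearly with V.
import colorsys

def bt709_luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def set_relative_brightness(h, s, v, target):
    """Return a new V so that the BT.709 weighted sum equals `target`."""
    current = bt709_luma(*colorsys.hsv_to_rgb(h, s, v))
    new_v = v * target / current
    if new_v > 1.0:
        raise ValueError("target not reachable at this hue/saturation")
    return new_v
```

For the numbers in the question (current relative brightness 0.7, target 0.6), this scales V by 6/7, i.e. 90% becomes about 77%.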

How to generate color palette based on given image?

I would like to generate a color palette, containing at most 10 colors, based on the given image. Assume that the given picture is not bigger than 800x600 px. I've tried the following algorithm:
Generate 500 random X, Y values.
Check the colors' R, G, B values at the (X, Y) positions and put the colors into an array.
For each color, find similar colors and count how many similar colors were found. (Similar means: ±10 difference in each of R, G, B.)
Display the colors which have the most similar colors.
The result is not what I expect. Any idea how to get the appropriate colors?
As an example, I want something like this
You probably want Median Cut or K-means.
With median cut, you generate a point cloud of color samples from your source image, then divide the point cloud in half at its median along the axis with maximum variance, creating two sub-point-clouds. Recursively divide these until you have the desired number of leaf nodes. You can then generate a palette by averaging the color samples in each leaf node.
With K-means, you select k random color samples from your image. These become the first color samples in k buckets. Then, for each pixel, add its color value to the bucket whose average color is closest to that of the pixel in question; you may use Euclidean distance to determine "closeness". After all pixels have been sampled, the average colors of the k buckets are your palette.
You will get better results if you first convert your color samples to the CIELAB color space, where Euclidean distance is a better measure of perceptual distance.
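A minimal sketch of the K-means variant described above, over a plain list of (r, g, b) tuples with Euclidean distance (standard K-means iterates the assign-and-average steps until the centroids settle, rather than making a single pass; the CIELAB conversion is omitted for brevity):

```python
# Minimal K-means palette extraction in RGB. `pixels` is a list of
# (r, g, b) tuples; returns k average colors.
import random

def dist2(a, b):
    # squared Euclidean distance between two colors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_palette(pixels, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)          # k random initial colors
    for _ in range(iterations):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to the bucket with the closest center
            i = min(range(k), key=lambda i: dist2(p, centers[i]))
            buckets[i].append(p)
        # recompute each center as the average of its bucket
        centers = [
            tuple(sum(ch) / len(b) for ch in zip(*b)) if b else centers[i]
            for i, b in enumerate(buckets)
        ]
    return centers
```

On an 800x600 image you would typically run this on a random subsample of pixels (like the 500 samples already taken) rather than every pixel.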

Fiji/ImageJ - Measure tool when x and y resolution of image not equal

I have images with unequal resolution in x and y direction. The pixel height is larger than the pixel width. The pixel width and pixel height are saved in Image > Properties.
When I draw an ellipse on the image and try to use the Analyze > Measure tool I get a '0' for both major and minor axis length for the "Fit ellipse" measurement.
I can only get a proper measurement if I remove the scale of the image, or manually change the pixel height and pixel width to equal numbers.
I assume this is a bug, but maybe I'm missing something?
As a workaround, I was thinking of writing a small macro that saves the pixel height and pixel width, removes the scale of the image, measures the major and minor axis lengths in pixels, and then re-applies the pixel height and pixel width to the measurements.
But I can't find the command for reading out the resolution. Any pointers?
The documentation for the Set Measurements... command states in the description of the Fit ellipse parameter:
Note that ImageJ cannot calculate the major and minor axis lengths if Pixel Aspect Ratio in the Analyze▷Set Scale… dialog is not 1.0.
Your workaround should work, just use the getPixelSize(unit, pixelWidth, pixelHeight) macro function.

How does MinFilter=Linear work in DirectX?

The MSDN says:
D3DTEXF_LINEAR: Bilinear interpolation filtering used as a texture magnification or minification filter. A weighted average of a 2 x 2 area of texels surrounding the desired pixel is used.
Is the weight of each texel always 0.25 when MinFilter=Linear is set and the pixel is larger than the projected texel? If not, how does DX calculate the weight of each texel?
The weight is not always 0.25; the 4 texels are given appropriate weights based on the position of the sample point. In trilinear filtering (MinFilter, MagFilter and MipFilter all Linear) you get bilinear filtering for both the next larger and the next smaller mip level, and then you interpolate again with appropriate weights between those two results.
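Concretely, the four bilinear weights fall out of the sample's fractional position between the texel centres; a sketch:

```python
# Bilinear weights for a sample at fractional texel position (fx, fy),
# where (0.5, 0.5) would mean "exactly midway between the four texel
# centres". The nearer a texel, the larger its weight.
def bilinear_weights(fx, fy):
    return (
        (1 - fx) * (1 - fy),  # top-left texel
        fx * (1 - fy),        # top-right texel
        (1 - fx) * fy,        # bottom-left texel
        fx * fy,              # bottom-right texel
    )
```

The weights always sum to 1, and all four equal 0.25 only in the centred case fx = fy = 0.5.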
