I'd like to determine block means of my image using a histogram. Let's say my image is 64 by 64; I need to divide it into 4-by-4 blocks and then determine each block's mean (in other words, I will then have 4 blocks).
Using OpenCV, how can I use my IplImage to determine each block's mean using histogram bins?
The code below uses an OpenCV histogram to determine the whole-image mean:
// headers for the old C API and for cout
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>
using namespace std;

int i, hist_size = 256;
float max_value, min_value;
float min_idx, max_idx;
float bin_w;
float mean =0, low_mean =0, high_mean =0, variance =0;
float range_0[]={0,256};
float *ranges[]={range_0};
IplImage* im = cvLoadImage("killerbee.jpg");
//Create a single planed image of the same size as the original
IplImage* grayImage = cvCreateImage(cvSize(im->width,im->height),IPL_DEPTH_8U, 1);
//convert the original image to gray
cvCvtColor(im, grayImage, CV_BGR2GRAY);
/* Commented out, since we want to evaluate the whole area.
//create a rectangular area to evaluate
CvRect rect = cvRect(0, 0, 500, 600 );
//apply the rectangle to the image and establish a region of interest
cvSetImageROI(grayImage, rect);
End of comment */
//create an image to hold the histogram
IplImage* histImage = cvCreateImage(cvSize(320,200), 8, 1);
//create a histogram to store the information from the image
CvHistogram* hist = cvCreateHist(1, &hist_size, CV_HIST_ARRAY, ranges, 1);
//calculate the histogram and apply to hist
cvCalcHist( &grayImage, hist, 0, NULL );
//grab the min and max values and their indices
cvGetMinMaxHistValue( hist, &min_value, &max_value, 0, 0);
//scale the bin values so that they will fit in the image representation
cvScale( hist->bins, hist->bins, ((double)histImage->height)/max_value, 0 );
//set all histogram values to 255
cvSet( histImage, cvScalarAll(255), 0 );
//create a factor for scaling along the width
bin_w = cvRound((double)histImage->width/hist_size);
for( i = 0; i < hist_size; i++ ) {
//draw the histogram data onto the histogram image
cvRectangle( histImage,
             cvPoint(i*bin_w, histImage->height),
             cvPoint((i+1)*bin_w, histImage->height - cvRound(cvGetReal1D(hist->bins,i))),
             cvScalarAll(0), -1, 8, 0 );
//get the value at the current histogram bucket
float* bins = cvGetHistValue_1D(hist,i);
//increment the mean value
mean += bins[0];
}
//finish mean calculation
mean /= hist_size;
//display mean value onto output window
cout<<"MEAN VALUE of THIS IMAGE : "<<mean<<"\n";
//go back through now that mean has been calculated in order to calculate variance
for( i = 0; i < hist_size; i++ ) {
float* bins = cvGetHistValue_1D(hist,i);
variance += pow((bins[0] - mean),2);
}
//finish variance calculation
variance /= hist_size;
cvNamedWindow("Original", 0);
cvShowImage("Original", im );
cvNamedWindow("Gray", 0);
cvShowImage("Gray", grayImage );
cvNamedWindow("Histogram", 0);
cvShowImage("Histogram", histImage );
//hold the images until a key is pressed
cvWaitKey(0);
//clean up images
cvReleaseImage(&histImage);
cvReleaseImage(&grayImage);
cvReleaseImage(&im);
//remove windows
cvDestroyWindow("Original");
cvDestroyWindow("Gray");
cvDestroyWindow("Histogram");
Thanks in advance.
You can do that with histograms, but a much more effective way to do it is an integral image, which does almost exactly what you want.
Read here http://en.wikipedia.org/wiki/Summed_area_table and then use it to calculate the sum of all the pixels in every block. Then divide by the number of pixels in each block (4x4=16). Isn't it nice?
OpenCV has a function to calculate the integral image, with the difficult name cv::integral().
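For illustration, here is a rough sketch of that idea with the OpenCV C++ API (cv::integral is real; loading as grayscale, the killerbee.jpg file name from the question, and the 4x4 block size are just the assumptions of this example):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // load the image as grayscale (file name taken from the question)
    cv::Mat gray = cv::imread("killerbee.jpg", cv::IMREAD_GRAYSCALE);

    // integral image: sums.at<int>(y, x) is the sum of all pixels above and to the left of (x, y)
    cv::Mat sums;
    cv::integral(gray, sums, CV_32S);   // sums is (rows+1) x (cols+1)

    const int block = 4;                // 4x4 pixel blocks, as in the question
    for (int y = 0; y + block <= gray.rows; y += block)
    {
        for (int x = 0; x + block <= gray.cols; x += block)
        {
            // sum over one block via four lookups in the integral image
            int sum = sums.at<int>(y + block, x + block)
                    - sums.at<int>(y,         x + block)
                    - sums.at<int>(y + block, x)
                    + sums.at<int>(y,         x);
            double mean = sum / double(block * block);
            std::cout << "block (" << x << "," << y << ") mean = " << mean << "\n";
        }
    }
    return 0;
}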
And an even easier way to do it is the humble resize().
Call resize(image64_64, image_16_16, Size(16, 16), 0, 0, INTER_AREA), and the result will be a smaller image whose pixels have exactly the values you're looking for. Isn't it great?
Just do not forget the INTER_AREA flag. It determines the correct algorithm to be used.
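A minimal sketch of that approach (C++ API; the 64x64 input and 4x4 blocks are taken from the question, so the 16x16 result holds one mean per block):
#include <opencv2/opencv.hpp>

// image64_64 is assumed to be a 64x64 single-channel image
cv::Mat blockMeans(const cv::Mat& image64_64)
{
    cv::Mat image16_16;
    // INTER_AREA averages the source pixels covered by each destination pixel,
    // so every pixel of the 16x16 result is the mean of one 4x4 block
    cv::resize(image64_64, image16_16, cv::Size(16, 16), 0, 0, cv::INTER_AREA);
    return image16_16;
}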
I am writing a spatial shader in godot to pixelate an object.
Previously, I tried to draw outside of an object; however, that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (i.e. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel as per the lower-left pixel of the cell (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code for people asking for code of the likely non-existent functionality that I am seeking:
int cell_size = 3;
fragment {
    int erase_pixel = 0;
    // origin of the "cell" this fragment belongs to
    vec2 cell_origin = FRAGCOORD.xy - mod(FRAGCOORD.xy, float(cell_size));
    // check every real pixel of the cell to see if it is part of the object being drawn
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(cell_origin + vec2(float(x), float(y))) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = erase_pixel;
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale;
where scale is a float or vec2 containing the pixel size. Finally, replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded), you should have a quick-and-dirty pixelator. I say "quick-and-dirty" because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
//scale SCREEN_UV up to the size of the viewport over the pixelation scale
//assure scale is a multiple of 2 to avoid artefacts
vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
vec2 pixelated = SCREEN_UV * pixel_scale;
//truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
//scale the pixelated uvs back down to the screen
pixelated /= pixel_scale;
vec4 albedo_tex = texture(SCREEN_TEXTURE,pixelated);
ALBEDO = albedo_tex.rgb;
ALPHA = 1.0f;
float depth = texture(DEPTH_TEXTURE, pixelated).r;
if (depth > 10000.0f)
{
ALPHA = 0.0f;
}
}
I have a 3D vascular free-hand ultrasound volume containing one vessel, and I am trying to reconstruct the surface of the vessel. The 3D volume is constructed from a stack of 2D images/B-scans, and the contour of the vessel in each B-scan has been segmented; that is, I have an ellipse representing the contour of the vessel in each B-scan in the volume. I have tried to reconstruct the contour of the vessel by following the VTK example of 'GenerateModelsFromLabels.cxx' (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Medical/GenerateModelsFromLabels). However, the result is not a smooth surface from one frame to another as I would have hoped for it to be. It is discontinuous and irregular, and the surface doesn't connect the vessel contours between two adjacent frames in the volume if the displacement between the ellipses is large. In my approach, I basically used DiscreteMarchingCubes -> WindowedSincPolyDataFilter -> GeometryFilter.
I played around with the passband, smoothingIterations and featureAngle parameters, and I was able to obtain the best following result:
As you can see, it is not a smooth continuous surface, and there are a lot of uninterpolated "holes" between adjacent frames, but it is all right. Can it be made better? I also tried using a 3D Delaunay triangulation, but it only gave me the convex hull, which is not the output I expected. I would like to know if there is a better approach to reconstructing a surface that closely follows the contour of the vessel from one B-scan to the next in a volume.
A minimal working example is shown below:
vtkSmartPointer<vtkImageData> vesselVolume =
vtkSmartPointer<vtkImageData>::New();
int totalImages = 210;
for (int z = 0; z < totalImages; z++)
{
std::string strFile = "E:/datasets/vasc/rendering/contour/" + std::to_string(z + 1) + ".png";
cv::Mat im = cv::imread(strFile, CV_LOAD_IMAGE_GRAYSCALE);
if (z == 0)
{
// the extent is inclusive, so the last valid index is size - 1
vesselVolume->SetExtent(0, im.cols - 1, 0, im.rows - 1, 0, totalImages - 1);
vesselVolume->SetSpacing(1, 1, 1);
vesselVolume->SetOrigin(0, 0, 0);
// one unsigned char component per voxel; AllocateScalars does not zero the memory
vesselVolume->AllocateScalars(VTK_UNSIGNED_CHAR, 1);
memset(vesselVolume->GetScalarPointer(), 0, im.cols * im.rows * totalImages);
}
std::vector<cv::Point2i> locations; // output, locations of non-zero pixels
cv::findNonZero(im, locations);
for (int nzi = 0; nzi < locations.size(); nzi++)
{
unsigned char* pixel = static_cast<unsigned char*>(vesselVolume->GetScalarPointer(locations[nzi].x, locations[nzi].y, z));
pixel[0] = 255;
}
}
vtkSmartPointer<vtkDiscreteMarchingCubes> discreteCubes =
vtkSmartPointer<vtkDiscreteMarchingCubes>::New();
discreteCubes->SetInputData(vesselVolume);
discreteCubes->GenerateValues(1, 255, 255);
discreteCubes->ComputeNormalsOn();
vtkSmartPointer<vtkWindowedSincPolyDataFilter> smoother =
vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
unsigned int smoothingIterations = 10;
double passBand = 2;
double featureAngle = 360.0;
smoother->SetInputConnection(discreteCubes->GetOutputPort());
smoother->SetNumberOfIterations(smoothingIterations);
smoother->BoundarySmoothingOff();
//smoother->FeatureEdgeSmoothingOff();
smoother->FeatureEdgeSmoothingOn();
smoother->SetFeatureAngle(featureAngle);
smoother->SetPassBand(passBand);
smoother->NonManifoldSmoothingOn();
smoother->BoundarySmoothingOn();
smoother->NormalizeCoordinatesOn();
smoother->Update();
vtkSmartPointer<vtkThreshold> selector =
vtkSmartPointer<vtkThreshold>::New();
selector->SetInputConnection(smoother->GetOutputPort());
selector->SetInputArrayToProcess(0, 0, 0,
vtkDataObject::FIELD_ASSOCIATION_CELLS,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkMaskFields> scalarsOff =
vtkSmartPointer<vtkMaskFields>::New();
// Strip the scalars from the output
scalarsOff->SetInputConnection(selector->GetOutputPort());
scalarsOff->CopyAttributeOff(vtkMaskFields::POINT_DATA,
vtkDataSetAttributes::SCALARS);
scalarsOff->CopyAttributeOff(vtkMaskFields::CELL_DATA,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkGeometryFilter> geometry =
vtkSmartPointer<vtkGeometryFilter>::New();
geometry->SetInputConnection(scalarsOff->GetOutputPort());
geometry->Update();
vtkSmartPointer<vtkPolyDataMapper> mapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(geometry->GetOutputPort());
mapper->ScalarVisibilityOff();
mapper->Update();
vtkSmartPointer<vtkRenderWindow> renderWindow =
vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
vtkSmartPointer<vtkRenderer> renderer =
vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(renderer);
renderer->SetBackground(.2, .3, .4);
vtkSmartPointer<vtkActor> actor =
vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
renderer->AddActor(actor);
renderer->ResetCamera();
renderWindow->Render();
renderWindowInteractor->Start();
Assuming that your problem is hand motion between slices, one possible way to improve your result is to apply slice-to-slice registration. It should be easy to try using ImageJ. Use the transforms between slices to also transform your labeled images, then run the transformed label images through your current pipeline.
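If you would rather stay in the OpenCV/VTK pipeline you already have instead of ImageJ, here is a rough sketch of translation-only slice-to-slice registration (cv::phaseCorrelate and cv::warpAffine are real OpenCV functions; restricting the motion model to pure translation, the sign handling of the accumulated drift, and registering on the contour images themselves are assumptions of this sketch):
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate a pure-translation offset between consecutive slices and
// resample every label image into the coordinate frame of slice 0.
std::vector<cv::Mat> registerSlices(const std::vector<cv::Mat>& slices)
{
    std::vector<cv::Mat> registered;
    registered.push_back(slices[0].clone());

    cv::Point2d accumulated(0.0, 0.0);  // total drift relative to slice 0
    for (size_t z = 1; z < slices.size(); ++z)
    {
        cv::Mat prev, curr;
        slices[z - 1].convertTo(prev, CV_32F);
        slices[z].convertTo(curr, CV_32F);

        // translation between consecutive slices (check the sign convention on your data)
        cv::Point2d shift = cv::phaseCorrelate(prev, curr);
        accumulated += shift;

        // warp the label image back by the accumulated drift
        cv::Mat warp = (cv::Mat_<double>(2, 3) << 1, 0, -accumulated.x,
                                                  0, 1, -accumulated.y);
        cv::Mat aligned;
        cv::warpAffine(slices[z], aligned, warp, slices[z].size(),
                       cv::INTER_NEAREST);  // nearest neighbour keeps labels crisp
        registered.push_back(aligned);
    }
    return registered;
}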
I am trying to color my polygons in JOGL. I have stored the vertices in an array, an index array for the triangle order, and a color array. The code is as follows, but the problem I am facing is that the triangles are white, not the color from the color buffer.
float f[] = {1000,2000,-4000,-2000,-2000,-4000,2000,-2000,-4000,1000,-4000,-4000};
FloatBuffer buffer = GLBuffers.newDirectFloatBuffer(12);
this.coordCount = 12;
buffer.put(f);
buffer.rewind();
int indx[] = {0,1,2,1,3,2};
IntBuffer indxBuffer = GLBuffers.newDirectIntBuffer(6); // total number of indices
this.indexCount = 6;
indxBuffer.put(indx);
indxBuffer.rewind();
float color[] = {1,0,1,0,0,0,0,0,0,1,0,0};
FloatBuffer colorBuffer = GLBuffers.newDirectFloatBuffer(12);
colorBuffer.put(color);
colorBuffer.rewind();
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glEnableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glFrontFace(GL.GL_CCW);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, buffer);
gl.glColorPointer(3, GL.GL_FLOAT, 0, colorBuffer);
gl.glDrawElements(GL.GL_TRIANGLES, this.indexCount, GL.GL_UNSIGNED_INT, indxBuffer);
gl.glDisableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glDisableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glEnable(GL.GL_TEXTURE_2D);
I am doing this rendering on the NASA World Wind globe, but I don't think that should cause any problems. Can someone help me figure out the problem? I have been stuck on this for a while.
Thanks,
Got the solution. I just had to enable color material:
gl.glEnable(GL2.GL_COLOR_MATERIAL);
gl.glColorMaterial(GL2.GL_FRONT_AND_BACK, GL2.GL_DIFFUSE);
I am trying to use:
cv::Mat source;
const int histSize[] = {intialframes, initialWidth, initialHeight};
source.create(3, histSize, CV_8U);
for saving multiple images in one matrix. However, when I do so, it gives me dims = 3 and -1 in rows and cols.
Is this correct?
If not, what is the bug in it?
If yes, how can I access my images one by one?
Reading the documentation of the class cv::Mat, you can see that cv::Mat.rows and cv::Mat.cols hold the number of rows and columns when the matrix is a 2D array, and -1 otherwise.
With source.create(3, histSize, CV_8U); you are creating a 3D array, so rows and cols being -1 is expected.
The cv::Mat documentation also explains how to access the elements.
With the create method the matrix is continuous and organized in a plane-by-plane fashion.
EDIT
The first part of the documentation, right after the class definition code, tells you how to access each element of the matrix using its step[] parameter:
If you want to access pixel (u, v) of image i, you need to get a pointer to the data and use pointer arithmetic to reach the desired pixel:
int sizes[] = { 10, 200, 100 };
cv::Mat M(3, sizes, CV_8UC1);
//get a pointer to the pixel
uchar *px = M.data + M.step[0] * i + M.step[1] * u + M.step[2] * v;
//get the pixel intensity
uchar intensity = *px;
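If you prefer to work with whole images rather than single pixels, you can also wrap plane i as an ordinary 2D cv::Mat header over the same data (a sketch reusing the sizes from the snippet above; no data is copied, and it relies on the matrix being continuous, which is the case after create):
int sizes[] = { 10, 200, 100 };
cv::Mat M(3, sizes, CV_8UC1);

int i = 3; // the image/plane you want to access
// rows = sizes[1], cols = sizes[2]; M.step[0] is the size of one plane in bytes
cv::Mat plane(sizes[1], sizes[2], CV_8UC1, M.data + M.step[0] * i);
// 'plane' now behaves like a normal 200x100 grayscale image
uchar intensity = plane.at<uchar>(20, 30); // pixel (u = 20, v = 30) of image i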
I need to process the first "Original" image to get something similar to the second "Enhanced" one. I applied some naive calculations, and the new image has more contrast and stronger colors, but in the regions with higher color values a color hole appears. I have no idea about image processing; it would be great if you could suggest which concepts and/or algorithms I could apply to get the result without this problem.
Convert the image to the HSB (Hue, Saturation, Brightness) color space.
Multiply the saturation by some amount. Use a cutoff value if your platform requires it.
Example in Mathematica:
satMult = 4; (*saturation multiplier *)
imgHSB = ColorConvert[Import["http://i.imgur.com/8XkxR.jpg"], "HSB"];
cs = ColorSeparate[imgHSB]; (* separate in H, S and B*)
newSat = Image[ImageData[cs[[2]]] * satMult]; (* cs[[2]] is the saturation*)
ColorCombine[{cs[[1]], newSat, cs[[3]]}, "HSB"] (* rebuild the image *)
A table of results with increasing saturation values:
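If you are working in C++/OpenCV rather than Mathematica, a rough equivalent sketch would be the following (HSV plays the role of HSB here; convertTo clips the scaled saturation at 255 via saturate_cast, which acts as the cutoff value mentioned above):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat boostSaturation(const cv::Mat& bgr, double satMult = 4.0)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);                            // channels[1] is the saturation
    channels[1].convertTo(channels[1], -1, satMult, 0);  // scale and clip at 255

    cv::merge(channels, hsv);
    cv::Mat out;
    cv::cvtColor(hsv, out, cv::COLOR_HSV2BGR);
    return out;
}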
The "holes" that you see in the processed picture are the darker areas of the original picture, which went to negative values with your darkening algorithm. I suspect these out of range values are then written to the new image as positive numbers, so they end up in the higher part of the brightness scale. For example, let's say a pixel value is 10, and you are substracting 12 from all pixels to darken them a bit. This pixel will underflow and become -2. When you write it back to the file, -2 gets represented as 0xfe in hex, and this is 254 if you take it as an unsigned number.
You should use an algorithm that keeps the pixel values within the valid range, or at least you should "clamp" the values to the valid range. A typical clamp function defined as a C macro would be:
#define clamp(p) ((p) < 0 ? 0 : ((p) > 255 ? 255 : (p)))
If you add the above macro to your processing function it will take care of the "holes", but instead you will now have dark colors in those places.
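For instance, a darkening loop of the kind described above might use it like this (a sketch; the data/width/height parameters and the offset of 12 are placeholders for whatever your actual processing does):
#define clamp(p) ((p) < 0 ? 0 : ((p) > 255 ? 255 : (p)))

void darken(unsigned char* data, int width, int height, int offset)
{
    for (int i = 0; i < width * height; i++) {
        int p = (int)data[i] - offset;      // may go negative for dark pixels
        data[i] = (unsigned char)clamp(p);  // clamp instead of wrapping around
    }
}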
If you are ready for something a bit more advanced, here on Wikipedia they have the brightness and contrast formulas that are used by GIMP. These will do a pretty good job with your image if you choose the proper coefficients.
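I won't reproduce the GIMP formulas here, but as an illustration of a gain/bias adjustment that never leaves the valid range, OpenCV's convertTo can do the whole thing (alpha acts as a contrast-like gain, beta as a brightness-like offset; the clamping happens internally via saturate_cast):
#include <opencv2/opencv.hpp>

cv::Mat adjustBrightnessContrast(const cv::Mat& src, double alpha, double beta)
{
    // dst(x, y) = saturate_cast<uchar>(alpha * src(x, y) + beta),
    // so out-of-range values are clamped rather than wrapping around
    cv::Mat dst;
    src.convertTo(dst, -1, alpha, beta);
    return dst;
}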
This wikipedia article does a good job of explaining histogram equalization for contrast enhancement.
Code for grayscale images:
#include <stdlib.h>
#include <math.h>

unsigned char* EnhanceContrast(unsigned char* data, int width, int height)
{
// histogram of the input image, turned into a CDF in place below
int* cdf = (int*) calloc(256, sizeof(int));
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
int val = data[width*y + x];
cdf[val]++;
}
}
// turn the histogram into a cumulative distribution function (CDF)
for(int i = 1; i < 256; i++) {
cdf[i] += cdf[i-1];
}
// cdf_min is the smallest non-zero value of the CDF (see the Wikipedia article)
int cdf_min = 0;
for(int i = 0; i < 256; i++) {
if(cdf[i] > 0) {
cdf_min = cdf[i];
break;
}
}
unsigned char* enhanced_data = (unsigned char*) malloc(width*height);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
enhanced_data[width*y + x] = (unsigned char) round((cdf[data[width*y + x]] - cdf_min) * 255.0 / (width*height - cdf_min));
}
}
free(cdf);
return enhanced_data;
}
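For completeness, a minimal sketch of how the function above could be driven with OpenCV (the file names are placeholders; loading as grayscale keeps the single-channel assumption, and the cv::Mat wrapper around the returned buffer is only there for saving the result):
#include <opencv2/opencv.hpp>
#include <cstdlib>

int main()
{
    cv::Mat gray = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    unsigned char* out = EnhanceContrast(gray.data, gray.cols, gray.rows);

    // wrap the raw buffer (no copy) so it can be saved, then free it
    cv::Mat enhanced(gray.rows, gray.cols, CV_8UC1, out);
    cv::imwrite("enhanced.jpg", enhanced);
    free(out);
    return 0;
}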