I have a 3D vascular free-hand ultrasound volume containing one vessel, and I am trying to reconstruct the surface of the vessel. The 3D volume is constructed from a stack of 2D images/B-scans, and the contour of the vessel in each B-scan has been segmented; that is, I have an ellipse representing the contour of the vessel in each B-scan in the volume. I have tried to reconstruct the contour of the vessel by following the VTK example of 'GenerateModelsFromLabels.cxx' (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Medical/GenerateModelsFromLabels). However, the result is not a smooth surface from one frame to another as I would have hoped for it to be. It is discontinuous and irregular, and the surface doesn't connect the vessel contours between two adjacent frames in the volume if the displacement between the ellipses is large. In my approach, I basically used DiscreteMarchingCubes -> WindowedSincPolyDataFilter -> GeometryFilter.
I played around with the passBand, smoothingIterations and featureAngle parameters, and the following is the best result I was able to obtain:
As you can see, it is not a smooth, continuous surface and there are a lot of uninterpolated "holes" between adjacent frames, but it is all right. Can it be made better? I also tried using a 3D Delaunay triangulation, but it only gave me the convex hull, which is not the output I expected. I would like to know if there is a better approach to reconstructing a surface that closely follows the contour of the vessel from one B-scan to the next in the volume.
A minimal working example is shown below:
vtkSmartPointer<vtkImageData> vesselVolume =
vtkSmartPointer<vtkImageData>::New();
int totalImages = 210;
for (int z = 0; z < totalImages; z++)
{
std::string strFile = "E:/datasets/vasc/rendering/contour/" + std::to_string(z + 1) + ".png";
cv::Mat im = cv::imread(strFile, CV_LOAD_IMAGE_GRAYSCALE);
if (z == 0)
{
vesselVolume->SetExtent(0, im.cols - 1, 0, im.rows - 1, 0, totalImages - 1); // extents are inclusive
vesselVolume->SetSpacing(1, 1, 1);
vesselVolume->SetOrigin(0, 0, 0);
vesselVolume->AllocateScalars(VTK_UNSIGNED_CHAR, 1); // one scalar component per voxel
// AllocateScalars does not zero the memory, so clear the background explicitly
std::memset(vesselVolume->GetScalarPointer(), 0, vesselVolume->GetNumberOfPoints());
}
std::vector<cv::Point2i> locations; // output, locations of non-zero pixels
cv::findNonZero(im, locations);
for (int nzi = 0; nzi < locations.size(); nzi++)
{
unsigned char* pixel = static_cast<unsigned char*>(vesselVolume->GetScalarPointer(locations[nzi].x, locations[nzi].y, z));
pixel[0] = 255;
}
}
vtkSmartPointer<vtkDiscreteMarchingCubes> discreteCubes =
vtkSmartPointer<vtkDiscreteMarchingCubes>::New();
discreteCubes->SetInputData(vesselVolume);
discreteCubes->GenerateValues(1, 255, 255);
discreteCubes->ComputeNormalsOn();
vtkSmartPointer<vtkWindowedSincPolyDataFilter> smoother =
vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
unsigned int smoothingIterations = 10;
double passBand = 2;
double featureAngle = 360.0;
smoother->SetInputConnection(discreteCubes->GetOutputPort());
smoother->SetNumberOfIterations(smoothingIterations);
smoother->BoundarySmoothingOff();
//smoother->FeatureEdgeSmoothingOff();
smoother->FeatureEdgeSmoothingOn();
smoother->SetFeatureAngle(featureAngle);
smoother->SetPassBand(passBand);
smoother->NonManifoldSmoothingOn();
smoother->BoundarySmoothingOn();
smoother->NormalizeCoordinatesOn();
smoother->Update();
vtkSmartPointer<vtkThreshold> selector =
vtkSmartPointer<vtkThreshold>::New();
selector->SetInputConnection(smoother->GetOutputPort());
selector->SetInputArrayToProcess(0, 0, 0,
vtkDataObject::FIELD_ASSOCIATION_CELLS,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkMaskFields> scalarsOff =
vtkSmartPointer<vtkMaskFields>::New();
// Strip the scalars from the output
scalarsOff->SetInputConnection(selector->GetOutputPort());
scalarsOff->CopyAttributeOff(vtkMaskFields::POINT_DATA,
vtkDataSetAttributes::SCALARS);
scalarsOff->CopyAttributeOff(vtkMaskFields::CELL_DATA,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkGeometryFilter> geometry =
vtkSmartPointer<vtkGeometryFilter>::New();
geometry->SetInputConnection(scalarsOff->GetOutputPort());
geometry->Update();
vtkSmartPointer<vtkPolyDataMapper> mapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(geometry->GetOutputPort());
mapper->ScalarVisibilityOff();
mapper->Update();
vtkSmartPointer<vtkRenderWindow> renderWindow =
vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
vtkSmartPointer<vtkRenderer> renderer =
vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(renderer);
renderer->SetBackground(.2, .3, .4);
vtkSmartPointer<vtkActor> actor =
vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
renderer->AddActor(actor);
renderer->ResetCamera();
renderWindow->Render();
renderWindowInteractor->Start();
Assuming that your problem is hand shake between slices, one possible way to improve your result is to apply slice-to-slice registration. It should be easy to try using ImageJ. Use the transforms between slices to also transform your labeled images, then run the transformed label images through your current pipeline.
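If you would rather prototype that registration in code than in ImageJ, below is a rough OpenCV sketch of the same idea, assuming a pure in-plane translation between consecutive B-scans; the file paths and the translation-only assumption are illustrative, not taken from the question.
// Translation-only slice-to-slice registration via phase correlation (a sketch).
// The recovered drift is undone on the segmented contour image before it is
// voxelized into vesselVolume by the existing pipeline.
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    int totalImages = 210;
    cv::Mat prevGray;
    cv::Point2d accumulated(0.0, 0.0); // accumulated drift with respect to slice 0
    for (int z = 0; z < totalImages; z++)
    {
        // Hypothetical paths: the grayscale B-scan and its segmented contour image.
        cv::Mat gray  = cv::imread("bscan/"   + std::to_string(z + 1) + ".png", cv::IMREAD_GRAYSCALE);
        cv::Mat label = cv::imread("contour/" + std::to_string(z + 1) + ".png", cv::IMREAD_GRAYSCALE);
        cv::Mat gray64;
        gray.convertTo(gray64, CV_64FC1); // phaseCorrelate needs floating-point input
        if (!prevGray.empty())
        {
            // Estimated in-plane shift between consecutive slices
            // (verify the sign convention on your own data).
            accumulated += cv::phaseCorrelate(prevGray, gray64);
        }
        prevGray = gray64;
        // Undo the accumulated drift on the label image before voxelizing it.
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, -accumulated.x,
                                               0, 1, -accumulated.y);
        cv::Mat alignedLabel;
        cv::warpAffine(label, alignedLabel, M, label.size(), cv::INTER_NEAREST); // nearest keeps labels binary
        // ... feed alignedLabel into the existing vesselVolume pipeline instead of the raw contour image ...
    }
    return 0;
}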
I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of the object, however that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (ie. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel as per the lower-left pixel (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code for people asking for code of the likely non-existent functionality that I am seeking:
int cell_size = 3;
fragment {
// snap to the lower-left corner of the current cell
vec2 cell_origin = FRAGCOORD.xy - mod(FRAGCOORD.xy, float(cell_size));
int erase_pixel = 0;
// check every real pixel of the cell to see if it is part of the object being drawn
for (int y = 0; y < cell_size; y++) {
for (int x = 0; x < cell_size; x++) {
if (uv_in_model(cell_origin + vec2(float(x), float(y))) == false) {
erase_pixel = 1;
}
}
}
albedo.a = float(erase_pixel);
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
//scale SCREEN_UV up to the size of the viewport over the pixelation scale
//assure scale is a multiple of 2 to avoid artefacts
vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
vec2 pixelated = SCREEN_UV * pixel_scale;
//truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
//scale the pixelated uvs back down to the screen
pixelated /= pixel_scale;
vec4 albedo_tex = texture(SCREEN_TEXTURE,pixelated);
ALBEDO = albedo_tex.rgb;
ALPHA = 1.0f;
float depth = texture(DEPTH_TEXTURE, pixelated).r;
if (depth > 10000.0f)
{
ALPHA = 0.0f;
}
}
I am trying to color my polygons in JOGL. I have stored the vertices in an array, an index array for the triangle order and a color array. The code is as follows, but the problem I am facing is that the triangles are white, not the colors from the color buffer.
float f[] = {1000,2000,-4000,-2000,-2000,-4000,2000,-2000,-4000,1000,-4000,-4000};
FloatBuffer buffer = GLBuffers.newDirectFloatBuffer(12);
this.coordCount = 12;
buffer.put(f);
buffer.rewind();
int indx[] = {0,1,2,1,3,2};
IntBuffer indxBuffer = GLBuffers.newDirectIntBuffer(6); // total number of indices
this.indexCount = 6;
indxBuffer.put(indx);
indxBuffer.rewind();
float color[] = {1,0,1,0,0,0,0,0,0,1,0,0};
FloatBuffer colorBuffer = GLBuffers.newDirectFloatBuffer(12);
colorBuffer.put(color);
colorBuffer.rewind();
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glEnableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glFrontFace(GL.GL_CCW);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, buffer);
gl.glColorPointer(3, GL.GL_FLOAT, 0, colorBuffer);
gl.glDrawElements(GL.GL_TRIANGLES, this.indexCount, GL.GL_UNSIGNED_INT, indxBuffer);
gl.glDisableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glDisableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glEnable(GL.GL_TEXTURE_2D);
I am doing this rendering on the NASA World Wind globe, but I don't think that should cause any problems. Can someone help me figure out the problem? I have been stuck on this for a while.
Thanks,
Got the solution. Just had to enable color material:
gl.glEnable(GL2.GL_COLOR_MATERIAL);
gl.glColorMaterial(GL2.GL_FRONT_AND_BACK, GL2.GL_DIFFUSE);
I'm trying to get the screen position of a vertex, in pixels, inside a vertex shader.
I saw some other posts here but I can't find an answer that works for me.
This is what I've got in my vertex shader:
#version 400
layout (location = 0) in vec3 inPosition;
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
// vertex in screen space
vec2 fake_frag_coord = (MVP * vec4(inPosition,1.0)).xy;
float X = (fake_frag_coord.x*window.x/2.0) + window.x;
float Y = (fake_frag_coord.y*window.y/2.0) + window.y;
}
It's not working very well, and I know it's a strange thing to do inside a vertex shader, but I want to multiply my vertex offset by a 2D texture, so I need to find the pixel the vertex is on top of, to be able to multiply it by the pixel of the texture.
thanks!
Luiz
I have corrected your vertex shader with proper terms, and shown you the exact sequence of transformations that actually happens when GL computes gl_FragCoord (window-space).
#version 400
layout (location = 0) in vec4 inPosition; // Always use vec4, it makes life easier!
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
// Vertex in clip-space
vec4 fake_frag_coord = (MVP * inPosition); // Range: [-w,w]^4
// Vertex in NDC-space
fake_frag_coord.xyz /= fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.w = 1.0 / fake_frag_coord.w; // Invert W
// Vertex in window-space
fake_frag_coord.xyz = fake_frag_coord.xyz * vec3 (0.5) + vec3 (0.5); // Rescale: [0,1]^3
fake_frag_coord.xy *= window; // Scale and Bias for Viewport
// Assume depth range: [0,1] --> No need to adjust fake_frag_coord.z
[...]
}
Texture coordinates and window-space coordinates are very different things, however. Generally you need normalized coordinates for traditional texture fetches, that means you want the coordinates in the range [0,1].
Luckily window-space and texture-space share the same origin convention (0,0) = bottom-left, so you can cut out the line below to get the appropriate texture coordinates:
fake_frag_coord.xy *= window; // Scale and Bias for Viewport
I think Andon M. Coleman's answer is fine. However, I like to point out a more general issue with the approach discussed in the question: there might be no meaningful screen space position for a vertex at all.
The vertex might lie outside the viewing frustum. This will not be a problem if the vertices you draw are guaranteed to lie in the frustum, or if you are drawing only points.
But it will fail if you have primitives intersecting the near plane. You might think that in such a case you just get some coordinates which are outside [-1,1] in NDC space, and that if you just use them to assign some output value for the vertex, the clipping stage will make it right. But that assumption is wrong. You might get values which are perfectly in [-1,1] in NDC space even for vertices which are outside the frustum, and vertices which actually lie behind the camera will appear as if they lie in front of it. No subsequent clipping stage is able to fix this.
The only way to get this right would be to actually carry out the clipping operation, before doing the divide by w. And this is something you don't want to do in a vertex shader.
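To make the failure mode concrete, here is a small worked example (mine, not from the original discussion). Suppose a vertex behind the camera ends up with clip-space coordinates
$$(x_c,\ y_c,\ z_c,\ w_c) = (-0.2,\ -0.3,\ 1.5,\ -1.0).$$
With the usual perspective projection, $w_c < 0$ already tells you the vertex is behind the camera, yet the perspective divide gives NDC $x$ and $y$ of $(0.2,\ 0.3)$, comfortably inside $[-1, 1]$, so the hand-computed window position looks perfectly valid. The real clipping stage operates on gl_Position in clip space, not on this per-vertex value, so nothing downstream will catch it.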
If you want to get this working on the js part of things, this is how I adapted Andon M. Coleman's reply:
var winW = window.innerWidth;
var winH = window.innerHeight;
camera.updateProjectionMatrix();
// Not sure about the order of these! I was using an orthographic camera so it didn't matter, but double-check the order if it doesn't work!
var MVP = camera.projectionMatrix.clone().multiply(camera.matrixWorldInverse); // clone() so the camera's own projection matrix is not modified
// position to vertex clip-space
var fake_frag_coord = position.applyMatrix4(MVP); // Range: [-w,w]^4
// vertex to NDC-space
fake_frag_coord.x = fake_frag_coord.x / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.y = fake_frag_coord.y / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.z = fake_frag_coord.z / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.w = 1.0 / fake_frag_coord.w; // Invert W
// Vertex in window-space
fake_frag_coord.x = fake_frag_coord.x * 0.5;
fake_frag_coord.y = fake_frag_coord.y * 0.5;
fake_frag_coord.z = fake_frag_coord.z * 0.5;
fake_frag_coord.x = fake_frag_coord.x + 0.5;
fake_frag_coord.y = fake_frag_coord.y + 0.5;
fake_frag_coord.z = fake_frag_coord.z + 0.5;
// Scale and Bias for Viewport (We want the window coordinates, so no need for this)
fake_frag_coord.x = fake_frag_coord.x / winW;
fake_frag_coord.y = fake_frag_coord.y / winH;
I'd like to determine the block means of my image using a histogram. Let's say my image has dimensions 64 by 64; I need to divide it into 4 by 4 blocks and then determine each block's mean (in other words, I will now have 4 blocks).
Using OpenCV, how can I utilize my IplImage to determine the block means using histogram bins?
The code below uses an OpenCV histogram to determine the whole-image mean:
int i, hist_size = 256;
float max_value,min_value;
float min_idx,max_idx;
float bin_w;
float mean =0, low_mean =0, high_mean =0, variance =0;
float range_0[]={0,256};
float *ranges[]={range_0};
IplImage* im = cvLoadImage("killerbee.jpg");
//Create a single planed image of the same size as the original
IplImage* grayImage = cvCreateImage(cvSize(im->width,im->height),IPL_DEPTH_8U, 1);
//convert the original image to gray
cvCvtColor(im, grayImage, CV_BGR2GRAY);
/* Commented out, since we want to evaluate the whole area.
//create a rectangular area to evaluate
CvRect rect = cvRect(0, 0, 500, 600 );
//apply the rectangle to the image and establish a region of interest
cvSetImageROI(grayImage, rect);
End remark*/
//create an image to hold the histogram
IplImage* histImage = cvCreateImage(cvSize(320,200), 8, 1);
//create a histogram to store the information from the image
CvHistogram* hist = cvCreateHist(1, &hist_size, CV_HIST_ARRAY, ranges, 1);
//calculate the histogram and apply to hist
cvCalcHist( &grayImage, hist, 0, NULL );
//grab the min and max values and their indices
cvGetMinMaxHistValue( hist, &min_value, &max_value, 0, 0);
//scale the bin values so that they will fit in the image representation
cvScale( hist->bins, hist->bins, ((double)histImage->height)/max_value, 0 );
//set all histogram values to 255
cvSet( histImage, cvScalarAll(255), 0 );
//create a factor for scaling along the width
bin_w = cvRound((double)histImage->width/hist_size);
for( i = 0; i < hist_size; i++ ) {
//draw the histogram data onto the histogram image
cvRectangle( histImage, cvPoint(i*bin_w, histImage->height),cvPoint((i+1)*bin_w,histImage->height - cvRound(cvGetReal1D(hist->bins,i))),cvScalarAll(0), -1, 8, 0 );
//get the value at the current histogram bucket
float* bins = cvGetHistValue_1D(hist,i);
//increment the mean value
mean += bins[0];
}
//finish mean calculation
mean /= hist_size;
//display mean value onto output window
cout<<"MEAN VALUE of THIS IMAGE : "<<mean<<"\n";
//go back through now that mean has been calculated in order to calculate variance
for( i = 0; i < hist_size; i++ ) {
float* bins = cvGetHistValue_1D(hist,i);
variance += pow((bins[0] - mean),2);
}
//finish variance calculation
variance /= hist_size;
cvNamedWindow("Original", 0);
cvShowImage("Original", im );
cvNamedWindow("Gray", 0);
cvShowImage("Gray", grayImage );
cvNamedWindow("Histogram", 0);
cvShowImage("Histogram", histImage );
//hold the images until a key is pressed
cvWaitKey(0);
//clean up images
cvReleaseImage(&histImage);
cvReleaseImage(&grayImage);
cvReleaseImage(&im);
//remove windows
cvDestroyWindow("Original");
cvDestroyWindow("Gray");
cvDestroyWindow("Histogram");
Really thanks in advance.
You can do that with histograms, but a much more effective way to do it is an integral image, which does almost what you want.
Read here http://en.wikipedia.org/wiki/Summed_area_table and then use it to calculate the sum of all the pixels in every block. Then divide by the number of pixels in each block (4x4=16). Isn't it nice?
OpenCV has a function to calculate the integral image, with the difficult name cv::integral()
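For example, here is a small sketch (using a cv::Mat and the C++ API rather than the IplImage code above; the file name is just the one from the question) of getting 4x4 block means from the integral image:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("killerbee.jpg", cv::IMREAD_GRAYSCALE);
    // The integral image is (rows+1) x (cols+1); S(y, x) holds the sum of all pixels above and to the left of (y, x).
    cv::Mat S;
    cv::integral(img, S, CV_32S);
    const int B = 4; // block size
    for (int by = 0; by + B <= img.rows; by += B)
        for (int bx = 0; bx + B <= img.cols; bx += B)
        {
            int sum = S.at<int>(by + B, bx + B) - S.at<int>(by, bx + B)
                    - S.at<int>(by + B, bx)     + S.at<int>(by, bx);
            std::cout << sum / double(B * B) << ' '; // mean of the B x B block at (bx, by)
        }
    return 0;
}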
And an even easier way to do it is the humble resize().
Call resize(image_64_64, image_16_16, Size(16, 16), 0, 0, INTER_AREA), and the result will be a smaller image whose pixel values are exactly the values you're looking for. Isn't it great?
Just do not forget the INTER_AREA flag. It determines the correct algorithm to be used.
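A compact sketch of that variant (variable names are illustrative; the output is 8-bit, so each mean is rounded to an integer):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image_64_64 = cv::imread("killerbee.jpg", cv::IMREAD_GRAYSCALE); // assume a 64x64 grayscale image
    cv::Mat blockMeans;
    cv::resize(image_64_64, blockMeans, cv::Size(16, 16), 0, 0, cv::INTER_AREA);
    // blockMeans.at<uchar>(r, c) is now the mean of the 4x4 block whose top-left corner is (4*c, 4*r).
    return 0;
}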
I know how to rotate an image by any angle with drawTexturedPath:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int angle = Fixed32.toFP( 45 );
int dux = Fixed32.cosd(angle );
int dvx = -Fixed32.sind( angle );
int duy = Fixed32.sind( angle );
int dvy = Fixed32.cosd( angle );
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
but what I need is a 3D projection of a simple image with a 3D transformation (something like this)
Can you please advise me how to do this with drawTexturedPath (I'm almost sure it's possible)?
Are there any alternatives?
The method used by this function (2 walk vectors) is the same as the oldskool coding trick used for the famous 'rotozoomer' effect. rotozoomer example video
This method is a very fast way to rotate, zoom, and skew an image. The rotation is done simply by rotating the walk vectors. The zooming is done simply by scaling the walk vectors. The skewing is done by rotating the walk vectors with respect to one another (e.g. so they no longer make a 90 degree angle).
Nintendo built hardware into the SNES to apply the same effect to any of the sprites and/or backgrounds. This made way for some very cool effects.
One big shortcoming of this technique is that one cannot perspectively warp a texture. To do this, the walk vectors should be changed slightly for every new horizontal line (hard to explain without a drawing).
On the SNES they overcame this by altering the walk vectors on every scanline (in those days one could set an interrupt when the monitor was drawing any given scanline). This mode was later referred to as MODE 7 (since it behaved like a new virtual kind of graphics mode). The most famous games using this mode were Mario Kart and F-Zero.
So to get this working on the BlackBerry, you'll have to draw your image "displayHeight" times (i.e. one scanline of the image each time). This is the only way to achieve the desired effect. (This will undoubtedly cost you a performance hit, since you are now calling the drawTexturedPath function a lot of times with new values, instead of just once.)
I guess with a bit of googling you can find some formulas (or even an implementation) for how to calculate the varying walk vectors. With a bit of paper (given you're not too bad at math) you might deduce it yourself too. I've done it myself when I was making games for the Game Boy Advance, so I know it can be done.
Be sure to precalc everything! Speed is everything (especially on slow machines like phones)
EDIT: did some googling for you. Here's a detailed explanation of how to create the Mode 7 effect: Mode 7 implementation. This will help you achieve the same with the BlackBerry function.
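For reference, a rough sketch of the math behind those per-scanline walk vectors, under the standard perspective-floor assumptions (flat floor, camera at height $h$ above it, focal distance $d$, view rotation $\theta$): a screen row $y$ pixels below the horizon shows the floor at depth
$$z(y) = \frac{h\,d}{y},$$
the texture step per horizontal screen pixel on that row is the rotated unit vector scaled by $z(y)/d$,
$$(du,\ dv) = \frac{z(y)}{d}\,(\cos\theta,\ -\sin\theta),$$
and the row's starting texture coordinate is the camera's floor position advanced by $z(y)$ along the viewing direction, shifted back half a screen width along $(du, dv)$. The rotation stays the same for every row; only the scale changes from one scanline to the next.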
With the following code you can skew your image and get a perspective-like effect:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int dux = Fixed32.toFP(-1);
int dvx = Fixed32.toFP(1);
int duy = Fixed32.toFP(1);
int dvy = Fixed32.toFP(0);
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
This will skew your image at a 45º angle; if you want a certain angle you just need to use some trigonometry to determine the lengths of your vectors.
Thanks for answers and guidance, +1 to you all.
MODE 7 was the way I chose to implement the 3D transformation, but unfortunately I couldn't make drawTexturedPath resize my scanlines... so I came down to simple drawImage.
Assuming you have a Bitmap inBmp (input texture), create a new Bitmap outBmp (output texture).
Bitmap mInBmp = Bitmap.getBitmapResource("map.png");
int inHeight = mInBmp.getHeight();
int inWidth = mInBmp.getWidth();
int outHeight = 0;
int outWidth = 0;
int outDrawX = 0;
int outDrawY = 0;
Bitmap mOutBmp = null;
public Scr() {
super();
mOutBmp = getMode7YTransform();
outWidth = mOutBmp.getWidth();
outHeight = mOutBmp.getHeight();
outDrawX = (Display.getWidth() - outWidth) / 2;
outDrawY = Display.getHeight() - outHeight;
}
Somewhere in code create a Graphics outBmpGraphics for outBmp.
Then do the following in an iteration from the start y up to (texture height) * (y transform factor):
1. Create a Bitmap lineBmp = new Bitmap(width, 1) for one line
2. Create a Graphics lineBmpGraphics from lineBmp
3. Paint line i from the texture to lineBmpGraphics
4. Encode lineBmp to an EncodedImage img
5. Scale img according to MODE 7
6. Paint img to outBmpGraphics
Note: Richard Puckett's PNGEncoder BB port is used in my code
private Bitmap getMode7YTransform() {
Bitmap outBmp = new Bitmap(inWidth, inHeight / 2);
Graphics outBmpGraphics = new Graphics(outBmp);
for (int i = 0; i < inHeight / 2; i++) {
Bitmap lineBmp = new Bitmap(inWidth, 1);
Graphics lineBmpGraphics = new Graphics(lineBmp);
lineBmpGraphics.drawBitmap(0, 0, inWidth, 1, mInBmp, 0, 2 * i);
PNGEncoder encoder = new PNGEncoder(lineBmp, true);
byte[] data = null;
try {
data = encoder.encode(true);
} catch (IOException e) {
e.printStackTrace();
}
EncodedImage img = PNGEncodedImage.createEncodedImage(data,
0, -1);
float xScaleFactor = ((float) (inHeight / 2 + i))
/ (float) inHeight;
img = scaleImage(img, xScaleFactor, 1);
int startX = (inWidth - img.getScaledWidth()) / 2;
int imgHeight = img.getScaledHeight();
int imgWidth = img.getScaledWidth();
outBmpGraphics.drawImage(startX, i, imgWidth, imgHeight, img,
0, 0, 0);
}
return outBmp;
}
Then just draw it in paint()
protected void paint(Graphics graphics) {
graphics.drawBitmap(outDrawX, outDrawY, outWidth, outHeight, mOutBmp,
0, 0);
}
To scale, I've done something similar to the method described in Resizing a Bitmap using .scaleImage32 instead of .setScale
private EncodedImage scaleImage(EncodedImage image, float ratioX,
float ratioY) {
int currentWidthFixed32 = Fixed32.toFP(image.getWidth());
int currentHeightFixed32 = Fixed32.toFP(image.getHeight());
double w = (double) image.getWidth() * ratioX;
double h = (double) image.getHeight() * ratioY;
int width = (int) w;
int height = (int) h;
int requiredWidthFixed32 = Fixed32.toFP(width);
int requiredHeightFixed32 = Fixed32.toFP(height);
int scaleXFixed32 = Fixed32.div(currentWidthFixed32,
requiredWidthFixed32);
int scaleYFixed32 = Fixed32.div(currentHeightFixed32,
requiredHeightFixed32);
EncodedImage result = image.scaleImage32(scaleXFixed32, scaleYFixed32);
return result;
}
See also
J2ME Mode 7 Floor Renderer - something much more detailed & exciting if you're writing a 3D game!
You want to do texture mapping, and that function won't cut it. Maybe you can kludge your way around it but the better option is to use a texture mapping algorithm.
This involves, for each row of pixels, determining the edges of the shape and where on the shape those screen pixels map to (the texture pixels). It's not so hard actually but may take a bit of work. And you'll be drawing the pic only once.
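To make the idea more concrete, here is a bare-bones, platform-agnostic C++ sketch of scanline texture mapping for one triangle (affine only, no perspective correction; the Vertex struct and the raw framebuffer/texture arrays are hypothetical, not BlackBerry API):
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Vertex { float x, y, u, v; }; // screen position plus texture coordinate in [0,1]

void drawTexturedTriangle(Vertex a, Vertex b, Vertex c,
                          const uint32_t* tex, int texW, int texH,
                          uint32_t* frame, int frameW, int frameH)
{
    // Sort the vertices top to bottom so that a.y <= b.y <= c.y.
    if (a.y > b.y) std::swap(a, b);
    if (b.y > c.y) std::swap(b, c);
    if (a.y > b.y) std::swap(a, b);
    if (c.y - a.y < 1e-6f) return; // degenerate triangle

    int yStart = std::max(0, (int)std::ceil(a.y));
    int yEnd   = std::min(frameH - 1, (int)std::floor(c.y));

    for (int y = yStart; y <= yEnd; ++y) {
        // One edge runs a->c for the whole triangle; the other is a->b in the
        // top half and b->c in the bottom half.
        float tLong = (y - a.y) / (c.y - a.y);
        const Vertex& s0 = (y < b.y) ? a : b;
        const Vertex& s1 = (y < b.y) ? b : c;
        float dy = s1.y - s0.y;
        float tShort = (dy > 1e-6f) ? (y - s0.y) / dy : 0.0f;

        // Interpolate x and (u, v) down both edges.
        float xL = a.x + (c.x - a.x) * tLong;
        float uL = a.u + (c.u - a.u) * tLong;
        float vL = a.v + (c.v - a.v) * tLong;
        float xR = s0.x + (s1.x - s0.x) * tShort;
        float uR = s0.u + (s1.u - s0.u) * tShort;
        float vR = s0.v + (s1.v - s0.v) * tShort;
        if (xL > xR) { std::swap(xL, xR); std::swap(uL, uR); std::swap(vL, vR); }

        // Walk across the scanline, stepping (u, v) linearly.
        int x0 = std::max(0, (int)std::ceil(xL));
        int x1 = std::min(frameW - 1, (int)std::floor(xR));
        for (int x = x0; x <= x1; ++x) {
            float t = (xR > xL) ? (x - xL) / (xR - xL) : 0.0f;
            int tu = std::min(texW - 1, std::max(0, (int)((uL + (uR - uL) * t) * (texW - 1))));
            int tv = std::min(texH - 1, std::max(0, (int)((vL + (vR - vL) * t) * (texH - 1))));
            frame[y * frameW + x] = tex[tv * texW + tu];
        }
    }
}
A 3D projection like the one in the question is then just a matter of projecting the four corners of the image to screen space and drawing the quad as two such triangles.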
GameDev has a bunch of articles with sourcecode here:
http://www.gamedev.net/reference/list.asp?categoryid=40#212
Wikipedia also has a nice article:
http://en.wikipedia.org/wiki/Texture_mapping
Another site with 3d tutorials:
http://tfpsly.free.fr/Docs/TomHammersley/index.html
In your place I'd seek out a simple demo program that does something close to what you want and use its sources as a base to develop my own - or even find a portable source library; I'm sure there must be a few.