Greetings all,
Please refer to image at :
http://i48.tinypic.com/316qb78.jpg
We are developing an application to extract cell edges from MRC images produced by an electron microscope.
The MRC file format stores volumetric pixel data (voxels, http://en.wikipedia.org/wiki/Voxel), and we simply use a 3D char array (char***) to load and store the grayscale values from an MRC file.
As shown in the image, there are three viewers that display the XY, YZ and ZX planes respectively.
The scrollbars on top of the viewers are used to change the image slice along an axis.
Here are the steps we perform when the user changes the scrollbar position:
1) Get the new scrollbar value (this is the selected slice).
2) For the relevant plane (YZ, XY or ZX), generate a char* slice array for the selected slice by reading the 3D char array (char***).
3) Create a new QImage* (Format_RGB888) and set its pixel values by reading 'slice' (using img->setPixel(x,y,c);), see the sketch after this list.
4) This new QImage* is painted in the paintEvent() method.
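Roughly, steps 2 and 3 for the XY plane look something like this (dimX, dimY and data stand in for our actual variable names, this is only a simplified sketch):

#include <QImage>
#include <QColor>

// Simplified sketch of steps 2 and 3 for the XY plane; 'data' is the
// char*** volume and dimX/dimY are the slice dimensions.
QImage *buildXYSlice(char ***data, int dimX, int dimY, int z)
{
    QImage *img = new QImage(dimX, dimY, QImage::Format_RGB888);
    for (int y = 0; y < dimY; ++y) {
        for (int x = 0; x < dimX; ++x) {
            unsigned char c = static_cast<unsigned char>(data[z][y][x]);
            img->setPixel(x, y, qRgb(c, c, c));  // grayscale value copied into R, G and B
        }
    }
    return img;  // painted later in paintEvent()
}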
We are going to run the edge-detection process in a separate thread since it is computationally intensive. During this process we need to draw the detected curve (a set of pixels) on top of the above QImage*, as a layer. This means we need to call drawPoint() methods outside the Qt GUI thread.
Is QImage the best choice for this case?
What is the best way to execute Qt drawing methods from another thread?
thanks in advance,
From the QImage documentation:
Because QImage is a QPaintDevice subclass, QPainter can be used to draw directly onto images. When using QPainter on a QImage, the painting can be performed in another thread than the current GUI thread.
Just create a QPainter on your image and draw what you need.
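A minimal sketch, assuming a worker-side function that receives the current slice image and the detected points (the names are made up, not from your code):

#include <QImage>
#include <QPainter>
#include <QPoint>
#include <QVector>

// Paint the detected curve onto a copy of the slice image from the worker thread.
QImage drawCurveOnSlice(const QImage &slice, const QVector<QPoint> &points)
{
    QImage overlay = slice.copy();   // keep the original slice untouched
    QPainter painter(&overlay);      // QPainter on a QImage may run outside the GUI thread
    painter.setPen(Qt::red);
    for (const QPoint &p : points)
        painter.drawPoint(p);        // draw the detected pixels as a layer
    return overlay;                  // hand back to the GUI thread, e.g. via a queued signal
}

The widget itself must still be repainted in the GUI thread, so send the finished image back (for example through a signal/slot connection) and draw it in paintEvent().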
I have a Qt3D scene in which I have only one 3D object. I would like to set the center of rotation of the scene (camera) to the center of this 3D object. Currently, the 3D model goes out of view when the scene is rotated with the mouse.
There is also an OrbitCameraController, whose purpose is to look at a certain position. You could let the camera's viewCenter track your object's position.
QML example code:
Camera {
    id: myCamera
    viewCenter: YOUROBJECTPOSITION
}
OrbitCameraController { camera: myCamera }
// FirstPersonCameraController { camera: myCamera }
I'm not using PyQt like you do. Hope this helps.
I am using a Mesh with a custom 3D model as "source"; the Qt docs don't really indicate any property of the Mesh object that will return its 3D vector position.
If you import your .obj file into a scene, the origin of your mesh is placed at the origin of the scene. If you didn't transform it, that origin is where you want the camera to look.
If you used a transform, then use that new position as the point to look at.
Qt3D uses an ECS (Entity Component System). Basically, you make an entity and add components to it, like a mesh and a transform in your case. That's why the mesh doesn't have a property that reflects its position; the transform component holds that information.
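In C++ it could look roughly like this (I'm not using PyQt, and 'camera', 'entityTransform' and 'rootEntity' are placeholders for your own camera, the transform component you attached to the object, and your root entity):

#include <Qt3DCore/QEntity>
#include <Qt3DCore/QTransform>
#include <Qt3DRender/QCamera>
#include <Qt3DExtras/QOrbitCameraController>

// Sketch: aim the camera at the entity's transform and orbit around that point.
void lookAtEntity(Qt3DRender::QCamera *camera,
                  Qt3DCore::QTransform *entityTransform,
                  Qt3DCore::QEntity *rootEntity)
{
    camera->setViewCenter(entityTransform->translation());   // the position lives in the transform component
    auto *controller = new Qt3DExtras::QOrbitCameraController(rootEntity);
    controller->setCamera(camera);                            // orbit around the view center
}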
I suggest you read the following in the Qt docs: Qt3D Architecture
The above solution is fine if you have smaller objects and your camera is far enough away. But if you import a larger mesh, e.g. a spaceship, you might need the coordinates of the specific spot you want to look at. You can get those coordinates by using an object picker.
As I understand it, if I create a vtk render window, I can add different renderers to it, and each renderer renders from a different perspective. Now, to actually render the scene I use the vtk render window method render() to render all renderers in parallel. There is also a vtk render window method called GetZbufferData which apparently returns an array containing the z-buffer. So my question is: to which renderer does this z-buffer correspond?
Thanks for any clarification.
If you have all renderers in the same window, then they will share the same framebuffer, so also the same z-buffer. So a simple answer to your question is "to all of them". To get the individual z-values, it depends on what you are exactly doing with the renderers.
If you are doing some kind of a "tiled view", you want to assign different viewports (vtkRenderer::SetViewport(), like here) to each of the renderers. Then you can access the z data for a given "tile" (renderer) by passing appropriate x,y coordinates to the GetZBufferData function. For example, to get the whole part of the z buffer that belongs to renderer ren1 of vtkRenderWindow renWin:
double x1 = ren1->GetViewport()[0] * (renWin->GetSize()[0] - 1);
double y1 = ren1->GetViewport()[1] * (renWin->GetSize()[1] - 1);
double x2 = ren1->GetViewport()[2] * (renWin->GetSize()[0] - 1);
double y2 = ren1->GetViewport()[3] * (renWin->GetSize()[1] - 1);
float *ren1Z = renWin->GetZbufferData(
    static_cast<int>(x1), static_cast<int>(y1),
    static_cast<int>(x2), static_cast<int>(y2));
If you have the same viewport, it is more complicated. You can have a render window with multiple "layers" by calling vtkRenderWindow::SetNumberOfLayers(int) and then assigning each renderer to a different layer (vtkRenderer::SetLayer(0-based layer index)). The window then renders from layer 0 to the last layer, one over the other. If you are interested in getting only one specific renderer's z-data, you should get it if you have it render in the last layer. However, I am not sure whether the z-buffer is cleared in between the individual renderers' renders; I would actually bet that it is not, so you might also get some inconsistent mess.
I would like to complement tomj's answer:
Any of the vtkRenderWindow::GetZbufferData() methods query the framebuffer (which belongs to the vtkRenderWindow) for Z-values, but there is one caveat:
You need to set this on your renderers: vtkRenderer::PreserveDepthBufferOn(). This is because, as the documentation says:
"By default, the depth buffer is reset for each renderer.
If this flag is true, this renderer will use the existing depth buffer for its rendering."
So, that brings us to the vtkRenderers. There is a layering of vtkRenderers which determines the "chain" or "precedence order" in which the drawing is done. Check the method vtkRenderer::SetLayer().
So, you first need to set up your layered vtkRenderers, attach them to the vtkRenderWindow, and then set up correctly if you want to preserve some depth buffers or not.
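A minimal sketch of that setup (ren1, ren2 and renWin are placeholders for your own vtkRenderer and vtkRenderWindow objects):

// Two layered renderers; the top layer keeps (rather than clears) the depth
// buffer, so the z-values read afterwards reflect both renders.
renWin->SetNumberOfLayers(2);
ren1->SetLayer(0);                 // drawn first
ren2->SetLayer(1);                 // drawn on top of layer 0
ren2->PreserveDepthBufferOn();     // do not reset the depth buffer for this renderer
renWin->AddRenderer(ren1);
renWin->AddRenderer(ren2);
renWin->Render();
float *z = renWin->GetZbufferData(0, 0,
                                  renWin->GetSize()[0] - 1,
                                  renWin->GetSize()[1] - 1);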
Notice that if the z-buffer has not been set (first draw of the first vtkRenderer), it will return 1.0. I'm still figuring out why, but currently that is the situation.
I want to have a custom MKOverlay that's a circle anchored to the user location annotation that the user can resize by pinching. I was able to successfully achieve this using MKOverlayPathRenderer and a custom MKOverlay object by overriding the createPath method and making an arc. The resizing and moving of the overlay was handled by using KVO on the radius and coordinate properties of my overlay. However the resizing was incredibly choppy and the boundingMapRect wasn't correctly calculated.
I've also tried using an image and subclassing MKOverlayRenderer instead of MKOverlayPathRenderer, overriding - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context, but when I resize, my CPU usage jumps to 160% (not great, yeah?) and the bounding rect is again drawn incorrectly.
I really think the way to do it is with MKOverlayPathRenderer and maybe having an atomic counter of some kind so that a redraw only gets called say every 5 or 10 times the pinch gesture is triggered.
Does anyone have any suggestions? I've also considered but haven't tried making a UIView and adding it as a subview to the map view and putting the pinch gesture on that but that seems hacky and dirty.
When you compute a new boundingMapRect on the overlay, you must invoke invalidatePath on your renderer. After that, the system will invoke createPath for you when appropriate.
I have to insert an image into an MFC dialog and draw points on it when the user checks a check box. Is it possible to draw points on an image in MFC?
Thanks.
Try creating your own owner-drawn control based on CStatic for displaying your bitmap. When you get a DrawItem request, load the original bitmap into a compatible DC. You can then draw your modifications on that DC and, when finished, BitBlt the DC to the actual screen DC provided in the DRAWITEMSTRUCT info.
Step by step:
1) Create a new MFC control class based on CStatic, called CMyPic.
2) Put a Picture control on your dialog (as a placeholder for your control).
3) Change the ID of the picture control from IDC_STATIC to IDC_MYPIC.
4) Change the Type of the control from 'Frame' to 'Owner Draw'.
5) Right-click on the control and choose 'Add variable'. Make it a control variable called something like m_mypic and change the variable type to CMyPic.
6) In CMyPic, add an override for DrawItem.
7) In DrawItem you can draw your bitmap (in my case I'm drawing a PNG and overlaying some text), something like this:
void CMyPic::DrawItem(LPDRAWITEMSTRUCT lpDrawItemStruct)
{
    CPngImage img;
    img.Load( IDB_PNG1 );                          // load the PNG resource

    CDC dcScreen;
    dcScreen.Attach( lpDrawItemStruct->hDC );      // screen DC provided by the framework

    CDC dcMem;
    dcMem.CreateCompatibleDC( &dcScreen );         // off-screen DC to compose into
    CBitmap *pOld = dcMem.SelectObject( &img );    // select the bitmap into the memory DC

    // Draw your overlay (points, text, ...) on top of the bitmap here.
    dcMem.DrawText( L"Hi", &lpDrawItemStruct->rcItem, DT_LEFT );

    // Copy the finished image to the screen in one go.
    dcScreen.BitBlt( 0, 0,
                     lpDrawItemStruct->rcItem.right, lpDrawItemStruct->rcItem.bottom,
                     &dcMem, 0, 0, SRCCOPY );

    dcMem.SelectObject( pOld );
    dcScreen.Detach();
}
It's possible, but I'd strongly discourage doing it directly.
Generally, a dialog box should only act as a container for controls.
As such, what you probably want is some sort of layered drawing control that can display a bitmap as the background, and other objects (points, possibly lines, curves, etc.) in front of that. Writing an ActiveX control in MFC to do that borders on trivial. It's a little harder using ATL, but not much -- and the result will almost inevitably be "better" from the viewpoint of being smaller and (probably) faster.
Any suggestions for a good way to do this?
I want to be able to draw lots of 2D things in XNA, often at an offset position. E.g. if something is at position (X, Y), then ideally I'd like to be able to pass it a modified SpriteBatch which, when Draw(X, Y) is called, would take account of the offset and draw the thing at (X+OffsetX, Y+OffsetY).
I don't want to pass this offset to the children and have to deal with it separately in each child- that could screw up and would also screw up my interfaces!
Firstly I thought of having a decorator around SpriteBatch, so that calling Decorator.Draw for something at position (X, Y) would route the call to the original SpriteBatch as (X+offsetX, Y+offsetY). But I can't override the Draw methods in the SpriteBatch class, and even if I created my own "Decorator.DrawOffset", the decorator still seems to need "SpriteBatch.Begin()" called and so on, which seems to break... :(
I then thought of extension methods, but I think they'd need the offset passed to them as a variable each time Draw() is called, which still requires me to pass the offset down through the children...
Another option would be to draw the children to a RenderTarget (or whatever they are in XNA4) and then render this to the screen in an offset position... but that seems hideously inefficient?
Thanks for any comments!
You should use a transformation matrix, passed to SpriteBatch.Begin():
// XNA 4.0 Begin overload; the last parameter is the transform applied to
// everything drawn in this batch. 'spriteBatch', 'offsetX' and 'offsetY'
// are your own instance and offset values.
Matrix transform = Matrix.CreateTranslation(offsetX, offsetY, 0);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, transform);