Capture and draw a screenshot using DirectX

I am trying to get DirectX (DX9) to grab a screenshot of the desktop and immediately draw it back out (at smaller dimensions) to my form.
I have DirectX working to the point where the device is created along with a few surfaces, and I can render them to screen. I am using one surface, F3D3Surf9_SS, to hold the desktop screenshot.
Here are my variable declarations and initialization:
F3D3Surf9_SS : IDirect3DSurface9; //Surface SS
F3D3Surf9_A : IDirect3DSurface9; //Surface A
F3D3Surf9_B : IDirect3DSurface9; //Surface B
...
FDirect3D9.CreateDevice(D3DADAPTER_DEFAULT,D3DDEVTYPE_HAL,Form1.Handle,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,@D3DPresentParams,
FDirect3DDevice9);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_A,nil,nil,'D:\Images\Pillar.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_B,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_B,nil,nil,'D:\Images\Niagra.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SCRATCH,F3D3Surf9_SS,nil);
Here is the code I use to grab and then render the screenshot
FDirect3DDevice9.BeginScene;
FDirect3DDevice9.Clear(0,0,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,255),0,0);
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO, BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //Get the screen shot
FDirect3DDevice9.StretchRect(F3D3Surf9_SS,nil,BackBuffer,nil,D3DTEXF_NONE); //Draw it
FDirect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);
However, this does not work.
The image does not get drawn to the screen. Drawing surface A or B to the screen works, but drawing surface SS does not. I know surface SS actually holds the screenshot, because if I call D3DXSaveSurfaceToFile on it, the bitmap written to disk is a valid screenshot.
Any thoughts on the proper way to do this?

The reason this did not work is that F3D3Surf9_SS was created in system memory (D3DPOOL_SCRATCH), and a system-memory surface cannot be used as a StretchRect source to the back buffer the way I was trying to.
So my solution was to reuse the F3D3Surf9_A surface and call UpdateSurface to copy the screenshot from the system-memory surface into surface A in video memory.
The only other changes I had to make were to create surface A in the same format as the screenshot surface, D3DFMT_A8R8G8B8, and to make sure the destination surface passed to UpdateSurface was at least as large as the source surface.
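Put together, the working sequence looks roughly like this (a sketch of the approach rather than my exact code; note that UpdateSurface requires its source surface in D3DPOOL_SYSTEMMEM and its destination in D3DPOOL_DEFAULT, with matching formats, and it does no scaling, so the scaling still happens in StretchRect):
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SYSTEMMEM,F3D3Surf9_SS,nil); //screenshot surface in system memory
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil); //video-memory surface, same size and format
...
FDirect3DDevice9.BeginScene;
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO,BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //screen -> system memory
FDirect3DDevice9.UpdateSurface(F3D3Surf9_SS,nil,F3D3Surf9_A,nil); //system -> video memory
FDirect3DDevice9.StretchRect(F3D3Surf9_A,nil,BackBuffer,nil,D3DTEXF_NONE); //scale down onto the back buffer
FDirect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);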
NOTE:
This is slow since we are reading from video memory to system memory and then right back to video memory.
I needed this for my application since I want to capture everything the OS and other applications put on screen, but if you are only concerned with your own application there are better alternatives.
If you know of a way to GetFrontBufferData without putting it to system memory (which is the only way I could see it working) please let me know.

Related

Mapbox GL JS : Do large images overlay accurately - if so, how?

I feel like I have been stuck on this problem forever. I have a single GeoTIFF file of weather radar data that will not overlay on Mapbox correctly. The spatial area is the entire US. It should be a simple task, but some sort of distortion is making the overlay incorrect, even though I am very certain the Mapbox coordinates in the linked HTML file below are correct and match the GeoTIFF.
I uploaded the GeoTIFF to a website called geotiff.io (which uses Leaflet to show the files) and it renders the image perfectly, but I cannot reproduce that with Mapbox. The storms are always off in Mapbox.
This is a link to my Mapbox map with the image overlay where it is incorrect
This is a dropbox link to a zip file with the geoTIF and colorization file, which I used gdaldem with
To simplify the explanation, here is an image showing part of a storm that is out of place (left side) and too far north. On the right side is a screenshot taken from geotiff.io, where it is perfect. What is going on here?!
The GeoTIFF had not been reprojected to Web Mercator before being displayed in Mapbox. I assume the geotiff.io service corrected this automatically, which made me think there was an issue with the code or with Mapbox when there was not.
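In practice that means reprojecting the raster to EPSG:3857 with GDAL before overlaying it, for example with something like this (file names here are placeholders, not from the original post):
gdalwarp -t_srs EPSG:3857 radar.tif radar_webmercator.tif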

How to use manipulation mode to translate an image without clipping

I'm trying to create a simple application that allows the user to move an image with a TranslateTransform inside an Image control, using the ManipulationDelta event handler.
However, when the translate manipulation is performed, the image is clipped in a strange way. In fact, it seems that the entire viewport is translated instead of the image inside the viewport.
I have reproduced this behavior in this very simple application:
The code is straightforward and looks like so:
<Image
x:Name="Image"
HorizontalAlignment="Stretch"
VerticalAlignment="Stretch"
Stretch="None"
ManipulationMode="TranslateX, TranslateY"
ManipulationStarted="Image_ManipulationStarted"
ManipulationCompleted="Image_ManipulationCompleted"
ManipulationDelta="Image_ManipulationDelta"
Source="/Assets/image.png"
>
<Image.RenderTransform>
<TranslateTransform x:Name="Translation" />
</Image.RenderTransform>
</Image>
This is the code behind:
private void Image_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
Translation.X += e.Delta.Translation.X;
Translation.Y += e.Delta.Translation.Y;
}
This application displays a rectangular 600x600 plain red image. I have resized the application so that the image is way bigger than the available client area. When the user slightly moves the image in the upper left direction, this is what the application looks like:
It seems, as I explained, that the entire viewport has been translated.
If I virtually add where the image should be to the screen shot above, this is what I would have expected:
In the screenshot above, one can see that the image is taller than the application, so the entire height of the client area should be covered with a red portion of the image. Likewise, since the image has been moved to the left, only a narrow portion on the right of the screen should be white.
I hope these screenshots help give sense to my explanations.
Please, is it possible to obtain the desired behaviour when using the ManipulationModes?
Please, is it possible to obtain the desired behaviour when using the ManipulationModes?
I think the answer is no, it's not possible to do this in a UWP app; you will need to create a desktop app to obtain the desired behaviour.
In a UWP app the Page is in a Frame, and the controls or shapes are in the Page, so the controls or shapes all sit inside a Frame container. What you want to do is effectively pull the Rectangle out of its container, and in a UWP app that means finding another container for this Rectangle.
The only thing I can think of is creating a new window to hold this Rectangle. But that brings its own problems: the app's title bar comes along with a new window unless the window is in full-screen mode, and obviously a full-screen window will not solve the problem.
This can be done with Win32 APIs; you may refer to C# Drag-and-Drop: Show the dragged item while dragging. But those APIs are not supported in a UWP app, they are for desktop apps only; see for example the CreateIconIndirect function. Besides, I can't find any supported UWP APIs that offer the same functionality.

How can I capture an Image for post processing in OSG

I need to capture an image from my viewer, do some post-processing on it, and display it back on the viewer.
Right now I am more interested in the first part: capturing the image from the viewer.
While going through OSG I came across ScreenCaptureHandler, but I am not able to get an image out of it.
I am still working on it, but if any of you know another way this can be done, or have an example of using ScreenCaptureHandler you can share, please let me know.
To capture the rendered view into an image I use a custom osg::Camera::DrawCallback.
To capture the view at any point, set the drawCallback on the camera, force rendering, restore to a NULL callback.
Notice that the following code is part of a member function of a custom viewer (that's probably not your case):
osgViewer::View::getCamera()->setFinalDrawCallback(new ViewCaptureCallback(img));
osgViewer::Viewer::renderingTraversals();
osgViewer::View::getCamera()->setFinalDrawCallback(NULL);
The ViewCaptureCallback basically uses image->readPixels() to read from the backbuffer.
glReadBuffer( GL_BACK );
osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
// Here you should process the backbuffer's size and format
image->readPixels(0, 0, gc->getTraits()->width, gc->getTraits()->height, pixelFormat, GL_UNSIGNED_BYTE);
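Pieced together, such a callback could look roughly like the following. This is a reconstruction from the fragments above rather than the exact class; the GL_RGBA pixel format is an assumption and should be matched to the traits of your graphics context.
#include <osg/Camera>
#include <osg/GL>
#include <osg/GraphicsContext>
#include <osg/Image>
#include <osg/RenderInfo>

struct ViewCaptureCallback : public osg::Camera::DrawCallback
{
    ViewCaptureCallback(osg::Image* image) : _image(image) {}

    // Called by the camera once the frame has been drawn.
    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        glReadBuffer(GL_BACK);
        osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
        if (gc && gc->getTraits() && _image.valid())
        {
            // Read the back buffer into the osg::Image; choose the pixel
            // format that matches your context instead of assuming GL_RGBA.
            _image->readPixels(0, 0,
                               gc->getTraits()->width, gc->getTraits()->height,
                               GL_RGBA, GL_UNSIGNED_BYTE);
        }
    }

    osg::ref_ptr<osg::Image> _image;
};
It is then used exactly as in the three lines earlier in this answer: install it with setFinalDrawCallback(), force a renderingTraversals(), and reset the callback to NULL.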
Hope it helps

How to set the rotation center of an MKAnnotationView

I've got a problem with my MKAnnotationViews when MKUserTrackingModeFollowWithHeading is enabled on the MKMapView.
I positioned my images using the centerOffset property of the MKAnnotationView. Specifying the coordinates of the pin's tip relative to the coordinate system at the center of the image is somewhat counter-intuitive, but I came up with the following formula:
annotationView.centerOffset = CGPointMake(imageWidth/2.0 - tipXCoordinate, imageHeight/2.0 - tipYCoordinate);
This works fine for zooming the map in and out. The tips of the pins keep their relative position on the map.
However, when I enable MKUserTrackingModeFollowWithHeading, it no longer works. The pins rotate around the center of the image instead of the tip, so when the map rotates, the tips do not point to the locations they are supposed to annotate.
I've played around a bit with the frame and center properties of the MKAnnotationView, but they seem to have no effect on the alignment of the pins whatsoever.
Interestingly, the MKPinAnnotationView does not seem to use centerOffset at all, but a shifted frame instead. However, I was unable to reproduce this. Changing the frame of my custom view did not move it at all.
Thanks for any insights you can provide :-)
Solution:
Don't use centerOffset! Use annotationView.layer.anchorPoint instead. The coordinate system of the anchor point is much nicer, too: coordinates range from 0.0 (top/left) to 1.0 (bottom/right) of the image rectangle:
annotationView.layer.anchorPoint = CGPointMake(tipXCoordinate/imageWidth, tipYCoordinate/imageHeight);
A friend asked me to pass along that you could also try this, for instance:
self.layer.anchorPoint = CGPointMake (0.5f, 1.0f);

How to display high resolution images in iOS4 using UIImageView

I wanted to know how I should use high-resolution images in the iOS 4 SDK with UIImageView.
blackBox = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"alert_bg.png"]];
blackBox.frame = CGRectMake(98.0f, 310.0f, 573.0f, 177.0f);
When I use this code I get strange results: the image does not come out at the correct size. It looks very big on the iPhone 4 screen.
Should I use 326 ppi images?
I have read http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/SupportingResolutionIndependence/SupportingResolutionIndependence.html but I am very confused.
Thanks
Saurabh
The key thing to understand about supporting the Retina Display is that, in your code, the screen is always 320x480. You don't need to double the resolution of anything but the image resources themselves. In this case, you just need to put two resources in your app bundle: an alert_bg.png that fits on a 320x480 screen (in this case, I'd guess that would be 286x88) and an alert_bg@2x.png, exactly double the size of the other, that fits on a 640x960 one. If you ask UIKit for [UIImage imageNamed:@"alert_bg"], it will automatically pick the correct-resolution resource for the current screen.
You should provide a 480x320 pixel image named "alert_bg.png" for the original iPhone, 3G and 3GS, and another 960x640 px one named "alert_bg@2x.png" for the iPhone 4.
iOS automatically appends the "@2x" suffix when looking up the image and loads that version if it finds it, instead of the standard-resolution one.
