How can I capture an image for post-processing in OSG - openscenegraph

I need to capture an image from my viewer, do some post-processing on it, and then display it back in the viewer.
Right now I am more interested in the first part: capturing an image from the viewer.
While going through OSG I came across ScreenCaptureHandler, but I am not able to get an image out of it.
I am still working on it, but if you know another way this can be done, or have an example of ScreenCaptureHandler you can share, please post it.

To capture the rendered view into an image I use a custom osg::Camera::DrawCallback.
To capture the view at any point, set the draw callback on the camera, force a rendering traversal, then restore a NULL callback.
Note that the following code is part of a member function of a custom viewer (which is probably not your case):
osgViewer::View::getCamera()->setFinalDrawCallback(new ViewCaptureCallback(img)); // install the capture callback
osgViewer::Viewer::renderingTraversals();                                         // force a render
osgViewer::View::getCamera()->setFinalDrawCallback(NULL);                         // remove it again
The ViewCaptureCallback basically uses image->readPixels() to read from the back buffer:
glReadBuffer( GL_BACK );
osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
// Here you should work out the back buffer's size and pixel format
image->readPixels(0, 0, gc->getTraits()->width, gc->getTraits()->height, pixelFormat, GL_UNSIGNED_BYTE);
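For reference, here is a minimal sketch of what such a callback could look like. The class layout, the GL_RGB/GL_RGBA choice and the validity checks are my assumptions, not the original code:

#include <osg/Camera>
#include <osg/GL>
#include <osg/GraphicsContext>
#include <osg/Image>
#include <osg/RenderInfo>

struct ViewCaptureCallback : public osg::Camera::DrawCallback
{
    ViewCaptureCallback(osg::Image* image) : _image(image) {}

    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        // Read back the buffer that was just rendered.
        glReadBuffer(GL_BACK);
        osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
        if (gc && gc->getTraits() && _image.valid())
        {
            // Pick a pixel format matching the context (assumption: plain RGB/RGBA).
            GLenum pixelFormat = (gc->getTraits()->alpha > 0) ? GL_RGBA : GL_RGB;
            _image->readPixels(0, 0,
                               gc->getTraits()->width, gc->getTraits()->height,
                               pixelFormat, GL_UNSIGNED_BYTE);
        }
    }

    osg::ref_ptr<osg::Image> _image;
};

The callback is then installed and removed with setFinalDrawCallback() as shown earlier.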
Hope it helps

Related

How to put a Binding on a video

I've stored a video in a BitmapImage and want the user to be able to replay it. However, when I run my program it treats my BitmapImage as if it were empty, when it's not. I've tried doing the same thing with a picture saved to a BitmapImage, and any picture will appear and fill the screen as I tell it to, but videos just don't show up. Why is this?
You can't store a video in a BitmapImage. Videos should be stored in a StorageFile.
In your resource dictionary, use:
<MediaElement x:Key="Video1" Source="ms-appx:///Assets/Videos/Video.mp4"/>
and then on the page:
<MediaElement Source="{StaticResource Video1}"/>
Remember to set AutoPlay="True" so playback starts automatically.

Images in J2me Lwuit

I have developed an LWUIT app. There are two types of images displayed in the app: one kind comes from the server side and needs to be displayed (like a photo posted and saved on the server), and one kind is packaged in my jar and displayed mainly as icons (like a music icon, a loading animation GIF, etc.). I need to display all images according to the screen size and resolution. The first kind is displayed by taking the display height and width and then using the scale method to show a scaled version of the image. However, I have no idea how to handle the second kind, i.e. the icons. For example, my loading image looks good on most phones, but on some phones, like Samsung, it looks blurred and over-sized. My basic idea is to keep 3 versions of each icon, like icon_width_lowXheight_low.png, icon_width_mediumXheight_medium.png and image_width_highXheight_high.png, and show one based on the screen size. Please let me know the best way to achieve this?
Thanks,
Parvathy
You should use MultiImages, which were added in LWUIT 1.5. I don't have a link for this in LWUIT, but our work in Codename One is pretty close, so check out the "How Do I?" on multi-images (and I suggest migrating to Codename One regardless).
I think you will need to use something like this:
Image i = Image.createImage("your image path here"); // throws IOException if the resource is missing
i = i.scaled(widthValue, heightValue);
and derive widthValue and heightValue from Display.getInstance().getDisplayWidth() and Display.getInstance().getDisplayHeight().
Right?

User uploaded images drag/drop to a Raphael SVG then resize, rotate and fill path

This may be a bit long, but thank you in advance for any assistance.
I am trying to develop a web app that will allow the user to interact with a wireframe 'drawing' of a chosen product and customize each path with an uploaded image, a color/pattern, added text... or all of those, if they choose (something similar to customizing a greeting card). For THIS question, I will start with the image part.
Here is what I have so far: http://jsfiddle.net/rednevednav/C9aDm/
What's the most effective way to use any image (one that has been uploaded by the user) as the fill of a selected path, and then be able to drag it around, resize it and rotate it BEHIND the selected path, so the user can control which part of their image gets 'cropped'? (Note: I've searched for months and haven't found anyone else doing this outside of Flash. MyPublisher.com gets really close, but it's all squares and no SVG. I've looked at using ImageMagick and 'dst_in' on the server side, but after hard-coding an image into the SVG, as in my jsfiddle, it seems this could be done client side.)
Should I be using Raphael for this application in the first place? Or?
I am hoping to stay within the Raphael framework (if using it at all) in order to keep the out-of-the-box IE support it provides; I understand that too much JavaScript hacking kills this. Of course, the 'finished' product will need to be downloadable as a .pdf, but that's another question for another time.
EDITED: Thanks to an answer to my question HERE, I've updated my JSFiddle to show how to get the URL of an uploaded image and use it to fill a path on the Raphael paper. That leaves 2 questions on this subject that I'm still struggling to resolve:
1. How to use this uploaded image to be able to drag and drop onto path to update fill?
2. How to first select which path I want the uploaded image to fill? (for when drag/drop is not available)
Thank you again in advance for any assistance!

Capture and draw a screenshot using directX

I am trying to get DirectX (DX9) to grab a screenshot of the desktop and immediately draw it back out (in smaller dimensions) to my form.
I have DirectX working to the extent that the device is created along with a few surfaces, and I can render them to screen. I am using one surface, F3D3Surf9_SS, to get the desktop screenshot.
Here is my declaration and initialization of variables:
F3D3Surf9_SS : IDirect3DSurface9; //Surface SS
F3D3Surf9_A : IDirect3DSurface9; //Surface A
F3D3Surf9_B : IDirect3DSurface9; //Surface B
...
FDirect3D9.CreateDevice(D3DADAPTER_DEFAULT,D3DDEVTYPE_HAL,Form1.Handle,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,@D3DPresentParams,
FDirect3DDevice9);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_A,nil,nil,'D:\Images\Pillar.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_B,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_B,nil,nil,'D:\Images\Niagra.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SCRATCH,F3D3Surf9_SS,nil);
Here is the code I use to grab and then render the screenshot
FDirect3DDevice9.BeginScene;
FDirect3DDevice9.Clear(0,0,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,255),0,0);
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO, BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //Get the screen shot
FDirect3DDevice9.StretchRect(F3D3Surf9_SS,nil,BackBuffer,nil,D3DTEXF_NONE); //Draw it
FDirect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);
However this does not work.
The image does not get drawn to the screen. If I draw surface A or B to the screen, that works, but it doesn't work for surface SS. I know surface SS does hold the screenshot, because if I call D3DXSaveSurfaceToFile the resulting bitmap written to disk is a valid screenshot.
Any thoughts on the proper way to do this?
The reason this did not work is that F3D3Surf9_SS was created in system memory (D3DPOOL_SCRATCH) and cannot be drawn directly to the back buffer the way I was trying to.
So my solution was to use the F3D3Surf9_A surface and call UpdateSurface to copy the screenshot from system memory into surface A, which lives in video memory.
The only other changes needed were to create surface A in the same format as the screenshot surface, D3DFMT_A8R8G8B8, and to make sure the destination surface passed to UpdateSurface was large enough to hold the source surface.
NOTE:
This is slow, since we read from video memory into system memory and then straight back to video memory.
I needed this for my application because I want to capture everything the OS and other applications put on screen, but if you are only worried about your own application there are better alternatives.
If you know of a way to use GetFrontBufferData without going through system memory (which is the only way I could see it working), please let me know.
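For what it's worth, the capture-and-copy flow described above looks roughly like this in C++ (the original code is Delphi). Here 'device' stands in for FDirect3DDevice9, the surface names and sizes are placeholders, and error handling plus the surrounding BeginScene/EndScene/Present calls are omitted:

#include <d3d9.h>

void CaptureDesktopToBackBuffer(IDirect3DDevice9* device, UINT screenWidth, UINT screenHeight)
{
    IDirect3DSurface9* sysMemSurf   = NULL; // capture target, lives in system memory
    IDirect3DSurface9* videoMemSurf = NULL; // default-pool copy that StretchRect can read from
    IDirect3DSurface9* backBuffer   = NULL;

    // One surface to receive the front buffer, one in video memory to draw from.
    device->CreateOffscreenPlainSurface(screenWidth, screenHeight, D3DFMT_A8R8G8B8,
                                        D3DPOOL_SYSTEMMEM, &sysMemSurf, NULL);
    device->CreateOffscreenPlainSurface(screenWidth, screenHeight, D3DFMT_A8R8G8B8,
                                        D3DPOOL_DEFAULT, &videoMemSurf, NULL);

    device->GetFrontBufferData(0, sysMemSurf);                   // grab the desktop screenshot
    device->UpdateSurface(sysMemSurf, NULL, videoMemSurf, NULL); // system memory -> video memory
    device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    device->StretchRect(videoMemSurf, NULL, backBuffer, NULL, D3DTEXF_NONE); // draw (scaled) to the back buffer

    backBuffer->Release();
    videoMemSurf->Release();
    sysMemSurf->Release();
}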

Web image display in iPhone app

I am new to developing on the iPhone, so I am sorry if this is an easy question, but it has had me stumped for a little while.
Basically the app displays data retrieved from an XML feed. In that feed is an element that contains the path to an image, e.g. http://www.myserver.com/myimage.jpeg.
I want to be able to display that image in the list view of my iPhone app.
Most importantly, I don't want the list drawing to stop for each image; the rest of the data should be displayed immediately, and then each image should download and appear as quickly as the connection speed makes it available.
What is the best way of downloading that image and displaying it?
Ideally can someone point to some working example code.
Thanks
Stephen
Displaying the image: you could create a UIWebView and just point it to the path - and then it's fully zoomable too.
Displaying in the list view:
// Somehow download your image (ideally off the main thread)...
// imageURLString is the image path from your feed
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:imageURLString]]];
cell.image = image;
