I am new to DirectX 11 and learning it (I come from an OpenGL background).
I am confused and trying to understand what exactly the following API calls do and what the difference between them is:
ID3D11Texture2D* pBackBuffer = NULL;
hr = g_pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), ( LPVOID* )&pBackBuffer );
and
hr = g_pd3dDevice->CreateRenderTargetView( pBackBuffer, NULL, &g_pRenderTargetView );
pBackBuffer->Release();
What does GetBuffer really do? How is pBackBuffer then used in CreateRenderTargetView? Also, can someone explain, or point me to a link that explains, what a render target view is? The MSDN documentation didn't make much sense to me.
As I recall, GetBuffer() returns a pointer to the swap chain's internal back buffer.
From there you create a render target view that you can bind as your "real back buffer" target.
Think of it roughly as the OpenGL equivalent of:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
That's how I remember it (it was some time ago that I did this with DX11).
Edit:
A render target view is essentially your framebuffer: it wraps a texture that the pipeline can bind and render into.
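Putting the pieces together: after the GetBuffer/CreateRenderTargetView calls from the question, the view is bound as the pipeline's output. A minimal sketch (the g_pImmediateContext name below is my own placeholder for the immediate device context, not something from the question):
// Bind the render target view as the current output.
// This is roughly the moral equivalent of binding a framebuffer in GL.
g_pImmediateContext->OMSetRenderTargets( 1, &g_pRenderTargetView, NULL );
// From here on, draw calls render into the swap chain's back buffer,
// and g_pSwapChain->Present( 0, 0 ) shows the result on screen.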
I have a program where I want to draw data that is constantly being updated (it is microphone line-in data, incidentally). The data is an 8000-length array of doubles; I don't really care about 'losing' data that is overwritten between calls to the paint method.
In my naive implementation it became obvious that there are synchronisation issues, where the audio data is updated while the painting routine is under way.
I'm also aware I'm slightly out of date on Java and its Concurrency package, but my first response was to just put synchronised blocks around the shared-data code. Unsurprisingly this sometimes blocks the graphics thread, so I'm thinking there's probably a much better way of doing this.
Essentially I just don't have much experience with synchronisation and am screwing things up a bit somewhere. I wonder if someone with a better understanding of these matters might be able to suggest a more elegant solution that doesn't block the graphics thread?
My naive code:
private final Object lock = new Object();
private double[] audio = new double[8000]; // array size is always exactly 8000

public void update( double[] audio ) {
    synchronized( lock ) {
        this.audio = audio; // and some brief processing
    }
    repaint();
}

public void paint( Graphics g ) {
    synchronized( lock ) {
        // draw the contents of this.audio
    }
}
Self-answer (unless someone more intelligent can offer something better): I just save a reference to the audio array at the beginning of the paint routine and draw from that; any update to the audio buffer does its calculations in a separate array and then assigns this.audio to the new array in a single step.
It appears to work. The paint routine still gets an occasional flicker, but it is nothing like before, when it was flashing very noticeably about 10% of the time due to synchronised blocking. The audio data no longer updates half-way through the drawing routine either, so the problem seems solved. Probably.
private double[] audio = new double[8000]; // array size is always exactly 8000

public void update( double[] audio ) {
    // do any brief processing on the new array first
    this.audio = audio; // the reference is re-assigned in one step
    repaint();
}

public void paint( Graphics g ) {
    double[] audioNow = this.audio; // save the reference once
    // draw the contents of audioNow (not this.audio)
}
I am trying to find my way using SVGKit (https://github.com/SVGKit/SVGKit) for an iOS project dealing with geographical maps.
At this point, I can access a particular area on a map using a CALayer object. That lets me access the rectangle surrounding the area.
Here is the code I use for this:
CALayer *layer = [svgView.document layerWithIdentifier:@"myLayerID"];
[layer setBackgroundColor:[UIColor orangeColor].CGColor];
if( [layer isKindOfClass:[CAShapeLayer class]] )
{
    CAShapeLayer* shapeLayer = (CAShapeLayer*) layer;
    NSLog(@"That is good so far!");
    layer.mask = shapeLayer;
}
But I need to access the precise outline of the area, not only the surrounding rectangle, in order to highlight it.
From what I have read, I should use a CGPathRef and a mask.
How exactly can I do this?
Thanks for any tips.
When you find the CALayer, cast it to a CAShapeLayer (if you can; if you have the right layer, this should work fine).
if( [layer isKindOfClass:[CAShapeLayer class]] )
{
CAShapeLayer* shapeLayer = (CAShapeLayer*) layer;
// Now you have access to lots more Apple methods
}
Then you can change the line width, fill color, etc. - all sorts of funky stuff.
Also look into CALayer.shadow* - various features from Apple there that will automatically highlight the visible parts of a layer.
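For example, once you have the CAShapeLayer, a minimal sketch of highlighting the exact region (the colours below are just illustrative choices, not from the original question):
if( [layer isKindOfClass:[CAShapeLayer class]] )
{
    CAShapeLayer* shapeLayer = (CAShapeLayer*) layer;
    // The shape layer's path is the precise outline from the SVG, so filling it
    // highlights the area itself rather than its bounding rectangle.
    shapeLayer.fillColor = [UIColor orangeColor].CGColor;
    shapeLayer.strokeColor = [UIColor redColor].CGColor;
    shapeLayer.lineWidth = 2.0f;
}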
I have a view that has multiple views inside it, and an image presentation (a 'cover flow') in there too, and I need to take a screenshot programmatically!
The docs say that renderInContext: will not render 3D animations:
"Important The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties."
source: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html
I have searched a lot, and my 'best' solution (which is not good at all) is to create my own CGContext and record all the Core Graphics animations into it. But I really do not want to do that, because I would need to rewrite most of my animation code and it would be very expensive for memory... I found other solutions (some of them unworkable), such as using OpenGL or capturing through AV sessions, but none that helps me...
What are my options? Has anyone else run into this problem?
Thanks for your time!
Have you actually tried it? I'm currently working on a project with several 3D transforms, and when I programmatically take a screenshot it works just fine :)
Here is the code I use:
- (UIImage *)getScreenshot
{
    CGFloat scale = 1.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        CGFloat tmp = [[UIScreen mainScreen] scale];
        if (tmp > 1.5)
        {
            scale = 2.0;
        }
    }

    if (scale > 1.5)
    {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(self.frame.size);
    }

    // SELF HERE IS A UIVIEW
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
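To use it, you would just call the method on whatever view you want captured, for example (the names here are mine, purely illustrative):
UIImage *shot = [myContainerView getScreenshot];
UIImageView *preview = [[UIImageView alloc] initWithImage:shot];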
I got it working with protocols. I implement a protocol in all the UIView subclasses that apply 3D transforms, so when I request a screenshot, each of those subviews takes its own screenshot and they are combined into one UIImage. Not great for lots of views, but I'm only doing it for a few.
#pragma mark - Protocol implementation 'TDITransitionCustomTransform'

// Conforms to the "TDITransitionCustomTransform" protocol: returns the current
// image state of this view, rendered from its current layer.
- (UIImage *)imageForCurrentState {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}
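The protocol declaration itself isn't shown above; a plausible reconstruction (my guess, not the original code) is simply:
@protocol TDITransitionCustomTransform <NSObject>
// Each conforming view returns an image of its current, possibly transformed, state.
- (UIImage *)imageForCurrentState;
@end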
I think it works now because I'm doing the render on the transformed view's own layer, which has itself been transformed...
Could it be that it wasn't working before because renderInContext: doesn't capture the layers of subviews?
Anyone interested in a bit more of the code for this solution can find it here, in the Apple dev forum.
It may be a bug in the function, or it may just not be designed for this purpose...
Maybe you can use Core Graphics instead of CATransform3DMakeRotation :)
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
which does take effect in renderInContext:.
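For example (an illustrative application of that suggestion; the view name is my own), applying the 2D transform to the view instead of a 3D layer transform:
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
// A 2D (affine) transform, unlike a 3D one, survives renderInContext: of the parent layer.
someFlippedView.transform = flip;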
I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the color of the pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time.
UPDATE
Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
Still, any tips as regards the back buffer would be much appreciated.
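For reference, reading a single pixel from the currently bound framebuffer looks roughly like this (just a sketch; as it turned out below, GLKView's snapshot method is the safer route when GLKit owns the framebuffer):
// Read one RGBA pixel at (x, y); note that GL's origin is the bottom-left corner.
GLubyte pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
// pixel[0..3] now hold R, G, B and A in the 0-255 range.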
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
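If you prefer to create and wire it up in code rather than in the Storyboard, a sketch matching the target action below would be:
UITapGestureRecognizer *tapRecognizer =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(onTapGesture:)];
[[self view] addGestureRecognizer:tapRecognizer];
[tapRecognizer setDelegate:self];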
Anyway, here's my target action for the tap gesture recognizer:
-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
const CGPoint loc = [recognizer locationInView:[self view]];
[self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
-(void)pickAtX:(GLuint)x Y:(GLuint)y {
GLKView *glkView = (GLKView*)[self view];
UIImage *snapshot = [glkView snapshot];
[snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly
needs the contents of the view; never attempt to directly read the
contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to invoke glReadPixels to access the pixel data generated an EXC_BAD_ACCESS, and it was the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {

    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8 *data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 b = data[offset + 0];
        UInt8 g = data[offset + 1];
        UInt8 r = data[offset + 2];
        UInt8 a = data[offset + 3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i", r, g, b, a);
    }
}

@end
I originally tried some related code from an Apple API document entitled "Getting the pixel data from a CGImage context", which required two method definitions instead of this one, but it needed much more code and worked with data of type void * whose correct interpretation I was unable to implement.
That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
I've recently started learning HLSL after deciding that I wanted better lighting than what BasicEffect offered. After going through many tutorials, I found this and decided to learn from it:
http://brooknovak.wordpress.com/2008/11/13/hlsl-per-pixel-point-light-using-phong-blinn-lighting-model/
It seems that the shader above doesn't work very well in my game, though, because my game uses a tile-based approach, which means multiple models in a grid-like formation.
What happens is that each of my tiles gets shaded separately from the others. Please see this image for a visual reference:
http://i.imgur.com/1Sfi2.png
I understand that this is because each tile has its own model, and the shader doesn't take other models into account as it executes on the meshes of a single model.
Now, for the question: how does one go about shading all the tiles together? I understand that I may have to write a shader from scratch to accomplish this, but if anyone could give me some tips on how to achieve the effect I want, I'd really appreciate it.
It's late so there's a possibility that I've forgotten something. If you need more information, please tell me and I'll add it.
Thanks, Merigrim
EDIT:
Here is my code for drawing a model:
public void DrawModel(Model model, Matrix modelTransform, Matrix[] absoluteBoneTransforms, Vector3 color, float alpha = 1.0f, Texture2D texture = null)
{
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;

                // Per-mesh transform and camera parameters
                Matrix world = absoluteBoneTransforms[mesh.ParentBone.Index] * modelTransform;
                effect.Parameters["World"].SetValue(world);
                effect.Parameters["View"].SetValue(camera.view);
                effect.Parameters["Projection"].SetValue(camera.projection);
                effect.Parameters["CameraPos"].SetValue(camera.cameraPosition);

                // Light parameters
                Vector3 lookAt = camera.cameraPosition + camera.cameraDirection;
                effect.Parameters["LightPosition"].SetValue(new Vector3(lookAt.X, 1.0f, lookAt.Z - 5.0f));
                effect.Parameters["LightDiffuseColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
                effect.Parameters["LightSpecularColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
                effect.Parameters["LightDistanceSquared"].SetValue(40.0f);

                // Material parameters
                effect.Parameters["DiffuseColor"].SetValue(color);
                effect.Parameters["AmbientLightColor"].SetValue(Color.Black.ToVector3());
                effect.Parameters["EmissiveColor"].SetValue(Color.White.ToVector3());
                effect.Parameters["SpecularColor"].SetValue(Color.White.ToVector3());
                effect.Parameters["SpecularPower"].SetValue(10.0f);

                if (texture != null)
                {
                    effect.Parameters["DiffuseTexture"].SetValue(texture);
                }

                mesh.Draw();
            }
        }
        pass.Apply();
    }
}
It seems that the normals were the villain this time around. After correcting the normals in Blender, everything seems to work now.
I want to thank meds and Andrew Russell. Without your help I wouldn't have figured it out!
So now I know: when you have problems with your lighting, always check the normals first.