Direct3D11 GL_RGBA4 equivalent?

Is there an equivalent of the GL_RGBA4 texture format in D3D11? I can't seem to find it.
There are the 16-bit formats DXGI_FORMAT_B5G6R5_UNORM and DXGI_FORMAT_B5G5R5A1_UNORM, but no 4444 one.
Even D3D9 has all of them, so I don't understand why D3D11 would not...

Never mind, I was using the old D3D11 SDK; the format is called DXGI_FORMAT_B4G4R4A4_UNORM in the Windows 8 SDK.
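For what it's worth, here is a minimal sketch of creating such a texture once you build against the Windows 8 SDK (the device pointer, size, and bind flags are placeholders; also note the BGRA channel order, so pixel data may need swizzling compared with GL_RGBA4):

    // Assumes an existing ID3D11Device* called 'device' and a build against the
    // Windows 8 SDK, where DXGI_FORMAT_B4G4R4A4_UNORM is available (DXGI 1.2).
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 256;                        // placeholder size
    desc.Height           = 256;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B4G4R4A4_UNORM; // the 16-bit 4444 format
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &tex);
    // Not every driver/feature level supports this format, so checking
    // ID3D11Device::CheckFormatSupport first (or at least hr) is advisable.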

Related

Direct3D 9 Z-Buffer Precision Bug occurring only in release build

I'm currently experiencing a weird issue that looks like Z-Fighting with Direct3D 9. I suspect that my problem is actually a Z buffer precision issue.
I noticed that absolutely no depth artifacts appear in Debug builds (I'm using Visual Studio 2012). The bug only occurs in Release builds.
The depth buffer format I'm currently using is 24-bit padded with 8 (D3DFMT_D24X8). When I use only 16 bits, the exact same artifacts appear in both Debug AND Release builds. So what does that mean? Is DirectX rejecting 24-bit depth buffers? And if so, why would it even do that?
Aside from all that, I tried a 32-bit depth buffer, but it just fails and returns a null pointer for the D3D device.
Many thanks in advance.
Here's a screenshot of my problem:
OK, so I eventually found a work-around: I divided my scene into depth regions, and I render them one by one, clearing the Z-buffer between passes.
I currently have two passes (0.1m to 5m, and 5m to 10km). This seems to work pretty well for now.
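For illustration, a minimal sketch of that kind of depth-partitioned rendering with D3D9; the device pointer, field-of-view/aspect values, and the DrawScene helper are hypothetical, and each region gets its own projection near/far planes with a depth-only clear in between:

    #include <d3dx9.h>

    void DrawScene(float zNear, float zFar);   // hypothetical: draws only the
                                               // geometry inside this depth range

    void RenderWithDepthPartitions(IDirect3DDevice9* device, float fovY, float aspect)
    {
        // Farther region first, so the near region is composited on top of it.
        const float ranges[][2] = { { 5.0f, 10000.0f },   // 5 m to 10 km
                                    { 0.1f, 5.0f     } }; // 0.1 m to 5 m

        for (const auto& r : ranges)
        {
            D3DXMATRIX proj;
            D3DXMatrixPerspectiveFovLH(&proj, fovY, aspect, r[0], r[1]);
            device->SetTransform(D3DTS_PROJECTION, &proj);

            // Clear only the depth buffer between passes so the colour written
            // by the previous (farther) region is preserved.
            device->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

            DrawScene(r[0], r[1]);
        }
    }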

What is the purpose of /drawable-v14 or /drawable-v11?

I've seen that some of Google's and other open-source projects have resource directories like /drawable-v14 or /drawable-hdpi-v11.
Now, I understand what this means: all devices running SDK level v11/v14 or higher should use these images.
But what is the purpose of this? Why and when should I use them? Why should HDPI devices on SDK v11 ever use different images than HDPI devices on SDK 10?
I just can't see when I would ever use one image for SDK 10 and another for SDK 17, for example. It makes no sense to me.
As a side note, using resources like /values-v{11/14/17} is logical and has a practical benefit.
This can be used to style your icons according to the UI guidelines of the given Android version.
Android's GUI style has evolved a lot since the beginning. In Cupcake, icons had to show a 3D effect with a shadow; with ICS, icons became flatter. And it will keep changing with Android 5 and beyond... (Let's watch Google I/O 2014 to learn more about it! By the way, it's today!)
So basically you can stick to the GUI guidelines across different Android versions. It's probably not the only use case, but it is one of them.

How can I access the frames captured by Camera from the video buffer in Android 4.1

I'm wondering whether I can do that in Android.
The version I am using is Android 4.1
In general, I think you can use JNI, and if you know which classes to use you can get the buffer you're asking for.
Try chewing on these two links: here and here
Note that before ICS the stack was very different from what is presented in these two links, so what you come up with for ICS won't work on 2.3.3 or on 3.x.x.

OpenGL difficulties in Linux

I have had this problem for 3-4 months. My OpenGL code does not run as well on Linux as it does on Windows. I have a project that I need to run on Linux, with timers, pipes, ... that use the Windows API. I need to migrate the code, but it doesn't look good; for example, things are flashing on the screen! Is it caused by my graphics card drivers on Linux, or is it some other problem?
Also, I have an ATI HD3470 in a VAIO FW13GU/H laptop running Debian 5. Are there any good drivers for the ATI HD series? (I have seen some drivers, but they're not so good :-S)
Try creating a simple demo program that uses the OpenGL features you're using in your code, and try isolating which feature causes the problem. If all of them work as you expect, there is a good chance that the bug is in your own code: you may be assuming some platform-specific behavior that gets broken on Linux.
I once had a bug when porting Windows C++ code where the 3D mesh parser didn't correctly handle Windows-style line endings. That caused the mesh to come out with ugly colors, because a number string was passed to a home-brewed string-to-int function (which I promptly replaced with atoi()) that silently broke when it hit the extra line-ending character.
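As an illustration of that kind of fix (a hypothetical parsing loop, not the original code), the trick is simply to strip the trailing '\r' that Windows-style "\r\n" line endings leave behind before converting the token:

    #include <cstdlib>
    #include <fstream>
    #include <string>
    #include <vector>

    std::vector<int> ReadIntsPerLine(const char* path)
    {
        std::vector<int> values;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line))              // strips '\n' but not '\r'
        {
            if (!line.empty() && line.back() == '\r')
                line.pop_back();                    // drop the stray carriage return
            if (!line.empty())
                values.push_back(std::atoi(line.c_str()));
        }
        return values;
    }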

If I build and link an OpenGL application using only OpenGL ES 1.x calls, will it still work?

I am writing an OpenGL game which will hopefully be for both Linux and iPhone OS. I basically want to be able to build using the OpenGL ES 1.5 headers and run it on my Linux desktop. Can I do this? I.e., I want to use only the subset of API calls common to OpenGL and OpenGL ES.
Doing the above and linking with the normal libGL.a from my system gets me my screen, but I seem to be able to do nothing but change the scene background colour.
I've done exactly that, and it worked well for me.
There are a bunch of OpenGL|ES extensions that aren't available in standard OpenGL but are very nice to have on a low-spec platform; glDrawTexImage is such an extension. Emulating these extensions with a handful of desktop OpenGL calls is not a big deal, though.
Also, OpenGL|ES supports the fixed-point data format for most entry points; take glClearColorx, for example. These aren't available in desktop OpenGL, so you have to write a wrapper if you want to use them. It's a bit more work if you also store your vertex data in this format.
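As a sketch of what such a wrapper can look like (assuming the usual 16.16 GLfixed format; the helper names are just illustrative):

    #include <GL/gl.h>

    typedef GLint GLfixed;   // 16.16 fixed-point type (define it if your gl.h doesn't already)
    static inline GLfloat FixedToFloat(GLfixed x) { return (GLfloat)x / 65536.0f; }

    // Desktop-side stand-in for the GL ES fixed-point entry point: convert the
    // 16.16 fixed-point components to float and forward to the standard call.
    void glClearColorx(GLfixed r, GLfixed g, GLfixed b, GLfixed a)
    {
        glClearColor(FixedToFloat(r), FixedToFloat(g),
                     FixedToFloat(b), FixedToFloat(a));
    }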
Oh, and note that OpenGL|ES does not come with the GLU library. You can use it on the desktop, but if you do, you'll have to reimplement those functions later (see the hundred questions about gluLookAt and gluUnproject).
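If you do end up reimplementing GLU, a gluLookAt replacement is short; here is a minimal float-only sketch (the helper names are made up) that builds the same view matrix and multiplies it onto the current matrix stack:

    #include <math.h>
    #include <GL/gl.h>

    static void Normalize(float v[3])
    {
        float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        if (len > 0.0f) { v[0] /= len; v[1] /= len; v[2] /= len; }
    }

    static void Cross(const float a[3], const float b[3], float out[3])
    {
        out[0] = a[1]*b[2] - a[2]*b[1];
        out[1] = a[2]*b[0] - a[0]*b[2];
        out[2] = a[0]*b[1] - a[1]*b[0];
    }

    void myLookAt(float eyeX, float eyeY, float eyeZ,
                  float centerX, float centerY, float centerZ,
                  float upX, float upY, float upZ)
    {
        float f[3]  = { centerX - eyeX, centerY - eyeY, centerZ - eyeZ };
        float up[3] = { upX, upY, upZ };
        float s[3], u[3];

        Normalize(f);
        Cross(f, up, s);   // side = forward x up
        Normalize(s);
        Cross(s, f, u);    // corrected up = side x forward

        // Column-major order, as glMultMatrixf expects.
        const float m[16] = {
            s[0],  u[0], -f[0], 0.0f,
            s[1],  u[1], -f[1], 0.0f,
            s[2],  u[2], -f[2], 0.0f,
            0.0f,  0.0f,  0.0f, 1.0f
        };

        glMultMatrixf(m);
        glTranslatef(-eyeX, -eyeY, -eyeZ);
    }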
There is no such thing as OpenGL ES 1.5. Did you mean 1.1?
Also, how do you get a window? That is platform-specific.
In any case, you should still compile against the header that corresponds to the library you will link against. You don't know for sure what the header sets up (e.g. on Windows, which you don't care about, but still, calling conventions are specified in there).
There are also some calls that don't map well between the two, e.g. APIs that take only doubles in GL take floats in GL ES (from the ES spec):
The double-precision only commands DepthRange, Frustum, and Ortho are replaced with single-precision or fixed-point variants.
So in short, there is a bit more work than just using the same code, although the work in question is still minimal if you stick to the GL ES subset.
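For example, bridging one of those mismatches on the desktop side is just a thin forwarding wrapper (compile it only into the desktop build, e.g. behind your own platform define):

    #include <GL/gl.h>

    // GL ES 1.x has glOrthof; desktop GL only has the double-precision glOrtho,
    // so shared code can call glOrthof everywhere if the desktop build adds this.
    void glOrthof(GLfloat left, GLfloat right, GLfloat bottom, GLfloat top,
                  GLfloat zNear, GLfloat zFar)
    {
        glOrtho(left, right, bottom, top, zNear, zFar);
    }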
