While porting an application from the iPhone 4S to the iPhone 5, I got the error GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS after calling this code:
glBindFramebuffer(GL_FRAMEBUFFER, 1);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, 1);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, 2);
According to the OpenGL ES spec, the error is caused by "Attachments do not
have the same width and height", but I'm using 1136 x 640 for both the color and depth buffers.
The same code runs fine on the iPhone 4S (with 960 x 640 buffers).
Your depth and color buffers have different sizes.
To get the real size of the color buffer:
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &w);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &h);
You also need to set the scale on the CAEAGLLayer:
layer.contentsScale = [[UIScreen mainScreen] scale];
You can see how it's done in Ogre3D, SDL, Cocos2d-x.
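Putting that together, a minimal sketch (assuming colorRenderbuffer and depthRenderbuffer are renderbuffer names you have already generated, and context/layer are your EAGLContext and CAEAGLLayer):
// Match the layer scale first so the drawable is allocated at native resolution
layer.contentsScale = [[UIScreen mainScreen] scale];
// Let the context allocate the color storage from the layer, then ask GL what it actually got
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
GLint w = 0, h = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &w);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &h);
// Size the depth buffer to exactly the same dimensions as the color buffer
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);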
I noticed some strange behaviour with Direct3D while doing this tutorial.
The dimensions I get from the CoreWindow object differ from the resolution configured in Windows: there I set 1920x1080, but the width and height from the window object are 1371x771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe this causes the problem, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPS). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
    return int(dips * m_DPI / 96.f + 0.5f);
}

inline float ConvertPixelsToDips(int pixels) const
{
    return float(pixels) * 96.f / m_DPI;
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.
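As a rough sketch of how those pieces combine (the variable names here are illustrative, not taken from the template):
// Sketch only: convert the CoreWindow bounds (in DIPs) into physical pixels.
// DisplayInformation is in Windows::Graphics::Display, CoreWindow in Windows::UI::Core.
float dpi = DisplayInformation::GetForCurrentView()->LogicalDpi;
CoreWindow^ window = CoreWindow::GetForCurrentThread();
int pixelWidth  = int(window->Bounds.Width  * dpi / 96.f + 0.5f);
int pixelHeight = int(window->Bounds.Height * dpi / 96.f + 0.5f);
// At 140% display scaling (LogicalDpi = 134.4), 1371 x 771 DIPs works out to
// roughly 1920 x 1080 pixels, which matches the numbers in the question.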
I'm trying to convert an iPhone app that has three different asset sizes for three different screen sizes (base: iPhone 320x480; mid: iPhone 640x960, iPad 768x1024; high: iPad 3) into an Android app that uses these assets based on the resolution of the device.
The code uses the iPad/iPhone idioms, and Apportable overrides the UIDevice methods for this using the VerdeConfigIsTablet() method. It is very unclear how this is done. Is there a good resource for understanding how each resolution is assigned and scaled?
Thanks
See the Apportable UIScreen docs.
Also, potentially useful is [[UIScreen mainScreen] bounds]:
(gdb) p [UIScreen mainScreen]
$2 = (struct objc_object *) 0x6acd5490
(gdb) p [$2 bounds]
$3 = {origin = {x = 0, y = 0}, size = {width = 800, height = 1205}}
I have a standard scene with the init method as follows:
@interface CommodoreScene : CCLayerColor
@end

@implementation CommodoreScene
- (id) init {
    if (( self = [super initWithColor:ccc4(255, 255, 255, 255)] )) {
    }
    return self;
}
@end
However, when I run the scene on my iPhone 4 or the simulator, half the screen goes black and the other half shows my layer color (white).
I'm currently running the scene with [_director pushScene:[CommodoreScene node]]; and running cocos2d-iphone 2.0.0.
- (id) init {
    if (( self = [super initWithColor:ccc4(255, 255, 255, 255) width:480 height:320] )) {
    }
    return self;
}
Try that. The important part is of course the change in the call to super. You might still have an issue with [[CCDirector sharedDirector] winSize] not reading properly in the -init method if you're in landscape (which, judging from the bug you're experiencing, you are); at least I did in my tests. As I recall, this is a known bug of some kind: it seems to report winSize as portrait instead of landscape. You can work around it as follows:
CGRect rect = CGRectMake(0, 0, 480, 320);
CGSize size = rect.size;
// position the label on the center of the screen
label.position = ccp( size.width /2 , size.height/2 );
Using that newly created size instead of winSize in your -init method fixes the issue. Notice that if you create and position an element after initialization, such as in onEnter, this problem goes away and the winSize from the director reads properly. Odd bug, it is.
You can change the content size of your color layer:
CGSize boxSize = CGSizeMake(sGame.winWidth, sGame.winHeight);
CCLayerColor *box = [CCLayerColor layerWithColor:ccc4(155,155,200,255)];
box.position = ccp(0.0f, 0.0f);
box.contentSize = boxSize;
[self addChild:box z:-6];
I have an SVG element defined with a width and height of "297mm" and "210mm", and no viewBox. I do this because I want to work inside an A4 viewport, but I want to create elements in pixels.
At some point, I need to scale up an object defined in pixels so that it fits into part of the A4 page. For example, I might have an object which is 100 px wide, that I want to scale to 30mm horizontally.
Is this possible? I think I need a second container with a new co-ordinate system, but I can't find any way to do this.
EDIT
It seems that I (or SVG) have misunderstood what a pixel is. I realised that I could size a line to 100%, and then get its pixel size with getBBox to find the scaling required. I wrote this code and ran it on two clients, one with a 1280x1024 monitor (80 dpi) and one with a 1680x1050 LCD (90 dpi):
function getDotsPerInch() {
    var hgroup, vgroup, hdpi, vdpi;
    hgroup = svgDocument.createElementNS(svgNS, "g");
    vgroup = svgDocument.createElementNS(svgNS, "g");
    svgRoot.appendChild(hgroup);
    svgRoot.appendChild(vgroup);
    drawLine(hgroup, 0, 100, "100%", 100, "#202020", 1, 1);
    drawLine(vgroup, 100, 0, 100, "100%", "#202020", 1, 1);
    hdpi = hgroup.getBBox().width * 25.4 / WIDTH;
    vdpi = vgroup.getBBox().height * 25.4 / HEIGHT;
    drawText(hgroup, "DPI, horizontal: " + hdpi.toFixed(2), 100, 100);
    drawText(hgroup, "DPI, vertical: " + vdpi.toFixed(2), 100, 120);
}
IE9, FF, Opera, and Chrome all agree that both monitors are 96 dpi horizontally and vertically (although Opera is slightly inaccurate), and Safari reports 0 dpi on both monitors. So SVG just appears to define a "pixel" as 1/96 of an inch. Some quick Googling appears to confirm this, though I haven't found anything definitive; most hits give 90 dpi, with 96 dpi as the Firefox variant.
You can nest <svg> elements and use the viewBox attribute on the child element to get a new coordinate system. Give the child <svg> element x, y, width and height attributes describing where it should appear, in the parent's coordinate system.
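For example (the dimensions here are illustrative), a drawing authored in a 100 x 100 pixel coordinate system placed into a 30mm x 30mm region of the A4 page:
<svg width="297mm" height="210mm" xmlns="http://www.w3.org/2000/svg">
  <!-- child svg: occupies 30mm x 30mm on the page, but its contents use a 100 x 100 unit system -->
  <svg x="20mm" y="20mm" width="30mm" height="30mm" viewBox="0 0 100 100">
    <rect x="0" y="0" width="100" height="100" fill="steelblue"/>
  </svg>
</svg>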
I have
VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.
I'm rendering one static VBO per frame. The VBO has 30,000 triangles, with 3 lights and one texture, and I'm getting 15 FPS.
Are Intel cards really that bad, or am I doing something wrong?
The drivers are the standard open-source drivers from Intel.
My code:
void init() {
    glGenBuffersARB(4, vbos);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}
void draw() {
    glPushMatrix();
    const Vector3f O = ps.getPosition();
    glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
    glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z() - originXYZ[2]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glColorPointer(4, GL_FLOAT, 0, 0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glNormalPointer(GL_FLOAT, 0, 0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);

    texture->bindTexture();
    glDrawArrays(GL_TRIANGLES, 0, verticesNum);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); //disabling VBO
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glPopMatrix();
}
EDIT: maybe it's not clear - initialization is in a different function, and is only called once.
A few hints:
With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than 1000 entries. Interleaving the data of course implies that all of it is held by a single VBO.
Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Instead draw using glDrawElements; you can use the index array to implement some cheap LOD. (A sketch combining both suggestions follows after these hints.)
Experiment with the number of indices handed to each glDrawElements call. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure.
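Something along these lines (only a sketch; vbo, ibo, vertices, indices and indexNum are placeholders, and the buffer uploads belong in your init function):
/* Interleaved vertex layout: one struct per vertex, one VBO for everything */
typedef struct {
    GLfloat position[3];
    GLfloat color[4];
    GLfloat normal[3];
    GLfloat texcoord[2];
} Vertex;
/* Init time: upload interleaved vertices and a separate index buffer */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, verticesNum * sizeof(Vertex), vertices, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexNum * sizeof(GLushort), indices, GL_STATIC_DRAW_ARB);
/* Draw time: stride is sizeof(Vertex), offsets point into the interleaved struct */
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, position));
glColorPointer(4, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, color));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, texcoord));
glDrawElements(GL_TRIANGLES, indexNum, GL_UNSIGNED_SHORT, 0);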
Oh, and at the end of your draw code you have:
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
I think you meant the last two of those to be glDisableClientState.
Make sure your system has OpenGL acceleration enabled:
$ glxinfo | grep rendering
direct rendering: Yes
If you get 'no', then you don't have OpenGL acceleration.
Thanks for the answers.
Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or Glest work fast enough. So the problem is probably in my code.
I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics too; that's why I posted this question.