I'm almost there with my little "window background from PNG image" project in Linux. I use the pure X11 API and the minimal LodePNG to load the image. The problem is that the background shows up as the negative of the original PNG image, and I can't work out why.
This is basically the code that loads the image then creates the pixmap and applies the background to the window:
// required headers
// global variables
Display *display;
Window window;
int window_width = 600;
int window_height = 400;
// main entry point
// load the image with lodePNG (I didn't modify its code)
vector<unsigned char> image;
unsigned width, height;
//decode
unsigned error = lodepng::decode(image, width, height, "bg.png");
if(!error)
{
// And here is where I apply the image to the background
Screen *screen = DefaultScreenOfDisplay(display);
// Creating the pixmap
Pixmap pixmap = XCreatePixmap(
display,
XDefaultRootWindow(display),
width,
height,
DefaultDepth(display, 0)
);
// Creating the graphics context
XGCValues gr_values;
gr_values.function = GXcopy;
gr_values.background = WhitePixelOfScreen(screen); // takes a Screen*, not a Display*
GC gr_context = XCreateGC(display, pixmap, GCFunction | GCBackground, &gr_values);
// Creating the image from the decoded PNG image
XImage *ximage = XCreateImage(
display,
DefaultVisual(display, 0), // a Visual* is expected here, not CopyFromParent
DisplayPlanes(display, 0),
ZPixmap,
0,
(char*)image.data(), // pointer to the pixel data, not to the vector object itself
width,
height,
32,
4 * width
);
// Place the image into the pixmap
XPutImage(
display,
pixmap,
gr_context,
ximage,
0, 0,
0, 0,
width, // copy the image's own dimensions, not the window's
height
);
// Set the window background
XSetWindowBackgroundPixmap(display, window, pixmap);
// Free up used resources
XFreePixmap(display, pixmap);
XFreeGC(display, gr_context);
}
The image is decoded (possibly incorrectly) and then applied to the background but, as I said, the colors come out inverted and I don't know why.
MORE INFO
After decoding, I re-encoded the image data to a PNG file and it is identical to the original, so the problem does not seem to be in LodePNG but in the way I use Xlib to place the image on the window.
EVEN MORE INFO
I compared the "inverted" image with the original and found that somewhere in my code RGB is being converted to BGR: where a pixel in the original is 95, 102, 119, in the inverted one it is 119, 102, 95.
I found the solution here. I am not sure it is the best way, but it is certainly the simplest.
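In short, the fix amounts to swapping the red and blue bytes before handing the buffer to XCreateImage. A minimal sketch (assuming a little-endian X server whose 32-bit ZPixmap layout is effectively BGRA in memory, which is what LodePNG's RGBA output clashes with), run right after decoding:
// LodePNG decodes to RGBA; a 32-bit ZPixmap on a little-endian X server
// expects BGRA in memory, so swap the R and B bytes of every pixel in place.
for (size_t i = 0; i + 3 < image.size(); i += 4)
{
    unsigned char tmp = image[i];
    image[i] = image[i + 2]; // B into the first byte
    image[i + 2] = tmp;      // R into the third byte; G and A stay put
}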
Related
I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the CoreWindow object differ from the configured Windows resolution. There I set 1920×1080, but the width and height from the window object are 1371×771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe that is the cause, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in device-independent pixels (DIPs). To convert to or from pixels you need to use the dots-per-inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
return int(dips * m_DPI / 96.f + 0.5f);
}
inline float ConvertPixelsToDips(int pixels) const
{
return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.
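As a rough usage sketch (DisplayInformation and LogicalDpi are the real WinRT API; Window is the variable from the question, and the rounding matches the helper above), the numbers in the question are consistent with roughly 140% display scaling:
// Convert the CoreWindow bounds from DIPs to physical pixels.
// At ~140% scaling LogicalDpi is ~134.4, so 1371 * 134.4 / 96 ≈ 1920
// and 771 * 134.4 / 96 ≈ 1080, matching the configured resolution.
float dpi = DisplayInformation::GetForCurrentView()->LogicalDpi;
int pixelWidth = int(Window->Bounds.Width * dpi / 96.f + 0.5f);
int pixelHeight = int(Window->Bounds.Height * dpi / 96.f + 0.5f);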
I am trying to do some basic drawing with Skia. Since I'm working on grayscale images I want to use the corresponding color type. The minimal example I want to use is:
int main(int argc, char * const argv[])
{
int width = 1000;
int height = 1000;
float linewidth = 10.0f;
SkImageInfo info = SkImageInfo::Make(
width,
height,
SkColorType::kAlpha_8_SkColorType,
SkAlphaType::kPremul_SkAlphaType
);
SkBitmap img;
img.allocPixels(info);
SkCanvas canvas(img);
canvas.drawColor(SK_ColorBLACK);
SkPaint paint;
paint.setColor(SK_ColorWHITE);
paint.setAlpha(255);
paint.setAntiAlias(false);
paint.setStrokeWidth(linewidth);
paint.setStyle(SkPaint::kStroke_Style);
canvas.drawCircle(500.0f, 500.0f, 100.0f, paint);
bool success = SkImageEncoder::EncodeFile("B:\\img.png", img,
SkImageEncoder::kPNG_Type, 100);
return 0;
}
But the saved image does not contain the circle that was drawn. If I replace kAlpha_8_SkColorType with kN32_SkColorType I get the expected result. How can I draw the circle onto an 8-bit grayscale image? I'm working with Visual Studio 2013 on a 64-bit Windows machine.
kN32_SkColorType result
kAlpha_8_SkColorType result
You should use kGray_8_SkColorType rather than kAlpha_8_SkColorType. kAlpha_8_SkColorType is used for bitmap masks.
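A sketch of the suggested change, assuming your Skia build supports raster drawing to grayscale (kGray_8 is an opaque format, so the alpha type changes as well):
// Grayscale, one byte per pixel; gray has no alpha channel, so use kOpaque.
SkImageInfo info = SkImageInfo::Make(
width,
height,
SkColorType::kGray_8_SkColorType,
SkAlphaType::kOpaque_SkAlphaType
);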
I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a bigger display than the native resolution the pixels become messed up, and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scenario to a texture instead of the window, and then just use a fullscreen approved resolution and render the texture properly stretched out in the center of the screen, but I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken. There are random lines on the edges of anything that's drawn to the window. There are no glitchy lines in native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
Display.setDisplayMode(Display.getDesktopDisplayMode());
Display.create();
}catch(LWJGLException e){
e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,WIDTH,HEIGHT,0,1,-1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{Display.setFullscreen(true);}catch(Exception e){}
int sh=Display.getHeight();
int sw=WIDTH*sh/HEIGHT;
GL11.glViewport(Display.getWidth()/2-sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
if (tex==null){return;}
if (mirror==null){mirror = "";}
//subx, suby, and subd are to grab sprites from a sprite sheet. subd is the measure of both the width and length of the sprite, as only images with dimensions that are the same and are powers of 2 are properly displayed.
int xinner = 0;
int xouter = (int) width;
int yinner = 0;
int youter = (int) height;
if (mirror.indexOf("h")>-1){
xinner = xouter;
xouter = 0;
}
if (mirror.indexOf("v")>-1){
yinner = youter;
youter = 0;
}
tex.bind();
glTranslatef(x,y,0);
glBegin(GL_QUADS);
glTexCoord2f(subx/texWidth,suby/texHeight);
glVertex2f(xinner,yinner);
glTexCoord2f((subx+subd)/texWidth,suby/texHeight);
glVertex2f(xouter,yinner);
glTexCoord2f((subx+subd)/texWidth,(suby+subd)/texHeight);
glVertex2f(xouter,youter);
glTexCoord2f(subx/texWidth,(suby+subd)/texHeight);
glVertex2f(xinner,youter);
glEnd();
glLoadIdentity();
}
Just to keep it clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport (see the sketch below). With it you can decide which area of the surface will be rendered to; the default viewport covers the whole surface.
Since the second problem with the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue) appeared after changing the viewport, I will give my thoughts on it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render one tile between x=0 and x=9 and the next between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they are the same, or you can render the tiles to overlap: the first from x=0 to x=10, the second from x=10 to x=20, and so on.
Without seeing the tile rendering code I can't verify that this is the problem, though.
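For the aspect-ratio part, a sketch of a letterboxed viewport (plain GL calls that map one-to-one onto LWJGL's GL11 bindings; WIDTH/HEIGHT are the game's logical resolution from the question, and displayWidth/displayHeight stand in for Display.getWidth()/Display.getHeight()):
// Largest viewport that fits the screen while keeping the WIDTH:HEIGHT
// aspect ratio, centered so the leftover area becomes black bars.
int sw = displayWidth, sh = displayHeight;
if (sw * HEIGHT > sh * WIDTH)
    sw = sh * WIDTH / HEIGHT; // screen wider than the game: pillarbox
else
    sh = sw * HEIGHT / WIDTH; // screen taller than the game: letterbox
glViewport((displayWidth - sw) / 2, (displayHeight - sh) / 2, sw, sh);
// Keep the projection in the game's logical units so that glOrtho and
// glViewport agree and tiles land on whole pixels as closely as possible.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);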
I have a standard scene with the init method as follows:
@interface CommodoreScene : CCLayerColor
@end
@implementation CommodoreScene
- (id) init {
if (( self=[super initWithColor:ccc4(255, 255, 255, 255)] )) {
}
return self;
}
@end
However, when I run the scene on my iPhone 4 or the simulator, half the screen goes black and the other half shows my layer color (white).
I'm currently running the scene with [_director pushScene:[CommodoreScene node]]; and running cocos2d-iphone 2.0.0.
- (id) init {
if (( self=[super initWithColor:ccc4(255, 255, 255, 255) width:480 height:320] )) {
}
return self;
}
Try that. The important part is of course the change in the call to super. You might still have an issue with [[CCDirector sharedDirector] winSize] reading properly in the -init method if you're in landscape (which, judging from the bug you're experiencing, you are); at least I did in my tests. As I recall, this is a known bug of some kind: it seems to read winSize as portrait instead of landscape. You can fix this by doing the following:
CGRect rect = CGRectMake(0, 0, 480, 320);
CGSize size = rect.size;
// position the label on the center of the screen
label.position = ccp( size.width /2 , size.height/2 );
Using that newly created size instead of winSize in your -init method fixes the issue. Notice that if you create and position an element after initialization, such as in onEnter, this problem goes away and the winSize from the director reads properly. Odd bug, it is.
You can change the content size of your color layer:
CGSize boxSize = CGSizeMake(sGame.winWidth, sGame.winHeight);
CCLayerColor *box = [CCLayerColor layerWithColor:ccc4(155,155,200,255)];
box.position = ccp(0.0f, 0.0f);
box.contentSize = boxSize;
[self addChild:box z:-6];
So I am coding in DirectX 9, and whenever I place a sprite inside a 2D world there is a white "halo" that appears around the sprite image. I am using PNGs, and the background behind the sprite is transparent. I have also tried using a pink background. It seems that the halo only appears on straight runs of pixels, but only on some edges. Any help is greatly appreciated!
m_d3d = Direct3DCreate9(D3D_SDK_VERSION); // create the Direct3D interface
D3DPRESENT_PARAMETERS d3dpp; // create a struct to hold various device information
ZeroMemory(&d3dpp, sizeof(d3dpp)); // clear out the struct for use
d3dpp.Windowed = windowed; // is program fullscreen, not windowed?
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD; // discard old frames
d3dpp.hDeviceWindow = hWnd; // set the window to be used by Direct3D
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8; // set the back buffer format to 32-bit
d3dpp.BackBufferWidth = screenWidth; // set the width of the buffer
d3dpp.BackBufferHeight = screenHeight; // set the height of the buffer
d3dpp.EnableAutoDepthStencil = TRUE; // automatically run the z-buffer for us
d3dpp.AutoDepthStencilFormat = D3DFMT_D16; // 16-bit pixel format for the z-buffer
// create a device class using this information and the info from the d3dpp struct
m_d3d->CreateDevice(D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
hWnd,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,
&d3dpp,
&m_d3ddev);
D3DXCreateSprite(m_d3ddev, &m_d3dspt); // create the Direct3D Sprite object
LPDIRECT3DTEXTURE9 texture;
D3DXCreateTextureFromFileEx(m_d3ddev, "DC.png", D3DX_DEFAULT, D3DX_DEFAULT,
D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
D3DX_DEFAULT, D3DCOLOR_XRGB(255, 0, 255), NULL, NULL, &texture);
m_d3ddev->BeginScene();
m_d3dspt->Begin(D3DXSPRITE_ALPHABLEND); // begin sprite drawing with transparency
D3DXVECTOR3 center(0.0f, 0.0f, 0.0f), position((appropriate x), (appropriate y), 1);
m_d3dspt->Draw(texture, NULL, &center, &position, D3DCOLOR_ARGB(255, 255, 255, 255));
m_d3dspt->End(); // end sprite drawing
m_d3ddev->EndScene();
m_d3ddev->Present(NULL, NULL, NULL, NULL);
Thanks
Peter
This occurs when you mess up your texture coordinates during sprite atlasing and accidentally sample off the edge of the texture or onto a neighbouring texture in the atlas.
That's the most common cause, anyway, AFAIK.
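If the sampler is reading past the sprite's edge, one common mitigation, offered as a sketch rather than a guaranteed fix for this exact case, is to clamp texture addressing and use point filtering for pixel-exact 2D (these are standard D3D9 sampler states):
// Clamp addressing so samples never wrap to the opposite edge of the texture,
// and use point filtering so border texels aren't blended with their neighbours.
m_d3ddev->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
m_d3ddev->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);
m_d3ddev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
m_d3ddev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);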