I noticed some strange behaviour with Direct3D while working through this tutorial.
The dimensions I get from the Window object differ from the resolution configured in Windows. There I set 1920×1080, but the width and height reported by the Window object are 1371×771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe that is causing the problem, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPs). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
return int(dips * m_DPI / 96.f + 0.5f);
}
inline float ConvertPixelsToDips(int pixels) const
{
return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
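Applied to the viewport code in the question, a minimal sketch (assuming m_DPI has been initialized from LogicalDpi as above and ConvertDipsToPixels is accessible here) would be:
CoreWindow^ window = CoreWindow::GetForCurrentThread();
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
// Window->Bounds is in DIPs; convert to pixels before handing it to Direct3D.
// At 140% display scaling the ~1371 x 771 DIP bounds map back to 1920 x 1080 px.
viewport.Width = static_cast<float>(ConvertDipsToPixels(window->Bounds.Width));
viewport.Height = static_cast<float>(ConvertDipsToPixels(window->Bounds.Height));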
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.
I'm developing an offscreen Vulkan-based render server that performs 2D scene drawing per request.
Target platform: Ubuntu 18.04 in a Docker container
Physical device: llvmpipe (LLVM 11.0.1, 256 bits)
The scene consists of meshes of the same type and textures of different sizes. Each mesh is bound to its own texture. The maximum number of scene elements is 200.
I have just one material (vertex + fragment shaders), so I use a single pipeline.
High-level description of my workflow:
1) Set up the framebuffer and readback image
2) Load all meshes (VBOs and IBOs)
3) Load all textures (images, views, samplers)
4) Create a descriptor set exposing the material's inputs (mesh transform and texture sampler)
5) Put the per-mesh parameters (transform matrices) into a storage buffer
6) Update the fixed array of texture samplers
7) Draw each mesh
8) Send the readback image in the response
That works great on a dedicated GPU, but llvmpipe supports neither VK_EXT_descriptor_indexing nor the shaderSampledImageArrayDynamicIndexing feature. This means I can't index the texture sampler array in the shader with a value taken from push constants.
#version 450
layout(set = 0, binding = 2) uniform sampler2D textures[200];
layout(push_constant) uniform Constants
{
uint id;
} meta;
void main()
{
// ...
vec4 t = texture(textures[meta.id], uv); // fails on llvmpipe
// ...
}
To use only one sampler I need:
clear(framebuffer)
for mesh in meshes
{
bind(mesh.vbo)
bind(mesh.ibo)
bind(descriptorset)
update(sampler) // write current mesh texture
submit()
}
read(readback)
...
I don't understand how to set up the render pass to perform these steps; the submit() in the middle of this approach confuses me.
Could you help me?
I tried another approach that is based on StorageTexelBuffers.
1. Get the maximum texel storage size from the device limits (maxTexelBufferElements)
2. Split the scene data into chunks limited by maxTexelBufferElements
3. Set up the framebuffer and clear it
4. Draw chunk[i]
5. Read back the result
In this case no samplers are required.
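Roughly, the chunking in step 2 looks like this (a simplified sketch; physicalDevice and textureTexelCounts are placeholder names, and it assumes no single texture exceeds the limit):
#include <vector>
#include <vulkan/vulkan.h>

// Pack whole textures into chunks that each fit into one storage texel buffer.
std::vector<std::vector<uint32_t>> packTexturesIntoChunks(
    VkPhysicalDevice physicalDevice,
    const std::vector<uint32_t>& textureTexelCounts)
{
    VkPhysicalDeviceProperties props = {};
    vkGetPhysicalDeviceProperties(physicalDevice, &props);
    const uint32_t maxTexels = props.limits.maxTexelBufferElements;

    // Greedy packing: append textures to the current chunk until the limit is hit.
    std::vector<std::vector<uint32_t>> chunks(1);
    uint32_t usedTexels = 0;
    for (uint32_t i = 0; i < textureTexelCounts.size(); ++i)
    {
        if (usedTexels + textureTexelCounts[i] > maxTexels)
        {
            chunks.emplace_back(); // start a new chunk
            usedTexels = 0;
        }
        chunks.back().push_back(i); // texture i is drawn with the current chunk
        usedTexels += textureTexelCounts[i];
    }
    return chunks;
}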
I put N images into a 1D array and pass it to the fragment shader. In the shader I calculate the index of the specific texel and fetch it with imageLoad(...).
layout(location = 0) in vec2 uv;
layout(location = 0) out vec4 outColor;
layout(set = 0, binding = 2, rgba32f) uniform imageBuffer texels;
layout(push_constant) uniform Constants
{
uint id;
uint textureStart;
uint textureWidth;
uint textureHeight;
} meta;
void main()
{
// calculate specific texel real coordinates
uint s = uint(uv.x * float(meta.textureWidth));
uint t = uint(uv.y * float(meta.textureHeight));
// calculate texel index in global array
int index = int(meta.textureStart + s + t * meta.textureWidth);
outColor = imageLoad(texels, index);
}
The start of the texture is passed in push constants.
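Roughly, the per-draw push on the CPU side looks like this (a simplified sketch; cmd, pipelineLayout, and the per-mesh values are placeholder names, and the pipeline layout must declare a fragment-stage push-constant range of this size):
// Hypothetical C++ mirror of the shader's push_constant block (four 32-bit uints).
struct MeshMeta
{
    uint32_t id;
    uint32_t textureStart;  // index of the mesh's first texel in the imageBuffer
    uint32_t textureWidth;
    uint32_t textureHeight;
};

MeshMeta meta = { meshId, texStart, texWidth, texHeight };
vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT,
                   0, sizeof(MeshMeta), &meta);
vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);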
I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a display mode bigger than the native resolution the pixels get messed up and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, then use an approved fullscreen resolution and render the texture, stretched appropriately, in the center of the screen. But I just don't understand how to render to a texture, or how to stretch an image that way. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken. There are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's in native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
Display.setDisplayMode(Display.getDesktopDisplayMode());
Display.create();
}catch(LWJGLException e){
e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,WIDTH,HEIGHT,0,1,-1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{Display.setFullscreen(true);}catch(Exception e){}
int sh=Display.getHeight();
int sw=WIDTH*sh/HEIGHT;
GL11.glViewport(Display.getWidth()/2-sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
if (tex==null){return;}
if (mirror==null){mirror = "";}
//subx, suby, and subd are to grab sprites from a sprite sheet. subd is the measure of both the width and length of the sprite, as only images with dimensions that are the same and are powers of 2 are properly displayed.
int xinner = 0;
int xouter = (int) width;
int yinner = 0;
int youter = (int) height;
if (mirror.indexOf("h")>-1){
xinner = xouter;
xouter = 0;
}
if (mirror.indexOf("v")>-1){
yinner = youter;
youter = 0;
}
tex.bind();
glTranslatef(x,y,0);
glBegin(GL_QUADS);
glTexCoord2f(subx/texWidth,suby/texHeight);
glVertex2f(xinner,yinner);
glTexCoord2f((subx+subd)/texWidth,suby/texHeight);
glVertex2f(xouter,yinner);
glTexCoord2f((subx+subd)/texWidth,(suby+subd)/texHeight);
glVertex2f(xouter,youter);
glTexCoord2f(subx/texWidth,(suby+subd)/texHeight);
glVertex2f(xinner,youter);
glEnd();
glLoadIdentity();
}
Just to keep it clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport. With it you can decide which area of the surface will be rendered to. The default viewport covers the whole surface.
Since the second problem with the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue) appeared after changing the viewport, I will give my thoughts about it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render a tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they match. Or you can render the tiles so they overlap: the first one x=0 to x=10, the second one x=10 to x=20, and so on.
Without seeing the tile rendering code I can't verify that this is the problem, though.
After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders; I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
public Material blendMat; // material that uses a simple blend shader
public int downSampleFactor = 1;
private RenderTexture particleRT;
private static GameObject pCam;
void Awake () {
// make the main cameras depth buffer available to the shaders via _CameraDepthTexture
camera.depthTextureMode = DepthTextureMode.Depth;
}
// Update is called once per frame
void Update () {
}
void OnRenderImage(RenderTexture src, RenderTexture dest) {
// create tmp RT
particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
particleRT.antiAliasing = 1;
// create particle cam
Camera pCam = GetPCam ();
pCam.CopyFrom (camera);
pCam.clearFlags = CameraClearFlags.SolidColor;
pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.useOcclusionCulling = false;
pCam.targetTexture = particleRT;
pCam.depth = 0;
// Draw to particleRT's colorBuffer using mainCam's depth buffer
// ?? - how do i transfer this camera's depth buffer to pCam?
pCam.Render ();
// pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
// blend mainCam's colorBuffer with particleRT's colorBuffer
// Graphics.Blit(pCam.targetTexture, src, blendMat);
// copy resulting buffer to destination
Graphics.Blit (pCam.targetTexture, dest);
// clean up
RenderTexture.ReleaseTemporary(particleRT);
}
static public Camera GetPCam() {
if (!pCam) {
GameObject oldpcam = GameObject.Find("pCam");
Debug.Log (oldpcam);
if (oldpcam) Destroy(oldpcam);
pCam = new GameObject("pCam");
pCam.AddComponent<Camera>();
pCam.camera.enabled = false;
pCam.hideFlags = HideFlags.DontSave;
}
return pCam.camera;
}
}
I've a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) Z-PrePass, that only writes to the depth buffer
(ii) Color pass, that writes to both the color and depth buffer.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the color buffer with ZWrite On (to work around the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders, depth-testing against the Z-buffer laid down in the previous step.
Unfortunately, what I see is that the depth-stencil buffer is cleared before the particles are drawn (despite my not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller, separate render target. I'm writing this for future visitors. A lot of knowledge is still required.
Copying the depth
First, you have to get the scene depth at the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
To write the scene depth to the low-resolution depth texture, create a shader that just outputs the depth:
struct fragOut{
float depth : DEPTH;
};
sampler2D _LastCameraDepthTexture;
fragOut frag (v2f i){
fragOut tOut;
tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
To do this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
//beware premultiplied alpha in insert
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended materials do not.
You have to create a shader with custom blending to make alpha-blended materials work. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified by each blend operation.
Results
(Screenshots omitted: additive blending in front of alpha blending, and alpha blending in front of additive blending, each shown with the fx layer's RGB and alpha channels.)
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, let me know please.
I managed to reuse camera Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you use already for particle rendering.
I'm almost there with my little "window background from PNG image" project on Linux. I use the pure X11 API and the minimal LodePNG library to load the image. The problem is that the background is the negative of the original PNG image, and I don't know what the problem could be.
This is basically the code that loads the image, creates the pixmap, and applies the background to the window:
// required headers
// global variables
Display *display;
Window window;
int window_width = 600;
int window_height = 400;
// main entry point
// load the image with lodePNG (I didn't modify its code)
vector<unsigned char> image;
unsigned width, height;
//decode
unsigned error = lodepng::decode(image, width, height, "bg.png");
if(!error)
{
// And here is where I apply the image to the background
Screen* screen = NULL;
screen = DefaultScreenOfDisplay(display);
// Creating the pixmap
Pixmap pixmap = XCreatePixmap(
display,
XDefaultRootWindow(display),
width,
height,
DefaultDepth(display, 0)
);
// Creating the graphic context
XGCValues gr_values;
gr_values.function = GXcopy;
gr_values.background = WhitePixelOfScreen(screen); // WhitePixelOfScreen expects a Screen*, not the Display*
GC gr_context = XCreateGC(display, pixmap, GCFunction | GCBackground, &gr_values);
// Creating the image from the decoded PNG image
XImage *ximage = XCreateImage(
display,
CopyFromParent,
DisplayPlanes(display, 0),
ZPixmap,
0,
(char*)image.data(), // pass the decoded pixel bytes, not the address of the vector object
width,
height,
32,
4 * width
);
// Place the image into the pixmap
XPutImage(
display,
pixmap,
gr_context,
ximage,
0, 0,
0, 0,
window_width,
window_height
);
// Set the window background
XSetWindowBackgroundPixmap(display, window, pixmap);
// Free up used resources
XFreePixmap(display, pixmap);
XFreeGC(display, gr_context);
}
The image gets decoded (possibly badly decoded) and then applied to the background, but, as I said, the image colors are inverted and I don't know why.
MORE INFO
After decoding, I encoded the same image back into a PNG file and it is identical to the original one, so it looks like the problem is not related to LodePNG but to the way I use Xlib to place it on the window.
EVEN MORE INFO
Now I have compared the inverted image with the original one and found out that somewhere in my code RGB is converted to BGR. If a pixel in the original image is 95, 102, 119, in the inverted one it is 119, 102, 95.
I found the solution here. I'm not sure it is the best way, but it is certainly the simplest.
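The gist of the fix, as a simplified sketch (this reflects the usual explanation, not a quote of the linked code): LodePNG returns the pixels as RGBA bytes, while a 32-bit ZPixmap on a little-endian X server is interpreted as BGRA, so the red and blue bytes need to be swapped before calling XCreateImage.
// Reorder the decoded LodePNG pixels (RGBA) to the BGRA byte order a 32-bit
// little-endian ZPixmap expects; `image` is the vector from the code above.
for (size_t i = 0; i + 3 < image.size(); i += 4)
{
    unsigned char tmp = image[i];  // red
    image[i] = image[i + 2];       // move blue into the red slot
    image[i + 2] = tmp;            // move red into the blue slot
}
// ...then pass (char*)image.data() to XCreateImage as before.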
I need to know the active screen DPI on Linux and Mac OS. I think that on Linux Xlib might be useful, but I can't find a way to get the current DPI.
I want this information to get the real screen size in inches.
Thanks in advance!
On a Mac, use CGDisplayScreenSize to get the screen size in millimeters.
In X on Linux, call XOpenDisplay() to get the Display, then use DisplayWidthMM() and DisplayHeightMM() together with DisplayWidth() and DisplayHeight() to compute the DPI.
On the Mac, there's almost certainly a more native API to use than X. Mac OS X does not run X Window by default; it has a native windowing environment.
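For the Mac side, here is a minimal sketch built on those calls (CGMainDisplayID, CGDisplayScreenSize, and CGDisplayPixelsWide are part of the public CGDirectDisplay API; the zero-size guard and rounding are illustrative):
#include <ApplicationServices/ApplicationServices.h>

/* Approximate horizontal DPI of the main display: pixels / physical width in inches. */
static double main_display_dpi(void)
{
    CGDirectDisplayID display = CGMainDisplayID();
    CGSize sizeMM = CGDisplayScreenSize(display);      /* physical size in millimeters */
    size_t pixelsWide = CGDisplayPixelsWide(display);  /* width in pixels */
    if (sizeMM.width <= 0.0)
        return 0.0;                                    /* size unknown for this display */
    return (double)pixelsWide / (sizeMM.width / 25.4); /* 25.4 mm per inch */
}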
I cobbled this together from xdpyinfo...
Compile with: gcc -Wall -o getdpi getdpi.c -lX11
#include <X11/Xlib.h> /* for Display, XOpenDisplay, DisplayWidth(MM), etc. */

/* Get dots per inch
*/
static void get_dpi(int *x, int *y)
{
double xres, yres;
Display *dpy;
char *displayname = NULL;
int scr = 0; /* Screen number */
if ((NULL == x) || (NULL == y)) { return; }
dpy = XOpenDisplay (displayname);
if (NULL == dpy) { *x = 0; *y = 0; return; }
/*
* there are 2.54 centimeters to an inch; so there are 25.4 millimeters.
*
* dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
* = N pixels / (M inch / 25.4)
* = N * 25.4 pixels / M inch
*/
xres = ((((double) DisplayWidth(dpy,scr)) * 25.4) /
((double) DisplayWidthMM(dpy,scr)));
yres = ((((double) DisplayHeight(dpy,scr)) * 25.4) /
((double) DisplayHeightMM(dpy,scr)));
*x = (int) (xres + 0.5);
*y = (int) (yres + 0.5);
XCloseDisplay (dpy);
}
You can use NSScreen to get the dimensions of the attached display(s) in pixels, but this won't give you the physical size/PPI of the display and in fact I don't think there are any APIs that will be able to do this reliably.
You can ask a window for its resolution like so:
NSDictionary* deviceDescription = [window deviceDescription];
NSSize resolution = [[deviceDescription objectForKey:NSDeviceResolution] sizeValue];
This will currently give you an NSSize of {72,72} for all screens, no matter what their actual PPI is. The only thing that makes this value change is changing the scaling factor in the Quartz Debug utility, or if Apple ever turns on resolution-independent UI. You can obtain the current scale factor by calling:
[[NSScreen mainScreen] userSpaceScaleFactor];
If you really must know the exact resolution (and I'd be interested to know why you think you do), you could create a screen calibration routine and have the user measure a line on-screen with an actual physical ruler. Crude, yes, but it will work.
Here's a platform independent way to get the screen DPI:
// Written in Java
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.border.EmptyBorder;
public final class DpiTest {
public static void main(String[] args) {
JFrame frame = new JFrame("DPI");
JLabel label = new JLabel("Current Screen DPI: " + Toolkit.getDefaultToolkit().getScreenResolution());
label.setBorder(new EmptyBorder(20, 20, 20, 20));
frame.add(label);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack();
frame.setVisible(true);
}
}
You can download a compiled jar of this from here. After downloading, java -jar dpi.jar will show you the DPI.