How do I turn off Direct3D output filtering / antialiasing?

I'm using DirectX 9 (through SlimDX) to render video output onto the screen.
The software is used for marking bad pixels on the output device, so it's vital that no texture filtering / smoothing is applied.
I disabled all the texture filtering options, disabled anti-aliasing, and aligned the texture to the screen at a 1:1 ratio between the backbuffer and the rendered texture.
Thing is, on some devices, DirectX seems to be doing some bilinear filtering / blurring on the output.
I need all the output to be blocky, with zero filtering.
Since all the resize logic operates on the control itself (no backbuffer resizing etc.), I don't have the option of resizing the backbuffer.
Weird thing is, this only happens on some devices, not all.
How do I tell DirectX not to smooth what it renders to the control? (i.e. disable whatever filtering is applied to the back/front buffer)
Thanks in advance for any help (:
For those who do not understand what I'm trying to get rid of:
when the resolution of the rendered image is lower than the resolution of the area drawn to, Direct3D creates a smooth transition between pixels.
What I want is for each source pixel to be drawn as a plain rectangle, with absolutely no filtering. Where can I find the settings that control this behavior?
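For reference, "no filtering" in native D3D9 terms means point sampling; a minimal sketch of the sampler states I mean (SlimDX wraps the same enums):
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT); // nearest-neighbour on magnification
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT); // nearest-neighbour on minification
device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);  // no blending between mip levels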

I looked for this online, and it seems there is no proper explanation of this behavior anywhere.
What was going on:
The default DirectX behavior on Windows XP and earlier was to smooth all rendering output (if the control / screen you're rendering to isn't the exact same size as the backbuffer).
From Windows Vista onward the default behavior is to leave pixels as-is (which basically means a stretched rectangle representing each pixel).
Since this is the behavior of the driver itself, the user has absolutely no control over it.
The only solution is to reset the backbuffer each time the control is resized, to preserve 1:1 per-pixel rendering; there is no way around it - otherwise DirectX decides how the output is stretched.
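A minimal native-D3D9 sketch of that resize handling (the function and variable names here are illustrative; SlimDX exposes the same PresentParameters / Reset pattern):
// Hypothetical resize handler: keep the backbuffer exactly the size of the
// control's client area, so the driver never stretches (and never filters).
void OnControlResize(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& d3dpp,
                     int clientWidth, int clientHeight)
{
    d3dpp.BackBufferWidth  = clientWidth;
    d3dpp.BackBufferHeight = clientHeight;
    // Reset invalidates all D3DPOOL_DEFAULT resources - release them before
    // the call and recreate them after it succeeds.
    device->Reset(&d3dpp);
}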

Related

How to make image fit screen in Godot

I am new to the Godot engine and I am trying to make a mobile game (portrait mode only). I would like to make the background image fit the screen size. How do I do that? Do I have to import images with specific sizes and implement them all for various screens? If I import an image that is too big, it will just cut out the parts that don't fit the screen.
Also, while developing, which width and height values should I use for these purposes?
With Godot 3, I am able to set the size and position of sprites / other UI elements using script. I am not using the stretch mode for the display window.
Here is how you can easily make the sprite match the viewport size:
var viewport_width = get_viewport().size.x
var viewport_height = get_viewport().size.y
# Scale factor that makes the sprite's texture span the full viewport width
var scale_factor = viewport_width / $Sprite.texture.get_size().x
# Optional: center the sprite - only needed if the sprite's Offset > Centered checkbox is set
$Sprite.set_position(Vector2(viewport_width / 2, viewport_height / 2))
# Use the same scale value horizontally and vertically to maintain the aspect ratio;
# if you don't want to maintain the aspect ratio, simply set different
# scales along x and y
$Sprite.set_scale(Vector2(scale_factor, scale_factor))
Also, for targeting mobile devices I would suggest importing a PNG of size 1080x1920 (you said portrait).
Working with different screen sizes is always a bit complicated, especially for mobile games, because of the variety of screen sizes, resolutions, and aspect ratios.
The easiest way I can think of is scaling the viewport. Keep in mind that your root node is always a viewport. In Godot you can stretch the viewport in the project settings (you have to enable the stretch mode option). You can find a nice little tutorial here.
However, viewport stretching might result in image distortion or black bars at the edges.
Another, more elegant approach is to create an image that is larger than your viewport and define an area that has to be shown on every device, no matter what the resolution is. Here is someone showing what I mean.
I can't really answer your second question about the optimal width and height, but I would look up the most common mobile phone resolutions and aspect ratios and go with those. In the end you should probably start with the width and height ratio of the phone you want to use for testing and debugging.
Hope that helps.

Auto Layout Compatibility

I thought I had a grasp on Auto Layout, but when I test it on my 3.5-inch iPhone the UI does not look the way it should.
The question is about the Simulated Metrics: what should the size of the device be? Inferred, Freeform, Detail?
Keep in mind I am building for all iPhone sizes: 3.5, 4, 4.7, 5.5...
Thanks much.
JZ
I think setting constraints through Auto Layout doesn't require you to concentrate on the Simulated Metrics. It could be Freeform or a fixed size. What you need to know is your simulator frame's size, so you can calculate your components' constraints.
For example, let's assume you have an image view and you want it to consume 50% of your device's screen. You can use an aspect-ratio constraint to do that, but to calculate the ratio you need to know the simulator's frame size.
If you select Freeform, Xcode will show you your simulator's width and height. If it is 400x600, then you know that to be 50% of your screen size your image view needs to be 200x300. You can then just go and set your constraints accordingly to make your image view 200x300 in the simulator, and when you test on any device, if you have set the constraints properly, you will get consistent behaviour.

Direct3D 9 Backbuffer sampling

I'm locking the backbuffer in Direct3D 9 and copying an image to it. I noticed on one computer that when the image is stretched to the screen, it becomes blurry. On another computer I tested on, it's completely unfiltered (pixelated). Is there a way to specify how the backbuffer is sampled to the screen, or is it controlled by something else?
I've tried
Device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
However, it had no effect; I think it only affects textures.
SetSamplerState does not affect how the backbuffer is drawn to the screen. AFAIK most drivers will use point sampling for that final copy, which means pixels will be lost or doubled when the sizes don't match, resulting in bad quality. BTW, what was the GPU/driver on the machine where it looked fine? (You can't/shouldn't depend on this behavior everywhere.)
The right way to do this is to copy the image to a texture and render a screen-aligned quad, so the hardware sampling you configure with SetSamplerState actually applies and smooths the result for you.
If for whatever reason you cannot use a texture + rendering pass, you can use IDirect3DDevice9::StretchRect to filter the image when copying it to the backbuffer. To actually load the image from system memory, you'll have to go through another surface, either locking and copying it yourself or using D3DXLoadSurfaceFromMemory.
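A minimal sketch of the StretchRect route (the surface variables are assumed to be created elsewhere; error handling omitted):
// sysmemSurface: a D3DPOOL_SYSTEMMEM surface already holding the image
// (e.g. filled with D3DXLoadSurfaceFromMemory).
// defaultPoolSurface: same size/format, created in D3DPOOL_DEFAULT.
IDirect3DSurface9* backBuffer = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
// StretchRect can't read from system memory, so first copy the image 1:1
// onto the default-pool surface.
device->UpdateSurface(sysmemSurface, NULL, defaultPoolSurface, NULL);
// NULL rects = whole surfaces; D3DTEXF_LINEAR gives smooth scaling,
// D3DTEXF_POINT gives the blocky kind.
device->StretchRect(defaultPoolSurface, NULL, backBuffer, NULL, D3DTEXF_LINEAR);
backBuffer->Release();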

Alpha compositing wrong when rendering to QGLFrameBufferObject vs screen

I'm having a problem when rendering even simple shapes with partial opacity to QGLFrameBufferObjects in Qt.
I have reduced the problem down to this:
When I render a simple quad to a QGLFrameBufferObject with color set to (1,0,0,.5), and then blit that to the screen, I get a result that is way too light a red for 50% opacity. If I draw the same quad with the same color (same code, in fact) directly to the screen, I get the correct color value. If I render the quad with opacity == 1.0, then the results are the same...I get a full, deep red in both cases. I've confirmed that the color is really wrong in the buffer by dumping the buffer to disk directly with buffer.toImage().save("/tmp/blah.tif").
In both cases, I've cleared the output buffer to (1,1,1,1) before performing the operation.
Why are things I draw that are partially transparent coming out lighter when drawn to an offscreen buffer than if I draw them right to the screen? There must be some state that I have to set on the FBO or something, but I can't figure out what it is.
Alpha does not mean "transparent." It doesn't mean anything at all; it only takes on a meaning when you give it one. It only means "transparent" when you set up a blend mode that uses alpha to control transparency. So if you didn't set up a blend mode that creates the effect of transparency, then alpha is just another color component that is written exactly as-is to the framebuffer.
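A minimal sketch of the blend state that gives alpha its usual "transparency" meaning while drawing into the FBO (raw OpenGL calls, which Qt's GL painting sits on top of):
// Enable blending before drawing the translucent quad, so source alpha
// actually mixes the fragment with what is already in the buffer.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the (1, 0, 0, 0.5) quad ...

// Caveat: the mode above also blends into the FBO's alpha channel, leaving it
// non-opaque, so the later FBO-to-screen blit can blend the image a second
// time. Keeping the stored alpha opaque avoids that double blend:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);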

My iOS Views are off half a pixel?

My graphics look blurry unless I add or subtract half a pixel to the Y coordinate.
I know this is a symptom that usually appears when coordinates are set to sub-pixel values, which leads me to believe one of my views must be off somewhere.
But I inspected the window, the view controller, and the subviews, and I don't see any origins or centers with sub-pixel values.
I am stumped, any ideas?
See if somewhere you are using the center property of a view. If you assign it based on other subviews, then depending on their sizes they may position themselves at half-pixel values (for example, centering a view with an odd width or height produces a .5 coordinate).
Also, if you are using code to generate the UI, I would suggest using https://github.com/domesticcatsoftware/DCIntrospect. This tool allows you to inspect all the geometry of visible widgets in the simulator. Half-pixel views are highlighted in red vs. blue for integer coordinates. It helps a lot.
