HLSL blur glitch with channel selection

After playing around with a blur I copied from the Visual Studio shader graph, I added one filter condition (by any channel, it doesn't matter), and it resulted in this. How can that be explained?


How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices, with the following goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK's ManipulationHandler, and otherwise handle touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that object, along with all the default AR components on the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.)
Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default TouchPointer rays are drawn in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world-space canvas), where clicking on a UI element does not trigger it (on device or in editor), which suggests to me that this is a pointer issue, not a handler issue.
I would appreciate some advice on how to correctly configure touch and mouse input both in editor and on device, with the goal of raycasting from the screen point using the projection matrix to create the pointer, and using two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

How to load textures from a spritesheet in Godot 3

I've just got started with Godot yesterday, and I'm starting a game. I drew a few spritesheets for it. It seems much more efficient to pack all of the frames of an animation into a single image file, right?
Anyway, in Godot I have an AnimatedSprite, which of course has a SpriteFrames property (or whatever it's called). I want to split my spritesheet up into multiple images so that I can use each image as a separate frame in the animation, but as far as I can see Godot provides no such feature. Is this the case?
I've been searching for an answer on the web for a while now, and I can't find anything relevant.
I'd be very surprised if I can't do this in Godot, since I can do it in just about every other game engine I've seen.
Thanks!
(Just to clarify, I want to (programmatically or otherwise,) split a spritesheet into multiple textures, within Godot.)
Click New SpriteFrames in the Frames property menu of the AnimatedSprite node. Then click the newly created SpriteFrames next to the name of the Frames property; the Animation Frames window should appear.
Click the Add frames from a Sprite Sheet button. Select your sprite sheet file, set the grid sizes, and finally select the individual frames from that sprite sheet.
(This works for me in Godot v3.2.2)
As the documentation says: "Sprite node that can use multiple textures for animation."
Source: http://docs.godotengine.org/en/3.0/classes/class_animatedsprite.html
What you are looking for is to use a normal Sprite(2D) and set regions on it, keyframing the region with an AnimationPlayer for each frame.
Example: https://www.youtube.com/watch?v=IGHcscKpA7Y
If you want to do it all programmatically, just use a Sprite(2D) and then:
func _ready():
    # Draw only a sub-rectangle of the sheet, i.e. a single frame.
    # positionx/positiony/width/height are placeholders for your frame's rect.
    set_region(true)
    set_region_rect(Rect2(positionx, positiony, width, height))
But I guess using an AnimationPlayer is the better option.
This is untested because I should be sleeping right now, but it should work.

Propagating all events from a X window

I'm currently working on a small utility; it's my first ever X project. The utility is used to draw a small circle around your mouse pointer. I use an app called Pinpoint to do the same on my Mac; it helps me find my mouse, as I'm visually impaired.
The utility creates a transparent X window and draws a circle inside it, then moves that window with the mouse pointer so that the circle follows the mouse.
It currently works, except for one detail. Mouse events are not propagated up to the underlying windows. Basically, the utility makes the mouse useless.
As far as I can tell from the Xlib docs, if not otherwise specified, new windows should propagate all events. How can I fix this?
The code can be found on GitHub: https://github.com/blubber/circle-cursor. It's a bit messy currently, because it is just a proof of concept.
I would suggest doing this via the cursor image as well; there are many situations in which you won't be able to receive mouse events, and the only possible source would be polling with XQueryPointer.
With the XFixes extension you can subscribe to all cursor-image-changed events and get the most recent shape of the cursor, and with XRender you can set your own (possibly animated) cursor.
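A minimal sketch of that approach (untested, error handling elided): subscribe to XFixes cursor-notify events on the root window and poll the pointer position with XQueryPointer. Compile with something like g++ circle.cpp -lX11 -lXfixes.

#include <cstdio>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);

    int ev_base, err_base;
    if (!XFixesQueryExtension(dpy, &ev_base, &err_base)) return 1;

    // Deliver an XFixesCursorNotify event whenever the cursor image changes.
    XFixesSelectCursorInput(dpy, root, XFixesDisplayCursorNotifyMask);

    for (;;) {
        while (XPending(dpy)) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == ev_base + XFixesCursorNotify) {
                // Fetch the most recent cursor shape (ARGB pixel data).
                XFixesCursorImage *img = XFixesGetCursorImage(dpy);
                if (img) {
                    std::printf("cursor is now %dx%d\n",
                                (int)img->width, (int)img->height);
                    XFree(img);
                }
            }
        }
        // Poll the pointer; this works even when no events are delivered.
        Window r, c;
        int rx, ry, wx, wy;
        unsigned int mask;
        XQueryPointer(dpy, root, &r, &c, &rx, &ry, &wx, &wy, &mask);
        // ... redraw the circle at (rx, ry) ...
        usleep(16000); // ~60 Hz
    }
}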

Cross (mobile) platform image transitions

I'm writing a game that asks the user to click on an image, which then reveals a different image. I'd like to make the transition between the images look like a playing card being turned over, on both Android and iOS.
I've done a bit of research, and it all seems to indicate that the "curl" visual effect will do what I want, but that it is only available on iOS (I can't test this, as I don't have access to a Mac at the moment).
Is there a cross platform way of doing this "turning a playing card over" sort of transition?
You might scale the (front) image control vertically until it is only 1 line and then scale the second (backside) image from 1 vertical line to its original size.
Only a few visual effects are cross-platform. One of them is the reveal up/down/left/right effect. You might use this effect to display a neutral (e.g. gray or blue) picture after hiding the front-side image and before showing the back-side image. Something like this:
lock screen for visual effect
hide img "front"
show img "intermediary"
unlock screen with visual effect reveal left fast
lock screen for visual effect
hide img "intermediary"
show img "back"
unlock screen with visual effect reveal right fast
I know it isn't ideal, but if you want it to be cross platform, you need to find a workaround. Why don't you check for the platform and write a different conditional routine for each platform?
I think the effect you want is flip, and yes, it's only available on iOS at the moment. There are a couple of iOS visual effects that push the image into a UIView and animate it with native methods. This blog post indicates it would be possible to implement something similar on Android, but it would need to be in the engine: http://www.techrepublic.com/blog/software-engineer/use-androids-scale-animation-to-simulate-a-3d-flip/

Flicker-free dialogs with custom controls

I have a problem with MFC dialog boxes that are drawn using derived MFC classes for custom drawing of controls.
One of our customers has a really slow PC with a poor graphics card, and even normal Windows dialogs paint quite slowly. In our case, the problem is far worse: each individual control (e.g. buttons, group boxes, labels) can be seen drawing separately.
In most cases I've overridden/implemented the OnPaint() handlers, thinking that drawing on whatever device context I'm provided should be the way to go.
Ideally, what I would like to do is have all controls painted on an off-screen buffer so that when a dialog repaint is required - bang - it just copies the single rendered image to the screen, rather than painting each control to the screen one by one.
Can somebody please advise me how I can achieve this kind of double-buffering?
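For illustration, here is a minimal sketch (untested) of the classic memory-DC pattern described above, assuming a hypothetical CWnd-derived control class CMyControl: everything is rendered into an off-screen bitmap, which then reaches the screen in a single BitBlt.

void CMyControl::OnPaint()
{
    CPaintDC dc(this); // paint DC for this control
    CRect rc;
    GetClientRect(&rc);

    // Build the whole frame in an off-screen (memory) DC first.
    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
    CBitmap *pOld = memDC.SelectObject(&bmp);

    memDC.FillSolidRect(&rc, ::GetSysColor(COLOR_BTNFACE));
    // ... all custom drawing goes into memDC here ...

    // One copy to the screen, so no per-primitive flicker.
    dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(pOld);
}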
I've sort of found the solution to my problem: by setting the dialog's extended style to WS_EX_COMPOSITED, the drawing works nicely. The problem I'm having now concerns a continuous stream of WM_PAINT and WM_ERASEBKGND messages that I keep getting when this style is enabled.
Does anyone know how I can stop the WM_PAINT/WM_ERASEBKGND messages from continuously occurring?
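For reference, enabling the style described above might look like the following (a sketch, untested; CMyDialog is a placeholder dialog class). Note that this only sets the style; it does not address the message stream asked about.

BOOL CMyDialog::OnInitDialog()
{
    CDialog::OnInitDialog();
    // WS_EX_COMPOSITED makes Windows compose the dialog and all of its
    // child controls in an off-screen buffer and blit the result once.
    ModifyStyleEx(0, WS_EX_COMPOSITED);
    return TRUE;
}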
