Does QNativeGesture work on Linux / Windows, given that the hardware supports it?

I want to add swipe gestures to my application, which is based on Qt5. The application is intended to run primarily on Linux (laptops). I just wasn't able to write any code because I couldn't understand how to use the class (no example code was available).
The touchpad driver supports swipe gestures (libinput). Also, synaptics (on my system at least) supports multi-finger touch.
Can someone please guide me on how to use the API, and provide some example code?

Do you mean QNativeGestureEvent? You probably shouldn't mess with QNativeGestureEvent; use the higher-level framework instead: see Gestures in Widgets and Graphics View. QNativeGestureEvent used to be available only on OS X; it's not really an API meant for wide consumption.
Alas, QNativeGestureEvents are delivered to the widgets themselves. You would react to them by reimplementing QObject::event and acting on the event:
class MyWidget : public QWidget {
    using base_type = QWidget;
    ...
    bool event(QEvent * ev) override {
        if (ev->type() == QEvent::NativeGesture) {
            auto g = static_cast<QNativeGestureEvent*>(ev);
            ...
            return true; // if the event was recognized and processed
        }
        return base_type::event(ev);
    }
};
But don't do that.

How to start building a GUI toolkit for wayland

I want to create a GUI toolkit for my desktop environment (because neither gtk nor qt fits my needs) but I don't know how to start. I don't want a cross-platform or display-server-independent library, theming options, configurable icons, etc.; just a few basic widgets for making wayland clients (widgets like button, entry, label, window and images; and I want to use CSD, if that's important).
My problem is that I can't understand how graphics work in wayland (in X you only need to create a window and use XDrawLine etc., right?), and I also don't know how to write a graphical toolkit. Can you give me some articles or recommendations on how to do this?
The easiest way to create a wayland client is to use the wayland-client library. Basically, it abstracts the wire format.
example:
#include <stdio.h>
#include <wayland-client.h>

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    if (display) {
        printf("Connected!\n");
    } else {
        printf("Error connecting ;(\n");
        return 1;
    }
    wl_display_disconnect(display);
    return 0;
}

Pre Spatializer Effect - Unity and Google VR Audio

I am using GVR Audio within Unity to provide HRTFs for my audio sources. My project involves modelling the acoustics of the virtual environment, which needs to happen before the HRTF filters.
On a default Unity audio source there is an option to spatialise post effects, meaning I can insert my own effect there. However, on the GVR audio source there is no such option. What is the recommended way to spatialise post effects with GVR?
GvrAudioSource uses Unity's AudioSource under the hood. This means it is possible to apply pre-spatialization processing using the OnAudioFilterRead method, as you'd normally do for audio sources in your scripts.
Alternatively, for other audio effect components that would require the spatializePostEffects option, you could simply enable the option from script by adding the corresponding line below to the Awake() function in GvrAudioSource.cs:
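OnAudioFilterRead-style callbacks hand your code a raw interleaved sample buffer, so pre-spatialization processing is just per-block DSP on that array. A minimal, language-neutral sketch in C of such a block filter (the one-pole coefficient and stereo layout are illustrative assumptions, not GVR specifics):

```c
#include <stddef.h>

/* One-pole low-pass over an interleaved float buffer: the kind of
 * per-block processing an audio-filter callback performs before
 * the spatializer runs. */
static void lowpass_block(float *data, size_t frames, int channels,
                          float *state, float alpha)
{
    for (size_t i = 0; i < frames; ++i) {
        for (int c = 0; c < channels; ++c) {
            float x = data[i * (size_t)channels + (size_t)c];
            state[c] += alpha * (x - state[c]);       /* filter memory per channel */
            data[i * (size_t)channels + (size_t)c] = state[c];
        }
    }
}
```

Processing in place, as above, matches how such callbacks are expected to behave: the buffer you receive is the buffer the engine keeps using downstream.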
void Awake () {
    ...
    audioSource.spatialBlend = 1.0f;
    audioSource.spatializePostEffects = true; // Add this line.
    OnValidate();
    ...
}
Please also note that this unfortunately does not currently allow you to add Unity's stock audio effect components (e.g. AudioLowPassFilter) in the Editor, as it would complain about the lack of an AudioSource component on that game object. This is, however, only a UI limitation, i.e., adding a component with such restrictions at run time should still work as expected.
Hope this answers your question.
Cheers

Haxe/OpenFL app targeting flash/html5/air(desktop). How to handle filesystem data writing/reading?

I'm making an app that needs to target flash/air and ideally html5. This app writes data to the filesystem.
I've started using Haxe/OpenFL (this is my first experience).
I understand that flash and html5 won't be able to access the local user's filesystem to write and read data, so those versions would have this feature replaced by storing data in the cloud.
But AIR always had a possibility to operate the filesystem.
So I have two questions now:
Is it possible to package a Haxe/OpenFL project into an AIR app and make a proper .air installer? Or should I be using some sort of cpp/c++ alternative? What's the best practice in such a case?
Which API of OpenFL/Haxe gives me the possibility to write data to the filesystem?
Different platforms have different implementations for saving/loading user data. You should #if all the possible options for all the target platforms.
E.g.:
#if cpp
import haxe.io.Eof;
import sys.FileSystem;
import sys.io.File;
import sys.io.FileOutput;
import sys.io.FileInput;
#elseif flash
import flash.net.SharedObject;
import flash.net.SharedObjectFlushStatus;
import flash.events.NetStatusEvent;
#end

class FileIO {
    public function SaveData() : Void {
        #if cpp
        var myFout : FileOutput = null;
        try {
            myFout = sys.io.File.write("c:\\mypath\\myfile.name", false);
            myFout.writeInt32(100);
            [.....]
        #elseif flash
        var shObj : SharedObject = null;
        try {
            shObj = SharedObject.getLocal("mySHODATA");
            var ret : String = shObj.flush(MinFileSize);
            [......]
        #end
    }
}
And so on. AIR, long story short, is pretty much a flash wrapper for desktop execution. Imagine it as a swf container for the desktop (instead of instantiating a swf container in a web browser), so an AIR-wrapped app == a flash/swf app. If you want to use the AIR implementations for file access, you can do that by following these steps. If you use Haxe, it has its own implementation for different targets, i.e. #if will take the flash branch even if you plan to AIR-wrap your flash app, so you'll have to use SharedObject.
I think it's wiser to compile cpp against a desktop target instead of "flash + wrap with AIR" later.
It's supposedly doable; I haven't tried it though.
Check out sys.FileSystem and sys.io.
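Haxe's #if / #elseif blocks compile exactly one branch per target, much like C preprocessor conditionals. A minimal sketch of the same gating idea in C (the TARGET_* macro names and backend strings are made up for illustration, not real Haxe or OpenFL defines):

```c
/* Exactly one branch survives preprocessing, chosen at compile time,
 * mirroring the #if cpp / #elseif flash structure above. */
static const char *save_backend(void)
{
#if defined(TARGET_CPP)
    return "filesystem";      /* native targets: sys.io.File */
#elif defined(TARGET_FLASH)
    return "shared-object";   /* flash targets: SharedObject */
#else
    return "cloud";           /* fallback, e.g. html5 */
#endif
}
```

Because the selection happens at compile time, the unused branches never need to link, which is exactly why the flash-only imports above don't break the cpp build.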
This guide should walk you through the process of publishing for iOS and Android with AIR: http://www.openfl.org/archive/community/installing-openfl/building-air/

Is there some kind of equivalent to .NET's BackgroundWorker in Vala?

I'm trying to learn Vala so I'm making a small GUI application. My main language before has been C# so things are going pretty well.
However, I've hit the wall now. I need to connect to an external network server (using GIO) which doesn't answer my client immediately. This makes the GUI freeze up while the program is connecting and doing its thing.
In C# I would probably use a BackgroundWorker in this case. I can't seem to find anything like it for Vala though.
Basically, I have a MainWindow.vala where I have hooked up a signal for clicking a certain button to a method that is creating a new instance of ProcessingDialog.vala. This shows a dialog over the MainWindow that I want the user to see while the program is doing the work (connecting to the server, communicating).
What are my alternatives to make this scenario work?
GIO offers async methods, see an async client for example: https://live.gnome.org/Vala/GIONetworkingSample
If you are not aware of async methods in Vala, try looking at the tutorial: https://live.gnome.org/Vala/Tutorial#Asynchronous_Methods
lethalman's answer above probably makes the most sense: an async request is really going to be your best bet if you're doing a network call. In other cases, you can use Vala's built-in thread support to accomplish a background task. It looks like, soon enough, there will be a better library available, but this is what's stable.
// Create the function to perform the task
public void thread_function() {
    stdout.printf("I am doing something!\n");
}

public int main( string[] args ) {
    // Create the thread to start that function
    unowned Thread<void*> my_thread = Thread.create<void*>(thread_function, true);

    // Some time toward the end of your application, reclaim the thread
    my_thread.join();
    return 0;
}
Remember to compile with the "--thread" option.
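The Vala snippet above maps directly onto POSIX threads; a minimal C sketch of the same pattern (the worker function and its result value are placeholders for the real network call):

```c
#include <pthread.h>

/* Run the slow work on a background thread, then join it,
 * mirroring Thread.create + my_thread.join() above. */
static void *worker(void *arg)
{
    int *result = arg;
    *result = 42;              /* stand-in for the slow network call */
    return NULL;
}

int run_background_job(void)
{
    int result = 0;
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, &result) != 0)
        return -1;             /* could not start the thread */
    pthread_join(tid, NULL);   /* reclaim the thread */
    return result;
}
```

In a real GUI app you would of course not join immediately on the main thread (that would block the UI just the same); you'd join at shutdown, or signal completion back to the main loop.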

j2me screen flicker when switching between canvases

I'm writing a mobile phone game using j2me. In this game, I am using multiple Canvas objects.
For example, the game menu is a Canvas object, and the actual game is a Canvas object too.
I've noticed that, on some devices, when I switch from one Canvas to another, e.g from the main menu to the game, the screen momentarily "flickers". I'm using my own double buffered Canvas.
Is there any way to avoid this?
I would say that using multiple canvases is generally bad design. On some phones it will even crash. The best way would really be to use one canvas and track the state of the application. Then in the paint method you would have:
protected void paint(final Graphics g) {
    if (menu) {
        paintMenu(g);
    } else if (game) {
        paintGame(g);
    }
}
There are better ways to handle application state with screen objects that would make the design cleaner, but I think you get the idea :)
/JaanusSiim
Do you use double buffering? If the device itself does not support double buffering, you should define an off-screen buffer (Image), paint to it first, and then paint the end result to the real screen. Do this for each of your canvases. Here is an example:
public class MyScreen extends Canvas {
    private Image osb;
    private Graphics osg;
    //...

    public MyScreen()
    {
        // if the device is not double buffered,
        // use an image as an off-screen buffer
        if (!isDoubleBuffered())
        {
            osb = Image.createImage(screenWidth, screenHeight);
            osg = osb.getGraphics();
            osg.setFont(defaultFont);
        }
    }

    protected void paint(Graphics graphics)
    {
        if (!isDoubleBuffered())
        {
            // do your painting on the off-screen buffer first
            renderWorld(osg);
            // once done, draw the image onto the real screen
            graphics.drawImage(osb, 0, 0, Tools.GRAPHICS_TOP_LEFT);
        }
        else
        {
            osg = graphics;
            renderWorld(graphics);
        }
    }
}
A possible fix is to synchronise the switch using Display.callSerially(). The flicker is probably caused by the app attempting to draw to the screen while the switch of the Canvas is still ongoing. callSerially() is supposed to wait for the repaint to finish before attempting to call run() again.
But all this is entirely dependent on the phone since many devices do not implement callSerially(), never mind follow the implementation listed in the official documentation. The only devices I've known to work correctly with callSerially() were Siemens phones.
Another possible attempt would be to put a Thread.sleep() of something huge like 1000 ms, making sure that you've called your setCurrent() method beforehand. This way, the device might manage to make the change before the displayable attempts to draw.
The most likely problem is that it is a device issue and the guaranteed fix to the flicker is simple - use one Canvas. Probably not what you wanted to hear though. :)
It might be a good idea to use GameCanvas class if you are writing a game. It is much better for such purpose and when used properly it should solve your problem.
Hypothetically, using one canvas with state machine code for your application is a good idea. However, the only device I have to test applications on (MOTO v3) crashes at resource loading time just because there's too much code to be loaded in one GameCanvas (I haven't tried with Canvas). It's as painful as it is real, and at the moment I haven't found a solution to the problem.
If you're lucky enough to have a good number of devices to test on, it is worth having both approaches implemented and pretty much making versions of your game for each device.