An ncurses app checks whether the terminal has been resized. If the size is less than 80x25, it blanks the screen and shows an error message.
If the app has N windows, should all of them be removed with delwin(), or would calling clear() be enough? And can the existing windows be reused after clear() - refreshed to display their contents once the terminal size becomes satisfactory again - or should they be recreated?
Clearing the windows sounds like application behavior rather than something ncurses does as such. The ncurses library (see resizeterm) will clear areas if the windows increase in size.
The best policy when resizing really depends on what you have inside the windows. ncurses makes reasonably safe changes, but since it has no information about your intention in placing some things close together on the screen and others apart, all it can do is attempt to resize windows so that their contents are preserved. The application can still clear them and start over again, as well as move windows around on the screen.
It's your decision whether it is simpler to recreate the windows or reuse them.
As long as all of the rebuilding is done before the next repainting of the screen (e.g., with wrefresh), ncurses will make the updates with as little activity as it can.
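For illustration, here is a minimal sketch of the reuse approach (assuming the 80x25 minimum from the question, a hypothetical draw_contents() helper, and that this function is called when getch() returns KEY_RESIZE):

// Minimal sketch (not the only valid policy): keep the windows across a
// resize and only redraw their contents.
#include <ncurses.h>

#define MIN_COLS 80
#define MIN_ROWS 25

static void draw_contents(WINDOW *w)       // placeholder for real drawing
{
    box(w, 0, 0);
    mvwprintw(w, 1, 1, "window contents");
    wnoutrefresh(w);                       // stage updates, flush once later
}

void handle_resize(WINDOW *wins[], int nwins)
{
    int rows, cols;
    getmaxyx(stdscr, rows, cols);          // new terminal size after KEY_RESIZE

    if (rows < MIN_ROWS || cols < MIN_COLS) {
        clear();                           // blank the whole screen
        mvprintw(0, 0, "Terminal too small: need at least %dx%d",
                 MIN_COLS, MIN_ROWS);
        refresh();
        return;                            // keep the windows; nothing is lost
    }

    clear();                               // start from a clean slate
    wnoutrefresh(stdscr);
    for (int i = 0; i < nwins; ++i)        // reuse the existing windows;
        draw_contents(wins[i]);            // reposition/resize them here if needed
    doupdate();                            // one repaint for everything
}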
My question is about the delay between calling the Present method in DirectX 9 and the update appearing on the screen.
On a Windows system, I have a window opened using DirectX 9 and update it in a simple way (change the color of the entire window, then call IDirect3DSwapChain9::Present). I call Present with the D3DPRESENT_DONOTWAIT flag during a vertical blank interval. There is only one buffer associated with the swap chain.
I also obtain an external measurement of when the CRT screen I use actually changes color through a photodiode connected to the center of the screen. I obtain this measurement with sub-millisecond accuracy and delay.
What I found was that the changes appear exactly in the third refresh after the call to Present(). Thus, when I call Present() at the end of a vertical blank, just before the screen starts refreshing, the change appears on the screen exactly 2*refresh_duration + 0.5*refresh_duration after the call (the extra half refresh because the photodiode sits at the center of the screen).
My question is a general one:
to what extent can I rely on this delay (changes appearing in the third refresh) being the same on different systems ...
... or does it vary with monitors (leaving aside the response times of LCD and LED monitors)
... or with graphics cards?
are there other factors influencing this delay?
An additional question:
does anybody know a way of determining, within DirectX 9, when a change actually appeared on the screen (without external measurements)?
There are a lot of variables at play here, especially since DirectX 9 itself is legacy and is effectively emulated on modern versions of Windows.
You might want to read Accurately Profiling Direct3D API Calls (Direct3D 9), although that article doesn't directly address presentation.
On Windows Vista or later, once you call Present to flip the front and back buffers, the frame is handed off to the Desktop Window Manager (DWM) for composition and eventual display. There are a lot of factors at play here, including GPU vendor, driver version, OS version, Windows settings, third-party driver 'value add' features, full-screen vs. windowed mode, etc.
In short: Your Mileage May Vary (YMMV) so don't expect your timings to generalize beyond your immediate setup.
If your application requires knowing exactly when present happens instead of just "best effort" as is more common, I recommend moving to DirectX9Ex, DirectX 11, or DirectX 12 and taking advantage of the DXGI frame statistics.
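For example, with a DXGI swap chain (the D3D11-era interface is assumed here, since only the feature is named above; note that GetFrameStatistics only reports useful data for full-screen or flip-model swap chains), a rough sketch looks like this:

// Rough sketch: query DXGI frame statistics after presenting, to correlate
// Present() calls with the vblank in which they actually reached the screen.
// Assumes an already-created IDXGISwapChain*; names are illustrative.
#include <windows.h>
#include <dxgi.h>
#include <cstdio>

void ReportLastPresent(IDXGISwapChain *swapChain)
{
    UINT lastPresentCount = 0;
    swapChain->GetLastPresentCount(&lastPresentCount);   // id of our last Present()

    DXGI_FRAME_STATISTICS stats = {};
    // Can fail with DXGI_ERROR_FRAME_STATISTICS_DISJOINT after a mode change.
    if (SUCCEEDED(swapChain->GetFrameStatistics(&stats))) {
        printf("last queued present: %u, last displayed present: %u, "
               "displayed in refresh %u at QPC %lld\n",
               lastPresentCount, stats.PresentCount,
               stats.PresentRefreshCount,
               static_cast<long long>(stats.SyncQPCTime.QuadPart));
    }
}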
In case somebody stumbles upon this with a similar question: I found out the reason why my screen update appears exactly on the third refresh after calling Present(). As it turns out, Windows (more precisely, the driver's render-ahead queue, whose maximum frame latency defaults to 3) buffers up to 3 frames before presenting them, and so changes appear on the third refresh. As it stands, this can only be changed by the application starting with Direct3D 9Ex and DirectX 10; for Direct3D 9 and earlier, one has to reduce this queueing either in the graphics card driver or in the Windows registry.
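For completeness, with Direct3D 9Ex the queue depth can be reduced from within the application; a short sketch, assuming a device created through Direct3DCreate9Ex():

// Sketch: shrink the render-ahead queue on Direct3D 9Ex so a change
// reaches the screen sooner after Present().
#include <d3d9.h>

void ReduceQueuedFrames(IDirect3DDevice9Ex *device)
{
    // The default maximum frame latency is 3 queued frames; 1 trades
    // throughput and smoothness for lower latency.
    device->SetMaximumFrameLatency(1);
}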
How can I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3 and a graphical application written with Qt 5.9.2 using the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), then it switches the display to full HD graphics mode.
When I use utilities (e.g. fb2png) that operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved in the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer belonging to another process's context (that would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to take the screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This seems not to be an option for you, as you need to take the screenshot while the Qt app is frozen.
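For reference, if an in-process grab is ever acceptable, the Qt route is only a couple of lines (the quickWindow pointer and output path here are illustrative):

#include <QImage>
#include <QQuickWindow>

// Sketch: in-process grab via Qt (only possible while the app is responsive).
// Assumes quickWindow points to the application's QQuickWindow and that this
// runs on the thread that is allowed to call grabWindow().
void saveScreenshot(QQuickWindow *quickWindow)
{
    QImage frame = quickWindow->grabWindow();   // uses glReadPixels internally
    frame.save(QStringLiteral("/tmp/screenshot.png"));
}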
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS in Python and can be translated to C easily; see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used from a process other than the Qt UI process that does the rendering.
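A rough sketch of that approach (it assumes root/DRM-master access and that the scanout buffer is a dumb buffer that DRM_IOCTL_MODE_MAP_DUMB can map, as in the Chromium OS factory tool; GPU-allocated GBM buffers may instead need drmPrimeHandleToFD and a different mapping path):

// Sketch: capture the buffer currently scanned out by each active CRTC.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = drmModeGetResources(fd);
    for (int i = 0; i < res->count_crtcs; ++i) {
        drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[i]);
        if (!crtc || !crtc->buffer_id) { drmModeFreeCrtc(crtc); continue; }
        drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);

        struct drm_mode_map_dumb map = {};
        map.handle = fb->handle;                  // only works for dumb buffers
        ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

        void *pixels = mmap(nullptr, fb->pitch * fb->height,
                            PROT_READ, MAP_SHARED, fd, map.offset);
        // pixels now points at the raw scanout image: fb->width x fb->height,
        // fb->bpp bits per pixel, fb->pitch bytes per row; copy it out and
        // encode it as an image here.
        printf("CRTC %u: %ux%u, %u bpp\n", crtc->crtc_id,
               fb->width, fb->height, fb->bpp);

        munmap(pixels, fb->pitch * fb->height);
        drmModeFreeFB(fb);
        drmModeFreeCrtc(crtc);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}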
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which means embedded, and there you have few options unless your platform offers them.
The options you have are:
glGetTexImage
glGetTexImage can copy the pixels of an OpenGL texture back into CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded provider might expose something equivalent via an extension. This is nice because you can either render to an FBO or grab the pixels of the specific texture you need. It also needs minimal code intervention.
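A minimal sketch, assuming a GL_RGBA8 texture you rendered into (desktop GL only):

#include <GL/gl.h>
#include <vector>

// Read back the full contents of a texture into CPU memory.
std::vector<unsigned char> readTexture(GLuint tex, int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}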
glReadPixels
glReadPixels is the most common way to download all or part of the pixels the GPU has already rendered. Albeit slow, it works on GLES and desktop. On desktop with a decent GPU it is bearable up to interactive frame rates, but beware: on embedded it can be really slow, since it stalls your render thread while the data is fetched (horrible frame drops guaranteed). You save code, as it can be made to work with minimal modifications.
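A minimal sketch of the synchronous readback:

#include <GLES2/gl2.h>
#include <vector>

// Read the current framebuffer / FBO. Works on GLES 2+ and desktop,
// but blocks until the GPU has finished rendering.
std::vector<unsigned char> readFramebuffer(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);             // avoid row-padding surprises
    glReadPixels(0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;                                   // note: bottom-up row order
}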
Pixel Buffer Objects (PBOs)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not available on embedded GLES 2 targets (they arrived in GLES 3.0), but they can work really well on desktop, even on mediocre GPUs. They are also a bit tricky to set up and require specific changes to your rendering code.
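A sketch of the asynchronous pattern (assuming a GL context and a function loader are already set up; the buffer size and the one-frame delay are illustrative):

#include <GL/glew.h>     // or your GLES 3 loader
#include <cstring>

GLuint pbo = 0;

void initReadbackPBO(int width, int height)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void startReadback(int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    // With a PBO bound, the last argument is an offset into the buffer,
    // not a CPU pointer, and the call returns without stalling.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void finishReadback(void *dst, int width, int height)   // call a frame or two later
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void *src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                 width * height * 4, GL_MAP_READ_BIT);
    if (src) {
        std::memcpy(dst, src, static_cast<size_t>(width) * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}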
Framebuffer
On embedded, sometimes you are already rendering to the framebuffer, so just go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer and dump partial contents easily. But beware: on many embedded systems EGL does not render to the framebuffer but to a different 'overlay', so you might only be snapshotting the background behind it. Also note that some multimedia applications run the UI on EGL and the media player on the framebuffer; if you only need to capture the video player, this may work for you. In other cases EGL renders to a texture which is then copied to the framebuffer, and that will also work just fine.
As far as I know, render-to-texture streamed to the framebuffer is how the sweet Qt UI you see on the Ableton Push 2 was made.
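A sketch of the mmap() route (only useful if the pixels you want are really scanned out from /dev/fb0, per the overlay caveat above):

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo var{};
    fb_fix_screeninfo fix{};
    ioctl(fd, FBIOGET_VSCREENINFO, &var);   // resolution, bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   // line length, buffer size

    void *pixels = mmap(nullptr, fix.smem_len, PROT_READ, MAP_SHARED, fd, 0);
    if (pixels == MAP_FAILED) { perror("mmap"); return 1; }

    printf("%ux%u @ %u bpp, %u bytes per line\n",
           var.xres, var.yres, var.bits_per_pixel, fix.line_length);
    // ... copy fix.line_length * var.yres bytes out and encode as an image ...

    munmap(pixels, fix.smem_len);
    close(fd);
    return 0;
}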
More exotic Dispmanx/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting:
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you a total lack of encouragement to use DispmanX: there are hardly any examples and no serious documentation.
Basically DispmanX is very close to bare metal, so it sits even deeper down than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and get a snapshot of everything, really fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and about 4-6% of extra CPU overhead on a Raspberry Pi. Night and day, because glReadPixels was producing very noticeable frame drops even for a 1x1 pixel capture.
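A sketch of that snapshot path (Raspberry Pi / VideoCore only, link against bcm_host; error handling omitted, and the read pitch may need 16-pixel alignment on some firmware):

#include <bcm_host.h>
#include <vector>

// Grab the fully composed screen via DispmanX.
std::vector<unsigned char> grabScreen()
{
    bcm_host_init();

    DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0); // main display
    DISPMANX_MODEINFO_T info;
    vc_dispmanx_display_get_info(display, &info);

    uint32_t vc_image_ptr = 0;
    DISPMANX_RESOURCE_HANDLE_T resource =
        vc_dispmanx_resource_create(VC_IMAGE_RGBA32, info.width, info.height,
                                    &vc_image_ptr);

    // Ask the GPU to write the current composed frame into the resource.
    vc_dispmanx_snapshot(display, resource, DISPMANX_NO_ROTATE);

    VC_RECT_T rect;
    vc_dispmanx_rect_set(&rect, 0, 0, info.width, info.height);

    std::vector<unsigned char> pixels(static_cast<size_t>(info.width) * info.height * 4);
    vc_dispmanx_resource_read_data(resource, &rect, pixels.data(),
                                   info.width * 4 /* pitch in bytes */);

    vc_dispmanx_resource_delete(resource);
    vc_dispmanx_display_close(display);
    return pixels;
}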
That's pretty much what I've found.
When I press and hold a key, the first symbol is typed, then there is a short delay, and then the remaining symbols are typed quickly.
The same happens in a terminal emulator and in the Linux console (tty), although the delay is smaller there.
I'm working on a console app in Python that uses curses; it processes arrow-key presses, and this delay is present there as well.
I want to get rid of this delay, so that when I press and hold a key it sends keystrokes at a uniform rate, without the extra delay after the first (or any other) symbol.
How can I do it? Should I use something from the arsenal of curses? Or tinker with some system-wide settings?
EDIT1: I think I've found one way: I can go to the keyboard settings and change the autorepeat delay. But that changes it globally, and only for my graphical session; it doesn't change anything in the Linux console. So I'm looking for a way to do it in the console as well, and ideally one that affects only my app, not the whole system.
And here is a way for the Linux console: https://unix.stackexchange.com/questions/58651/adjusting-keyboard-sensitivity-in-a-command-line-terminal
But I'm still looking for an app-only way.
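One partial answer for the X11 case: the application can change the X server's autorepeat timing itself via XKB and restore it on exit. It is still session-wide while the app runs (not truly app-only), and it does not help on the raw Linux console, where kbdrate or the linked answer applies; from Python the same effect can be had by shelling out to xset r rate. A sketch of the X11 route, with illustrative timing values:

// Sketch (X11 only): shorten the autorepeat delay while the app runs and
// restore the previous values on exit.
#include <X11/XKBlib.h>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    unsigned int oldDelay = 0, oldInterval = 0;
    XkbGetAutoRepeatRate(dpy, XkbUseCoreKbd, &oldDelay, &oldInterval);

    // e.g. 150 ms before repeating starts, then one repeat every 30 ms
    XkbSetAutoRepeatRate(dpy, XkbUseCoreKbd, 150, 30);
    XFlush(dpy);

    // ... run the curses application here ...

    XkbSetAutoRepeatRate(dpy, XkbUseCoreKbd, oldDelay, oldInterval);
    XCloseDisplay(dpy);
    return 0;
}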
I want to make a Linux OS that runs only one application in full screen, without showing any login window at startup, and without a title bar or minimize/maximize/close buttons.
Is there any way to do this? It's an embedded platform; I have already built a Linux OS for it, and I also have the application.
In a nutshell: the X Window System is extremely flexible.
When you system starts up, it does the following steps:
Loads and runs the Kernel (and related initrd, if any, but it's irrelevant)
Starts init (process 1)
Starts system services, networking, etc.
Starts X server
Starts Window Manager (the application responsible for resizing windows etc)
Starts your application.
What you need to do is first disable the GUI login and session (easiest is to disable X entirely); you'll still be able to log in through the console terminal (you can always reach it with Ctrl-Alt-F1).
Then, launch something along the lines of
X &
DISPLAY=:0 ./yourapp.exe
If your app can handle making itself fullscreen, that would be it.
Add this to your startup scripts and you're there.
More explanation
The purpose of the window manager is to manage windows. It's that simple :)
Basically, there are 3 components of your typical X session.
X Server - the piece of software that provides an abstraction layer around the hardware (GPU drivers, keyboard, mouse, touchscreen, etc). It has a concept of windows - areas that X clients can draw into.
X Clients - everything else. Your software, if it draws something, is likely one; so is the web browser, etc. They connect to the X server and draw.
Window Manager - A special type of an X Client, this piece of software provides the ability to control the windows on your screen. It often draws window decorations (minimize, maximize buttons), sometimes draws a taskbar, etc.
You are free to mix and match them entirely as you please. Simple, minimalistic window managers, such as ratpoison (which I prefer for many prototype embedded systems), only have the notion of fullscreen windows and can switch between fullscreen apps (think Windows 8 Metro). Others draw window decorations and allow overlapping and cascading windows.
Since developing a window manager is a simple and modular task, there are literally hundreds to choose from. You can also choose not to use one at all, at which point your windows have to manage themselves (you won't be able to move them around by default). Many applications respect the -geometry 1920x1080+0+0 parameter, telling them to open a 1920x1080 window at the 0,0 corner - effectively fullscreen.
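To illustrate the self-managing case: with no window manager running, an X client can simply create a borderless window the size of the screen and map it; nothing will decorate or move it. A minimal Xlib sketch (link with -lX11):

#include <X11/Xlib.h>
#include <unistd.h>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);            // honours DISPLAY=:0
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);
    int w = DisplayWidth(dpy, scr);
    int h = DisplayHeight(dpy, scr);

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, w, h, 0,
                                     BlackPixel(dpy, scr), BlackPixel(dpy, scr));

    XSetWindowAttributes attrs;
    attrs.override_redirect = True;                  // ignore any window manager
    XChangeWindowAttributes(dpy, win, CWOverrideRedirect, &attrs);

    XMapRaised(dpy, win);
    XFlush(dpy);

    pause();                                         // real app: event/draw loop
    XCloseDisplay(dpy);
    return 0;
}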
I have a touch-screen device running Windows CE. After 30 seconds the screen goes off to save power, and it comes back on if you touch it.
The problem is that, randomly, when the screen goes off the device will not come back on simply by touching the screen. I have done a bunch of tests and there is no noticeable pattern to when this happens.
It appears to be performing the same action as when you press the suspend button from the main menu.
I have done some research and found there are 4 power-saving settings in the registry, and I think I need to disable one of them to stop the device from "suspending". I never want the device to turn off, apart from the screen going off; it is always connected to power.
Does anyone know how I can do this, or why it is randomly suspending?
The entire device is in Chinese, so really precise instructions would be appreciated. My application runs on top of CE.
I know you're after precise instructions, but it's not that simple. The device OEM defined and implemented the power management system for the device; Microsoft only provided the structure for it. The OEM could have implemented power management in any way they saw fit, and in fact they could have completely ignored the Microsoft-provided framework (wouldn't be the first time an OEM did that). Really, you need to get hold of the OEM and ask them how to prevent the behavior you're seeing, or to get something different.
Barring that, you could always play around with the registry entries, but again, there's no guarantee any of them will work. You might look at adjusting power state or the activity timer registry entries.
Playing with the Power Manager control panel applet might also help (it's probably labelled 电源管理, "Power Management").
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\Timeouts]
"BattSuspend"=dword:0