Performance issue in a 3000 TextFields JavaFX application - javafx-2

I have a standalone JavaFX application that contains approximately 3,000 TextFields (used for user input) divided across 10 pages created with FXML.
The navigation between pages is done either via Tabs or via ToggleButtons.
The problem is the switching time between pages:
If I load all the controllers into memory up front, I get:
~2 seconds on an Intel Celeron at 1.6 GHz
<1 second on an i5
If I instead load the corresponding controller at switch time, an extra ~1 second is added in every case.
When switching the pages I use:
borderPane.setCenter(controller.getNode()), so I don't reload everything; only the TextField grid is swapped for another one.
No other computation is done at switch time.
Can I improve the switching time somehow?
If I add a loading indicator, how do I know when the page is ready so the indicator can be closed? (Something equivalent to onDomReady() in web browsers.)
Any other ideas?
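One way to avoid the extra ~1 second on first visit is to build each page's node once and cache it. A minimal plain-Java sketch of that idea (the PageCache name is hypothetical, not a JavaFX API; in a real app the supplier would wrap the FXMLLoader call, and you could run it inside a javafx.concurrent.Task, hiding a ProgressIndicator in the task's succeeded handler to answer the "page ready" question):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: pay the expensive page-construction cost at most
// once per page, instead of on every switch.
public class PageCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Returns the cached page, building it via the supplier on first
    // access only; later calls for the same key are just a map lookup.
    public V get(K key, Supplier<V> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }
}
```

With this pattern, switching back to an already-visited page costs only the setCenter() call plus layout, not a full FXML reload.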

Related

First Contentful Paint measured at almost 6 s although content is visible in the second screenshot

I did everything I could to optimize my WordPress site and got rid of most of the recommendations from PageSpeed Insights. I use the WP Rocket caching plugin, Optimole image optimization, and the Cloudflare CDN.
The Google PageSpeed Insights scores got somewhat better, but the results, especially on mobile, are still far from good, even though all of the initial recommendations that I could address (without custom coding and without breaking the site) are now gone.
One thing strikes me as odd about the PageSpeed Insights results: First Contentful Paint is measured at somewhere between 5 and 6 seconds, although the screenshots of the page that Google presents clearly show contentful paint in the second frame already. See image 1.
Any ideas on this?
The only remaining points in your suggestions are 1. Remove unused CSS, and 2. Defer non-critical resources (I think; the text is in German).
Point 2 affects time to first paint the most.
The screenshots you see in PSI are not in real time.
Also, there is a slight discrepancy (a bug?) between the screenshots and actual performance, as PSI uses a simulated slowdown of the page rather than an applied one (it loads the page at full speed, then adjusts the figures to account for the bandwidth and the round-trip time to the server caused by higher latency).
If you run a Lighthouse audit (Google Chrome -> F12 -> Audits -> Run audits) with throttling set to 'applied' rather than 'simulated', you will see it takes about 5 seconds before a meaningful paint.
Lighthouse is the engine that powers Page Speed Insights now so the same information should be in both.
With regard to your site speed, you have a load of blank SVGs being loaded for some reason, and your CSS and JS files need combining (do it manually; plugins don't tend to do a good job here) to reduce the number of requests your site makes. The browser can only make about 8 requests at a time, and on 4G the round-trip latency to your server means these add up quickly: e.g. 40 files = 5 rounds of 8 requests, which at 100 ms latency is 500 ms of dead time waiting for responses.
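The round-trip arithmetic in that answer can be sketched as a one-liner (the method name and the 8-requests-per-batch figure are taken from the answer's own assumptions, not a measured browser limit):

```java
// Sketch of the request-batching arithmetic: with a cap of `parallel`
// simultaneous requests, N files take ceil(N / parallel) sequential
// rounds, each costing at least one network round trip.
public class RoundTrips {
    public static int deadTimeMs(int files, int parallel, int latencyMs) {
        int rounds = (files + parallel - 1) / parallel; // ceiling division
        return rounds * latencyMs;
    }
}
```

So combining 40 files into, say, 5 bundles drops the dead time from 500 ms to a single 100 ms round trip.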

DirectX9: delay between present() and actual screen update

My question is about the delay between calling the present method in DirectX9 and the update appearing on the screen.
On a Windows system, I have a window opened using DirectX9 and update it in a simple way (change the color of the entire window, then call the IDirect3DSwapChain9's present method). I call the swapchain's present method with the flag D3DPRESENT_DONOTWAIT during a vertical blank interval. There is only one buffer associated with the swapchain.
I also obtain an external measurement of when the CRT screen I use actually changes color through a photodiode connected to the center of the screen. I obtain this measurement with sub-millisecond accuracy and delay.
What I found is that the change appears exactly on the third refresh after the call to Present(). Thus, when I call Present() at the end of the vertical blank, just before the screen refresh, the change appears on the screen exactly 2*refresh_duration + 0.5*refresh_duration after the call to Present().
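Assuming the "screen duration" in the question equals one refresh period, the measured delay works out to 2.5 refresh periods; on a 60 Hz display that is roughly 42 ms. A minimal sketch of the arithmetic:

```java
// Sketch of the measured Present()-to-photodiode delay, assuming the
// 'screen duration' in the question is one refresh period: the change
// lands on the third refresh, i.e. 2.5 periods after the call.
public class PresentDelay {
    public static double delayMs(double refreshPeriodMs) {
        return 2.0 * refreshPeriodMs + 0.5 * refreshPeriodMs;
    }
}
```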
My question is a general one:
To what extent can I rely on this delay (changes appearing on the third refresh) being the same on different systems ...
... or does it vary between monitors (leaving aside the response times of LCD and LED monitors)?
... or between graphics cards?
Are there other factors influencing this delay?
An additional question:
Does anybody know a way of determining, within DirectX9, when a change appeared on the screen (without external measurements)?
There's a lot of variables at play here, especially since DirectX 9 itself is legacy and is effectively emulated on modern versions of Windows.
You might want to read Accurately Profiling Direct3D API Calls (Direct3D 9), although that article doesn't directly address presentation.
On Windows Vista or later, once you call Present to flip the front and back buffers, the frame is passed off to the Desktop Window Manager for composition and eventual display. There are a lot of factors at play here, including GPU vendor, driver version, OS version, Windows settings, 3rd party driver 'value add' features, full-screen vs. windowed mode, etc.
In short: Your Mileage May Vary (YMMV) so don't expect your timings to generalize beyond your immediate setup.
If your application requires knowing exactly when present happens instead of just "best effort" as is more common, I recommend moving to DirectX9Ex, DirectX 11, or DirectX 12 and taking advantage of the DXGI frame statistics.
In case somebody stumbles upon this with a similar question: I found out why my screen update appears exactly on the third refresh after calling Present(). As it turns out, Windows by default queues up to 3 frames before presenting them, so changes appear on the third refresh. This can only be changed by the application itself starting with DirectX 10 (and DirectX 9Ex); for DirectX 9 and earlier, one has to reduce this queueing either via the graphics card driver settings or the Windows registry.

Massive live video streaming using Linux

I am considering different ways to stream a massive number of live videos to the screen in linux/X11, using multiple independent processes.
I started the project with OpenGL/GLX and OpenGL textures, but that was a dead end. The reason: "context switching". It turns out that drivers (especially NVIDIA's) perform poorly when several independent (multi-)processes manipulate textures at a fast pace using multiple contexts. This results in crashes, freezes, etc.
( see the following thread: https://lists.freedesktop.org/archives/nouveau/2017-February/027286.html )
I finally turned to XVideo, and it seems to work very nicely. My initial tests show that XVideo handles video dumping ~10 times more efficiently than OpenGL and does not crash. One can demonstrate this by running ~10 VLC clients with 720p@25fps streams and trying both the XVideo and OpenGL outputs (remember to make them all fullscreen).
However, I suspect that XVideo uses OpenGL under the hood, so let's see if I am getting this right ..
Both Xvideo and GLX are extension modules of X11, but:
(A) Dumping video through Xvideo:
XVideo considers the whole screen as a device port and manipulates it directly (it has these god-like powers, being an extension to X11)
.. so it only needs a single context from the graphics driver. Let's call it context 1.
Process 1 requests Xvideo services for a certain window .. Xvideo manages it into a certain portion of the screen, using context 1.
Process 2 requests Xvideo services for a certain window .. Xvideo manages it into a certain portion of the screen, using context 1.
(B) Dumping video "manually" through GLX and openGL texture dumping:
Process 1 requests a context from glx, gets context 1 and starts dumping textures with it.
Process 2 requests a context from glx, gets context 2 and starts dumping textures with it.
Am I getting this right?
Is there any way to achieve situation (A) using OpenGL directly?
.. one might have to drop GLX completely, which starts to get a bit hardcore.
It's been a while, but I finally got it sorted out, using OpenGL textures and multithreading. This seems to be the optimal way:
https://elsampsa.github.io/valkka-core/html/process_chart.html
(disclaimer: I did that)

Android static UI is taking too much time to load

I have developed a checklist app for internal government use.
It has 250 fields laid out in different tabular formats.
What is happening is that loading the screen takes 10 seconds or more, even on a device with a 1.4 GHz quad-core processor and 1 GB of RAM.
How can I improve loading of this static view, or at least show the user an indicator that the form is being loaded?
Can I load the static XML layout with an AsyncTask?
Would that improve performance?
Is there any option to load the static UI incrementally as the user scrolls down?
Please note that there is no ListView.
There are only static views for lines, checkboxes, and text views.
Why don't you categorize these fields and keep them in different activities? It will improve performance, and the application structure will be better.
Why not make a ListView with a custom checkbox row layout? It will be much faster, as it only loads what's on the screen. If you hide the dividers, it's practically the same, assuming it's a vertical list.
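The reason a list-backed layout is faster can be sketched in plain Java (the class and method names are illustrative, not Android API): instead of inflating all 250 row views up front, only the rows inside the visible viewport need to be materialized, the way a ListView adapter does.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of view recycling: given `total` row models and a
// viewport showing `visible` rows starting at `first`, only the indices
// returned here would actually need inflating as views.
public class VisibleWindow {
    public static List<Integer> rowsToInflate(int total, int first, int visible) {
        List<Integer> rows = new ArrayList<>();
        int end = Math.min(first + visible, total);
        for (int i = first; i < end; i++) {
            rows.add(i);
        }
        return rows;
    }
}
```

With 250 rows and a ~10-row screen, that is roughly a 25x reduction in the number of views inflated at startup.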

MonoTouch Memory Use High

I have MonoDevelop 2.8 on top of MonoTouch 5 against the Xcode 4.2 SDK. I have been having memory issues with my iPhone app. I have been struggling to identify the cause, so I created a test app with a master-detail view. I made a minor modification to the root controller to have it show 5 root items instead of the default 1. Each click of a root item pushes a new DetailViewController onto the navigation controller.
controller.NavigationController.PushViewController (DetailViewController, true);
In my detail view controller I've added logic that simply takes an input governing the number of times a loop runs, plus a button to trigger the loop and make a call to a REST-based service. Very minimal code changes from the default.
Just running the example and looking at it in Instruments, I seem to be up to 1.2 MB of live bytes. I then launch the detail view by touching items in the root view controller and get up over 2 MB. Rotating the display or opening the keyboard gets memory up near 3 MB. I navigate back and open a different view from the root view controller, and I can see the memory grow some more. Just moving in and out of views, without even triggering my custom code, I can get the memory use in Instruments over 3 MB. I've seen my app receive memory warnings when it gets over 3 MB. My test detail view is very basic, with a text box, a label, and a button that all have outlets on them. I was under the impression that I don't need to do anything special to have them cleaned up. However, I don't see live bytes drop in Instruments.
As an additional test, I added a Done button. When it is pressed, I call RemoveFromSuperview() on each outlet, then Dispose(), and then set it to null. I see the live bytes drop. But that doesn't help when back navigation is used instead.
I'm curious whether anyone can verify my expectation of seeing memory drop. I'm not sure if using Instruments to look at live bytes is even valid. I'd like to determine whether my testing is valid and whether there are tips for reducing the memory footprint. Any links to best practices on reducing memory footprint are appreciated, as I seem to be able to make the memory climb, and my app start getting memory warnings, just by navigating between screens.
It's hard to comment without seeing the code for the test app. Is there any way you could submit a bug report to http://bugzilla.xamarin.com and attach your test project?
There's a developer on the MonoTouch team working hard to add additional smarts to the GC for MonoTouch 5.2, and I'm sure he would love to have more test cases.
I would also be very interested in looking over your test case.
