Does OpenGL ES have a maximum speed limit? - multithreading

I implemented a render loop targeting up to 100 fps, but I can't get more than 63 fps.
My belief is that the thread running the OpenGL drawing method has a speed limit:
@Override
public void onDrawFrame(GL10 gl)

It depends on whether or not your rendering context has vertical sync enabled. Most LCD devices refresh at 60 Hz, and the driver may be waiting for the next refresh before calling onDrawFrame(). That's one reason you'd be seeing that number.
The other possibility is that your draw simply takes long enough that it can't run any faster.

You should read the spec for eglSwapInterval. It has to be implemented in your driver (I presume this is an Android device) for the setting to have any effect. You can use it in any OpenGL ES 2 based application.
http://www.khronos.org/registry/egl/sdk/docs/man/xhtml/eglSwapInterval.html
A gist showing the usage here:
https://gist.github.com/prabindh/8467984
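As a rough sketch (assuming an EGL context is already current; this is not taken from the gist), disabling the sync looks like this:
// Ask for a swap interval of 0 so eglSwapBuffers does not wait for vblank.
// Whether this actually takes effect depends on the driver/compositor.
EGLDisplay display = eglGetCurrentDisplay();
eglSwapInterval(display, 0);  // 1 = sync to refresh (typical default), 0 = no sync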

Why do GBuffers need to be created for each frame in D3D12?

I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithreading example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as an SRV) is created for each frame (actually only 2 copies, since the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created for each frame: the constant buffer. I sort of understand the constant buffer: it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, so two copies are needed. However, I don't understand why the shadow map needs the same treatment, because it is only modified by the GPU (D3D11 default usage), and there are fence commands to separate reading and writing of that texture anyway. As long as the GPU respects the fence, a single texture should be enough for the GPU to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU, for best performance. Double-buffering is the minimal requirement, but triple-buffering is typically better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better then it's likely not noticeable.
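For illustration only (not from the sample), requesting a triple-buffered flip-model swap chain looks roughly like this:
// Sketch: a triple-buffered flip-model swap chain. Field values are
// illustrative; D3D12 requires one of the flip swap effects.
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.BufferCount = 3;                               // triple buffering
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
desc.SampleDesc.Count = 1;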
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with reads of the previous frame's depth buffer, then you would want distinct buffers per frame for performance and correctness.
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable (depth write) to readable (SRV).
    m_commandLists[CommandListMid]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    // Transition the shadow map back to writeable for the next frame.
    m_commandLists[CommandListPost]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.
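For context, here is a minimal sketch (not taken from the sample; names such as m_fenceValues, m_nextFenceValue and kFrameCount are illustrative) of the per-frame fence handshake that keeps the CPU from touching a FrameResource the GPU is still using:
// Before reusing this FrameResource, wait until the GPU has passed the
// fence value recorded when the resource was last submitted.
const UINT64 fenceToWaitFor = m_fenceValues[m_frameIndex];
if (m_fence->GetCompletedValue() < fenceToWaitFor)
{
    m_fence->SetEventOnCompletion(fenceToWaitFor, m_fenceEvent);
    WaitForSingleObject(m_fenceEvent, INFINITE);
}

// ... record and execute the command lists for this frame ...

// After submission, signal the fence so a later frame knows when this
// FrameResource becomes reusable again.
m_fenceValues[m_frameIndex] = ++m_nextFenceValue;
m_commandQueue->Signal(m_fence.Get(), m_fenceValues[m_frameIndex]);
m_frameIndex = (m_frameIndex + 1) % kFrameCount;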

Programmatic access to a sound played through OpenAL

I am working with an application that uses OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would have been played during each tick of my application's main loop. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = alcGetProcAddress(NULL,"alcLoopbackOpenDeviceSOFT");
replace your default device with this device
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here the buffer will store 1024 samples. This code runs faster than real-time, therefore you can sample frames every tick
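As a sketch of what attrs might contain for the loopback context (values are illustrative; match them to your pipeline):
// Loopback contexts require the output format to be specified up front.
ALCint attrs[] = {
    ALC_FREQUENCY, 44100,
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
    ALC_FORMAT_TYPE_SOFT, ALC_SHORT_SOFT,
    0 /* terminator */
};
// With these attributes, each alcRenderSamplesSOFT call above fills buffer
// with 1024 frames of 16-bit stereo, i.e. 1024 * 2 channels * 2 bytes = 4096 bytes.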
Are you able to perform the processing you need on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method in SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of arrays that are managed. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (mp4, H.264 & stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time except when we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and buffering. The behaviour appears to have changed across Android versions (no firm data here). I do believe that I am running into decoder starvation. Previously, I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I changed the input side to 1000 and 10000, but it does not make much difference.
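For clarity, the decode logic follows the standard synchronous pattern, roughly like this (the 1000/10000 microsecond timeouts are the ones mentioned above; the other names are illustrative):
// Feed one input buffer and drain one output buffer per pass.
ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 1000 /* us */);
if (inIdx >= 0) {
    size_t cap;
    uint8_t *buf = AMediaCodec_getInputBuffer(codec, inIdx, &cap);
    ssize_t n = AMediaExtractor_readSampleData(extractor, buf, cap);
    if (n >= 0) {
        AMediaCodec_queueInputBuffer(codec, inIdx, 0, n,
                                     AMediaExtractor_getSampleTime(extractor), 0);
        AMediaExtractor_advance(extractor);
    }
}
AMediaCodecBufferInfo info;
ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /* us */);
if (outIdx >= 0) {
    // Render to the attached surface (or copy the frame out) and release.
    AMediaCodec_releaseOutputBuffer(codec, outIdx, true /* render */);
}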
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22" and c++_static.
Can anyone share what timeouts they have used and found success with, or anything else that would help avoid starvation and the resulting buffering?
This is solved for now. The starvation was not caused by decoding; rather, images were consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended approach, but it consistently ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
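For reference, a minimal sketch of the timestamp helper in question (the function name is illustrative):
#include <time.h>
#include <stdint.h>

// Returns the current time of the given clock in microseconds.
// CLOCK_MONOTONIC is the usual recommendation for media clocks, but on the
// affected devices switching to CLOCK_REALTIME fixed the pacing issue above.
static int64_t now_us(clockid_t clk)
{
    struct timespec ts;
    clock_gettime(clk, &ts);
    return (int64_t)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}
// e.g. int64_t presentationClock = now_us(CLOCK_REALTIME);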

How can I determine the length of time since the last screen refresh on X11?

I'm trying to debug a laggy machine vision camera by writing text timestamps to a terminal window and then observing how long it takes for the camera to 'detect' the screen change. My monitor has a 60 Hz refresh rate, so the screen is updated every ~17 ms. Is there a way to determine, for an X11 application, at what point within that 17 ms window the refresh timer currently is?
EDIT: After wrestling with the problem for nearly a day, I think the real question I should have asked was how to generate a visual signal that was sufficiently fast to test the camera against. My working hypothesis was that the camera was buffering frames before transmitting them, as the video stream seemed to lag behind other synchronised digital events (in this case, output signals to a robotic controller).
'xrefresh' is a tool which can trigger a refresh event on an X server. It does this by painting a global window of a specified color and then removing it, causing all subsequent windows to repaint. Even with this, I was still getting very inconsistent results when trying to correlate the captured frames against the monitor output; no matter what I did, the video stream seemed to lag behind what I expected the monitor state to be. This could mean either that the camera was slow to capture or that the monitor was slow to update. Fortunately, I eventually hit upon the idea of using the keyboard LEDs to verify the synchronicity of the camera frames ('xset led' and 'xset -led'). This showed me immediately that my computer monitor was in fact slow to update, rather than the camera lagging behind.
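If you want to toggle the keyboard LEDs from code rather than shelling out to xset, a rough Xlib sketch (assuming an already opened Display *dpy) looks like this:
// Turn LED #3 (usually Scroll Lock) on; use LedModeOff to turn it off again.
XKeyboardControl kc;
kc.led = 3;
kc.led_mode = LedModeOn;
XChangeKeyboardControl(dpy, KBLed | KBLedMode, &kc);
XFlush(dpy);  // make sure the request reaches the server immediately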

Memory leak through IClipboardDataPasteEventImpl

I noticed an odd memory increase in one of my Activities, so I ran a little test: I opened the dialog multiple times (open - close - open - close ...) and the memory kept increasing. I then used DDMS to dump an HPROF file and opened it in MAT (Memory Analyzer). The leak suspects report indicated that the main reason for the growing memory consumption was this:
So I did a histogram to check the dialog I ran my tests on and what's keeping it alive. It turns out it's kept alive by its AutoCompleteTextViews, which in turn are kept alive by android.widget.TextView$IClipboardDataPasteEventImpl. However, there are no immediate dominators for IClipboardDataPasteEventImpl (except, of course, the GC root). I tried to find IClipboardDataPasteEventImpl on the internet and searched grepcode (the Android source), but the only thing I could come up with was this blog entry. I can't read the language it is written in, but the English words thrown in suggest it might be a bug on the Samsung Galaxy SII (the phone I am using, running Android 2.3.x) related to the ClipboardManager. However, I am unsure of this (I want to fix it, so I am disinclined to simply accept it as an unfixable bug), and I have no clue where this clipboard is spawned and why. I would greatly appreciate any pointers/ideas on the matter.
Investigation
Here're my research results:
It happens in any Activity whose content view contains an EditText. finish()ing the Activity does not get it garbage collected, as it is still referenced like this:
activity com.example.MyActivity
<- mContext android.widget.TextView
<- this$0 android.widget.TextView$IClipboardDataPasteEventImpl
<- this$1 android.widget.TextView$IClipboardDataPasteEventImpl$1
<- referent java.lang.ref.FinalizerReference
It happens on my Samsung Galaxy Tab GT-P7300 running Android 4.0.4, but not on my Samsung Galaxy Mini GT-S5570 running Android 2.2.1.
The IClipboardDataPasteEventImpl objects do eventually get freed, but only at times that seem unpredictable.
Since they are referenced by java.lang.ref.FinalizerReference, I believe the IClipboardDataPasteEventImpl objects are waiting to be finalize()'d, which happens only when the JVM feels like it. For details, check out these SO questions:
is memory leak? why java.lang.ref.Finalizer eat so much memory
Very strange OutOfMemoryError
Solution / Workaround
Sorry, no solution, but here's my best workaround:
In onDestroy() of your Activity, free as many references to other objects as possible (especially the big ones, such as bitmaps, collections, and child views of your activity), like this:
@Override
protected void onDestroy()
{
    // Free references to large objects.
    m_SomeLargeObject = null;
    m_AnotherLargeObject = null;
    // For an ArrayList, if you are paranoid about nulling, you may call clear() and then trimToSize().
    m_SomeLargeArrayList.clear();
    m_SomeLargeArrayList.trimToSize();
    // Free child views.
    m_MyButton = null;
    // Free adapters.
    m_ListViewAdapter = null;
    // ... etc.
    // Don't forget to chain the call to the superclass.
    super.onDestroy();
}
This way, we can at least reduce the casualties and hopefully won't run out of memory before the JVM is in the mood to finalize and collect all those evil IClipboardDataPasteEventImpl objects.
In an ideal garbage-collected world this would be unnecessary, but I guess we should all realize that our world is not perfect, and we just have to live with the flaws.
Below is my translation of the original blog entry (in Chinese) as mentioned in the question. Hopefully this can give everybody a better understanding about the issue.
Galaxy S2 memory leak with TextView
Not sure where it went wrong, but the TextView on the Galaxy S2 causes memory leaks.
The leak happens on the interface android.widget.TextView$IClipboardDataPasteEventImpl.
It holds on to mContext, preventing the whole activity from being gc'ed.
The same app has no such problem on the htc sensation (2.3.4), se xperia arc (2.3.4) and acer liquid (2.1).
Moreover, I can't find anything related to android.widget.TextView$IClipboardDataPasteEventImpl on the web at all,
not even in the Android source code, so it seems to be something added by samsung themselves...
The earlier opengl viewport bug was already enough of a headache, and the soundpool-related bug wore out a lot of people;
now this memory leak comes along to mess things up...
Apparently a phone's appearance matters more /_\... a good-looking phone attracts buyers first, and bugs can be fixed slowly afterwards.
[P.S.]
After some experimenting, I found that the leaks get released as soon as you press the HOME button to return to the home screen...
logcat then shows the line: Hide Clipboard dialog at Starting input: finished by someone else... !
It looks like the Galaxy S2 secretly performs some operations on the clipboard under the hood...
But if you stay inside the app, those leaks remain... and eventually an OOM exception should occur.
For now we can only hope that this strange problem will be fixed in the ICS build for the Galaxy S2...
My memory-leak investigation also brought me here. I'm having problems with an Activity leaking through an EditText: an android.widget.TextView$IClipboardDataPasteEventImpl object is holding the EditText, which is holding the Activity. This happens on the Samsung Galaxy Tab 10.1 and the Galaxy Tab 2 10.1 and 7.0. I wasn't able to reproduce it on other, non-Samsung devices (Asus, Acer).
The bad thing is that I didn't find a solution for it yet :)
