Environment
Windows XP SP3 x32
Visual Studio 2005 Standard Edition
Honeywell Dolphin 9500 Pocket PC/Windows Mobile 2003 Platform
Using the provided Honeywell Dolphin 9500 VS2005 SDK
.NET Framework 1.1 and .NET Compact Framework 1.0 SP3
Using VC#
Problem
When I save an image from the built-in camera and the Honeywell SDK ImageControl to the device's storage card or internal memory, it takes 6-7 seconds.
I am currently saving the image as a PNG but have the option of a BMP or JPG as well.
Relevant lines in the code: 144-184 and 222, specifically 162, 163, and 222.
Goal
I would like to reduce that time down to something like 2 or 3 seconds, and even less if possible.
As a secondary goal, I am looking for a profiling suite for Pocket PC 2003 devices, specifically one supporting .NET Compact Framework 1.0. Ideally free, but an unfettered short tutorial would work as well.
Things I Have Tried
I looked into asynchronous I/O via System.Threading a little, but I do not have the experience to know whether this is a good idea, nor exactly how to implement threading for a single operation.
With threading implemented as it is in the code below, there seems to be a trivial speed increase of maybe a second or less. However, something on the next Form requires the image, which may still be in the act of being saved, and I do not know how to mitigate the wait or handle that scenario at all, really.
EDIT: Changing the save format from PNG to BMP or JPG, with the threading, seems to reduce the save time considerably.
Code
http://friendpaste.com/3J1d5acHO3lTlDNTz7LQzB
Let me know if the code should just be posted here in code tags. It is a little long (~226 lines) so I went ahead and friendpasted it as that seemed to be acceptable in my last post.
By changing the save format from PNG to BMP and including the Threading code shown in the Code link, I was able to reduce the save time to ~1 second.
You're at the mercy of the Honeywell SDK for this one, since their control is doing the actual saving of the image. Calling this on a separate thread (i.e. not the UI thread) isn't going to help at all (as you've found out), and it will actually make things more difficult for you since you need to wait until the save task is completed before moving on to the next form.
The only suggestion I can make is to make sure you're saving the image to internal memory (and not to the SD card), since writing to an SD card usually takes significantly longer than writing to memory. Or see if you can get technical support from Honeywell - 6-7 seconds seems way too long for a task like this.
Or see if the Honeywell SDK lets you get the image as a byte array (instead of saving to disk). If this call returns in less than 6-7 seconds, you can handle persisting it yourself.
Related
My question is about the delay between calling the present method in DirectX9 and the update appearing on the screen.
On a Windows system, I have a window opened using DirectX9 and update it in a simple way (change the color of the entire window, then call the IDirect3DSwapChain9's present method). I call the swapchain's present method with the flag D3DPRESENT_DONOTWAIT during a vertical blank interval. There is only one buffer associated with the swapchain.
I also obtain an external measurement of when the CRT screen I use actually changes color through a photodiode connected to the center of the screen. I obtain this measurement with sub-millisecond accuracy and delay.
What I found was that the changes appear exactly in the third refresh after the call to present(). Thus, when I call present() at the end of the vertical blank, just before the screen refreshes, the change will appear on the screen exactly 2*screen_duration + 0.5*refresh_duration after the call to present().
My question is a general one:
to what extent can I rely on this delay (changes appearing in the third refresh) being the same on different systems ...
... or does it vary with monitors (leaving aside the response times of LCD and LED monitors)
... or with graphics cards
are there other factors influencing this delay?
An additional question:
does anybody know a way of determining, within DirectX 9, when a change appeared on the screen (without external measurements)?
There are a lot of variables at play here, especially since DirectX 9 itself is legacy and is effectively emulated on modern versions of Windows.
You might want to read Accurately Profiling Direct3D API Calls (Direct3D 9), although that article doesn't directly address presentation.
On Windows Vista or later, once you call Present to flip the front and back buffers, the frame is handed off to the Desktop Window Manager for composition and eventual display. There are a lot of factors at play here, including GPU vendor, driver version, OS version, Windows settings, 3rd party driver 'value add' features, full-screen vs. windowed mode, etc.
In short: Your Mileage May Vary (YMMV) so don't expect your timings to generalize beyond your immediate setup.
If your application requires knowing exactly when present happens instead of just "best effort" as is more common, I recommend moving to DirectX9Ex, DirectX 11, or DirectX 12 and taking advantage of the DXGI frame statistics.
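For reference, a minimal sketch of reading those statistics from a DXGI (D3D11-style) swap chain; note that frame statistics are generally only reported for full-screen or flip-model swap chains, and the error handling here is the bare minimum:

#include <dxgi.h>
#include <cstdio>

// Sketch: query when the last Present actually reached the screen.
// Assumes 'swapChain' is an IDXGISwapChain created for a D3D11 device.
void ReportPresentTiming(IDXGISwapChain* swapChain)
{
    DXGI_FRAME_STATISTICS stats = {};
    UINT lastPresentCount = 0;

    // Present count of our most recent Present call.
    if (FAILED(swapChain->GetLastPresentCount(&lastPresentCount)))
        return;

    // May fail with DXGI_ERROR_FRAME_STATISTICS_DISJOINT right after a mode change.
    if (FAILED(swapChain->GetFrameStatistics(&stats)))
        return;

    // PresentRefreshCount is the vblank in which the queued Present was displayed;
    // SyncRefreshCount/SyncQPCTime anchor that vblank to QueryPerformanceCounter time.
    printf("Present #%u displayed at refresh %u (sync refresh %u, QPC %lld)\n",
           lastPresentCount,
           stats.PresentRefreshCount,
           stats.SyncRefreshCount,
           static_cast<long long>(stats.SyncQPCTime.QuadPart));
}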
In case somebody stumbles upon this with a similar question: I found out why my screen update appears exactly on the third refresh after calling present(). As it turns out, the Windows OS by default queues 3 frames before presenting them, and so changes appear on the third refresh. This can only be changed from within the application starting with Direct3D 10 (and Direct3D 9Ex); for Direct3D 9 and earlier, you have to use either the graphics card driver settings or the Windows registry to reduce the queue depth.
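If you do move to Direct3D 9Ex, a sketch of shrinking that queue from inside the application (assuming you already have an IDirect3DDevice9Ex) looks like this; on DXGI the equivalent call is IDXGIDevice1::SetMaximumFrameLatency:

#include <d3d9.h>

// Sketch: on Direct3D 9Ex the application can shrink the OS present queue itself
// instead of relying on driver or registry settings. A latency of 1 means at most
// one frame is queued ahead of the display.
void ReduceFrameQueue(IDirect3DDevice9Ex* device)
{
    // The default is 3 queued frames, which matches the "third refresh" observation.
    device->SetMaximumFrameLatency(1);
}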
How can I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3, and the graphical application is written using Qt 5.9.2 with the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), then it switches the display to full HD graphics mode.
When I use utilities (e.g. fb2png) that operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved as the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer from another process's context (that would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to get a screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This does not seem to be an option for you, as you need to take a screenshot when the Qt app is frozen.
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS with Python and can be translated to C easily, see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used by another process than the Qt UI process that does the rendering.
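A rough C++ translation of that approach, using only libdrm calls and assuming the scanned-out buffer is a mappable dumb buffer; GPU-allocated buffers may instead need drmPrimeHandleToFD() plus a GBM/EGL import:

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <cstdio>
#include <xf86drm.h>
#include <xf86drmMode.h>

// Sketch: find the framebuffer currently scanned out by each CRTC, map it, and
// read the pixels. Build against libdrm (pkg-config --cflags --libs libdrm).
bool CaptureFramebuffer(const char* path /* e.g. "/dev/dri/card0" */)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) return false;

    drmModeRes* res = drmModeGetResources(fd);
    if (!res) { close(fd); return false; }

    for (int i = 0; i < res->count_crtcs; ++i) {
        drmModeCrtc* crtc = drmModeGetCrtc(fd, res->crtcs[i]);
        if (!crtc) continue;
        if (crtc->buffer_id) {
            drmModeFB* fb = drmModeGetFB(fd, crtc->buffer_id);
            if (fb) {
                drm_mode_map_dumb map_req = {};
                map_req.handle = fb->handle;
                // Works for dumb (CPU-mappable) buffers; may fail for GPU buffers.
                if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map_req) == 0) {
                    size_t size = (size_t)fb->pitch * fb->height;
                    void* pixels = mmap(nullptr, size, PROT_READ, MAP_SHARED,
                                        fd, (off_t)map_req.offset);
                    if (pixels != MAP_FAILED) {
                        printf("mapped %ux%u fb, pitch %u, bpp %u\n",
                               fb->width, fb->height, fb->pitch, fb->bpp);
                        // memcpy row by row into your own buffer / PNG encoder here.
                        munmap(pixels, size);
                    }
                }
                drmModeFreeFB(fb);
            }
        }
        drmModeFreeCrtc(crtc);
    }
    drmModeFreeResources(res);
    close(fd);
    return true;
}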
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which means embedded, and there you have few options unless your platform provides them.
The options you have are:
glGetTexImage
glGetTexImage can copy several kinds of buffers from OpenGL textures to CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded vendor might support it via an extension. This is nice because you can either render to an FBO or get the pixels from the specific texture you need. It also needs minimal code intervention.
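A sketch of that call on desktop GL (not part of core GLES 2/3), assuming you know the texture ID and its size:

#include <GL/gl.h>
#include <vector>

// Sketch: copy a texture's pixels back to CPU memory on desktop OpenGL.
// glGetTexImage is not in core GLES 2/3, so check your vendor's extensions there.
std::vector<unsigned char> ReadTexture(GLuint tex, int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}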
glReadPixels
glReadPixels is the most common way to download all or part of the pixels the GPU has already rendered. Albeit slow, it works on GLES and desktop. On desktop with a decent GPU it is bearable up to interactive framerates, but beware: on embedded it can be really slow, as it stalls your render thread while it fetches the data (horrible frame drops guaranteed). On the plus side, it can be made to work with minimal code modifications.
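For reference, a minimal sketch, assuming the framebuffer you want (default or FBO) is currently bound:

#include <GL/gl.h>   // or <GLES2/gl2.h> on embedded
#include <vector>

// Sketch: read the currently bound framebuffer back to CPU memory.
// Note this stalls the render thread until the GPU has finished the frame.
std::vector<unsigned char> ReadBackBuffer(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;   // rows are bottom-up in GL convention
}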
Pixel Buffer Objects (PBO's)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not supported on embedded, but can work really well on desktop, even on mediocre GPUs. They are also a bit tricky to set up and require specific changes to your rendering code.
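A sketch of the usual double-buffered PBO readback on desktop GL; the GLEW include is just an assumption about how you load the buffer-object entry points:

#include <GL/glew.h>
#include <cstring>

// Sketch: start an asynchronous readback into one pixel buffer object while
// mapping and copying the one filled on the previous frame, so the render
// thread does not stall waiting for this frame's pixels.
GLuint pbo[2];
int    frameIndex = 0;

void InitPbos(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void CaptureFrame(int width, int height, unsigned char* cpuDst)
{
    int writeIdx = frameIndex % 2;        // PBO receiving this frame's pixels
    int readIdx  = (frameIndex + 1) % 2;  // PBO filled one frame ago

    // With a PACK buffer bound, the pointer argument is an offset into the PBO,
    // so this call returns without waiting for the copy to finish.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Map last frame's PBO; by now the GPU has (usually) finished that transfer.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
    if (void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        memcpy(cpuDst, src, static_cast<size_t>(width) * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frameIndex;
}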
Framebuffer
On embedded, sometimes you already render to the framebuffer, so you can go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer and get partial contents easily. But beware: on many embedded systems EGL does not render to the framebuffer but to a different 'overlay', so you might be snapshotting the background behind it. Note also that some multimedia applications run the UI on EGL and the media player on the framebuffer, so if you only need to capture the video player this might work for you. In other cases EGL targets a texture which is then copied to the framebuffer, and that will also work just fine.
As far as I know, render-to-texture plus streaming to a framebuffer is how they made the sweet Qt UI you see on the Ableton Push 2.
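If your pixels really do end up in /dev/fb0, a sketch of mapping it directly (plain Linux fbdev ioctls, nothing EGL-specific):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>
#include <cstdio>

// Sketch: map the Linux framebuffer device and read its pixels directly.
// Only useful when the content you want is actually scanned out via /dev/fb0
// (see the overlay caveat above).
bool DumpFramebuffer(const char* dev /* e.g. "/dev/fb0" */)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0) return false;

    fb_var_screeninfo vinfo;
    fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) || ioctl(fd, FBIOGET_FSCREENINFO, &finfo)) {
        close(fd);
        return false;
    }

    size_t size = (size_t)finfo.line_length * vinfo.yres;
    void* pixels = mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
    if (pixels == MAP_FAILED) { close(fd); return false; }

    printf("%ux%u, %u bpp, stride %u bytes\n",
           vinfo.xres, vinfo.yres, vinfo.bits_per_pixel, finfo.line_length);
    // Copy / encode 'pixels' here (row stride is finfo.line_length).

    munmap(pixels, size);
    close(fd);
    return true;
}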
More exotic Dispmanx/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting:
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you total lack of encouragement from using Dispmanx there are hardly any examples and no serious documentation.
Basically DispmanX is very close to bare metal, so it is even deeper down than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and really get a snapshot of everything really fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and about 4-6% of extra CPU overhead on a Raspberry Pi. Night and day compared to glReadPixels, which produced very noticeable frame drops even for a 1x1 pixel capture.
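For completeness, a sketch of such a snapshot; the function names come from the Broadcom userland library (the same calls tools like raspi2png use), and the 16-pixel row alignment is an assumption borrowed from those tools:

#include "bcm_host.h"   // Broadcom / Raspberry Pi userland headers
#include <vector>

// Sketch: full-screen capture of whatever the VideoCore is compositing,
// regardless of which process drew it.
std::vector<unsigned char> DispmanxSnapshot()
{
    bcm_host_init();

    DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0 /* primary display */);
    DISPMANX_MODEINFO_T info;
    vc_dispmanx_display_get_info(display, &info);

    // The VideoCore wants the row width aligned; 16 pixels is what existing tools use.
    int alignedWidth = (info.width + 15) & ~15;
    int pitch = alignedWidth * 4;

    uint32_t imagePtr = 0;
    DISPMANX_RESOURCE_HANDLE_T resource =
        vc_dispmanx_resource_create(VC_IMAGE_RGBA32, info.width, info.height, &imagePtr);

    // Grab the composited output into the resource.
    vc_dispmanx_snapshot(display, resource, DISPMANX_NO_ROTATE);

    std::vector<unsigned char> pixels(static_cast<size_t>(pitch) * info.height);
    VC_RECT_T rect;
    vc_dispmanx_rect_set(&rect, 0, 0, info.width, info.height);
    vc_dispmanx_resource_read_data(resource, &rect, pixels.data(), pitch);

    vc_dispmanx_resource_delete(resource);
    vc_dispmanx_display_close(display);
    return pixels;   // RGBA rows, 'pitch' bytes apart
}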
That's pretty much what I've found.
I am not sure if I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from 4 analogue cameras and tries to detect motion; if motion is detected, it saves the image.
When I leave the app running say over a 24hr cycle I notice the Private Working Set has climbed by almost 500% in Task manager.
Now, I know using Task Manager is not a good idea but it does give me an indication if something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that the Heap Generation 2 increases a lot in size. Then to a lesser degree the Large Object Heap as well.
The latter is a surprise, as the image size is 360x238 and the byte array size is always less than 20K.
So, my issues are:
Should I explicitly call GC.Collect(2) for instance?
Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find what retains most of the memory. This video will help you. If you are not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.
So this particular libspotify error has me a little stumped. My app basically loads a users playlists, and allows users to go into those playlists at which time it populates the track data.
So the problem lies in the function:
sp_image* image = sp_image_create(session, image_id);
(after calling)
const byte* image_id = sp_album_cover(album, SP_IMAGE_SIZE_SMALL);
Now this works fine some of the time, but quite often it will pop up with a 'Corrupt memory passed to dlfree(), SIGSEGV' error, which crashes the app. So the first thing I looked for was a memory error, but there's plenty of free memory and there are no null pointers when this occurs. The call happens from within the library into libc.so, so it's much deeper into the library than I can access.
It obviously has something to do with memory, but it's odd that it can happen after loading 10 tracks or after 400 tracks. Stranger yet, of my test devices it only happens on the Nexus 4 and Nexus 7, and not on the Galaxy S3 or HTC Sensation. The first thing that springs to mind is that the N4 & N7 are Qualcomm devices, but that's all I have to go on, and it probably has nothing to do with anything!
Any help is much appreciated!
It's probably libspotify's fault, not yours. This library has never been particularly stable on Android (in fact, Spotify still calls it "beta"), however they will be replacing it soon with a new library similar to the Spotify iOS SDK.
My advice to you would be to not use libspotify unless you are under pressure to ship something right away. The new SDK will likely solve many of the headaches familiar to developers working with libspotify on Android.
Edit: The new Spotify Android SDK is out! You should use it instead of libspotify, it will save you much headache.
I've got a WinCE device powered over Ethernet (PoE) and I want to prevent file-system corruption following a potential power loss, e.g. a user pulling the plug.
As a side note, I'm already using TexFAT, which is supposed to prevent FS corruption. While the latter certainly does help reduce FS corruption (compared to plain old FAT), it doesn't entirely prevent it from occurring from time to time... So, I'm considering using a small rechargeable backup battery that would give WinCE enough time to cleanly shut down. Now, I can't find any info on the shutdown process: how to trigger it, how long it takes, and so on... MSDN is pretty quiet on this topic. Any idea?
The powerdown sequence is totally platform dependent.
The following answer is relevant to Windows CE 6. It may be different for previous versions of CE.
If you include the Power Manager component in your system, then the sequence is more or less this:
A "go to D4" request is sent to all drivers that are power-manageable and reported that they support this state. Otherwise, a driver gets the lowest power state it supports.
XXX_PowerDown is called, but it is not commonly used in Windows CE 6.
In between, the registry is flushed, in case you have a hive-based registry and you enabled the registry flush thread. You should disable this thread in a fragile system such as yours.
OEMPowerOff
device down
Just found a post by Bruce Eitman on what happens when suspending. He puts it better than I do.
The suspend sequence is what you'd run before losing power.
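To make that concrete, here is a sketch of what a backup-battery handler could do when it detects external power loss, assuming the Power Manager component is in your image; the exact timing of the transition remains platform dependent:

#include <windows.h>
#include <pm.h>   // Windows CE Power Manager

// Sketch: flush the hive-based registry ourselves (since the flush thread is
// disabled), then ask the Power Manager to suspend the device. Drivers receive
// their D4/power-down notifications as part of this transition.
void OnExternalPowerLost()
{
    // Make sure pending registry changes hit persistent storage.
    RegFlushKey(HKEY_LOCAL_MACHINE);

    // Request the suspend transition; POWER_FORCE ignores objections from clients.
    SetSystemPowerState(NULL, POWER_STATE_SUSPEND, POWER_FORCE);
}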