Dynamic calls to allocateCapacity / deallocateCapacity from other components - frontend

I'm developing a FRONTEND-compliant REDHAWK device on RHEL 5 with REDHAWK 1.9.0.  After reading through the documentation, I'm still having a little trouble understanding whether it's possible to dynamically allocate tuners at runtime from components that use the device.  My current understanding of the allocation property paradigm is that the Application Factory holds a direct reference to the Device and calls allocateCapacity on it directly when the dependent component is instantiated.  This still leaves me with a few questions:
 Is it possible for a component, during its lifetime after instantiation, to request further allocation of tuners dynamically?  If so, how?  Is there a way to get a reference to the FRONTEND device at runtime, or should this be accomplished through messaging?
 When using the allocation property dependency strategy, how does the dependent component know at runtime what allocationId was used?  Is this queryable somehow?
 I'm having trouble setting up an allocation property dependency using the Redhawk IDE.  The "Dependency Wizard" in the IDE doesn't seem to allow for specifying property references that have struct values -- am I going about this the wrong way?

In the newest version of REDHAWK, 1.10, there are tools in the IDE that assist in the allocation and deallocation of FRONTEND devices: expand the device in the SCA Explorer view and choose to perform an allocation on that specific device. In 1.9, however, the proposed solution is described below.
There are ways of accomplishing dynamic allocations, and establishing device relationships outside of the Application Factory. The Application Factory performs the following operations when launching waveforms/components:
Calling load and execute on the executable / loadable device, and fulfilling any co-location requirements.
Allocating against device resources, and keeping track of the necessary relationships when an allocation succeeds.
Establishing connections between components.
Establishing connections between components and their associated devices, services, eventchannels, etc.
Initializing overloaded properties by calling configure() on the components.
Creating the waveform object responsible for tracking waveform lifecycle, and waveform specific attributes.
The Application Factory parses the sad.xml file, which is not very dynamic in nature. In 1.9, the dynamic entity controlling the device/component interactions must be an external resource, such as a Python script or a component. In order to create your own tasking mechanism for FRONTEND Interfaces-enabled devices, you can create an asset that performs the following steps. These steps can be done in any language, but for simplicity, Python snippets are shown using the redhawk Python utility:
Launch your waveforms/components as normal, or reference something that's running.
from ossie.utils import redhawk
dom = redhawk.attach('REDHAWK_DEV')  # or your domain name
application = dom.createApplication('APPNAME')
Create the property for allocation.
prop = Your_Property_used_for_Allocation
Cycle through all your devices in the domain and try allocating on them.
myDevice = None
for ii in dom.devMgrs:
    for jj in ii.devs:
        if jj.allocateCapacity(prop):
            myDevice = jj
            break
    if myDevice:
        break
if not myDevice:
    print 'Could not allocate... exiting'
    exit()
Make a connection between the device that fulfilled allocation and waveform/component.
outPort = myDevice.getPort('NAME OF USES PORT')
inPort = application.getPort('NAME OF PROVIDES PORT')
outPort.connectPort(inPort, 'connection name based off allocation id')
Keep track of these relationships so connections, etc, can be brought down when deallocating.
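One way to keep track of these relationships is a small bookkeeping helper that pairs each successful allocateCapacity with the connection made on top of it, and undoes both in reverse order at teardown. Below is a minimal, library-agnostic sketch: the device and port objects would be the redhawk utility objects from the snippets above, but the helper itself is plain Python and its name (AllocationTracker) is made up for illustration:

```python
class AllocationTracker(object):
    """Records device allocations and the connections made on top of
    them, so deallocateCapacity/disconnectPort can be called in
    reverse order at teardown."""

    def __init__(self):
        self._records = []  # list of (device, prop, uses_port, connection_id)

    def allocate(self, device, prop, uses_port=None, connection_id=None):
        # Try the allocation; only record it if the device accepted it.
        if not device.allocateCapacity(prop):
            return False
        self._records.append((device, prop, uses_port, connection_id))
        return True

    def teardown(self):
        # Undo in reverse order: break connections first, then free capacity.
        while self._records:
            device, prop, uses_port, connection_id = self._records.pop()
            if uses_port is not None:
                uses_port.disconnectPort(connection_id)
            device.deallocateCapacity(prop)
```

At shutdown, calling teardown() disconnects each connection by its connection id and then releases the capacity, mirroring the order in which the relationships were established.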

Related

Bootloader and main application to share common code/functionalities

I'm struggling with a problem I'd like some help with...
I have a Bootloader which can upload another application into my chip. In my case the Bootloader and the main application share a lot of the same functionality, and I would like to create 3 partitions in my chip's memory - one for the BL, one for common FW and one for the actual App.
I had a little experiment in which I flashed a function into a specific location in the memory and then in an actual application "jumped" to that function by using its hard-coded address, so as a POC, I guess it would work...
The thing is that I have a lot of functions/classes, and it would get very difficult to handle, so my question is: is there a neat way to "bind" all three "applications" together?
I'm working on a Cortex-M4FP cpu and using KEIL uVision 5 as my IDE.
Thanks for your help :)
You could, for example, decide that the bootloader is the one providing the services it has in common with your application. It would implement an SVC handler responsible for providing those common services to the application.
References - should apply to Cortex-M4 as well:
Developing software for Cortex-M3 - Supervisor Calls (SVC)
Cortex-M3 supervisor call (SVC) using GCC
ARM: How to write an SVC function
I am stating the obvious, I guess, but please note that sharing code through the application instead of your bootloader would be very dangerous in case the new common-services implementation were buggy. Updating the shared code should always require updating the bootloader.
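The SVC approach boils down to a single, stable entry point owned by the bootloader that dispatches on a service number, so the application never needs hard-coded addresses of individual functions. The dispatch logic itself is language-agnostic; here is an illustrative sketch in Python (the service numbers and services are invented for illustration — on the Cortex-M4 this table would live behind the bootloader's SVC_Handler, written in C/assembly):

```python
import zlib

# Hypothetical service numbers -- the only "ABI" the application and the
# bootloader need to agree on (normally a shared header file).
SVC_CRC32 = 0
SVC_FLASH_WRITE = 1

def _crc32(data):
    # Shared implementation living in the bootloader's partition.
    return zlib.crc32(data) & 0xFFFFFFFF

def _flash_write(addr, data):
    # Placeholder for the bootloader's real flash driver.
    return len(data)

# The bootloader owns this table; the application never sees the
# addresses of the individual functions, only the service numbers.
_SERVICE_TABLE = {
    SVC_CRC32: _crc32,
    SVC_FLASH_WRITE: _flash_write,
}

def svc_handler(service_number, *args):
    # Single stable entry point. On Cortex-M, this dispatch happens inside
    # SVC_Handler, with the service number extracted from the SVC
    # instruction's immediate field.
    return _SERVICE_TABLE[service_number](*args)
```

Because only the entry point and the service numbers are fixed, the bootloader's internal functions can move freely between firmware versions without relinking the application.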

Any possibility to disable some SMX in a GPU?

In a single GPU such as the P100 there are 56 SMs (streaming multiprocessors), and different SMs may have little correlation with each other. I would like to know how application performance varies with different SMs. So is there any way to disable some SMs for a given GPU? I know CPUs offer corresponding mechanisms, but I haven't found a good one for GPUs yet. Thanks!
There are no CUDA-provided methods to disable a SM (streaming multiprocessor). With varying degrees of difficulty and behavior, some possibilities exist to try this using indirect methods:
Use CUDA MPS, and launch an application that fully "occupies" one or more SMs by carefully controlling the number of blocks launched and the resource utilization of those blocks. With CUDA MPS, another application can run on the same GPU, and the kernels can run concurrently, assuming sufficient care is taken. This might mean no direct modification of the application code under test (but an additional application launch is needed, as well as MPS). The kernel duration will need to be "long" so as to occupy the SMs while the application under test is running.
In your application code, effectively re-create the behavior listed in item 1 above by launching the "dummy" kernel from the same application as the code under test, and have the dummy kernel "occupy" one or more SMs. The application under test can then launch the desired kernel. This should allow for kernel concurrency without MPS.
In your application code, for the kernel under test itself, modify the kernel block scheduling behavior, probably using the smid special register via inline PTX, to cause the application kernel itself to only utilize certain SMs, effectively reducing the total number in use.

SWT uses operating system resources, but what is the limit and how to profile that?

SWT uses operating system resources, so SWT memory consumption doesn't depend on the heap (Xms/Xmx) or on the non-heap (metaspace). Correct me if I'm wrong on that point.
is there any limit to the resources used by SWT components (colors, fonts, images...)?
how to know if the limit is reached and how to profile that?
if this limit is reached, a Java RCP application can crash without a java OutOfMemoryError (just a pid file)?
P.S.: I use Sleak to track the number of graphics objects currently used by the application
As Baz mentioned, the maximum number of GDI handles, which is specific to Windows, influences how many resources an (SWT) process can allocate. See this question for more details. On other platforms, the limit may be different.
As you already found out, SWT Sleak is the right tool to monitor graphics resource usage of SWT applications.
If an application runs out of handles, it does not just 'crash'. An SWTError is raised when the limit is reached and an attempt is made to create a new resource.
In general, the exception can be handled like any other exception and after releasing unused resources, new ones can be created.
IIRC, the default exception handler of RCP applications will open a dialog that asks the user to exit the application gracefully in this case.
However, running out of handles in an SWT application is usually a sign that your strategy of how widgets are created/used is wrong.
A common strategy to reduce the number of resources is to use lazy/virtual Trees and Tables and to create widgets only when they become visible and dispose of them when no longer needed. In a TabFolder, for example, you would defer populating the TabItem unless it gets selected.
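The TabFolder strategy above boils down to: keep a factory per tab, create the expensive widgets only when a tab is first selected, and dispose of them when the tab is hidden again, so the handle count stays bounded by one tab's worth of resources. Here is a toolkit-neutral sketch of that bookkeeping (in Python for brevity; the names are illustrative, and in SWT this logic would hang off a SWT.Selection listener on the TabFolder):

```python
class LazyTabs(object):
    """Create each tab's content on first selection and dispose of the
    content of tabs that are no longer visible, keeping the number of
    live resources bounded."""

    def __init__(self, factories):
        self._factories = factories  # tab name -> zero-arg content factory
        self._live = {}              # tab name -> currently created content

    def select(self, name):
        # Dispose of content belonging to tabs that just became invisible.
        for other in list(self._live):
            if other != name:
                self._live.pop(other).dispose()
        # Populate the selected tab on demand.
        if name not in self._live:
            self._live[name] = self._factories[name]()
        return self._live[name]
```

The same pattern generalizes to virtual Trees and Tables: the factory is only invoked for items that actually scroll into view.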

How to know if another program is reading your memory?

I'd like to know if you can detect that another application is reading the memory of your own program using ReadProcessMemory.
My question is related to the fact that Blizzard's games are protected by a "warden" that is able to detect cheats and bots that inject into the game's memory.
I get how they can check whether memory has been injected into, but can they also detect if it's only being read by another program?
In the context of detecting external hacks which may use ReadProcessMemory() an anticheat can scan all external processes using signatures and heuristics to detect if something could be a cheat.
In the context of internal hacks, they don't use ReadProcessMemory but are injected into the game process and can read or modify the memory directly. An anticheat can simply detect injection methods or any allocation of memory to detect an internal cheat.
An anticheat which is running as administrator can also get a list of all the open process handles and detect what processes have permissions to interact with the game process. Code which can show you how that works is available here
A combination of these techniques only gives you an indicator of risk, further investigation is required to detect if the process is legitimate or not.

Asynchronous image downloader for blackberry

I need to create an image gallery. The images are stored on a remote server, and the BlackBerry client needs to download and render them in the UI (gallery view).
I have used the "UniversalImageDownloaer" library on Android, but now I am looking for a freeware/open-source library that will serve the same purpose on BlackBerry. Can anyone help guide me to a resource?
I need to look into the following things
Async image download
Gallery view
Image caching
Edit-1
From my earlier experience, I understand that BlackBerry has a restriction of a maximum of roughly 250 threads (maybe +/-5) at runtime, and each application is restricted to 17 threads. So I must look into thread pooling and thread safety for my requirements.
I don't know of any library for lazy loading on BB. You could try to port that library to BlackBerry, or DIY. Let's see how you could achieve this:
You can code a consumer thread, which will download an image at a time (in Blackberry, you won't get much performance improvement downloading in parallel). This consumer could take URLs from a stack. The UI (screen, list) will submit a request to the consumer each time it needs an image. The request is just passing the resource URL to the consumer, so that it puts it at the top of the stack. In the meantime, the GUI should display a default image or loading message. There are plenty of good books and manuals in Java on how to design a consumer-producer scheme in a thread-safe manner, but it goes beyond the scope of this answer.
Starting in OS 5.0, you have the PictureScrollField class that allows you to display a row of scrolling images, and can be customized to some extent. There's a sample demo app in the samples folder in the SDK, I think.
If multiple requests for the same image are likely to be made during program execution, caching is an interesting enhancement. You could just keep the images in RAM in the consumer's stack, or even save them to a folder on the SD card. The consumer will then look first in the cache, and only if the image isn't there will it initiate a remote download.
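The consumer described in the first bullet can be sketched as follows. This is desktop Python rather than BlackBerry Java (where you would use a Thread draining a synchronized vector), and the downloader callback and URLs are hypothetical, but the design is the same: a LIFO request stack so the most recently requested image is served first, a single download thread, and an in-memory cache:

```python
import threading

class ImageConsumer(object):
    """Single download thread: takes the most recently requested URL
    first (LIFO), checks the cache, and hands results to a callback."""

    def __init__(self, download, on_ready):
        self._download = download  # function: url -> image bytes
        self._on_ready = on_ready  # callback: (url, image) -> None
        self._stack = []           # pending URLs, most recent on top
        self._cache = {}           # url -> image bytes
        self._cv = threading.Condition()
        self._alive = True
        self._thread = threading.Thread(target=self._run)
        self._thread.daemon = True
        self._thread.start()

    def request(self, url):
        with self._cv:
            self._stack.append(url)  # newest request is served first
            self._cv.notify()

    def stop(self):
        with self._cv:
            self._alive = False
            self._cv.notify()
        self._thread.join()

    def _run(self):
        while True:
            with self._cv:
                while self._alive and not self._stack:
                    self._cv.wait()
                if not self._alive:
                    return
                url = self._stack.pop()
            if url not in self._cache:  # cache miss: go remote
                self._cache[url] = self._download(url)
            self._on_ready(url, self._cache[url])
```

The UI would display a placeholder when it calls request(), and replace it with the real image inside the on_ready callback (marshalled back onto the event thread).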
