I'm struggling with a problem and would like some help...
I have a bootloader which can upload another application onto my chip. In my case the bootloader and the main application share a lot of the same functionality, and I would like to create 3 partitions in my chip's memory: one for the bootloader, one for the common firmware and one for the actual application.
As a little experiment I flashed a function to a specific location in memory and then, from an actual application, "jumped" to that function using its hard-coded address. So as a proof of concept, I guess it would work...
The thing is that I have a lot of functions/classes and it would get very difficult to handle this way, so my question is: is there a neat way to "bind" all those 3 "applications" together?
I'm working on a Cortex-M4 (with FPU) CPU and using KEIL uVision 5 as my IDE.
Thanks for your help :)
You could, for example, decide that the bootloader is the one providing the services it has in common with your application. It would implement an SVC handler responsible for providing those common services to the application.
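For illustration, here is a minimal sketch of that pattern, assuming a hypothetical shared CRC32 service exposed as SVC number 0x10 (both the number and the signature are made up). The __svc keyword is armcc/Keil-specific, and in a real system a small assembly shim is normally needed to select the correct stack pointer (MSP or PSP) before calling the C part of the handler:

    /* Application side (Keil armcc): for this declaration the compiler emits
       "SVC #0x10" with the usual r0/r1 argument passing. */
    int __svc(0x10) shared_crc32(const unsigned char *buf, unsigned len);

    /* The real shared implementation, living in the bootloader partition. */
    extern unsigned crc32_impl(const unsigned char *buf, unsigned len);

    /* Bootloader side: C portion of the SVC handler. stacked_args points at
       the exception frame {r0, r1, r2, r3, r12, lr, pc, xPSR}. */
    void SVC_Handler_C(unsigned *stacked_args)
    {
        /* The SVC number is the low byte of the SVC instruction, which sits
           two bytes before the stacked return address. */
        unsigned char svc_number = ((unsigned char *)stacked_args[6])[-2];

        switch (svc_number) {
        case 0x10:
            /* Writing to stacked_args[0] sets the r0 the caller sees. */
            stacked_args[0] = crc32_impl((const unsigned char *)stacked_args[0],
                                         stacked_args[1]);
            break;
        }
    }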
References - should apply to Cortex-M4 as well:
Developing software for Cortex-M3 - Supervisor Calls (SVC)
Cortex-M3 supervisor call (SVC) using GCC
ARM: How to write an SVC function
I am stating the obvious, I guess, but please note that sharing code through the application instead of your bootloader would be very dangerous in case the new common-services implementation turned out to be buggy. Updating the shared code should always require updating the bootloader.
In a single GPU such as the P100 there are 56 SMs (streaming multiprocessors), and different SMs may have little correlation with each other. I would like to know how application performance varies with the number of SMs used. So is there any way to disable some SMs for a given GPU? I know CPUs offer corresponding mechanisms, but I haven't found a good one for GPUs yet. Thanks!
There are no CUDA-provided methods to disable an SM (streaming multiprocessor). With varying degrees of difficulty and side effects, a few indirect methods are possible:
1. Use CUDA MPS, and launch an application that fully "occupies" one or more SMs, by carefully controlling the number of blocks launched and the resource utilization of those blocks. With CUDA MPS, another application can run on the same GPU, and the kernels can run concurrently, assuming sufficient care is taken. This might require no direct modification of the application code under test (but an additional application launch is needed, as well as MPS). The dummy kernel's duration will need to be "long" so as to occupy the SMs while the application under test is running.
2. In your application code, effectively re-create the behavior of item 1 by launching the "dummy" kernel from the same application as the code under test, and have the dummy kernel "occupy" one or more SMs. The application under test can then launch the desired kernel. This should allow for kernel concurrency without MPS.
3. In your application code, for the kernel under test itself, modify the kernel's block-scheduling behavior, probably using the smid special register via inline PTX, so that the kernel itself only utilizes certain SMs, effectively reducing the total number in use (see the sketch below).
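As a rough illustration of option 3, here is a CUDA C sketch (the names and the work-distribution scheme are illustrative, not a definitive implementation). Blocks read the %smid special register via inline PTX and exit immediately on "disabled" SMs, so the grid must be oversubscribed and the work pulled from a global counter so that only blocks resident on allowed SMs do the work:

    // Read the SM id of the current block via the %smid special register.
    __device__ unsigned smid(void)
    {
        unsigned id;
        asm volatile("mov.u32 %0, %%smid;" : "=r"(id));
        return id;
    }

    // Hypothetical kernel under test: only blocks resident on SMs with
    // id < max_sm do any work; blocks on other SMs exit immediately.
    // Launch with many more blocks than SMs so every SM gets at least one.
    __global__ void restricted_kernel(float *data, int n, unsigned max_sm,
                                      int *work_ctr)
    {
        if (smid() >= max_sm)
            return;                          // this SM is "disabled"
        for (;;) {
            int i = atomicAdd(work_ctr, 1);  // dynamic work distribution
            if (i >= n)
                return;
            data[i] *= 2.0f;                 // stand-in for the real workload
        }
    }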
I have a process that needs to be extensible by loading shared libraries. Is there a way to run the shared-library code in a sandboxed environment (other than an external process), so that if it segfaults it doesn't crash the process, and so that there are limits on how much memory it can allocate, the CPU cycles it can use, etc.?
I don't think there is a clean way to do it. You could try:
Catching segfaults and recovering from them (tricky, architecture-specific, but doable; see the sketch after this list)
Replacing calls to malloc/calloc for the library with an instrumented version that would count the allocated space (how to replace default malloc by code)
Alternatively use malloc hooks (http://www.gnu.org/software/libc/manual/html_node/Hooks-for-Malloc.html)
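For the first item, a minimal sketch of catching a segfault and recovering via sigsetjmp/siglongjmp; note that jumping out of a SIGSEGV handler is formally undefined behavior and may leave the library in an inconsistent state, which is part of why this is "tricky". The wrapper name and the zero-argument plugin signature are hypothetical:

    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf recover_point;

    static void segv_handler(int sig)
    {
        (void)sig;
        siglongjmp(recover_point, 1);   /* jump back out of the faulting call */
    }

    /* Invoke a plugin entry point and survive a segfault.
       Returns 0 on success, -1 if the plugin crashed. */
    int call_plugin_safely(void (*plugin_fn)(void))
    {
        struct sigaction sa = {0};
        sa.sa_handler = segv_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover_point, 1) == 0) {
            plugin_fn();
            return 0;                   /* completed normally */
        }
        return -1;                      /* plugin crashed; process survived */
    }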
CPU cycles are accounted for the whole process, so I don't think there is any way you could get the info for just the library. The only viable option: manually measure ticks for every library call that your code makes.
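A sketch of that per-call measurement, assuming a hypothetical zero-argument library call; CLOCK_MONOTONIC gives wall time, and CLOCK_THREAD_CPUTIME_ID would give per-thread CPU time instead:

    #include <time.h>

    /* Returns the seconds spent inside one library call. */
    static double timed_call(void (*lib_fn)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        lib_fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }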
In essence, this would be fun to try, but I recommend you go with the separate-process approach and use RPC, quotas, ulimits, etc.
No. If the shared library segfaults, your process will segfault (the process is executing the library code). If you run it as an external process and use an RPC mechanism, then you'd be okay as far as crashing goes, but your program would need to detect when the service wasn't available (and something would need to restart it). Tools like chroot can sandbox processes, but not the individual libraries that an executable links to.
I remember reading somewhere that it isn't advisable to use "exec" within C code compiled by the NDK.
What is the recommended approach? Do we try to push the exec code up into Java space; that is, so that the JNI layer (or application) spawns the new process (and, where relevant, passes the results back down to the NDK)?
First off, it's not recommended to use either fork or exec. All of your code is generally supposed to live in a single process which is your main Android application process, managed by the Android framework. Any other process is liable to get killed off by the system at any time (though in practice that doesn't happen in present Android versions as far as I have seen).
The rationale, as I understand it, is simply that the Android framework can't properly manage the lifetime and lifecycle of your app if you go and spawn other processes.
Exec
You have no real alternative here but to avoid launching other executables at all. That means you need to turn your executable code into a library which you link directly into your application and call using normal NDK function calls, triggered by JNI from the Java code.
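For example, a sketch of such an entry point in C (the class and function names and tool_main are hypothetical; the Java side would declare a matching public static native int runTool(String arg)):

    #include <jni.h>

    /* Former main() of the executable, refactored into the shared library. */
    extern int tool_main(const char *arg);

    JNIEXPORT jint JNICALL
    Java_com_example_app_NativeTool_runTool(JNIEnv *env, jclass cls, jstring arg)
    {
        (void)cls;
        const char *carg = (*env)->GetStringUTFChars(env, arg, NULL);
        int rc = tool_main(carg);   /* runs in-process, no exec needed */
        (*env)->ReleaseStringUTFChars(env, arg, carg);
        return rc;
    }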
Fork
This is more difficult. If you really need a multi-process model, and want to stay within the letter of the rules, you need to arrange for the Android framework to fork you from its Zygote process. To do this, run all your background code in a separate Service which is declared, within the AndroidManifest.xml, to run in a different process.
To take this to extremes, if you need multiple identical instances of the code running in different processes for memory protection and isolation reasons, you can do what Android Chrome does:
Run all your background/forked code in a subclass of Service
Create multiple subclasses of that
List each of these subclasses as a separate service within your AndroidManifest.xml, each with a different android:process attribute (see the manifest sketch after this list)
In your main code, remember exactly which services you've fired up and not, and manage them using startService/stopService.
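A sketch of what those manifest entries might look like (the service names are hypothetical; a leading colon makes the process private to the app):

    <!-- Each service runs in its own process, forked from Zygote by the framework. -->
    <service android:name=".WorkerService1" android:process=":worker1" />
    <service android:name=".WorkerService2" android:process=":worker2" />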
Of course, if you've turned your native code into a library rather than an executable, you probably don't need fork anyway. The only remaining reason to use fork is to achieve memory protection/isolation.
In practice
In practice quite a lot of apps ignore all this and use fork/exec within their native code directly. At the moment, it works, at least for short-running tasks.
We are working on a rich-client application in which many threads run and third-party controls are used. After running the application for about an hour, it starts throwing 'System.OutOfMemoryException' until we restart it. I have searched many sites for help, but no specific cause is given.
Thanks.
It sounds pretty self-explanatory: your system doesn't have enough memory. If you're still running the application as 32-bit, then moving to 64-bit might solve the problem; I had exactly that problem on a Server 2008 R2 machine recently, and moving to 64-bit solved it. If you're already 64-bit, then perhaps the server doesn't have enough physical memory, in which case you need to add more memory, or work out how to make your application less memory-hungry. There could be objects it's keeping references to that could otherwise be discarded, and if that's the case you should try profiling to identify what's hogging the most memory. Beyond that, does the application use any unmanaged DLLs, e.g. COM objects written in C++ or similar? Maybe there's a memory leak outside of the managed framework.
I recommend using a profiler to identify where the high memory consumption comes from.
Hi.
I am working on an experiment that lets users use 1% of my CPU. It's like your own web server, but a big dynamic remote-execution framework (don't ask about that), and I don't want users to be able to use API functions: no creating files, no sockets, no threads, no console output, nothing.
Update 1: people will be sending me binaries, so interrupt 0x80 is possible. Therefore... kernel?
I need to limit a process so it cannot do anything but use a single pipe. Through that pipe the process will use my own wrapped and controlled API.
Is that even possible? I was thinking of something like a Linux kernel module.
The issues of limiting RAM and CPU are not the primary concern here; for that there's plenty of material on Google.
Thanks in advance!
The ptrace facility will allow your program to observe and control the operation of another process. Using the PTRACE_SYSCALL flag, you can stop the child process before every syscall, and make a decision about whether you want to allow that system call to proceed.
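A minimal sketch of that loop for x86-64 Linux (the traced binary and the allow/deny policy are placeholders; note that the child stops at both syscall entry and exit, so each number is reported twice):

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execl("/bin/ls", "ls", (char *)NULL);  /* untrusted binary goes here */
            _exit(1);
        }

        int status;
        waitpid(child, &status, 0);                /* child stopped at exec */
        while (1) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            /* Decide here whether to allow regs.orig_rax; this sketch just logs it. */
            printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        }
        return 0;
    }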
You might want to look at what Google is doing with their Native Client technology and the seccomp sandbox. The Native Client (NaCl) stuff is intended to let x86 binaries supplied by a web site run inside a user's local browser. The problem of malicious binaries is similar to what you face, so most of the technology/research probably applies directly.
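The seccomp sandbox in particular has a "strict" mode that fits the single-pipe requirement closely: after the prctl call below, the process can only use read(), write(), _exit() and sigreturn(), and any other syscall kills it. A minimal Linux sketch:

    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* From here on, only read/write/_exit/sigreturn are permitted. */
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

        const char msg[] = "still alive\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* allowed */
        _exit(0);                    /* open(), socket() etc. would be SIGKILLed */
    }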