Is there any way to find out the power consumed by an application? Say I have some ten user apps running on my laptop; how can I find out how much power each application is consuming in a Linux environment?
The PowerTop tool might have something for you. Look up the section "Power usage". If the tool itself is not what you want, you can research where it retrieves its information and evaluate that in whatever way you want.
That's an interesting question, and it does not have an easy answer that I've heard of.
Presuming you have a way of metering the machine's minute-to-minute consumption, you can get a crude approximation by examining the amount of CPU time used, either by watching things in top or by examining the output of time(1). Compare the machine's total power consumption in various states of idleness and load to the amount of work done by each process; with enough statistics you should have a solvable system, possibly even an over-constrained one that calls for some kind of best-fit solution.
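A minimal sketch of that best-fit idea, assuming you can log the machine's wall power yourself (the wattage samples below are invented placeholders, not real measurements): fit power against total CPU utilisation, then charge each process its share of the dynamic part.

```python
# Sketch: model wall power as a linear function of CPU utilisation, then
# apportion the dynamic part to processes by their CPU-time share.
# The wattage samples are placeholders; replace them with meter readings.
import os
import numpy as np

def cpu_jiffies(pid):
    """User + system jiffies used by a process (fields 14/15 of /proc/<pid>/stat)."""
    with open(f"/proc/{pid}/stat") as f:
        # rsplit past the ')' so a comm field containing spaces can't break parsing
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])  # utime + stime

def total_jiffies():
    """Total jiffies across all CPUs (first line of /proc/stat)."""
    with open("/proc/stat") as f:
        return sum(int(x) for x in f.readline().split()[1:])

# Invented samples of (CPU utilisation, measured wall power in watts),
# gathered while varying the load on the machine.
utilisation = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
watts = np.array([12.0, 15.1, 18.9, 22.8, 26.2])

# Least-squares fit: watts ~ idle_watts + dynamic_watts * utilisation.
A = np.vstack([np.ones_like(utilisation), utilisation]).T
idle_watts, dynamic_watts = np.linalg.lstsq(A, watts, rcond=None)[0]
print(f"idle ~{idle_watts:.1f} W, plus ~{dynamic_watts:.1f} W at full CPU load")

# A process using 30% of the CPU over an interval would then be charged
# roughly 0.3 * dynamic_watts on top of its share of the idle power.
share = cpu_jiffies(os.getpid()) / total_jiffies()
print(f"this process has used ~{share:.4%} of all CPU jiffies since boot")
```

This says nothing about GPU, disk, or network power, which is why it is only the crude approximation described above.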
The only way that occurs to me to do it to high precision would be to combine
an instrumented virtual machine that accumulates statistics on which parts of the CPU are activated (do such things even exist at this time?) with
the manufacturer's documentation for the chip and board you are running on, to total up the power implied,
which would be a horribly complicated mess.
Sorting out which bits were needed just to provide the environment and which could be unambiguously attributed to the program won't be easy.
I have to ask...why?
I don't know if there's really a "good way" to do this. But here's a suggestion for a generic approach that would work regardless of operating system: Remove the battery from your laptop, and hook up its power adapter to a high-precision current meter. Note the draw when no "normal" applications are running. Then run each application on its own and note the differences in current draw.
I would like to know what running Docker on a VM implies for performance; will I have issues?
To me, adding a "layer" should decrease performance. Is that right or wrong, and most importantly, why?
I want to know the best way to approach new projects when containers are involved.
Thanks in advance :)
Every part of the system stack has some performance cost, but it’s probably close to immeasurable. In what you describe the cost of the VM will probably be greater than the cost of Docker, but the cost of either will be dwarfed by the cost of any database I/O you do. As always database tuning and algorithmic tuning will probably make the biggest difference.
An additional layer in a Docker image has approximately zero performance impact. It’s mildly “nicer” to have fewer layers but it doesn’t really matter that much.
If your program is in an interpreted language like Ruby or Python, or if you’re frequently starting JVMs, the performance difference from using a virtual machine or not is noise compared to the sheer overhead of these systems.
As always, the real answer is to run real benchmarks, and profile the system/application if it’s too slow. The sorts of questions you’re asking aren’t things you need to optimize for early and often aren’t things you need to optimize for at all.
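In that spirit, here is a minimal sketch of such a benchmark, using only the Python standard library: run the identical script on bare metal, in the VM, and in the container, and compare the medians. The workload is a placeholder; substitute something resembling your real application, ideally including the database I/O mentioned above.

```python
# Sketch: run an identical micro-benchmark on bare metal, in the VM, and
# in the container, then compare the median timings across environments.
import statistics
import time

def workload():
    """Placeholder CPU-bound workload; replace with something closer to your app."""
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

def benchmark(runs=10):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

if __name__ == "__main__":
    print(f"median of 10 runs: {benchmark():.4f} s")
```

If the medians differ by less than the run-to-run noise, the layering question is settled for that workload.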
I'm currently confused about incremental software methodology.
What is the main difference between incremental development that adopts a plan-driven approach and incremental development that adopts an agile approach?
Can anyone explain the difference between those two, and whether my choice was good for the project?
Learning is at the core of the agile approaches. They embrace the fact that it is almost impossible to have enough information to make a detailed plan up front. Instead, implementing, or even just trying to implement, your first feature will yield very valuable lessons, both about your implementation and about the usage and actual needs in the field.
I'm not sure what "documentations are really important" actually means, but dividing implementation along module boundaries will cause a number of unwanted effects:
you can only learn about the usage of the complete system after all modules are done, i.e. too late. That will reveal an unknown amount of remaining work after you thought you were done.
how do you know that the first module is done? Presumably based on some guesswork about what it should do, which might be right but most probably is at least slightly wrong, and that causes unknown late modifications.
integration problems will likewise only show up after the third module was supposed to be finished.
All three push the realization of problems, and an unknown amount of remaining work, to the end.
Agile focuses on surfacing these lessons and this information by forcing early feedback: early integration (as soon as there is a skeleton for the three modules) and user feedback, by implementing one user-level feature at a time and demoing each as soon as it is ready.
It is a strategy for minimizing risks in all software endeavours.
In my mind, you should have gone for an agile approach.
I'm trying to measure the instantaneous power consumption of the processor on an ARM Cortex-A9 / Ubuntu 12.04 platform.
Does anyone know how to do this?
There are 4 obvious approaches to this:
Estimate it from other measurable parameters (e.g. CPU load)
Measure the current-sense resistors in the on-board power supplies
Measure entire-system power draw using an external supply with some kind of data logging [a low-value resistor and a voltmeter can also be used]
[If measuring power draw by a certain application] run the code on some other device that does have this functionality. [Apple's dev tools and iOS provide incredible levels of support for this, and they're fantastic for profiling too.]
Since you're using the OMAP4460 (a Pandaboard, perchance?), it'll probably be paired with the TWL6030 power-supply IC. A quick look at the datasheet suggests it's capable of measuring current draw when running from battery (this is how the battery-level indicator is implemented). There will be driver support for this.
The OMAP4430 (and probably by extension 4460) doesn't have power supply monitoring of its own.
It might also be worth looking on TI's website for white papers; this is a common enough thing to do.
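If that driver support is present, it is typically exposed through Linux's standard power_supply sysfs class. Here is a minimal sketch of polling it; which node (if any) exposes these attributes depends on the board and driver, so treat the paths as assumptions to verify with a look at /sys/class/power_supply.

```python
# Sketch: poll instantaneous voltage/current from the power_supply sysfs
# class. The standard attribute names are voltage_now/current_now, with
# values in microvolts and microamps; not every supply node exposes them.
import glob
import time

def read_int(node, attr):
    with open(f"{node}/{attr}") as f:
        return int(f.read().strip())

nodes = glob.glob("/sys/class/power_supply/*")
print("available supplies:", nodes)

node = nodes[0]  # assumption: the first node exposes the attributes we need
for _ in range(5):
    uV = read_int(node, "voltage_now")
    uA = read_int(node, "current_now")
    print(f"~{uV * uA / 1e12:.3f} W")  # microvolts * microamps -> watts
    time.sleep(1)
```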
I think it mainly depends on your processor or SoC manufacturer. ARM defines the processor core; the manufacturer defines everything around it (peripherals, etc.).
Also, since Ubuntu has been ported to your platform, there may already be a power-measuring application that supports it.
You know, those electronic road signs. I assume they run on some old computer language/framework; does anyone know what that might be?
The displays themselves are pretty basic: in most cases they just have a microcontroller with some firmware that converts the commands they receive serially into patterns and/or characters. The more recent ones also give feedback, for example about broken LEDs. Typically these firmwares are written in either assembly or C.
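As a toy illustration of that command-to-pattern step (the serial command format and the tiny font here are invented for the example, not any real vendor's protocol):

```python
# Toy sketch: turn a serial command such as "TEXT HI" into per-column LED
# bitmasks, roughly what such firmware does. The command format and the
# 3x5 font are invented; real signs use vendor-specific protocols.
FONT_3X5 = {  # each glyph: three columns of 5-bit masks, bit 0 = top row
    "H": [0b11111, 0b00100, 0b11111],
    "I": [0b10001, 0b11111, 0b10001],
    " ": [0b00000, 0b00000, 0b00000],
}

def handle_command(line):
    cmd, _, arg = line.partition(" ")
    if cmd == "TEXT":
        columns = []
        for ch in arg:
            columns.extend(FONT_3X5.get(ch, FONT_3X5[" "]))
            columns.append(0)  # blank column between glyphs
        return columns
    raise ValueError(f"unknown command: {cmd!r}")

# In real firmware these columns would be shifted out to the LED drivers.
for col in handle_command("TEXT HI"):
    print(format(col, "05b"))
```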
The real intelligence of these systems is often located in a central control system that coordinates an entire city or even a state. These control systems can perform intelligent tasks on entire groups of signs: given the location of an accident, for example, they can add the correct distance to the accident to each sign's warning message, automatically divert traffic, and so on.
I know of such systems written in C, C++, Java, G2, and more; it depends on when they were designed. So no, they're not by definition outdated and antique! They do tend to have a longer lifespan than your average desktop app, though, which often leads to the oldest parts being swapped out for more recent developments, and these newer modules are in many cases based on more recent technologies.
When a large system developed by Agile process requires a sudden large-scale change that affects most everything, what is the best way to go about it using Agile? Does the iterative part change at this point?
For example, what if a decision is made to make a centralized system a distributed one? Or choose another large pervasive example.
Arguably, large changes should have been planned for, but it's never a perfect world, which is one of the reasons Agile exists; so assume that suddenly a major change is introduced that shakes the foundation.
Edit to summarize solutions:
It's incremental all the way no matter how large or small the change may be.
"Does the iterative part change at this point?"
Never.
No matter how "pervasive" the change appears to be, you still have to work incrementally, in iterations you can manage.
You still have to prioritize the changes and make them in a way that will continue to pass unit tests and can be released when needed.
You may, for example, find that fixing 80% of the system is sufficient, and you may release. Or you may be required to fix 100% of the system before releasing.
You still work incrementally. In sprints. Irrespective of when you release.
Agile has no magic answers.
There are a number of approaches:
Plot a path of reasonably incremental changes to move the system from one architecture to another. If you have reasonably well-factored code, you should be ditching the code that is made redundant by the change and keeping the stuff that's independent of it.
Another approach, if things are really different, is to start parallel development of components for the new system.
Or, start new and steal as much as you can from the old project.
It depends how BIG the change really is.