Docker on a Linux VM, performance? [closed]

I would like to know what running Docker inside a VM implies for performance. Will I run into issues?
To me, adding a "layer" should decrease performance. Is that right or wrong, and most importantly, why?
I want to know the best way to approach new projects when containers are on the table.
Thanks in advance :)

Every part of the system stack has some performance cost, but it's probably close to immeasurable. In what you describe, the cost of the VM will probably be greater than the cost of Docker, but the cost of either will be dwarfed by the cost of any database I/O you do. As always, database tuning and algorithmic tuning will probably make the biggest difference.
An additional layer in a Docker image has approximately zero performance impact. It’s mildly “nicer” to have fewer layers but it doesn’t really matter that much.
If your program is in an interpreted language like Ruby or Python, or if you’re frequently starting JVMs, the performance difference from using a virtual machine or not is noise compared to the sheer overhead of these systems.
As always, the real answer is to run real benchmarks, and profile the system/application if it’s too slow. The sorts of questions you’re asking aren’t things you need to optimize for early and often aren’t things you need to optimize for at all.
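If you do want numbers for your own setup, the simplest thing is a small script you run natively, inside the VM, and inside a container, and then compare wall-clock times. Below is a minimal sketch in Python, not a rigorous benchmark; the iteration counts and file size are arbitrary placeholders, and your real workload is the benchmark that actually matters.

    # Minimal micro-benchmark sketch: run it natively, in the VM, and in a
    # container, and compare the printed timings. Iteration counts and the
    # ~200 MB file size are arbitrary; substitute your real workload.
    import hashlib
    import os
    import tempfile
    import time

    def cpu_bound(n=2_000_000):
        # Hash a few million integers to keep the CPU busy.
        h = hashlib.sha256()
        for i in range(n):
            h.update(i.to_bytes(8, "little"))
        return h.hexdigest()

    def io_bound(mb=200):
        # Write and fsync ~200 MB to exercise the filesystem layer.
        block = os.urandom(1024 * 1024)
        with tempfile.NamedTemporaryFile() as f:
            for _ in range(mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())

    for name, fn in [("cpu", cpu_bound), ("io", io_bound)]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.2f}s")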


Incremental Development - Agile or Plan Driven [closed]

I'm currently confused about incremental software methodology.
What is the main difference between incremental development that adopts a plan-driven approach and incremental development that adopts an agile approach?
Can anyone explain the difference between those two, and whether my choice was good for the project?
Learning is at the core of the agile approaches. They embrace the fact that it is almost impossible to have enough information to make a detailed plan up front. Instead, implementing (or possibly trying to implement) your first feature will generate very valuable learning, both about your implementation and about the usage and actual needs in the field.
I'm not sure what "documentations are really important" actually means, but dividing implementation along module boundaries will cause a number of unwanted effects:
you can only learn about the usage of the complete system after all modules are done, i.e. too late. That will reveal an unknown amount of remaining work after you thought you were done.
how do you know that the first module is done? Presumably based on some guesswork about what it should do, which might be right but most probably is at least slightly wrong, and that causes unknown late modifications.
integration problems will also only show up after the third module was supposed to be finished.
All three push the realization of problems, and an unknown amount of remaining work, to the end.
Agile focuses on surfacing this learning and information by forcing early feedback: early integration (as soon as there is a skeleton for the three modules), and user feedback by implementing one user-level feature at a time, with demos of each as soon as they are ready.
It is a strategy for minimizing risks in all software endeavours.
In my mind, you should have gone for an agile approach.

New operating system security [closed]

So I was messing around looking at different ways to operate a computer with total security. I found that people were using specialized operating systems like Tails, and that got me thinking: could a computer be secured by running an operating system that nobody has ever seen?
Obviously this would take a lot of work to make an OS from the ground up without any help, but would that be safe? Could having no information available about an OS make it invulnerable to attack?
P.S. I am talking about anti-hacking and anti-malware, not private web browsing.
What you're suggesting sounds a lot like security through obscurity.
Firstly, there's the issue that if you write your own operating system from the ground up, it won't have been exposed to close scrutiny, and it's very likely it will have undiscovered exploitable bugs and vulnerabilities. Much like cryptography, anyone can design a secure operating system that they, themselves, can't break into. Unfortunately, there's always someone in the world smarter than you who will be able to break in.
Secondly (and following up on the first point), the entire security of your architecture will essentially rely on the secrecy of your implementation. The moment someone manages to get a copy of your operating system or source code, you can be sure the security of your whole system will come crashing down like a ton of bricks before you can finish saying "oops". This is a very fragile defence against attack.
Lastly, there's no provable 'invulnerable to attack'. The closest thing to it is to have as many people using it as possible and hope the good guys find the vulnerabilities before the bad guys. But then you'd be back to square one since this is pretty much what most major operating systems already do.

Measuring precision and recall [closed]

We are building a text search solution and want a way to measure the precision and recall of the system every time we add new document types. From reading some of the posts here it sounds like a machine-learning-based solution is the way to go. Can an expert comment on this? We will then look to add machine learning folks to our team.
The only way to get the F1 score requires knowing the correct class and rank of all samples returned by the evaluation queries, and you also need those evaluation queries in the first place.
Any machine learning approach will need a large amount of manual work to provide those samples and/or queries; so much that it won't save you any time.
Another drawback of such an evaluation is the intrinsic error of the learned model itself, which grows with the size of the search engine's index and the number of examples required. You never get a clean evaluation.
Forget machine learning for evaluating the search engine.
Build your test queries and samples by hand; over time the set will become large and reliable.
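To make that concrete, here is a minimal sketch of the kind of evaluation harness you can build around a hand-labelled query set. `run_search` stands in for whatever your engine exposes, and the queries and relevance judgements at the bottom are invented placeholders.

    # Sketch: macro-averaged precision, recall and F1 for a search system,
    # measured against hand-built relevance judgements.
    def evaluate(run_search, judgements, k=10):
        """judgements maps each test query to the set of relevant doc ids."""
        precisions, recalls = [], []
        for query, relevant in judgements.items():
            retrieved = set(run_search(query, k))  # top-k doc ids from your engine
            hits = len(retrieved & relevant)
            precisions.append(hits / len(retrieved) if retrieved else 0.0)
            recalls.append(hits / len(relevant) if relevant else 0.0)
        p = sum(precisions) / len(precisions)
        r = sum(recalls) / len(recalls)
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        return p, r, f1

    # Hypothetical usage with made-up judgements and a dummy engine:
    judgements = {
        "invoice 2012": {"doc3", "doc7"},
        "tax form": {"doc1", "doc4", "doc9"},
    }
    print(evaluate(lambda q, k: ["doc3", "doc2"], judgements))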
If you really want machine learning in your system, you should look at query pre-processing instead. Getting some meta-information about the query by other means (an SVM classifier, say, why not?) is generally good for performance, and since it doesn't change the expected results, you can reuse the same hand-built samples for an end-to-end evaluation.
That's what I did a few years ago, though with a naive Bayes classifier for natural language analysis.

Power Consumption of an Application [closed]

Is there any way to find out the power consumed by an application? For example, if I have some ten user apps running on my laptop, how can I know how much power each application is consuming in a Linux environment?
The PowerTOP tool might have something for you; look at its "Power usage" section. If the tool itself is not what you want, you can research where it retrieves its information and evaluate that data in the way you want.
That's an interesting question, and it doesn't have an easy answer that I've heard of.
Presuming that you have a way of metering the machine's minute-to-minute consumption, you can get a crude approximation by examining the amount of CPU time used, either by watching things in top or by examining the output of time(1). Compare the machine's total power consumption in various states of idleness and load to the amount of work done by each process; with enough statistics you should have a solvable system, possibly even an over-constrained one that calls for some kind of best-fit solution (see the sketch below).
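As a concrete version of that crude approximation, the sketch below samples per-process CPU time from /proc and splits a power figure you measured yourself (with a meter, or a tool like PowerTOP) across processes in proportion to the CPU time each one used. The PIDs and the 30 W figure in the usage line are made up, and the whole thing ignores the idle baseline, memory, disk and GPU, so treat it as a rough first cut only.

    # Crude sketch: apportion a measured whole-machine power draw across
    # processes in proportion to the CPU time each consumed in a window.
    # Linux-only (reads /proc); the measured wattage must come from outside.
    import os
    import time

    CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

    def cpu_seconds(pid):
        # user + system CPU time consumed so far by pid, in seconds
        with open(f"/proc/{pid}/stat") as f:
            fields = f.read().rsplit(")", 1)[1].split()  # skip "pid (comm)"
        utime, stime = int(fields[11]), int(fields[12])
        return (utime + stime) / CLK_TCK

    def apportion(pids, measured_watts, window=10.0):
        before = {pid: cpu_seconds(pid) for pid in pids}
        time.sleep(window)
        used = {pid: cpu_seconds(pid) - before[pid] for pid in pids}
        total = sum(used.values()) or 1e-9  # avoid dividing by zero if all idle
        return {pid: measured_watts * t / total for pid, t in used.items()}

    print(apportion([1234, 5678], measured_watts=30.0))  # hypothetical PIDs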
The only way that occurs to me to do it with high precision would be to use an instrumented virtual machine that accumulates statistics on which parts of the CPU were activated (do such things even exist at this time?), plus the manufacturer's documentation for the chip and board you are running on to total up the power implied. That would be a horribly complicated mess.
Sorting out which bits were needed just to provide the environment and which could be unambiguously attributed to the program won't be easy.
I have to ask...why?
I don't know if there's really a "good way" to do this. But here's a suggestion for a generic approach that would work regardless of operating system: Remove the battery from your laptop, and hook up its power adapter to a high-precision current meter. Note the draw when no "normal" applications are running. Then run each application on its own and note the differences in current draw.

How are Agile development practices affected by a pervasive system change? [closed]

When a large system developed by Agile process requires a sudden large-scale change that affects most everything, what is the best way to go about it using Agile? Does the iterative part change at this point?
For example, what if a decision is made to make a centralized system a distributed one? Or choose another large pervasive example.
Arguably large changes should have been planned for, but it's never a perfect world which is one of the reasons Agile exists, so assume that suddenly a major change is introduced that shakes the foundation.
Edit to summarize solutions:
It's incremental all the way no matter how large or small the change may be.
"Does the iterative part change at this point?"
Never.
No matter how "pervasive" the change appears to be, you still have to work incrementally, in iterations you can manage.
You still have to prioritize the changes and make them in a way that will continue to pass unit tests and can be released when needed.
You may, for example, find that fixing 80% of the system is sufficient, and you may release. Or you may be required to fix 100% of the system before releasing.
You still work incrementally. In sprints. Irrespective of when you release.
Agile has no magic answers.
There are a number of approaches:
Plot a path of reasonably incremental changes to move the system from one architecture to another. If you have reasonably well-factored code, you should be ditching the code that is made redundant by the change and keeping the stuff that's independent of it.
Another approach, if things are really different, is to start a parallel development of components for the new system.
Or, start new and steal as much as you can from the old project.
Depends how BIG the change really is.
